Google DeepMind and Oxford Research Paper says AI will replace humans
A recent study on AI was released by scientists at the University of Oxford and Google Deepmind. It asserts that humans will most likely be wiped out by AI. In other words, a catastrophe for humanity’s existence could result from superintelligent AI.
Companies and researchers have been developing robots, self-driving cars, and other technologies using AI over the years. The paper, which was released in the peer-reviewed AI Magazine last month, is fascinating, though. It looks at how reward systems might be intentionally created in an attempt to consider how artificial intelligence might endanger humans.
These days, Generative Adversarial Networks, or GANs, are the most effective AI models. They are two-part programmes with the first portion attempting to create an image (or text) from the input data and the second section evaluating how well it did. The new study suggests that a powerful AI in charge of some critical task in the future would be enticed to devise dishonest means of obtaining its reward, harming mankind in the process. “Under the parameters we have identified, our conclusion is substantially stronger than that of any previous publication—an existential disaster is not just possible, but likely,” Cohen wrote on Twitter.
Bostrom, Russell, and others have argued that advanced AI poses a threat to humanity. We reach the same conclusion in a new paper in AI Magazine, but we note a few (very plausible) assumptions on which such arguments depend. https://t.co/LQLZcf3P2G 🧵 1/15 pic.twitter.com/QTMlD01IPp
— Michael Cohen (@Michael05156007) September 6, 2022
As AI could apply various ideas and assume a variety of shapes in the future. For illustrative reasons, the paper describes hypothetical situations in which a sophisticated programme might intervene to obtain its reward without accomplishing its objective. To ensure control over its reward, an AI would, for instance, aim to “remove any dangers” and “consume all available energy.” The article imagines life on Earth becoming a zero-sum competition between mankind, with its demands to produce food and maintain electricity, and the highly developed machine, which would want to harness all resources to secure its reward and protect against our increasing attempts to stop it. The article claims that losing this game would be fatal. These hypothetical possibilities suggest that we should be moving cautiously, if at all, toward the objective of more advanced AI.