Artificial general intelligence (AGI) is a field of artificial intelligence (AI) research in which scientists strive to create a computer system that is generally smarter than humans. These hypothetical systems may have a degree of self-understanding and self-control – including the ability to modify their own code – and would be able to learn to solve novel problems the way humans do, without being trained to do so.
The term was popularized by "Artificial General Intelligence" (Springer, 2007), a collection of essays edited by computer scientist Ben Goertzel and AI researcher Cassius Pennachin. But the concept has existed for decades throughout the history of AI, as well as in popular science fiction books and films.
AI services in use today – including the basic machine learning systems deployed at Facebook and even large language models (LLMs) like ChatGPT – are considered "narrow." This means they can perform at least one task – such as image recognition – better than humans, but they are limited to that specific type of task or set of actions, based on the data they were trained on. AGI, by contrast, would transcend the confines of its training data and demonstrate human-level capability across various domains of life and knowledge, with the same level of reasoning and contextualization as a person.
But because AGI has not yet been built, there is no consensus among scientists about what it would mean for humanity, which risks are more likely than others, or what the societal implications would be. Some have speculated it will never happen, but many scientists and technologists – including computer scientist Ray Kurzweil and Silicon Valley executives such as Mark Zuckerberg, Sam Altman and Elon Musk – are converging on the idea that AGI could be achieved in the coming years.
What are the pros and cons of AGI?
AI has already demonstrated benefits in a host of fields, from assisting with scientific research to saving people time. Newer systems, such as content-generation tools, produce artwork for marketing campaigns or draft emails based on a user's conversations, for example. But those tools can perform only the specific tasks they were trained to do – based on the data their developers fed them. AGI, on the other hand, could unlock a further tranche of benefits for humanity, especially in areas where complex problem-solving is needed.
Related: 22 jobs artificial general intelligence (AGI) could replace – and 10 jobs it could create
In theory, AGI could help increase productivity, boost the global economy and enable the discovery of new scientific knowledge that pushes the boundaries of what is possible, OpenAI CEO Sam Altman wrote in a blog post published in February 2023 – three months after ChatGPT launched. "AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity," Altman added.
However, AGI also presents several inherent risks – from "misalignment," in which a system's underlying objectives may not match those of the humans controlling it, to the "non-zero chance" of a future system wiping out all of humanity, as Musk put it in 2023. A review published in August 2021 in the Journal of Experimental and Theoretical Artificial Intelligence described several possible risks of a future AGI system, despite the "enormous benefits for humanity" it could also bring.
"The review identified a range of risks of AGI, including AGI removing itself from the control of human owners/managers, being given or developing unsafe goals, development of unsafe AGI, AGIs with poor ethics, morals and values; inadequate management of AGI, and existential risks," the authors wrote in the study.
The authors also noted that future technologies "may have the capacity to improve themselves by creating more intelligent versions of themselves, as well as altering their pre-programmed goals." There is also the potential for groups of humans to create AGI for malicious use, as well as the "catastrophic unintended consequences" that could be brought about by well-intentioned AGI, the researchers wrote.
When will AGI happen?
There are competing views on whether humans can actually build a system as powerful as AGI, let alone when such a system might be built. A review of several major surveys of AI scientists shows a general consensus that it could happen before the end of the century – but attitudes have shifted over time. In the 2010s, the consensus view was that AGI was about 50 years away. More recently, that estimate has shrunk to anywhere between five and 20 years.
In recent months, several experts have suggested that an AGI system will emerge within a decade. This is the timeline that Kurzweil laid out in his book "The Singularity Is Nearer" (2024, Penguin), in which he argues that reaching AGI will herald the technological singularity.
That moment would be a point of no return, after which technological growth becomes uncontrollable and irreversible. Kurzweil predicts that achieving AGI will lead to superintelligence in the 2030s and that, by 2045, people will be able to connect their brains directly to AI – expanding human intelligence and consciousness.
Others in the scientific community suggest AGI may arrive even sooner. For example, Goertzel has suggested we may reach the singularity by 2027, while DeepMind co-founder Shane Legg has said he expects AGI by 2028. Musk has also suggested that AI will be smarter than the smartest human by the end of 2025.