On Sunday, OpenAI CEO Sam Altman offered two eye-catching predictions about the near future of artificial intelligence. In a post titled "Reflections" on his personal blog, Altman wrote, "We are now confident we know how to build AGI as we have traditionally understood it." He added, "We believe that, in 2025, we may see the first AI agents 'join the workforce' and materially change the output of companies."
Both statements are notable coming from Altman, who has led OpenAI through the rise of mainstream generative AI products such as ChatGPT. AI agents, the latest marketing trend in the field, are AI models that can take action on a user's behalf. However, critics of the company and Altman immediately took aim at the statements on social media.
"We are now confident that we can spin bullshit at unprecedented levels, and get away with it," wrote frequent OpenAI critic Gary Marcus in response to Altman's post. "So we now aspire to aim beyond that, to hype in purest sense of that word. We love our products, but we are here for the glorious next rounds of funding. With infinite funding, we can control the universe."
AGI, short for "artificial general intelligence," is a nebulous term that OpenAI typically defines as "highly autonomous systems that outperform humans at most economically valuable work." Elsewhere in the field, AGI typically means an adaptable AI model that can generalize (apply existing knowledge to novel situations) beyond specific examples found in its training data, similar to how some humans can do almost any kind of work after being shown a few examples of how to do a task.
According to a longstanding investment rule at OpenAI, rights to developed AGI technology are excluded from its IP investment contracts with companies such as Microsoft. A recently revealed financial agreement between the two companies stipulates that OpenAI will have achieved "AGI" when one of its AI models generates at least $100 billion in profits.
Tech companies don't say this out loud very often, but AGI would be useful to them because it could replace many human employees with software, automating information jobs and reducing labor costs while also boosting productivity. The societal downsides of such a shift could be considerable, and those implications extend far beyond the scope of this article. But the potential economic shock of inventing artificial knowledge workers has not escaped Altman, who has forecast the need for universal basic income as a potential antidote to what he sees coming.
Criticism of near-term AGI predictions
Artificial workers or not, some people have already been calling "BS" on Altman's optimism, and it's nothing new. Marcus, a professor emeritus of psychology and neural science at New York University, often serves as a public foil to Altman's pronouncements, a role that largely began when he appeared before the US Senate in May 2023 as a skeptical counterpoint to Altman's testimony at the same hearing.
On Sunday, Marcus laid out his latest criticisms of OpenAI's prediction that it will soon achieve AGI in a series of posts detailing how current language models sometimes fail at basic math, "commonsense reasoning," and maintaining accuracy when faced with novel problems.
OpenAI's best currently released AI model, o1-pro (what you might call a "simulated reasoning," or SR, model), reportedly performs well on some mathematical and scientific tasks but still shares weaknesses with OpenAI's GPT-4o large language model, such as failing to generalize well beyond its training data. And in some cases, it may not be as strong as OpenAI claims.
For example, Marcus cited a recent benchmark run by All Hands AI that reportedly shows OpenAI's o1 model scoring only 30 percent on SWE-Bench Verified problems (a set of GitHub-based software engineering tasks), below OpenAI's claimed 48.9 percent, while Anthropic's Claude Sonnet (which is not billed as an SR model) achieved 53 percent on the same benchmark.
Even so, OpenAI continues to claim progress in its AI models' capabilities. In December, OpenAI announced o3, its latest SR model, which impressed some AI experts by reportedly performing well on very difficult math benchmarks, though the company has not yet released it for public examination.
Superintelligence as well?
Altman's post follows his September prediction that the AI industry may develop superintelligence "in a few thousand days." Superintelligence is an industry term for a hypothetical AI model that could far surpass human intelligence. Former OpenAI Chief Scientist Ilya Sutskever founded a company, Safe Superintelligence Inc., around the pursuit of the technology last year.
Altman addressed the topic in his latest post as well.
"We are beginning to turn our aim beyond [AGI], to superintelligence in the true sense of the word," he wrote. "We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own and in turn massively increase abundance and prosperity."
Despite frequent and necessary skepticism from critics, Altman has been responsible for at least one verifiable tech catalyst: the release of ChatGPT, which he says served as an unexpected tipping point that brought AI to the masses and launched our current AI-obsessed tech era. Even if OpenAI doesn't reach AGI as soon as Altman thinks, there's no doubt that OpenAI has taken the technology to unexpected places and spurred wide-ranging research on AI models across the tech industry.
"We started OpenAI almost nine years ago because we believed that AGI was possible and that it could be the most impactful technology in human history," he reflected in his post. "At the time, very few people cared, and if they did, it was mostly because they thought we had no chance of success."