Tech execs say a type of AI that can outdo humans is coming, but have no idea what it looks like
Sam Altman, CEO of OpenAI, during a panel session at the World Economic Forum in Davos, Switzerland, on Jan. 18, 2024.
Bloomberg | Getty Images
Executives at some of the world’s leading artificial intelligence labs are expecting a form of AI on a par with — or even exceeding — human intelligence to arrive sometime in the near future. But what it will eventually look like and how it will be applied remain a mystery.
Leaders from the likes of OpenAI, Cohere, Google’s DeepMind, and major tech companies like Microsoft and Salesforce weighed the risks and opportunities presented by AI at the World Economic Forum in Davos, Switzerland.
AI has become the talk of the business world over the past year or so, thanks in no small part to the success of ChatGPT, OpenAI’s popular generative AI chatbot. Generative AI tools like ChatGPT are powered by large language models — algorithms trained on vast quantities of data.
That has stoked concern among governments, corporations and advocacy groups worldwide, owing to an onslaught of risks around the lack of transparency and explainability of AI systems; job losses resulting from increased automation; social manipulation through computer algorithms; surveillance; and data privacy.
AGI a ‘super vaguely defined term’
OpenAI’s CEO and co-founder Sam Altman said he believes artificial general intelligence might not be far from becoming a reality and could be developed in the “reasonably close-ish future.”
However, he noted that fears that it will dramatically reshape and disrupt the world are overblown.
“It will change the world much less than we all think and it will change jobs much less than we all think,” Altman said at a conversation organized by Bloomberg at the World Economic Forum in Davos, Switzerland.
Altman, whose company burst into the mainstream after the public launch of the ChatGPT chatbot in late 2022, has changed his tune on the subject of AI’s dangers since his company was thrown into the regulatory spotlight last year, with governments from the United States, U.K., European Union and beyond seeking to rein in tech companies over the risks their technologies pose.
In a May 2023 interview with ABC News, Altman said he and his company are “scared” of the downsides of a super-intelligent AI.
“We’ve got to be careful here,” Altman told ABC. “I think people should be happy that we are a little bit scared of this.”
AGI is a super vaguely defined term. If we just term it as ‘better than humans at pretty much whatever humans can do,’ I agree, it’s going to be pretty soon that we can get systems that do that.
Aidan Gomez
CEO, Cohere
In the same interview, Altman said he was scared about the potential for AI to be used for “large-scale disinformation,” adding, “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.”
Altman was temporarily booted from OpenAI in November in a shock move that laid bare concerns around the governance of the companies behind the most powerful AI systems.
In a discussion at the World Economic Forum in Davos, Altman said his ouster was a “microcosm” of the stresses faced by OpenAI and other AI labs internally. “As the world gets closer to AGI, the stakes, the stress, the level of tension. That’s all going to go up.”
Aidan Gomez, the CEO and co-founder of artificial intelligence startup Cohere, echoed Altman’s point that AGI will likely become a reality in the near future.
“I think we will have that technology quite soon,” Gomez told CNBC’s Arjun Kharpal in a fireside chat at the World Economic Forum.
But he said a key issue with AGI is that it’s still ill-defined as a technology. “First off, AGI is a super vaguely defined term,” Cohere’s boss added. “If we just term it as ‘better than humans at pretty much whatever humans can do,’ I agree, it’s going to be pretty soon that we can get systems that do that.”
Geoffrey Hinton, a pioneer of modern AI, left his role as a Google vice president and engineering fellow last year, raising concerns over how the company was addressing AI safety and ethics.
Salesforce CEO Marc Benioff said that technology industry leaders and experts will need to ensure that AI averts some of the problems that have beleaguered the web in the past decade or so — from the manipulation of beliefs and behaviors through recommendation algorithms during election cycles, to the infringement of privacy.
“We really have not quite had this kind of interactivity before” with AI-based tools, Benioff told the Davos crowd last week. “But we don’t trust it quite yet. So we have to cross trust.”
“We have to also turn to those regulators and say, ‘Hey, if you look at social media over the last decade, it’s been kind of a f—ing s— show. It’s pretty bad. We don’t want that in our AI industry. We want to have a good healthy partnership with these moderators, and with these regulators.”
Limitations of LLMs
Jack Hidary, CEO of SandboxAQ, pushed back on the fervor from some tech executives that AI could be nearing the stage where it attains “general” intelligence, adding that systems still have plenty of kinks to iron out.
He said AI chatbots like ChatGPT have passed the Turing test — also known as the “imitation game,” a test developed by British computer scientist Alan Turing to determine whether a machine’s responses can be distinguished from a human’s. But, he added, one big area where AI is lacking is common sense.
“One thing we’ve seen from LLMs [large language models] is they’re very powerful, can write essays for college students like there’s no tomorrow, but it’s difficult to sometimes find common sense, and when you ask it, ‘How do people cross the street?’ it can’t even recognize sometimes what the crosswalk is, versus other kinds of things, things that even a toddler would know, so it’s going to be very interesting to go beyond that in terms of reasoning.”
Hidary does have a big prediction for how AI technology will evolve in 2024: This year, he said, will be the first that advanced AI communication software gets loaded into a humanoid robot.
“This year, we’ll see a ‘ChatGPT’ moment for embodied AI humanoid robots, right, this year 2024, and then 2025,” Hidary said.
“We’re not going to see robots rolling off the assembly line, but we’re going to see them actually doing demonstrations in reality of what they can do using their smarts, using their brains, using LLMs perhaps and other AI techniques.”
“Twenty companies have now been venture backed to create humanoid robots, in addition of course to Tesla, and many others, and so I think this is going to be a convergence this year when it comes to that,” Hidary added.