Huge computing power ‘can deliver human-level AI in 5 years’

In the race to build a machine with human-level intelligence, it seems, size really matters.

“We think the most benefits will go to whoever has the biggest computer,” said Greg Brockman, chairman and chief technology officer of OpenAI.

The San Francisco-based AI research group, set up four years ago by tech industry luminaries like Elon Musk, Peter Thiel and Reid Hoffman, has just thrown down a challenge to the rest of the AI world.

Late last month, it raised $1bn from Microsoft to speed its pursuit of the Holy Grail of AI: a computer capable of so-called artificial general intelligence, a level of cognition that would match its makers, and which is seen as the final step before the advent of computers with superhuman intelligence.

According to Mr Brockman, that money — a huge amount for a research organisation — will be spent “within five years, and possibly much faster”, with the aim of building a system that can run “a human brain-sized [AI] model”.

Whether a computer that matches the neural architecture in the human brain would deliver a comparable level of intelligence is another matter. Mr Brockman is wary about predicting precisely when AGI will arrive, and said that it would also require advances in the algorithms to make use of the massive increase in computing power.

But, speaking of the vast computing power that OpenAI and Microsoft hope to put at the service of its AI ambitions within five years, he added: “At that point, I think there’s a chance that will be enough.”

OpenAI’s huge bet points to a parting of the ways in the artificial intelligence world after a period of rapid advance. Deep learning systems, which use artificial neural networks modelled on one idea of how the human brain works, have provided most of the breakthroughs that have put AI back at the centre of the tech world. OpenAI argues that, with enough computing power behind them, there is a good chance that these networks will evolve further, right up to the level of human intelligence.

But many AI researchers believe that deep learning on its own will never become much more than a form of sophisticated pattern-recognition — perfect for facial recognition or language translation, but far short of true intelligence.

Some of the most ambitious research groups — including DeepMind, the British AI research company owned by Alphabet — believe that teaching computers new types of reasoning and symbolic logic will be needed to complement the neural networks, rather than just building bigger computers.

“If we allocated $100m for compute, what could we do? We’re thinking about it, and you can imagine other people are thinking about it as well,” said Oren Etzioni, the head of the Allen Institute for Artificial Intelligence, one of the best-funded American AI research groups. But he added: “To reach the next level of AI, we need some breakthroughs. I’m not sure it’s simply throwing more money at the problem.”

Others are more forthright. Asked whether bigger computers alone will deliver human-level AI, Stuart Russell, a computer science professor at the University of California, Berkeley, points to the verdict in his forthcoming book on the subject: “Focusing on raw computing power misses the point entirely . . . We don’t know how to make a machine really intelligent — even if it were the size of the universe.”

Even the possibility that OpenAI may be on the right track, though, has been enough to attract a huge cash injection from the world’s most valuable company, setting up a race to build far more advanced hardware systems for AI.

Mr Brockman calls it “a public benefit Apollo program to build general intelligence”. That reflects the mission set by OpenAI’s founders, to build an AI whose benefits are not limited to one corporation or individual government.

It could also create unmatched wealth. Pointing to the stock market value of today’s leading tech companies, he said: “That’s the value we produce with computers that aren’t very smart. Now imagine we succeed in building the kind of technology we’re talking about, an artificial general intelligence — that company is going to be by a huge margin unprecedented in history, the number one.”

OpenAI’s bet is that, as computer hardware gets more powerful, the learning algorithms used in deep learning systems will evolve, developing capabilities that today’s coders could never hope to program into them directly.

It is a controversial position. Critics like Mr Russell argue that simply throwing more computing power at imperfect algorithms means “you just get the wrong answer more quickly.” Mr Brockman’s response: “You can get qualitatively different outcomes with increased computation.”

He claims that some of the tests carried out by OpenAI in its four-year history hint at the kind of advances that could come from massive increases in hardware.

Two years ago, for instance, the researchers reported the results of a system that read customer reviews on Amazon and then used statistical techniques to predict the next letter. The system went further, according to OpenAI, learning for itself the difference between positive and negative sentiment in the reviews — a level of understanding beyond anything that might have been expected of it.
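The training signal described here — predicting the next character in raw text — can be illustrated with a toy sketch. OpenAI's actual system was a large neural network trained on millions of reviews; the bigram counter below is only a minimal, hypothetical illustration of the same next-letter objective, not the method the article describes.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters follow it."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    """Predict the most frequent character seen after `ch` in training."""
    if ch not in counts:
        return None
    return counts[ch].most_common(1)[0][0]

# Toy "review" corpus for illustration only
reviews = "great product. great price. great service."
model = train_bigram(reviews)
print(predict_next(model, "g"))  # 'r' — 'g' was always followed by 'r'
```

A statistical model of this kind only learns surface regularities; OpenAI's claim was that, at sufficient scale, the same objective yielded features (such as sentiment) that were never explicitly labelled.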

A far bigger language system released this year, called GPT-2, went a step further, said Mr Brockman, developing a degree of semantic understanding from applying the same kind of huge statistical analysis.

One of OpenAI’s most recent experiments — an AI system that beat a top human team at the video game Dota 2 — also showed that today’s most advanced AI systems can perform well at games that are far closer to the real world than board games like chess.

That echoed work by DeepMind on playing the game StarCraft. According to Mr Brockman, the OpenAI system taught itself to operate at a higher level of abstraction, setting an overall goal and then “zooming in” on particular tasks as needed — the kind of planning that is seen as a key part of human intelligence.

Even many of the sceptics, who are cautious about OpenAI’s zealous insistence that a single AI technique will be sufficient to replicate human intelligence, seem wary of writing off its claims completely. “It’s fair to say that deep learning has been a paradigm shift [in AI],” said Mr Etzioni. “Can they achieve something like that again?”

Bringing in Microsoft to bankroll the effort represents a change in direction for the research group as it tries to accelerate the move to AGI. Most of the $1bn investment will return to the software company in the form of payments to use its Azure cloud computing platform, with Microsoft working on developing new supercomputing capabilities to throw at the effort.

Mr Brockman denies that this is a deviation from OpenAI’s goal of staying above the corporate fray. Microsoft, he said, would be limited to the role of “investor and a strategic partner in building large-scale supercomputers together”.

The software company’s investment will give it a large minority stake in OpenAI’s for-profit arm, as well as a seat on its board. Like all of the organisation’s equity investors, its potential returns have been capped at a fixed level, which has not been disclosed.

If OpenAI’s work ever produces the kind of huge wealth that Mr Brockman predicts, most of it will flow to the group’s non-profit arm, reflecting its promise to use the fruits of advanced computer intelligence for the benefit of all humanity.

AI curve steeper than Moore’s Law

The tech industry is accustomed to riding the curve of Moore’s Law, which describes the way that computing power roughly doubles every two years. But OpenAI is counting on a much more powerful exponential force to quickly take the capacity of its AI systems to a level that seems almost unimaginable today.

The research group calculates that since the tech industry woke up to the potential of machine learning seven years ago, the amount of processing capacity being applied to training the biggest AI models has been increasing at five times the pace of Moore’s Law. 

That makes today’s most advanced systems 300,000 times more powerful than those used in 2012. The advance reflects the amount of money now being poured into advanced AI, as well as the introduction of parallel computing techniques that make it possible to crunch far more data.
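The article's figures can be checked with back-of-the-envelope arithmetic: a 300,000-fold increase over seven years implies a doubling time of under five months, against roughly 24 months for Moore's Law. The sketch below works out that implied doubling time and extrapolates the same trend five more years (the doubling time is an inference from the article's numbers, not a figure it states):

```python
import math

GROWTH = 300_000   # reported compute growth for the biggest AI models, 2012-2019
YEARS = 7

# Implied doubling time in months: 300,000x is about 18 doublings
doublings = math.log2(GROWTH)
doubling_months = YEARS * 12 / doublings

# Carrying the same exponential trend forward five more years
further_growth = GROWTH ** (5 / YEARS)

print(f"implied doubling time: ~{doubling_months:.1f} months")
print(f"further growth over 5 more years: ~{further_growth:,.0f}x")
```

By this arithmetic the trend doubles compute roughly every 4.6 months, about five times the pace of Moore's Law, and five more years would multiply today's figure by several thousand again.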

Mr Brockman said OpenAI was counting on this exponential trend being carried forward another five years — something that would produce results that, he admits, sound “quite crazy”.

As a comparison, he said that the past seven years of advances would be like extending the battery life of a smartphone from one day to 800 years: another five years on the same exponential curve would take that to 100m years.

Today’s most advanced neural networks are roughly on a par with the honey bee. But with another five years of exponential advances, OpenAI believes it has a shot at matching the human brain.

Via Financial Times
