WHY THIS MATTERS IN BRIEF

Double something and it gets big, but multiply it by a million and you get a fundamentally different kind of AI …

A million here, times a million there, and pretty soon you’re talking about gigantic numbers. So Nvidia’s most recent claims for its Artificial Intelligence (AI) accelerating hardware, both the performance boost it has delivered over the last decade and the similarly huge boost it aims to deliver again over the next 10 years, shouldn’t be sniffed at.

As the performance of AI accelerator chips like Nvidia’s GPUs continues to ramp exponentially, they’ll add even more fuel to the new industry of AI factories springing up all around the world, and unlock even bigger breakthroughs in AI processing power, and therefore in AI agency and capability. At a fundamental level, it means that soon we could see AI models one million times more powerful than existing examples such as ChatGPT and GPT-4, in AI processing terms at least.

On Nvidia’s earnings call last week, CEO Jensen Huang claimed that the company’s GPUs had boosted AI processing performance by a factor of no less than one million over the last 10 years, and then he went further.

“Moore’s Law, in its best days, would have delivered 100x in a decade,” Huang explained. “By coming up with new processors, new systems, new interconnects, new frameworks and algorithms and working with data scientists, AI researchers on new models, across that entire span, we’ve made large language model processing a million times faster.”
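
To put those numbers side by side: doubling every two years works out to roughly a 32x gain per decade, doubling every 18 months to roughly 100x, while a 1,000,000x gain in ten years implies doubling roughly every six months. The sketch below is a minimal back-of-the-envelope check of that arithmetic; the doubling periods are illustrative assumptions, not figures from Nvidia.

```python
import math

# Back-of-the-envelope check on the quoted figures.
# The doubling periods here are illustrative assumptions, not Nvidia data.

YEARS = 10

def total_speedup(doubling_period_years: float, years: float = YEARS) -> float:
    """Total speed-up after `years` if performance doubles every `doubling_period_years` years."""
    return 2 ** (years / doubling_period_years)

def required_doubling_period(target_speedup: float, years: float = YEARS) -> float:
    """Doubling period (in years) needed to reach `target_speedup` within `years` years."""
    return years / math.log2(target_speedup)

print(f"Doubling every 2 years   -> {total_speedup(2.0):>6,.0f}x per decade")   # ~32x
print(f"Doubling every 1.5 years -> {total_speedup(1.5):>6,.0f}x per decade")   # ~100x, Moore's Law "at its best"
print(f"A 1,000,000x decade needs a doubling roughly every "
      f"{required_doubling_period(1e6) * 12:.0f} months")                       # ~6 months
```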

Put another way: no Nvidia, no ChatGPT. The AI language model, which is said to run on around 10,000 Nvidia GPUs and which in recent months has captured the world’s attention by appearing to demonstrate something akin to a consciousness of its own, wouldn’t be here without Jensen. And, of course, the team at OpenAI who actually put it into operation.

If one million times the performance in the last decade isn’t impressive enough, Huang has news for you: Nvidia’s going to do it again.

“Over the course of the next 10 years, I hope through new chips, new interconnects, new systems, new operating systems, new distributed computing algorithms and new AI algorithms and working with developers coming up with new models, I believe we’re going to accelerate AI by another million times,” Huang says.
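
And the two claims compound: a million-fold gain on top of a million-fold gain is a trillion-fold gain over twenty years. A trivial check of that multiplication, purely illustrative and not an Nvidia projection:

```python
last_decade_gain = 1_000_000   # claimed speed-up over the past 10 years
next_decade_goal = 1_000_000   # Huang's stated goal for the next 10 years
print(f"Compound gain over 20 years: {last_decade_gain * next_decade_goal:.1e}x")  # 1.0e+12x
```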

Exactly how one measures these claimed performance boosts isn’t clear, but the result of the next million-times boost in AI processing will be what Huang describes as AI “factories.”

“There was a time when people manufactured just physical goods. In the future, almost every company will manufacture soft goods. It just happens to be in the form of intelligence,” Huang predicts. “I expect to see AI factories all over the world,” he continues. “There will be some that are large, and there are some that will be mega large, and then there’ll be some that are smaller.”

“My expectation is that you’re going to see really gigantic breakthroughs in AI models and the AI platforms in the coming decade. But simultaneously, because of the incredible growth and adoption of this, you’re going to see these AI factories everywhere.”

So, there you have it. AI factories across the world. At what point exactly the AIs they produce become fully conscious and take over that world is anyone’s guess. But an AI model that’s a million times more powerful than ChatGPT? That sounds like sooner rather than later.
