WHY THIS MATTERS IN BRIEF

With the cost of training the world’s largest AI models expected to top $200 million and even $1 billion, this is not a game for startups.



The rise of multimodal foundation models, increasing investment in generative AI, an influx of regulations, and shifting opinions on Artificial Intelligence (AI) around the globe are all discussed in the latest Stanford Institute for Human-Centered Artificial Intelligence (HAI) 2024 AI Index, a 500-page report covering AI development and trends. For this year’s report, the seventh HAI has published, the Institute said it broadened its scope to include more original data than ever, including new estimates of AI training costs and an entirely new chapter dedicated to AI’s impact on science and medicine.


Overall, the report paints a picture of a rapidly growing and increasingly complex (and expensive) AI landscape dominated by commercial entities, particularly US tech giants. The number of new LLMs released globally in 2023 doubled compared to the previous year, according to the report. Investment in generative AI also skyrocketed, and so did global mentions of AI in legislative proceedings and regulations – in the US alone last year, the total number of AI-related regulations increased by 56.3%.


The Future of Generative AI and AI, by Futurist Keynote Matthew Griffin


One of the biggest takeaways from the report, however, is the dominance of US tech companies. While two-thirds of the models released last year were open source, the highest-performing models came from commercial entities with closed systems. Private industry accounted for 72% of the foundation models released last year, putting out 108 compared with 28 from academia and just four from government. Google alone released a whopping 18 foundation models in 2023; for comparison, OpenAI released seven, Meta released 11, and Hugging Face released four. Overall, US companies released 62 notable machine learning models last year, compared with 15 from China, eight from France, five from Germany, and four from Canada. But despite all this data, it’s the cost of training these models that is catching more and more people’s attention.


“The training costs of state-of-the-art AI models have reached unprecedented levels,” it reads, citing the exponential increase as a reason academia and governments have been edged out of AI development.

According to the report, Google’s Gemini Ultra cost an estimated $191 million worth of compute to train, and OpenAI’s GPT-4 cost an estimated $78 million, which is actually slightly lower than some previous estimates of how much that model cost – now imagine how much more it’d be if these companies had to pay for all the training data they scraped from the internet!

For comparison, the report notes that the original 2017 Transformer model, which introduced the architecture underlying all of today’s LLMs, cost only around $900.


On the achievements and potential of AI, the report discusses how AI systems have surpassed human performance on several benchmarks – including some in image classification, visual reasoning, and English understanding – and how AI is turbocharging scientific discovery. While AI started to accelerate scientific discovery in 2022, 2023 saw the launch of even more significant science-related AI applications, the report says. Examples include Google DeepMind’s GNoME, an AI tool that facilitates the process of materials discovery (although some chemists have accused the company of overstating the model’s impact on the field); EVEscape, an AI tool developed by Harvard researchers that can predict viral variants and enhance pandemic prediction; and AlphaMissense, which assists in AI-driven mutation classification.

AI systems have also demonstrated rapid improvement on the MedQA benchmark test for assessing AI’s clinical knowledge. GPT-4 Medprompt, which the report calls “the standout model of 2023” in the clinical area, reached an accuracy rate of 90.2% – marking a 22.6% increase from the highest score in 2022. What’s more, the FDA is approving more and more AI-related medical devices, and AI is increasingly being used for real-world medical purposes.


Of course, AI progress is not a straight line, and there are many significant challenges, lingering questions, and legitimate concerns.

“Robust and standardized evaluations for LLM responsibility are seriously lacking,” the report authors wrote, citing how leading AI developers primarily test their models against different responsible AI benchmarks, complicating efforts to systematically compare the risks and limitations of the top models.

The report highlights many other issues surrounding the technology: Political deepfakes are simple to create but difficult to detect; the most extreme AI risks are difficult to analyze; there is a lack of transparency around the data used to train LLMs and around key aspects of their specific designs; researchers are finding more complex vulnerabilities in LLMs; ChatGPT is politically biased toward Democrats in the US and the Labour Party in the UK; and LLMs can output copyrighted material. Additionally, AI is leaving businesses vulnerable to new privacy, security, reliability, and legal risks, and the number of incidents involving the misuse of AI is rising rapidly, with 2023 seeing a 32.3% increase over 2022.


Clocking in at over 500 pages, the report is a doozy. But it’s unquestionably the deepest and most thorough overview of the current state of AI available at the moment. If you want to dive deeper but don’t have time for the full report, HAI has also published some handy charts and will be presenting the findings and answering questions in a webinar later this year.
