WHY THIS MATTERS IN BRIEF
AI now consumes more energy than some small countries, so researchers are looking for more energy-efficient ways to train and run it.
Engineers at Northwestern University in the US have developed a novel transistor design that not only allows the transistors to be miniaturized but also makes Artificial Intelligence (AI) tasks 100 times more energy efficient, according to a press release.
The AI wave has swept the tech industry, and big and small companies are working to incorporate AI-powered features into their products. I’ve previously covered how silicon-based chips and custom chips have powered the rise of AI, and companies like Microsoft have spent millions on such chips to build up cloud-based infrastructure from scratch.
As more businesses jump on the AI bandwagon, the demand for such infrastructure will inevitably grow. However, Mark Hersam, a professor of Materials Science at Northwestern University, points out that this approach is energy-intensive: data is collected, sent to the cloud for analysis, and the results are then sent back to the user. Processing the data locally would be far more energy efficient.
Before analysis can begin, the collected data needs to be sorted into categories for the machine learning process. Since each silicon transistor can perform only one step of that data-processing task, the number of transistors needed grows in proportion to the size of the data set.
Hersam’s team decided to move away from silicon and used two-dimensional molybdenum disulfide and one-dimensional carbon nanotubes to make their miniature transistors. The new transistors were designed so that they could be reconfigured to perform different steps of the analysis.
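To make the contrast concrete, here is a loose software analogy rather than the team's hardware or code: a single reconfigurable unit is reprogrammed to carry out successive steps of a toy signal pipeline, where fixed-function silicon would need a separate element for each step. All the step functions and values below are hypothetical illustrations.

```python
# Toy software analogy (not the actual device): one reconfigurable unit is
# re-programmed for each processing step instead of dedicating a separate
# fixed-function element to every step.
from typing import Callable, List

# Hypothetical pipeline steps a classifier front end might need.
def filter_noise(x: List[float]) -> List[float]:
    # Drop obvious outliers from the raw signal.
    return [v for v in x if abs(v) < 5.0]

def normalise(x: List[float]) -> List[float]:
    # Scale the signal to the range [-1, 1].
    peak = max(abs(v) for v in x) or 1.0
    return [v / peak for v in x]

def extract_feature(x: List[float]) -> float:
    # Reduce the signal to a single summary feature (its mean).
    return sum(x) / len(x)

class ReconfigurableUnit:
    """One unit that takes on a new role for each step, mimicking dynamic reconfigurability."""
    def __init__(self) -> None:
        self.step: Callable = lambda x: x

    def configure(self, step: Callable) -> None:
        self.step = step

    def run(self, data):
        return self.step(data)

signal = [0.1, 0.4, 7.2, -0.3, 0.9]
unit = ReconfigurableUnit()
for step in (filter_noise, normalise, extract_feature):
    unit.configure(step)   # same unit, new role for each step
    signal = unit.run(signal)
print(signal)  # one feature value, produced by a single reconfigurable unit
```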
“The integration of two disparate materials into one device allows us to strongly modulate the current flow with applied voltages, enabling dynamic reconfigurability,” Hersam added in the press release.
Not only did this drastically reduce the number of transistors and the energy consumed, but it also helped miniaturize the analysis to such a degree that it could be integrated into a regular wearable device or edge computing device.
The researchers used publicly available medical datasets to demonstrate the device’s capability. They trained the AI to interpret electrocardiogram (ECG) data, a task that requires intensive training even for medical workers.
The device was then asked to classify 10,000 ECG samples into six commonly seen heartbeat types: normal, atrial premature beat, premature ventricular contraction, paced beat, left bundle branch block beat, and right bundle branch block beat. It did so with 95 percent accuracy.
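For readers more familiar with software, the sketch below shows what such a six-class beat classifier looks like in conventional machine learning terms. The study implements the classification in hardware; the model choice, feature count, and random stand-in data here are purely illustrative assumptions, not the researchers' method or dataset.

```python
# Minimal software sketch of a six-class ECG beat classifier. The features and
# labels below are random placeholders standing in for real beat data, so the
# printed accuracy will hover near chance; with real ECG features a comparable
# pipeline would produce a meaningful score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

CLASSES = [
    "normal",
    "atrial premature beat",
    "premature ventricular contraction",
    "paced beat",
    "left bundle branch block beat",
    "right bundle branch block beat",
]

# Stand-in data: 10,000 beats, each summarised by 16 hypothetical features.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 16))
y = rng.integers(0, len(CLASSES), size=10_000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("accuracy:", clf.score(X_test, y_test))
```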
A task of such complexity would require at least 100 silicon transistors for computation, but the Northwestern researchers achieved the same with just two transistors using their new design.
Hersam also highlights how local processing of data protects patient privacy.
“Every time data are passed around, it increases the likelihood of the data being stolen,” Hersam said. “If personal health data is processed locally — such as on your wrist in your watch — that presents a much lower security risk.”
In the future, the team envisions that their devices will be incorporated into everyday wearables, powering real-time applications without sapping grid power.
The research findings were published in the journal Nature Electronics.