AI marketing is a con – especially when it comes to CPUs
Zak Storey
Sun, October 20, 2024 at 5:00 PM GMT+7
Artificial intelligence has been making its presence felt in ever more areas of our lives, certainly since the launch of ChatGPT. Depending on your view, it’s either the big bad bogeyman that’s taking jobs and committing copyright infringement at scale, or a gift with the potential to catapult humanity into a new age of enlightenment.
What many have achieved with the new tech, from Midjourney and LLMs to smart algorithms and data analysis, is beyond radical. Like most of the silicon-based breakthroughs that came before it, it’s a technology with enormous potency. It can do a lot of good but also, many fear, a lot of bad, and those outcomes depend entirely on how it’s manipulated, managed, and regulated.
It’s not surprising then, given how rapidly AI has forced its way into the zeitgeist, that tech companies and their sales teams are leaning into the technology just as hard, stuffing its various iterations into their latest products, all with the aim of encouraging us to buy their hardware.
Check out this new AI-powered laptop, that motherboard that uses AI to overclock your CPU to the limit, those new webcams featuring AI deep-learning tech. You get the point. You just know that from Silicon Valley to Shanghai, shareholders and company execs are asking their marketing teams “How can we get AI into our products?” in time for the next CES or the next Computex, no matter how modest the value actually is for us consumers.
My biggest bugbear comes in the form of the latest generation of CPUs being launched by the likes of AMD, Intel, and Qualcomm. Now, these aren’t bad products, not by a long shot. Qualcomm is making huge leaps and bounds in the desktop and laptop chip markets, and the performance of both Intel and AMD’s latest chips is nothing if not impressive. Generation on generation, we’re seeing higher performance scores, better efficiency, broader connectivity, lower latencies, and ridiculous power savings (here’s looking at you, Snapdragon), among a whole slew of innovative design changes and choices. To most of us mere mortals, it’s magic way beyond the basic 0s and 1s.
Despite that, we still get AI slapped onto everything regardless of whether or not it’s actually adding anything useful to a product. We have new neural processing units (NPUs) added to chips: co-processors designed to accelerate the matrix math that underpins AI inference. These then go into low-powered laptops, allowing them to use AI features such as Microsoft’s Copilot assistant and tick that AI checkbox, as if local silicon makes any difference to a predominantly cloud-based solution.
The thing is, though, CPU performance, when it comes to AI, is insignificant. Seriously insignificant, to the point that it’s not even mildly relevant. It’s like trying to launch NASA’s James Webb Space Telescope with a bottle of Coke and some Mentos.
[Image: The Asus Vivobook S 15 Copilot+ in silver, pictured on a wooden desk]
Emperor’s new clothes?
I’ve spent the last month testing a raft of laptops and processors, specifically in regard to how they handle artificial intelligence tasks and apps. Using UL’s Procyon benchmark suite (from the makers of 3DMark), you can run its Computer Vision inference test, which spits out a score for each component. Intel Core i9-14900K? 50. AMD Ryzen 9 7900X? 56. Ryzen 9 9900X? 79 (that’s a 41% gen-on-gen performance increase, by the way, which is seriously huge).
Here’s the thing though: chuck a GPU through that same test, such as Nvidia’s RTX 4080 Super, and it scores 2,123. That’s a 2,587% performance increase compared to that Ryzen 9 9900X, and that’s not even using Nvidia’s own TensorRT SDK, which scores even higher than that.
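If you want to sanity-check those percentages, the arithmetic is simple; here’s a minimal sketch using the scores quoted above (the scores are from the article; the function name is just illustrative):

```python
# Back-of-the-envelope check of the Procyon Computer Vision score gaps
# quoted above. Scores are as reported in the article.

def pct_increase(old: float, new: float) -> float:
    """Percentage increase going from an old score to a new score."""
    return (new - old) / old * 100

scores = {
    "Intel Core i9-14900K": 50,
    "AMD Ryzen 9 7900X": 56,
    "AMD Ryzen 9 9900X": 79,
    "Nvidia RTX 4080 Super": 2123,
}

# Gen-on-gen CPU uplift: Ryzen 9 7900X -> Ryzen 9 9900X
cpu_uplift = pct_increase(scores["AMD Ryzen 9 7900X"],
                          scores["AMD Ryzen 9 9900X"])
print(f"9900X over 7900X: {cpu_uplift:.0f}%")            # ~41%

# GPU vs the fastest CPU in the list
gpu_uplift = pct_increase(scores["AMD Ryzen 9 9900X"],
                          scores["Nvidia RTX 4080 Super"])
print(f"RTX 4080 Super over 9900X: {gpu_uplift:.0f}%")   # ~2,587%
```

Both figures line up with the numbers in the text, which is rather the point: the gap between a flagship CPU and a single consumer GPU isn’t a margin, it’s a chasm.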
The simple fact of the matter is that AI demands parallel processing performance like nothing else, and nothing does that better than a graphics card right now. Elon Musk knows this – he’s just installed 100,000 Nvidia H100 GPUs in xAI’s latest AI training system. That’s more than $1 billion worth of graphics cards in a single supercomputer.
Obscured by clouds
To add insult to injury, the vast majority of popular AI tools today require cloud computing to fully function anyway.
LLMs (large language models) like ChatGPT and Google Gemini require so much processing power and memory that running them on a local machine is out of the question. Even Adobe’s Generative Fill and AI smart filters in the latest versions of Photoshop rely on cloud computing to process images.
It’s simply not feasible to run the vast majority of today’s popular AI programs on your own home machine. There are exceptions, of course: certain AI image-generation tools are far easier to run locally, but even then, you’re better off using cloud computing in 99% of use cases.
The one big exception to this rule is localized upscaling and super-sampling. Nvidia’s DLSS and Intel’s XeSS, and to a lesser extent AMD’s own FSR (although FSR isn’t built on deep-learning models; it runs on standard shader hardware, meaning you don’t need dedicated AI componentry), are fantastic examples of localized AI done well. Otherwise, though, you’re basically out of luck.
Yet still, here we are. Another week, another AI-powered laptop, another AI chip, much of which, in my opinion, amounts to a lot of fuss about nothing.