If you feel inundated by artificial intelligence, you’re not alone. Developers can’t stop talking about the potential of generative AI (GenAI) to change the way they code. CEOs can’t stop talking about it, either: AI-related talk on earnings calls has “gone vertical” of late. There’s so much AI that we’ve even started training the large language models (LLMs) behind GenAI applications on the output of those applications. (This won’t end well.)
But amid all this endless hype and hope around AI, one company recently managed to talk for over two hours in a global keynote without mentioning AI once, even as the impact of AI was omnipresent in the products on parade. That company is Apple, and its syntactic abstinence is a lesson for all in how to properly use AI.
Talk is cheap
Though AI talk has hit overdrive, AI hype isn’t exactly new, as Gartner Distinguished VP Analyst Mark Raskino points out. Chatter about AI jumped way up in the early 1980s and hasn’t slowed since. What’s different now, however, is how pervasive AI has become, both inside and outside tech circles. For some, things like ChatGPT are a “vehicle for viral hype”—a vehicle that keeps picking up speed.
Such speed comes with consequences. For example, every LLM needs data, and rich sources of data such as the Internet Archive, Stack Overflow, Reddit, and more have seen massive surges in traffic, leading to the crash of the Internet Archive and legal blockades from Reddit and others. Meanwhile, some are fighting back against alleged copyright infringement in the training data used to feed applications like GitHub Copilot. It’s all a bit messy.
Indeed, as James Penny, TAM Asset Management’s chief investment officer, suggests, “Companies that even mention the word AI in their earnings are seeing boosts to their share price, and that smells very much like the dot-com era.” Although it seems a bit silly given just how raw things like GenAI remain, there’s evidence that AI has driven the boom in the stock market without actually doing much to drive a commensurate boom in corporate earnings.
Dot-com era, indeed.
Meanwhile, one company keeps making big investments in AI without making a big deal about AI. That company is Apple, and it points to more responsible and productive use of AI than most companies have mustered.
Behind the scenes
Apple isn’t new to AI. The company has long made AI integral to its products, through Siri and in other, less visible (or audible) ways. Not surprisingly, Apple has been hiring AI talent for years, and that hiring has become more noticeable of late. The company has a careers landing page dedicated to AI, with the headline “Machine Learning and AI: The work is innovative. The experience is magic.”
On that page, Apple offers the secret to how it uses AI: “The people working here in machine learning and AI are building amazing experiences into every Apple product, allowing millions to do what they never imagined.” Apple’s focus, in other words, is on how customers experience AI, not on the AI itself. This has long been the company’s approach: make technology integral to the customer’s experience without making the technology the focal point of that experience. The technology is meant to be essentially invisible. If you notice it, Apple has failed.
On stage at its annual Worldwide Developers Conference, Apple referred to AI as magic; the word came up 13 times. (Apple tends to overuse the word magic the way most companies overuse the term AI.) Speaking of the new Apple Vision Pro, Apple executive Alan Dye gushed, “It’s remarkable and it feels like magic.” He didn’t need to go into the details of the AI and other tech that feed into that magic. The point is the experience, not the ingredients.
This is a good lesson for every company.
First, although GenAI is the current “it” technology, it isn’t always the right approach. Diffblue’s Mathew Lodge recently argued that reinforcement learning trumps GenAI for some use cases. And long before GenAI became the topic du jour, regression analysis or other simpler methods were the sensible first stop before boarding the machine learning bus.
More recently, an industry friend stressed to me, “You can use an LLM for many things, but if your output is structured data rather than unstructured, it can be a very inefficient way to do it.” It’s an intriguing point: some of the GenAI services the cloud vendors are rolling out duplicate capabilities that dedicated models have delivered for years, and delivered far more efficiently. As he explains, developers are enthralled by GenAI because it’s probabilistic in nature; it isn’t trying to find The One True Answer, but rather a reasonable answer given patterns in the training data. That can be good, but it is “like searching without indexes. It doesn’t scale well.”
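To make his structured-data point concrete, here is a minimal Python sketch. The invoice format, field names, and values are hypothetical, invented purely for illustration: a few lines of deterministic parsing produce the same structured record you could prompt an LLM for, with no model call, no tokens, and the same answer every time.

```python
import re

# Hypothetical example: extract structured fields from semi-structured text.
# An LLM could be prompted to do this, but it would sample a *probable*
# answer at far greater cost; a dedicated parser is cheap and deterministic.
INVOICE = "Invoice #4821 issued 2023-06-12 to Acme Corp for $1,920.00"

PATTERN = re.compile(
    r"Invoice #(?P<number>\d+) issued (?P<date>\d{4}-\d{2}-\d{2}) "
    r"to (?P<customer>.+?) for \$(?P<amount>[\d,.]+)"
)

match = PATTERN.search(INVOICE)
if match:
    record = {
        "number": int(match["number"]),
        "date": match["date"],
        "customer": match["customer"],
        "amount": float(match["amount"].replace(",", "")),
    }
    print(record)
    # {'number': 4821, 'date': '2023-06-12', 'customer': 'Acme Corp', 'amount': 1920.0}
```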
This is not to say GenAI is bad. It’s just not the best fit for a range of use cases. (And even when it’s a great approach, it still requires a lot of resources.) For some use cases, old-fashioned machine learning works best: train a model to recognize patterns in historical data, then run inference to score incoming data against those patterns. GenAI, again, is all about creating things that look like the data the LLM was trained on, producing output that is plausible but not necessarily right. Both are interesting; neither is always the right tool.
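A minimal sketch of that train-then-infer loop, assuming scikit-learn and a toy fraud-detection dataset invented for illustration:

```python
# A toy train-then-infer loop using scikit-learn. The data is invented for
# illustration: each row is (transaction amount, hour of day), and the label
# marks whether the transaction was fraudulent.
from sklearn.linear_model import LogisticRegression

X_train = [[12.0, 14], [8.5, 10], [950.0, 3], [720.0, 2], [15.0, 16], [880.0, 4]]
y_train = [0, 0, 1, 1, 0, 1]  # 0 = legitimate, 1 = fraudulent

# Training: the model learns the patterns in historical data.
model = LogisticRegression().fit(X_train, y_train)

# Inference: new transactions are scored against those learned patterns.
# Nothing is generated; the model classifies, it doesn't create.
print(model.predict([[900.0, 3], [10.0, 13]]))  # expected: [1 0]
print(model.predict_proba([[900.0, 3]]))        # class probabilities
```

The contrast with GenAI is the output: a classifier returns a label and a probability, not newly generated data that merely resembles its training set.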
Second, no matter which approach to AI a company chooses (and the reality is that most enterprises will want to embrace a range of approaches because they’ll have a range of use cases), the AI should never be the point. AI is a means, not an end. As Apple showed, it’s very possible to sell an AI-infused vision without making AI the point of the pitch. No one really cares how cool the AI is. They care about the resultant experience.
So, sell that AI experience, not the AI.