I’ve known Siri, Apple’s voice assistant, for nearly a dozen years, and yet I can’t recall a single meaningful conversation we’ve had. By contrast, ChatGPT and I have known each other for just six months, yet we’ve talked about everything from the meaning of life to planning a romantic dinner for two, and we’ve even collaborated on programming and film projects. I mean, we have a relationship.
Siri’s limitations mean it still can’t carry on a real conversation or engage in a long-running back-and-forth project. For better or worse, the Siri we use today on our iPhones, iPads, MacBooks, Apple Watches, and Apple TVs isn’t much different from the one we first saw back in 2011 on the iPhone 4S.
Six years ago, I wrote about Siri’s first brain transplant, the moment Apple began using machine learning to train Siri and improve its ability to respond to conversational queries. The introduction of machine learning and, soon after, an embedded neural network in the form of Apple’s A11 Bionic chip in the iPhone 8 marked what I thought was a turning point for the first consumer-grade digital assistant.
That programming and silicon helped Siri understand a question and its context, allowing it to move beyond rote answers to intelligent responses to more naturally phrased questions.
Early Siri was no ‘Her’
Not being able to fully converse with Siri might not seem like much of an issue, but we’ve all seen the movie Her, and we understand what we can ultimately expect from our chatbots.
However, it wasn’t until that distant future was snapped into the present by OpenAI’s GPT-3 and ChatGPT that Siri’s shortcomings were thrown into stark relief.
Despite Apple’s best efforts, Siri’s learning has stalled. That’s probably because Siri is still built primarily on machine learning rather than generative AI. It’s the difference between learning and creating.
All of the AI-powered chatbots and image tools we use today create something entirely new from prompts, text, and images. They aren’t answer bots; they’re creation bots.
I doubt any of this is lost on Apple. The question is, what is Apple going to do, and what can it do, about it? I suspect we won’t have to look any further than its next Worldwide Developers Conference (WWDC 2023). We’re all focused on the potential $3,000 mixed-reality headset Apple might show off in June, but the company’s biggest announcements will almost certainly revolve around artificial intelligence.
“Apple has to be under incredible pressure now that Google and Microsoft are rolling out their own natural language solutions,” Moor Insights CEO and chief analyst Patrick Moorhead told me via Twitter DM.
A more talkative Siri
As reported by 9to5Mac, Apple may actually, finally, be updating its language generation for Siri (codenamed “Bobcat”). Note that this is not the same as generative AI. My guess is that Siri will get a little better at casual banter, and I don’t expect much more than that.
Unfortunately, Apple’s own ethos may prevent it from ever catching up with GPT-3, let alone GPT-4. Industry watchers aren’t exactly expecting a breakthrough moment.
“I think what they’re doing in AI won’t necessarily be a leap as much as a more ethically oriented and measured approach to AI in Siri. Apple loves, lives, and dies by its privacy commitments, and I would expect no less in how it delivers a Siri that relies on AI to a greater degree,” Creative Strategies principal analyst Tim Bajarin wrote to me in an email.
Privacy above all
Apple’s unwavering commitment to user privacy could get in the way of its work on truly generative AI. Unlike Google and Microsoft’s Bing, it doesn’t have a massive search-engine data store to draw from, nor does it train its AI on the vast ocean of internet data. Apple does its machine learning on the device. Your iPhone and Siri know what they know about you based on what’s on your phone, not on what Apple could learn from you and its 1.5 billion iPhone users worldwide. Sure, developers can use Apple’s machine learning tools to build AI models and integrate them into their apps, but they can’t simply collect your data to help Apple deliver a smarter Siri.
As I wrote in 2016: “It’s also interesting to consider how Apple could deliberately disable its AI efforts. Data about your buying habits in iTunes, for example, isn’t shared with any of Apple’s other systems and services.”
Apple’s homegrown, on-device approach could hinder its AI efforts. As Moorhead told me, “I see most of the action on device and in the cloud. Apple is strong on device but weak in the cloud, and that’s where I think the company will struggle.”
As I see it, Apple has a choice: give up a bit of user privacy to finally turn Siri into the voice assistant we’ve always wanted, or stay the course with incremental AI updates that improve Siri but never let it rival ChatGPT.