Shortcuts for Mac adds support for LLM actions.

So last year's WWDC was too bold, too loud, too defensive. And, as it turned out, too aggressive in promising features Apple couldn't deliver.
This year’s WWDC strikes me as Apple sticking to its knitting a little more, focused on what it feels it can currently do well. Apple is not a leader in developing AI models, but it does make a bunch of devices that people use every day. Maybe focus on that a bit more?
The sense I get from Apple, based on the keynote and various conversations around Apple Park today, is that the company wants to revert a little to what it used to do quite well. For years now, it's been using AI (what it used to call machine learning) to improve features scattered throughout its operating systems.
Make no mistake—Apple's still committed to AI and to trying to catch up with the rest of the industry. But on Monday, I saw industry commentators complaining that Apple couldn't match Anthropic's Claude editing Rakuten's code base for seven hours, or Google's firehose of new features from I/O, which varied in both weirdness and the likelihood that they'll ship anytime soon.
Trying to make those kinds of commentators happy is what got Apple into this mess. Is anyone frustrated that Apple’s not generating weird AI videos or advanced coding systems all by itself? Apple’s AI stuff needs to get better, but what the company really needs to be is a builder of platforms that are good for users, including those who want to use AI to perform tasks.
On that score, Apple Intelligence does not seem to have faded away. Apple talked about it up front in the keynote, despite the fact that it knew it would be judged for what it did last year and that owning up to its failure to ship certain features (now due by the end of this year) would sting a little.
Apple is adding new capabilities to Visual Intelligence, a feature that has never really seemed essential. Now it'll analyze screenshots of your device's interface, using on-device models to find the most relevant items in an image and process them in interesting ways: creating calendar events based on an image, say, or performing image searches in any app that builds an App Intent to give Visual Intelligence access to its image-search features.
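For developers, that hooks into the existing App Intents framework. Here's a rough sketch of what exposing an app's image search as an intent might look like; the intent name and the `PhotoLibrary` type are purely illustrative, and the Visual Intelligence-specific wiring isn't shown.

```swift
import AppIntents

// Stand-in for the app's existing search layer (purely illustrative).
struct PhotoLibrary {
    static let shared = PhotoLibrary()
    func search(matching query: String) async throws -> [String] {
        [] // the real app would return its matching images here
    }
}

// Hypothetical intent that exposes the app's own image search to the system.
struct PhotoSearchIntent: AppIntent {
    static var title: LocalizedStringResource = "Search Photos"

    @Parameter(title: "Query")
    var query: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Run the app's existing search and report what was found.
        let matches = try await PhotoLibrary.shared.search(matching: query)
        return .result(dialog: "Found \(matches.count) matching images.")
    }
}
```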
Live translation will be a welcome addition, though it's an area where Apple lags behind similar features from Google. Automatic translation in Messages has been a long time coming, while the more intense FaceTime and phone-call audio translations are cooler (and limited to a small palette of languages, for now).
Apple's image-generation model, used in Genmoji as well as Image Playground, has apparently been updated, but in a nod to how far behind the curve it has seemed, you can now also just use ChatGPT's generative models to create images in Image Playground. I think that's actually a good example of how Apple doesn't necessarily need to build every AI feature itself.
Speaking of Apple’s models, they’ve been updated, and Apple is opening them up much more broadly. App developers have direct access to the smaller on-device model, with relatively free rein to build features based on it. Even more impressive is the latitude being given to actions in Shortcuts on the Mac, which can use the on-device model, Apple’s Private Cloud Compute, or even ChatGPT to perform tasks and return data. (It’s interesting that individual Shortcut developers get access to Private Cloud Compute before app developers do.)
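For what it's worth, developer access to the on-device model comes through the new Foundation Models framework. Here's a minimal sketch of how an app might call it, assuming the FoundationModels API as presented at WWDC; the `suggestTitle` helper and its prompt are my own illustration, not anything Apple showed.

```swift
import FoundationModels

// Hypothetical helper that asks the on-device model to summarize some text.
// LanguageModelSession and SystemLanguageModel come from Apple's new
// Foundation Models framework; the function itself is just an illustration.
func suggestTitle(for notes: String) async throws -> String {
    // Bail out gracefully if the on-device model isn't available here.
    guard case .available = SystemLanguageModel.default.availability else {
        return notes
    }
    let session = LanguageModelSession(
        instructions: "You turn rough notes into short, friendly titles."
    )
    let response = try await session.respond(
        to: "Suggest a three-word title for: \(notes)"
    )
    return response.content
}
```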
There are several places where Apple Intelligence has simply been diffused into the system, turning up where you least expect it. A new Reminders item in the Share Sheet will take any text, including the text of a web page, and parse it for possible to-do items using Apple's on-device model; the user can then choose which items to add to Reminders. Reminders has also been updated to use Apple Intelligence to auto-categorize those items.
However, there are a few areas where Apple still seems to be pushing its AI message a little beyond what's required. After seeing a couple of demos of Workout Buddy, a feature that provides AI-generated motivational interjections while you work out, I'm pretty sure I hate it. The feature uses an artificial voice to essentially repeat a load of stats that are already being displayed on the Apple Watch, with the occasional exhortation that you're "doing great" or "crushing it." At first blush, this seems like the kind of feature that could've been built without AI at all.
After last year, Apple could’ve been forgiven for wanting to soft-pedal this year’s Apple Intelligence announcements and regroup. It didn’t do that, nor did it double down on last year. Instead, it’s chosen a middle ground—a bit safe and familiar but also a place where Apple can feel a bit more like itself. In the long run, it needs to get this right. In the short term, maybe it should focus on meeting its users where they are, rather than pretending to be something it’s not.


