IT Brief UK - Technology news for CIOs & IT decision-makers

Apple expands Apple Intelligence with developer access & new AI


Apple has announced a range of new features for Apple Intelligence, expanding its functionality across iPhone, iPad, Mac, Apple Watch, and Apple Vision Pro.

Apple Intelligence now includes Live Translation, upgrades to visual intelligence, and enhancements to Image Playground and Genmoji. Shortcuts can tap directly into Apple Intelligence, and developers are being given access to the on-device large language model at the core of the technology. This model is designed to power private, intelligent experiences within third-party apps even when offline.

Craig Federighi, Senior Vice President of Software Engineering at Apple, said:

"Last year, we took the first steps on a journey to bring users intelligence that's helpful, relevant, easy to use, and right where users need it, all while protecting their privacy. Now, the models that power Apple Intelligence are becoming more capable and efficient, and we're integrating features in even more places across each of our operating systems. We're also taking the huge step of giving developers direct access to the on-device foundation model powering Apple Intelligence, allowing them to tap into intelligence that is powerful, fast, built with privacy, and available even when users are offline. We think this will ignite a whole new wave of intelligent experiences in the apps users rely on every day. We can't wait to see what developers create."

Apple Intelligence features are now available for testing and will reach users with compatible devices and supported languages later this year. By the end of the year, language support will expand to include Danish, Dutch, Norwegian, Portuguese (Portugal), Swedish, Turkish, Chinese (traditional), and Vietnamese.

Live Translation

Live Translation is integrated into Messages, FaceTime, and the Phone app, enabling users to communicate across languages. Apple states that this function is handled entirely on device to maintain user privacy. Messages can be automatically or manually translated, FaceTime calls offer live captions with translated text while hearing the original speaker's voice, and translations on phone calls are spoken in real time.

Image Playground and Genmoji

Users can generate Genmoji from descriptions, mix emoji, and adjust personal attributes in created images. Image Playground introduces new art styles via ChatGPT, including oil painting and vector art. Descriptions can be sent to ChatGPT for custom image creation, and nothing is shared with ChatGPT without the user's permission.

Visual intelligence expansion

Visual intelligence now lets users search and act on anything displayed on their device screens. This includes identifying objects, searching for similar items online, asking ChatGPT context-based questions, and extracting event details to prepopulate calendar entries. Users can access visual intelligence by pressing the same buttons used to take a screenshot, after which options appear to save, share, or explore further.

Apple Watch integration

Apple Watch introduces Workout Buddy, a feature that combines workout data and fitness history to produce personalised motivational feedback. This feedback uses a generative voice based on Fitness+ trainers and is processed locally for privacy. Workout Buddy will be available in English across several workout types on Apple Watch, and requires Bluetooth headphones and an Apple Intelligence-supported iPhone nearby.

Developer access to on-device models

The Foundation Models framework is being opened for developers, who can now build intelligent and private app experiences that work offline at no additional AI inference cost. The framework provides native Swift support and includes features such as guided generation and tool calling. Example uses include generating personalised quizzes in education apps or enabling natural language search in outdoor applications.
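As a rough illustration of the education-app use case above, the sketch below shows what a request to the on-device model might look like through the Foundation Models framework. It is a minimal sketch based on the framework as presented by Apple: the `QuizQuestion` type and its fields are hypothetical, and exact method signatures may differ from the shipping API.

```swift
import FoundationModels

// Guided generation: marking a type as @Generable asks the model to
// return a typed Swift value rather than free-form text.
// QuizQuestion is a hypothetical type for this sketch.
@Generable
struct QuizQuestion {
    @Guide(description: "A short multiple-choice question")
    var question: String
    @Guide(description: "Four answer options")
    var options: [String]
    var correctIndex: Int
}

// A session wraps the on-device model: no network round trip,
// no per-request inference cost, and prompts stay on device.
let session = LanguageModelSession(
    instructions: "You write quiz questions for a geography study app."
)

let response = try await session.respond(
    to: "Write one question about European capitals.",
    generating: QuizQuestion.self
)
print(response.content.question)
```

Because generation runs entirely on device, the same call works offline, which is the property Apple emphasises for third-party experiences.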

Shortcuts enhancements

Shortcuts now benefit from a wider range of intelligent actions. Users can summarise text, create images, and use Apple Intelligence-powered models—either on-device or via Private Cloud Compute—for responses within shortcuts, while safeguarding data privacy. ChatGPT can also be integrated to provide response generation in shortcuts routines.
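Third-party actions sit alongside these intelligent actions in Shortcuts, where users can chain them with summarisation or model responses. The sketch below shows the standard way a developer exposes such an action via the App Intents framework; `LogHikeIntent` and its parameter are hypothetical names invented for this example, not part of the announcement.

```swift
import AppIntents

// A hypothetical App Intent. Once the app ships, this action appears
// in the Shortcuts app, where users can combine it with Apple
// Intelligence actions such as text summarisation.
struct LogHikeIntent: AppIntent {
    static var title: LocalizedStringResource = "Log a Hike"

    @Parameter(title: "Trail name")
    var trailName: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // App-specific work would happen here; this sketch just
        // returns a summary string for the next step in the shortcut.
        return .result(value: "Logged hike on \(trailName)")
    }
}
```

The returned string can then feed an Apple Intelligence-powered action later in the same shortcut, on device or via Private Cloud Compute.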

Additional features

Other improvements include automatic identification of relevant actions in Reminders, email order tracking in Wallet, poll suggestion capabilities in Messages, customisable chat backgrounds, and enhanced features in Photos and Mail. Writing Tools, Clean Up in Photos, Genmoji, Image Playground, Mail summaries, Smart Reply, improved Siri, natural language search, summarisation in Notes and transcriptions, and notification improvements remain available as part of Apple Intelligence.

Privacy measures

Apple highlights that Apple Intelligence is designed with privacy at its core, using on-device processing for most features. For more complex requests that need larger models, Private Cloud Compute keeps user data private, ensuring it is never stored or made accessible to Apple. Independent experts can inspect the code deployed on Apple silicon servers to verify that these privacy commitments are upheld.

Apple Intelligence features are available for testing through the Apple Developer Program, with public beta access coming next month. The new features will be supported on all iPhone 16 models, iPhone 15 Pro, iPhone 15 Pro Max, iPad mini (A17 Pro), and iPad and Mac models with M1 and later. Availability of features and language support may vary by region and device.
