The Apple Worldwide Developers Conference (WWDC) in San Jose, California has come and gone, and Somo was again in attendance to get the first word on the platform advancements coming out of Cupertino. This year's announcements put a clear focus on improving and preserving user experience. That's not to say there were no exciting innovations on display, but the emphasis on performance and incremental feature development was a welcome change from the usual whiz and bang.
Some general observations before we dig into the details, though. Keeping with the trend, the conference attendees were noticeably more international than last year. Common areas were full of chatter in a wide variety of languages, with snippets of German and Mandarin almost as common as snippets of Swift. Looming trade wars have clearly not deterred Apple from building a deeper developer community outside of the US. That said, American informality and cheerfulness were very much on show. At WWDC, you are never far from a smiling, helpful face, an attitude universal among speakers, event staff, and Apple employees. Perhaps one of the best reasons to attend is access to helpful and very knowledgeable Apple engineers who are willing to deep dive into specific questions in the rolling lab sessions.
Load times have been reduced drastically across the OS. Apple said that iOS 12 will let users swipe to the camera app from the lock screen 70% faster, display the keyboard 50% faster and launch apps up to twice as fast. Much of this is achieved through smarter CPU ramping: instead of gradually (and slowly) increasing the CPU clock speed based on demand, iOS 12 boosts the processor instantly to its highest state to get maximum performance, then drops it back to the minimum as soon as it is no longer required to preserve battery. All of this can be done efficiently thanks to Apple's tight integration of hardware and software.
As a consequence of these improvements, plus some animation optimisations (e.g. shorter transition durations), the system now feels much faster and smoother in general, and old devices have a new lease of life, with many performing as well as they did when new.
Machine learning is very powerful and has been improving drastically in the last few years, so every major tech company is integrating it into its products. Apple has been using it for a while in its own apps, such as Siri, and last year it introduced Core ML, a framework that makes it easy for developers to integrate machine learning into their apps. Core ML runs machine learning models on the device, making inference faster and more private.
This year they've introduced Core ML 2, which is built on top of low-level technologies like Metal and Accelerate, meaning it makes use of the CPU and GPU to run faster and more efficiently. Core ML 2 lets you integrate a broad variety of machine learning model types into your app. In addition to supporting extensive deep learning with over 30 layer types, it also supports standard models such as tree ensembles, SVMs, and generalized linear models.
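As a rough sketch of what on-device integration looks like, the snippet below loads a compiled Core ML model and asks the framework to use the GPU and Neural Engine where available. The model name "Classifier" is a hypothetical example; in practice Xcode compiles your `.mlmodel` into a `.mlmodelc` bundle and generates a typed wrapper class for it.

```swift
import CoreML
import Foundation

// Minimal sketch: load a compiled Core ML model with a configuration
// that prefers the fastest available compute units.
// "Classifier.mlmodelc" is a hypothetical model bundled with the app.
func loadClassifier() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // CPU, GPU and Neural Engine where available

    guard let modelURL = Bundle.main.url(forResource: "Classifier",
                                         withExtension: "mlmodelc") else {
        throw NSError(domain: "ModelMissing", code: 1)
    }
    return try MLModel(contentsOf: modelURL, configuration: config)
}
```

Predictions then go through the generated wrapper class (or the generic `MLModel.prediction(from:)` API), all executed locally on the device.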
Additionally, with Create ML and playgrounds in Xcode 10, it's now easy to create a machine learning model with just a couple of lines of code and some drag and drop. These models are trained using transfer learning, taking advantage of pre-trained models from the OS, which significantly reduces the amount of sample data required, making training much faster and reducing the size of the models.
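The "couple of lines of code" claim holds up in practice. A sketch of training an image classifier in a macOS playground is below; the folder paths are hypothetical, and Create ML expects the training images to be sorted into one sub-folder per label.

```swift
import CreateML
import Foundation

// Sketch: train an image classifier with Create ML (macOS playground).
// Assumes /path/to/TrainingData contains one sub-folder per class label,
// e.g. TrainingData/Roses/*.jpg, TrainingData/Tulips/*.jpg.
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")

// Transfer learning on top of the OS's pre-trained vision model keeps
// both the required sample count and the output model size small.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir))

// Export a Core ML model ready to drop into an iOS app.
try classifier.write(to: URL(fileURLWithPath: "/path/to/Flowers.mlmodel"))
```

Because only the final layers are trained on your data, the exported `.mlmodel` can be a few kilobytes rather than tens of megabytes.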
Last year Apple introduced ARKit, announced as the biggest AR platform in the world. In our first tests the technology looked really promising: tracking was very stable and accurate (much better than other existing AR frameworks), it was easy to implement, and it offered, for example, the ability to detect horizontal planes or combine ARKit code with the Vision API. Since then Apple has kept improving the framework, introducing ARKit 1.5 with vertical plane detection, 2D image detection, and general improvements to picture quality.
Now during WWDC they’ve introduced ARKit 2.0, which augments these capabilities further. The main improvements are:
Persistent AR experiences: It's now possible to persist AR experiences between sessions and resume them at a later time. Apple demoed an AR game that used this feature.
Shared AR experiences: Multiple users can now share the same augmented environment, which opens up opportunities for multiplayer games, collaboration tools and more.
3D object detection: In addition to the 2D image detection introduced with ARKit 1.5, it's now possible to detect known 3D objects to trigger the augmentation. Metaio, which was acquired by Apple in 2015, had a version of this technology, so Apple has clearly been polishing that capability since.
Quick Look and the '.usdz' file format: A new file format, based on Pixar's Universal Scene Description (USD), has been created to share and display 3D content natively on iOS. This content is accessible through native apps like Safari, Messages and Mail, but can also be integrated into third-party apps or displayed in an AR experience.
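The persistence feature above is built on the new `ARWorldMap` type: you capture the current session's map, archive it to disk, and later feed it back into a session configuration. A minimal sketch, with hypothetical file URLs:

```swift
import ARKit

// Sketch: persist the current AR session's world map so the experience
// can be resumed later (or shared with another device).
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }
        if let data = try? NSKeyedArchiver.archivedData(
                withRootObject: map, requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

// Sketch: restore a previous session by seeding the configuration
// with the archived world map.
func restoreSession(_ session: ARSession, from url: URL) throws {
    let data = try Data(contentsOf: url)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(
            ofClass: ARWorldMap.self, from: data) else { return }

    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```

Shared experiences use the same mechanism: instead of writing the archived map to disk, it is sent to nearby devices (for example over MultipeerConnectivity), which seed their own sessions with it.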
One of the most interesting new capabilities announced, for both users and developers, is Shortcuts. Shortcuts let apps expose specific functionality to the system through the Shortcuts API. That functionality can then surface in multiple places, such as Siri (by voice activation), Spotlight, or smart predictions. Users can also combine many shortcuts into a unified workflow, which opens up a lot of opportunities around automation.
To expose a piece of functionality, it needs to be 'donated' to Siri, which can be done by donating either an NSUserActivity or an INInteraction object. You can also specify the precise scope of what Siri is allowed to do with your feature; for example, you can prevent a feature from appearing in Spotlight but still show it in predictions.
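The NSUserActivity route is the lighter-weight of the two. The sketch below donates a hypothetical "order coffee" action; the activity type string is an invented example and would need to be declared under `NSUserActivityTypes` in the app's Info.plist.

```swift
import UIKit
import Intents

// Sketch: donate an NSUserActivity so the action can appear in Siri
// suggestions, Spotlight and the Shortcuts app.
// "com.example.app.order-coffee" is a hypothetical activity type.
func donateOrderCoffeeActivity(on viewController: UIViewController) {
    let activity = NSUserActivity(activityType: "com.example.app.order-coffee")
    activity.title = "Order my usual coffee"

    // Scope where the system may surface this donation:
    activity.isEligibleForSearch = true       // show in Spotlight
    activity.isEligibleForPrediction = true   // allow Siri suggestions

    // Phrase Siri suggests when the user records a voice shortcut.
    activity.suggestedInvocationPhrase = "Coffee time"

    // Assigning to the responder retains the activity and marks it current.
    viewController.userActivity = activity
    activity.becomeCurrent()
}
```

Flipping `isEligibleForSearch` off while leaving `isEligibleForPrediction` on is how you get the scoping described above: the feature stays out of Spotlight but still appears in predictions.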
watchOS also includes some improvements and new capabilities, the most relevant being:
Interactive notifications: Now it is possible to display custom controls in notifications to take actions without opening the app.
Siri Watch Face: Through Siri Shortcuts, direct actions from third-party apps can now be displayed directly on the watch face.
Watch apps now have enhanced audio controls.
Enhanced workout session APIs are now available.
Motion sensor APIs have been made available to help monitor symptoms of Parkinson's disease.
The developer tools have been optimised for performance, with faster compilation, the ability to run multiple UI and performance tests simultaneously on physical devices, and the ability to run any unit test in a continuous integration setup across many different simulated device types.
Additionally, source control integration has been improved: Xcode 10 now marks up source code to highlight lines that have been modified by someone else.
Apple has taken a big step forward in polishing and improving its systems and technologies. This focus on optimising and expanding existing capabilities will result in faster and more powerful apps, even on older devices.
In addition, opening Siri up a bit more to developers through custom intents and shortcuts is great news for both users and developers, as it will allow deeper and smarter integration with all types of apps, making for a more seamless and better user experience.