The Duplex technology demo, in which Google Assistant made Turing-test-beating phone calls on your behalf, is either amazing or the start of the AI takeover, depending on your perspective. But beyond the hype of personal digital assistants (can we reclaim this term yet?!) and self-driving cars, what did I/O mean for brands and apps in 2018/9?
Firstly, let’s talk Android. I suspect that most companies and brands are interested in keeping up with this corner of Google. There were a few things we already knew - the Android P Developer Preview has been out since March - and a few things we didn’t. I’m not going to discuss the UI changes in P or other platform-level tweaks, as they have been covered in detail elsewhere. I’m interested in how this affects businesses and product owners looking to improve their current offerings.
As demonstrated in the keynote, Google wants your apps to be more predictive, to present relevant data quickly, and then to get out of the user’s way as much as possible. This can go against many KPIs companies set for themselves (e.g. the number of times an app is opened, time spent in the app), but it can represent a great user experience.
To help with this, a platform-level offering called “Slices” allows users to see deep-linked snippets of your app’s UI directly in a search, an assistant request, or another area of the system like the notification shade. This allows long-dormant apps to resurface, reminding users how relevant they are and why they installed them in the first place. A wide range of templates at several detail levels is available, and developers can start creating these now.
Google is also opening up custom “App Actions” that allow your app to respond to voice, search, or other system-level requests that were previously unavailable. This is a big deal for content-based apps, but it is in a very limited developer preview, so most developers are unlikely to see it in the wild any time soon. Watch this space!
AI and machine learning are always a big topic for Google, and this I/O was no different. “ML Kit” is a coherent set of libraries and cloud services that works across iOS and Android as part of the Firebase offering. These seem incredibly flexible: they provide several pre-built models (which, to be honest, were already available in other Android/iOS libraries), the ability to import your own TensorFlow Lite models, and the ability to keep data private on devices while aggregating improvements to your models from users automatically. This opens up a whole swathe of opportunities to developers and brands and allows them to have a single approach for both of the main mobile platforms. All you need is a data set and an idea to get going.
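The “private on the device, improved in aggregate” idea is worth unpacking. ML Kit handles this pipeline for you, but the core concept — each device computes a model update locally, and only the averaged update (never the raw data) is shared — can be sketched in a few lines. Everything below (the class, method names, and numbers) is a hypothetical illustration of the general technique, not the ML Kit API:

```java
import java.util.Arrays;

public class FederatedAveragingSketch {

    // Average per-device weight updates. The server only ever sees the
    // updates, never the private data that produced them.
    static double[] averageUpdates(double[][] deviceUpdates) {
        int dims = deviceUpdates[0].length;
        double[] avg = new double[dims];
        for (double[] update : deviceUpdates) {
            for (int i = 0; i < dims; i++) {
                avg[i] += update[i] / deviceUpdates.length;
            }
        }
        return avg;
    }

    public static void main(String[] args) {
        // A shared two-weight model (made-up numbers).
        double[] globalModel = {0.5, -0.2};

        // Updates computed privately on three users' devices.
        double[][] deviceUpdates = {
            {0.1, 0.0},
            {0.3, -0.1},
            {0.2, 0.1},
        };

        // Apply the averaged update to the shared model.
        double[] avg = averageUpdates(deviceUpdates);
        for (int i = 0; i < globalModel.length; i++) {
            globalModel[i] += avg[i];
        }
        System.out.println(Arrays.toString(globalModel));
    }
}
```

The key design point is that aggregation happens over model deltas rather than user data, which is what lets an app improve a shared model while each user’s inputs stay on their own device.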
Android Things, Google’s IoT platform, was released as v1.0 just ahead of I/O and is now ready for production. Google has worked hard to take away the difficult problem of hardware design by producing reference (pre-certified) modules, allowing developers to concentrate on building their software experience. As it’s built on Android, the vast majority of existing libraries and practices carry over, and companies have an existing pool of developers to call on. Dedicated services (IoT Core) make fleets of thousands of devices in the field feasible to manage: app update roll-outs and security updates (three years of support for long-term support versions) are handled remotely. Additionally, Google’s full suite of general-purpose, enterprise-grade services including Firebase is available, as are Amazon’s, Microsoft’s, or any other service that offers an API or Android library. Many smart devices using this approach are already coming to market (like the smart displays and speakers from LG, JBL, and Lenovo).
Flutter is Google’s cheeky upstart mobile framework. If you haven’t heard of it, it’s a cross-platform development framework (iOS, Android, and more), written in the Dart programming language. It has quickly gained popularity since I/O 2017 as it offers a quick and expressive way to create beautiful apps on both platforms at once. It does this by compiling down to fully native code and is independent of the platform version or software layer provided by an OEM. After spending some time with it, we think it has a lot to offer. It is currently in its third beta, so I am hoping to see a release candidate before the end of the year. The team has improved many of the tools and components, including accessibility and localisation support. The beta status hasn’t stopped many companies, agencies, and Google themselves from releasing production apps. Flutter is definitely one to watch and consider for many (but maybe not all) types of apps in the future, giving React Native a serious run for its money.
There was much more too, including cloud anchors for collaborative AR experiences, a more intelligent Assistant, improvements to Android TV and Wear OS by Google, and many new libraries and tools designed to make Android development easier and less prone to bugs. On the web, they announced improvements to their web framework Angular, better Progressive Web App development support, and improvements to Chrome and Chromebooks.
In general, there was much to be celebrated by brands and developers alike. This I/O marks another solid march towards having Android and other Google technologies on more and more surfaces around us. Not only that, but Google’s outreach programs and commitment to open source means that they are contributing to much more than just their own future. As a fanboy, developer and consumer, I can’t wait to see what comes from these advances in the next few years, maybe even shaping some of that future with some of you.