By David Glance, University of Western Australia
Both Apple and Google use their developer conferences to introduce updates to their mobile operating systems. Google I/O, held last month, introduced Google’s latest version of Android, “N”, along with new apps.
Apple has done the same this week at its World Wide Developer Conference, introducing iOS 10.
While both Apple and Google are likely to be monitoring each other’s releases to stay competitive, there is a big difference underlying the Android and iOS approaches to feature development that means their respective focus will also differ.
Google has little control over hardware and the rate at which OS versions are released on the vast array of Android phones. It can take years before a new version will reach a significant proportion of Android users, if at all.
Android Marshmallow (version 6), released in 2015, is still only on 10% of devices. Apple’s iOS 9, also released last year, is on 86% of devices.
Another important difference driving the design of features is the range of devices each company targets. Google’s apps are designed to run on a range of operating systems, not just Android, whereas Apple’s apps are tied to a particular release of iOS.
What is possibly a more distinguishing difference is the way Apple has emphasised the integration of third-party apps into its own applications. iMessage, for example, will come with an “app drawer” that can do everything from providing custom animations to allowing users to exchange money or pay for services and products.
Google, on the other hand, has released “Allo”, which is a self-contained messaging app, albeit with some clever predictive awareness built in.
Apple Maps gets upgraded in iOS 10 but still generally lags behind Google Maps. As with iMessage, Apple is expecting developers to add functionality to Maps to make it truly useful.
In the meantime, Apple has added traffic information (for certain countries) and alternative routes based on that information; features Google Maps has had for some time.
Changes to Apple Photos bring it more into line with Google’s Photos app. Photos will be automatically arranged into suggested albums based on a range of information, including automatically recognised faces. Photos will also create automatic presentations, called “memories”, with accompanying music. Again, this is similar to features in Google Photos.
Clouds and watches
An important distinction in approaches that may not be obvious is how much processing happens on the phone itself rather than on the cloud.
Apple can take advantage of the enormous processing power of its phones and do a great deal of its processing locally, whereas Google does much of this type of processing on the cloud.
This means that Apple can ensure greater privacy and security, as the information never leaves the phone. In theory, law enforcement agencies could obtain the results of Google’s facial recognition by gaining access to that information on its servers, something they could not do under Apple’s approach.
Apple has enhanced its use of information on the lock screen, including what can be done directly from those notifications. This coincides with Google’s revamped notifications in Android N.
Google and Apple also announced upgrades to their respective watch operating systems. Google upgraded Android Wear to version 2.0 and Apple’s watchOS goes to version 3.0. In Apple’s case, watchOS gets huge speed-ups (which it really needed), along with the ability for apps on the watch to update in the background in preparation for being launched.
Another intriguing feature not available on Android Wear 2.0 is Apple’s SOS app, which allows the watch to automatically dial emergency services and provide updates to them on the wearer’s location.
The watch will then notify pre-configured contacts and let them know that the SOS button has been activated. This feature still relies on the wearer having their phone available but could prove incredibly useful as an alternative to panic buttons that are provided to the elderly in case of emergencies.
It could also be useful in cases of personal security as activating a feature via the watch in an emergency may be much easier than through the phone.
Other features introduced for watchOS 3 include the ability to “scribble” messages – another feature already available on Android Wear.
In essence, a user of Android on one of the latest phones and a user of iOS would be likely to do much the same things on either platform.
If you really value security and privacy, Apple would have the edge. If you already use other Apple products, an Android phone would put you at more of a disadvantage. In all other respects, either set of users could answer any “I can do this” with a “me too”.
David Glance is director of the UWA Centre for Software Practice at the University of Western Australia.
This article was originally published on The Conversation. Read the original article.