Google’s Firebase, an application-development platform, is quickly becoming a robust competitor to AWS and Azure; and now, with a new tool named ML Kit, Google is attempting to lead the way in helping developers integrate machine learning into their mobile apps.
ML Kit is a mobile SDK for Android and iOS that relies on a series of API calls. That’s the subtle brilliance: you don’t need to know how to build your own machine-learning model, or have an existing framework on hand. Firebase and ML Kit let developers start from scratch and scale to their needs.
The core of the ML Kit SDK is three existing technologies: the Google Cloud Vision API, TensorFlow Lite, and the Android Neural Networks API. ML Kit supports both on-device and cloud-based machine learning, though most features run strictly on-device.
Interestingly, the cloud-based APIs are still in preview; if you need production-ready APIs, Google suggests using the Cloud Vision API directly. The cloud features also require you to adhere to the Google Cloud Platform License Agreement, and there may be charges associated with using the cloud.
ML Kit offers some production-ready APIs, so long as they’re for “common use cases” such as text recognition, face detection, identifying landmarks, scanning barcodes, or labeling images.
Devs who import their own TensorFlow models will be able to leave some of the work to ML Kit. In its demo video, Google says all you have to do is import a model, and ML Kit takes care of the rest. Ars Technica reports that larger models (which Google says can be “tens of megabytes in size”) will be problematic, but that Google is working on a conversion service that takes a full TensorFlow model and returns a compressed TensorFlow Lite model.
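For the custom-model path, a minimal Android sketch looks roughly like the following. This is a hedged illustration, not a definitive implementation: class names follow the launch-era `firebase-ml-model-interpreter` library, and the model name (`my_model`) and tensor shapes are hypothetical placeholders for whatever TensorFlow Lite model a developer actually hosts in the Firebase console.

```java
// Hedged sketch: registering and running a custom, Firebase-hosted
// TensorFlow Lite model with the launch-era ML Kit Android SDK.
// "my_model" and the 224x224x3 input shape are hypothetical.
import com.google.firebase.ml.custom.FirebaseModelDataType;
import com.google.firebase.ml.custom.FirebaseModelInputOutputOptions;
import com.google.firebase.ml.custom.FirebaseModelInputs;
import com.google.firebase.ml.custom.FirebaseModelInterpreter;
import com.google.firebase.ml.custom.FirebaseModelManager;
import com.google.firebase.ml.custom.FirebaseModelOptions;
import com.google.firebase.ml.custom.model.FirebaseCloudModelSource;

public class CustomModelSketch {
    void runModel(float[][][][] input) throws Exception {
        // Point ML Kit at a model uploaded to the Firebase console.
        FirebaseCloudModelSource cloudSource =
                new FirebaseCloudModelSource.Builder("my_model")
                        .enableModelUpdates(true)
                        .build();
        FirebaseModelManager.getInstance()
                .registerCloudModelSource(cloudSource);

        FirebaseModelOptions options = new FirebaseModelOptions.Builder()
                .setCloudModelName("my_model")
                .build();
        FirebaseModelInterpreter interpreter =
                FirebaseModelInterpreter.getInstance(options);

        // Describe the model's input and output tensors.
        FirebaseModelInputOutputOptions ioOptions =
                new FirebaseModelInputOutputOptions.Builder()
                        .setInputFormat(0, FirebaseModelDataType.FLOAT32,
                                new int[]{1, 224, 224, 3})
                        .setOutputFormat(0, FirebaseModelDataType.FLOAT32,
                                new int[]{1, 1000})
                        .build();

        FirebaseModelInputs inputs = new FirebaseModelInputs.Builder()
                .add(input)
                .build();

        // Inference runs asynchronously; results arrive in a listener.
        interpreter.run(inputs, ioOptions)
                .addOnSuccessListener(result ->
                        android.util.Log.d("MLKit", "Inference complete"))
                .addOnFailureListener(e ->
                        android.util.Log.e("MLKit", "Inference failed", e));
    }
}
```

Note that this snippet only compiles inside an Android app with the Firebase ML dependencies declared; the point is how little glue code sits between a hosted model and on-device inference.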
Implementation isn’t difficult, either. It’s a matter of integrating the SDK, preparing the input data, and applying the model of choice to that data. The base features listed above all have pre-fabricated APIs in ML Kit.
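Those three steps can be sketched for one of the pre-fabricated APIs, on-device text recognition. This is a hedged sketch assuming the launch-era `firebase-ml-vision` Android dependency and a `Bitmap` supplied by the app; it will only compile inside an Android project with Firebase configured.

```java
// Hedged sketch: on-device text recognition with the launch-era
// ML Kit Android SDK (firebase-ml-vision dependency assumed).
import android.graphics.Bitmap;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.text.FirebaseVisionText;
import com.google.firebase.ml.vision.text.FirebaseVisionTextDetector;

public class TextRecognitionSketch {
    void recognizeText(Bitmap bitmap) {
        // Step 1: prepare the input data.
        FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

        // Step 2: grab the pre-fabricated on-device detector.
        FirebaseVisionTextDetector detector =
                FirebaseVision.getInstance().getVisionTextDetector();

        // Step 3: apply the model; results arrive asynchronously.
        detector.detectInImage(image)
                .addOnSuccessListener(result -> {
                    for (FirebaseVisionText.Block block : result.getBlocks()) {
                        android.util.Log.d("MLKit", block.getText());
                    }
                })
                .addOnFailureListener(e ->
                        android.util.Log.e("MLKit", "Recognition failed", e));
    }
}
```

Swapping in face detection, barcode scanning, or image labeling follows the same shape: build a `FirebaseVisionImage`, fetch the relevant detector, and attach listeners for the results.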
The platform competes with Apple’s Core ML, which also works with TensorFlow (and has a converter for those models). The advantage for iOS developers is that Core ML also accepts Apache MXNet models, bespoke Python Core ML tools, and ONNX; Core ML likewise ships several pre-fabricated models to get developers up and running.
All told, ML Kit is nascent but promising. Leaning on the powerful TensorFlow backend is not only natural for Google, it’s smart. Making TensorFlow the sole pathway for ML Kit may prove limiting, but it’s doubtful any developer keen on adding machine-learning services to their app will balk.