Google Pay GM and vice president Ambarish Kenghe announced the introduction of the MatFormer framework, which he said would “further enhance our on-device capabilities”.
“With MatFormer, you can mix and match AI models within a single framework to use the right sized model for your task. This gives you the best of both worlds – high performance and low resource consumption,” Kenghe said during Google I/O Connect in Bengaluru today.
With the announcement, the framework is officially available to developers on GitHub.
As Kenghe said, the framework is designed specifically to improve on-device capabilities. Android already ships with a built-in foundation model, Gemini Nano, built for mobile devices, making it the first mobile OS to include one. The model was developed with a focus on privacy and on delivering AI results over unreliable and unstable networks.
With the addition of the framework, developers are now free to combine features from several Gemini models under a single framework, building an on-device model that works best for them, or for their specific use cases. “This will translate to smoother, faster, and more accurate AI experiences directly on users’ phones,” Kenghe said.
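The “mix and match” idea Kenghe describes is, at its core, one set of trained weights that can serve submodels of several sizes. A minimal, purely illustrative sketch of that nesting idea (this is not Google’s actual MatFormer API; the class and parameter names here are invented for illustration) is a feed-forward layer whose smaller variants reuse a prefix of the largest variant’s hidden units:

```python
# Illustrative sketch only -- not the real MatFormer implementation.
# A "Matryoshka-style" feed-forward layer: smaller submodels reuse a
# prefix of the largest submodel's weights, so one model serves many
# compute budgets (e.g. a phone vs. a server).

class NestedFFN:
    def __init__(self, d_model, hidden_sizes):
        # hidden_sizes are nested: each smaller submodel uses the first
        # h hidden units of the largest one.
        self.hidden_sizes = sorted(hidden_sizes)
        h_max = self.hidden_sizes[-1]
        # Toy deterministic weights; a real model would learn these jointly.
        self.w_in = [[0.01 * (i + j) for j in range(h_max)]
                     for i in range(d_model)]
        self.w_out = [[0.01 * (j + k) for k in range(d_model)]
                      for j in range(h_max)]

    def forward(self, x, budget):
        # Pick the largest nested submodel that fits the compute budget.
        h = max(s for s in self.hidden_sizes if s <= budget)
        # Project into the first h hidden units only (prefix slice), ReLU.
        hidden = [max(0.0, sum(x[i] * self.w_in[i][j]
                               for i in range(len(x))))
                  for j in range(h)]
        # Project back using the matching prefix of the output weights.
        return [sum(hidden[j] * self.w_out[j][k] for j in range(h))
                for k in range(len(x))]

ffn = NestedFFN(d_model=4, hidden_sizes=[8, 16, 32])
x = [1.0, 0.5, -0.5, 2.0]
out_small = ffn.forward(x, budget=8)   # phone-sized submodel
out_large = ffn.forward(x, budget=32)  # full-capacity model
```

The design point is that both calls share the same stored weights; the budget only selects how much of them to use, which is what lets a single framework serve “the right sized model for your task”.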
Like Google’s Composition to Augment Language Models (CALM) framework and other products launched at the event, MatFormer was developed by Google DeepMind’s India team, according to Kenghe.
While this is the first framework of its kind, the focus on on-device AI has been mirrored by other companies. Earlier this year, Qualcomm Technologies launched its Qualcomm AI Hub, which lets developers run AI models on their devices. However, the hub was limited to helping developers build AI-powered apps by offering pre-optimised AI models.