Advantages and Limitations of Applying AI in Mobile Devices
As we stand on the brink of a new technological era, artificial intelligence (AI) is increasingly making its way into our daily lives, and mobile devices are no exception. In this article, we will explore the integration of AI in mobile solutions, the tools Apple and Google provide for developers, and the native solutions they ship alongside their operating systems. We will also walk through the technical steps to adopt these tools on iOS and discuss their main limitations. Content authored by David Duarte, Senior iOS Developer of The Cactai Team.
A Brief History of AI in Mobile
The first milestone in this story was the release of the first iPhone in 2007. The iPhone redefined what a smartphone could be, and its impact was immediate and profound. It set a new standard for mobile devices and sparked a wave of innovation that continues to this day. Then, in 2011, AI started making its mark on Apple products with the introduction of Siri. Using natural language processing (NLP), Siri could understand and respond to user queries, setting a new standard for mobile interactivity. It was followed by Google Now, which offered predictive information based on user habits and preferences. These early implementations showcased the potential of AI to enhance user experience through personalised and intelligent interactions.
As AI continued to evolve, the development of AI-specific hardware played a crucial role in its integration into mobile devices. Apple’s A11 Bionic chip, introduced in 2017, marked a new era: it featured a dedicated Neural Engine designed to accelerate machine learning tasks. This hardware innovation enabled more complex and efficient AI computations directly on mobile devices. Fast forward to 2022, and the Apple A16 chip brought even more advances, with a 4nm process and a 16-core Neural Engine capable of 17 trillion operations per second, showcasing how rapidly the capabilities of Apple’s Neural Engine have grown over the years.
Alongside hardware advances, software tools and libraries were also introduced. CoreML and TensorFlow Lite, released in 2017 and 2018 respectively, give mobile developers the tools required to create machine learning models or import models already developed in Python. This hardware and software symbiosis optimises the use of device hardware, allowing specific code to run on the CPU, GPU, or Neural Engine to ensure efficient performance.
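To make that concrete, here is a minimal sketch of how CoreML lets you state which compute units a model may use. The Classifier class is a placeholder: Xcode generates a class like it for any model bundled in a project.

```swift
import CoreML

do {
    // Choose which hardware CoreML may schedule work on:
    // .all, .cpuOnly, .cpuAndGPU, or .cpuAndNeuralEngine.
    let config = MLModelConfiguration()
    config.computeUnits = .all

    // "Classifier" is a placeholder for any .mlmodel added to the project.
    let classifier = try Classifier(configuration: config)
    print("Model loaded: \(classifier)")
} catch {
    print("Failed to load model: \(error)")
}
```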
The journey of AI in mobile is marked by continuous innovation and integration. From the early days of simple virtual assistants to sophisticated systems built on large language models (LLMs), such as Apple Intelligence, AI is becoming deeply integrated into our phones.
The Necessity of AI in Our Mobile Devices
The pursuit of improving user experience and personalization guides Apple and other companies to integrate AI into most of their applications. Each year, their devices receive neural engine hardware improvements, alongside increased memory and computing capacity, to support more powerful solutions. While writing this from my phone, I can get predictions of the next word, unlock the iPhone through FaceID, use Duolingo for personalized lessons, and filter my dog’s pictures in the photo gallery. In all these scenarios, and more, the neural engine is at work. Additionally, the AI in the Apple Watch includes features like crash detection, atrial fibrillation (AFib) monitoring, and fall detection.
In summary, the necessity of AI in mobile devices is driven by the demand for more intelligent, efficient, and personalised user experiences.
Let’s Get Down to Business: CoreML
CoreML is Apple’s machine learning framework, designed to make it easy for developers to integrate AI into iOS apps. Introduced in 2017, CoreML allows for efficient, optimised execution of machine learning models on Apple devices. It has evolved significantly through its versions and now supports powerful stateful and transformer models: Stable Diffusion XL has been converted to run on mobile, as has a large model like Mistral 7B. For a deeper dive into the capabilities of this library, I highly recommend watching this video from the last WWDC: CoreML at WWDC 2024.
It started in 2017 with support for popular ML libraries like Keras and scikit-learn. It introduced model compression and custom layers in 2018; added on-device training and advanced neural network support in 2019; improved integration with CreateML and Swift for TensorFlow in 2020; and unified model formats across Apple platforms, with enhanced model security, in 2021. By 2023, CoreML supported advanced transformer models, better integration with the Vision and Natural Language frameworks, and real-time performance improvements. In 2024, it introduced a faster inference engine with Async Prediction API support, BERT embeddings, multi-label image classification, a new Augmentation API, and enhanced model conversion options. These updates keep CoreML at the forefront of mobile AI capabilities.
Matching Devices with CoreML Versions
Aside from the CoreML version we can support in our app (which depends on the iOS version), we must consider the device’s computational power. Mobile devices have less powerful CPUs and GPUs compared to desktop computers and servers, limiting their ability to process large and complex AI models in real time.
Understanding the capabilities and limitations of the device is crucial when developing AI applications. While CoreML provides a powerful framework for integrating machine learning models into iOS apps, the device’s hardware capabilities will ultimately determine the complexity and performance of these models.
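A small, hypothetical sketch of this kind of gating (both model classes are placeholders for models bundled in the app): load a lighter model on older OS versions, which tend to correlate with older, less capable hardware.

```swift
import CoreML

// Placeholder classes generated by Xcode for two bundled models:
// a heavier LargeClassifier and a lighter SmallClassifier.
func loadClassifier() throws -> MLModel {
    if #available(iOS 17, *) {
        // Newer OS versions generally mean newer, more capable hardware.
        return try LargeClassifier(configuration: MLModelConfiguration()).model
    } else {
        return try SmallClassifier(configuration: MLModelConfiguration()).model
    }
}
```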
Working with CoreML: Creating and Integrating ML Models in iOS
CoreML supports a variety of model types and integrates seamlessly with other Apple frameworks, such as Vision for image analysis and computer vision capabilities; Natural Language for processing and analyzing text; Speech for transcribing audio input to text and generating spoken output from text; and Sound Analysis for identifying sounds such as applause, laughter, or music genres.
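For instance, here is a minimal sketch of driving a CoreML image classifier through Vision. MyImageClassifier is a placeholder for any bundled classification model.

```swift
import CoreML
import Vision

// Classify a CGImage with a bundled CoreML model (placeholder name) via Vision.
func classify(_ cgImage: CGImage) throws {
    let coreMLModel = try MyImageClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")  // best label and its confidence
    }

    try VNImageRequestHandler(cgImage: cgImage).perform([request])
}
```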
To make predictions or use any of these frameworks, you need a CoreML model. Developers can either use pre-trained models available from Apple or the community, or they can create custom models using tools like CreateML or convert models from other frameworks.
Getting a model is not a difficult task. If this is your starting point, you can download any of Apple’s official models from Apple’s Machine Learning Models page, add the model to your app, instantiate it, and make predictions. This is the easiest route if you don’t have ML knowledge: you only have to understand the model’s input and output, both of which are documented in the model description.
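For example, assuming you downloaded Apple’s MobileNetV2 model and added it to your Xcode project (Xcode then generates a MobileNetV2 class with typed inputs and outputs), a prediction looks roughly like this:

```swift
import CoreML
import CoreVideo

// MobileNetV2 takes a 224x224 image as a CVPixelBuffer and returns a class
// label; the generated prediction method mirrors the model description.
func classify(pixelBuffer: CVPixelBuffer) throws -> String {
    let model = try MobileNetV2(configuration: MLModelConfiguration())
    let output = try model.prediction(image: pixelBuffer)
    return output.classLabel  // top predicted label
}
```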
Create your model with CreateML
If using an already-developed model is not possible, you can use CreateML to train your own. CreateML is a powerful tool provided by Apple that allows developers to train machine learning models using a simple and intuitive interface. CreateML abstracts much of the complexity involved in training models, making it accessible even to developers with limited machine learning experience. It provides pre-built templates and workflows for various types of machine-learning tasks, including image classification, object detection, text classification, and more.
While CreateML simplifies the process of training machine learning models, having a basic understanding of AI concepts is still beneficial. Developers should be familiar with fundamentals such as preparing training and validation data, evaluating model accuracy, and avoiding overfitting.
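As an illustration, here is a minimal CreateML sketch (CreateML runs on macOS, and the paths below are placeholders) that trains an image classifier from a folder of labeled subdirectories and exports it as a .mlmodel:

```swift
import CreateML
import Foundation

// Placeholder path: a directory containing one labeled subdirectory per class.
let trainingDir = URL(fileURLWithPath: "/path/to/training-data")

// Train an image classifier from the labeled directories.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Inspect accuracy, then export the trained model for use in an iOS app.
print(classifier.trainingMetrics)
print(classifier.validationMetrics)
try classifier.write(to: URL(fileURLWithPath: "/path/to/MyClassifier.mlmodel"))
```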
Import a Python model with CoreML
Importing an already developed Python model into CoreML involves several steps, leveraging the coremltools library to convert the model into a format compatible with iOS. CoreML supports a wide range of models from popular frameworks such as TensorFlow, Keras, PyTorch, and ONNX. This path is particularly useful for developers who already have experience with those frameworks and want to deploy their models on Apple devices, or for large teams with dedicated developers working on a cross-platform machine learning solution.
For this route, the team (or the iOS developers themselves) must be familiar with Python and with ML fundamentals such as training, validating, and testing machine learning models.
ML in Android: TensorFlow Lite
TensorFlow Lite is Google’s open-source deep learning framework designed for on-device machine learning inference. It is optimized for mobile devices, enabling developers to deploy machine learning models on Android and iOS. TensorFlow Lite is a lighter version of TensorFlow, specifically tailored to perform efficiently on devices with limited computational power.
TensorFlow Lite has some clear strengths: it is cross-platform, it is optimized for mobile devices with low latency and efficient execution, and, like CoreML, it supports model optimization techniques such as quantization.
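As a rough sketch of that cross-platform reach, the same .tflite file can run on iOS through the TensorFlowLiteSwift API (the model file name below is a placeholder):

```swift
import Foundation
import TensorFlowLite

// Run a bundled .tflite model (placeholder name) on raw input bytes.
func runModel(input: Data) throws -> Data {
    guard let path = Bundle.main.path(forResource: "model", ofType: "tflite") else {
        fatalError("model.tflite is not bundled with the app")
    }
    let interpreter = try Interpreter(modelPath: path)
    try interpreter.allocateTensors()

    // Copy input bytes in, run inference, read the first output tensor.
    try interpreter.copy(input, toInputAt: 0)
    try interpreter.invoke()
    return try interpreter.output(at: 0).data
}
```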
Of course, not all TensorFlow operations are supported in TensorFlow Lite. Some advanced features and custom layers may require adjustments or simplifications to work within the constraints of TensorFlow Lite. Large and highly complex models may not perform well on mobile devices due to limited computational resources.
Wrapping up
Both CoreML and TensorFlow Lite offer tools and APIs that simplify the integration of machine learning models into mobile apps. However, CoreML’s tight integration with Apple’s development environment can make it easier for developers already familiar with Apple’s ecosystem, since CoreML is designed specifically for Apple devices.
CoreML and TensorFlow Lite both provide extensive model optimisation techniques, which are crucial for running models efficiently on mobile devices. I would highlight that CoreML is one step ahead of TensorFlow Lite here: its latest version allows the conversion of transformer and stateful models.
The future of AI in mobile devices
The future of AI in mobile devices is promising, with advancements in technology that will revolutionise the way we interact with our smartphones and other mobile devices. The focus seems to be on understanding user behaviour and preferences in greater detail, while taking care of privacy, in order to provide highly personalised experiences. In terms of OS-level solutions, we will see improvements in health and fitness monitoring with more advanced health metrics.
Augmented Reality and Virtual Reality continue to improve, and their integration with AI provides more immersive and interactive experiences. This includes real-time object recognition, gesture control, and environment adaptation (take a look at the advancements in the ARKit and Vision frameworks).
We can’t forget to mention the upcoming improvements (not yet released) in virtual assistants and the integration of an LLM into our mobile devices (Apple Intelligence).
Conclusion
In conclusion, the future of AI in mobile devices is bright, bringing significant improvements in personalization, health monitoring, augmented reality, virtual assistants, security, and more. As these technologies continue to evolve, they will make our mobile devices smarter, more intuitive, and integral to our daily lives. It is our responsibility as developers to design and code solutions that take advantage of these advances to provide the best possible user experience.
Discover more AI articles on our blog: https://cactus-now.com/blog/