
One of the standout announcements at this year’s Google Pixel hardware event was the introduction of on-device generative AI. This means that instead of relying on cloud-based servers, the phone processes inputs directly. Google is not alone, either: Apple made a similar announcement at the iPhone 15 event. But why? If the cloud is ubiquitous and easily accessed, what advantage does on-device generative AI offer to users?

1) Privacy and Data Security

Machine learning has been widely available on smartphones for the past couple of years in features like photography and image editing. But for generative AI, devices still needed to send data to and receive it from the cloud. Shifting that workload to a chip inside the end-user device improves privacy and data security. This matters because people now use generative AI for a wide range of tasks in fields including medical, legal, and government work. Eliminating the round trip between the device and online ML servers removes one major potential attack vector.

2) Avoids Legal Complications

As countries and trading blocs take a stricter view of data-sharing, accumulating, storing, and using user data has become a far more complicated task than before. Firms like Meta and Google are finding that the way they handle user data has become a liability. As with privacy and data security, keeping user data on-device for generative AI tasks lets companies sidestep that regulatory scrutiny entirely.

3) Speed

As anyone who’s used ChatGPT or Midjourney knows, results can take several seconds or longer to appear, depending on your prompt. You’re sharing server time with thousands of other users simultaneously. Moving generative AI processing on-device speeds things up and avoids latency caused by network congestion or server load. This is especially important when generative AI powers an on-device voice assistant, where response time is essential to the user experience.

4) Accessibility

Cloud-based generative AI requires a network connection. If you’re in a remote location where the data connection is spotty, an on-device model means the generative AI experience doesn’t have to suffer for it. Additionally, because the LLM lives on the device, users won’t have to pay additional usage fees. And the less tech-savvy can take advantage of these features without navigating potentially confusing third-party apps or services.
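In practice, devices that ship both a local model and a cloud option need routing logic like the pattern described above: fall back to the on-device model whenever connectivity is unavailable. Here is a minimal sketch of that idea; all function names (`run_local_model`, `run_cloud_model`, `generate`) are hypothetical placeholders, not any vendor’s real API.

```python
import socket

def network_available(host="8.8.8.8", port=53, timeout=1.0):
    """Best-effort connectivity check: try opening a TCP socket."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def run_local_model(prompt):
    # Placeholder for an on-device LLM call (e.g., a quantized model).
    return f"[local] response to: {prompt}"

def run_cloud_model(prompt):
    # Placeholder for a cloud inference API call.
    return f"[cloud] response to: {prompt}"

def generate(prompt, prefer_local=True, online=None):
    """Route a prompt: on-device by default, cloud only when
    connectivity exists and the caller explicitly prefers it."""
    if online is None:
        online = network_available()
    if prefer_local or not online:
        return run_local_model(prompt)
    return run_cloud_model(prompt)
```

With this shape, a spotty connection simply means every request resolves locally rather than failing outright.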

5) Localized Learning

LLMs are often trained on very broad data sets. If you need a chatbot that can write novels in the style of John Steinbeck or pen song lyrics in the fashion of John Denver, that’s great. But if you need generative AI to be more personalized, for example, to write an email in your voice, then the training data needs to be localized to you. On-device generative AI helps with that. Instead of feeding an LLM that absorbs millions of people’s speech patterns, behaviors, and expressions, a local one absorbs only yours, building a virtual persona that can be used in a variety of situations.
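To make the idea of "absorbing only your data" concrete, here is a toy sketch of per-user personalization: a tiny bigram model trained solely on one user’s own text, suggesting the word that usually follows another in their writing. This is an illustration of the principle under simplified assumptions, not how production on-device LLMs are actually personalized; the class and method names are invented for the example.

```python
import random
from collections import defaultdict

class PersonalStyleModel:
    """Toy bigram model trained only on one user's own text:
    a stand-in for the idea of on-device personalization."""

    def __init__(self):
        # Maps each word to the words the user has written after it.
        self.next_words = defaultdict(list)

    def learn(self, text):
        """Record word-to-word transitions from the user's writing."""
        words = text.lower().split()
        for current, following in zip(words, words[1:]):
            self.next_words[current].append(following)

    def suggest(self, word):
        """Suggest a next word drawn from this user's habits, if any."""
        options = self.next_words.get(word.lower())
        return random.choice(options) if options else None
```

Train it on your own sent emails and `suggest("thanks")` reflects how *you* tend to continue that word; train it on a million users and the personal signal washes out, which is the trade-off the section describes.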

Of course, there are downsides. Running generative AI locally can shorten battery life if the tasks are computationally heavy, negating any savings from reduced WiFi or mobile data use. But overall I think it's a definite positive for end users, especially from a data security and privacy perspective. These are still early days, and in the super-competitive smartphone arena every manufacturer will be looking to integrate a similar feature to keep ahead of, or at least pace with, the competition. So expect to see on-device generative AI become a must-have feature on all flagship phones moving forward.
