Personal Voice on iOS 17: Apple’s most important AI feature

Key points

  • Apple’s progress in consumer-facing AI has been slow compared to Google and Microsoft.
  • The Personal Voice feature in iOS 17 impressed the author and renewed faith in Apple’s progress in AI.
  • While not actively relied upon in everyday life, Personal Voice showcases Apple’s efforts in AI and could pave the way for more advanced features in the future.

Compared to Google and Microsoft, Apple’s progress in the consumer-facing AI department has been slow. While the Cupertino company has reportedly tested Apple GPT in its secret laboratories, iPhone users still don’t have access to a shrewd assistant. Today, those on iOS can use the ancient knowledge of Siri to do some basic tasks. But even then, this assistant often fails to understand unsophisticated commands. After using the Personal Voice feature in iOS 17, however, my faith in Apple and its advances in AI has been renewed.


What is Personal Voice?

For the unfamiliar, Apple first announced Personal Voice a few months ago as one of several accessibility features coming to its platforms later this year. This addition is now available in beta for those running iOS 17 and iPadOS 17 and requires a 15-minute setup. During the setup process, you are asked to read short sentences in a quiet environment. After you read more than a hundred of them, iOS analyzes the recordings while your iPhone is locked and connected to a charger. Note that this prep phase took several days to wrap up on my iPhone 14 Pro. So even if you buy the latest iPhone, don’t expect the process to be super quick. Once ready, your iPhone or iPad will be able to read text aloud using your virtual voice.
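For those curious about the developer side, iOS 17 also lets third-party apps speak with a user’s Personal Voice once permission is granted. The sketch below uses Apple’s AVFoundation speech APIs (`requestPersonalVoiceAuthorization` and the `isPersonalVoice` voice trait, both introduced in iOS 17); treat it as a minimal illustration rather than production code.

```swift
import AVFoundation

// Ask the user for permission to use their Personal Voice,
// then speak a sample sentence with it if one is available.
func speakWithPersonalVoice(_ text: String, using synthesizer: AVSpeechSynthesizer) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else {
            print("Personal Voice not authorized: \(status)")
            return
        }
        // Personal voices show up alongside system voices,
        // distinguished by the .isPersonalVoice trait.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }

        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice  // falls back to the default voice if nil
        synthesizer.speak(utterance)
    }
}
```

Everything runs on-device, which matches Apple’s privacy framing of the feature: the trained voice never has to leave the phone.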

Practice with your own voice


Initially, I had very low expectations for the final result, as Apple isn’t exactly famous for its artificial intelligence. But boy, was my mind blown after trying it for the first time. Yes, there’s an inevitable robotic layer to the virtual voice, but it sure does sound like me. And considering that this feature works completely offline, it’s really impressive how far Apple has come in this field.

To get a more objective view of the results, I even sent virtual voice messages to a number of contacts. Some were completely unaware that the voice was artificial, while others could detect the robotic layer. Everyone, however, agreed that Personal Voice does indeed sound like my real voice.

Do I actively rely on Personal Voice in my daily life? Not exactly. After all, it’s an accessibility feature designed for those who are speech impaired or at risk of losing their voice along the way. It’s certainly a nice party trick to show off a few times to those unfamiliar with the latest mobile technology. However, you probably won’t use it much unless you have a speech impediment. Still, it’s a promising materialization of Apple’s not-so-secret efforts when it comes to AI-powered features, especially since reports indicate that we could see some major AI-focused offerings from Apple next year.

How it could pave the way for more advanced features

With Personal Voice becoming accessible to millions of iPhone customers later this year, one wonders if users will eventually be able to generate full virtual identities of themselves. While I don’t see Apple doing it, video, including facial expressions, can now be manipulated convincingly, not to mention a person’s voice replicated. Feed the virtual version of you a few years of chat history with someone, and you may even be able to replicate that person’s personality, tone, and style of speech. Visualize the result with an Apple Vision Pro and you’ve got a virtual twin that can replicate someone’s look, voice, and mindset.
