Apple announced that iPhone and iPad users will soon have the ability to hear their devices speak in their own voice.
The upcoming feature, “Personal Voice,” will give users randomized text prompts to read aloud, generating 15 minutes of audio from which the device creates a synthetic version of their voice.
There’ll also be a new tool called “Live Speech,” which lets users type a phrase — and save commonly used ones — for the device to speak aloud during phone and FaceTime calls or in-person conversations.
Apple says it’ll use machine learning, a type of AI, to create the voice on the device itself rather than on external servers, keeping the data more secure and private.
It might sound like a quirky feature at first, but it’s actually part of the company’s latest drive for accessibility. Apple pointed to conditions like ALS where people are at risk of losing their ability to speak.
“At Apple, we’ve always believed that the best technology is technology built for everyone,” said Tim Cook, Apple’s CEO.
And Philip Green, a board member at the Team Gleason nonprofit whose voice has changed significantly since being diagnosed with ALS, said in the press release: “At the end of the day, the most important thing is being able to communicate with friends and family.”
“If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world,” he added.
It’s not the first time Apple has ventured into AI-powered voice technology: iPhone users will be familiar with Siri, which uses machine learning to understand what people are saying.
And back in 1984, Steve Jobs was passionate about getting the Apple Macintosh 128K to say “Hello” in a voice demo at its unveiling. That was advanced tech for the time, and the moment was dramatized as a key plot point in the 2015 biopic about the late Apple cofounder.
It’s not clear exactly when Personal Voice will be available, but Apple says it’ll be before the end of the year.