A user can create a Personal Voice on iPhone, iPad, or Mac by reciting prompts out loud into their device. The device then uses machine learning to generate a voice that can be used with Live Speech. Once the Personal Voice is created, text typed into Live Speech is spoken aloud in the sender's own voice for the recipient to hear, enabling a more personal way to use text-to-speech software.
Apple says that Personal Voice is aimed at people with a recent diagnosis of a condition that will progressively affect their ability to speak. It acts as a reservoir for the user's voice, preserving how they sound before that ability is lost.
To create a Personal Voice, users read a randomized set of prompts aloud for 15 minutes. That 15 minutes of audio lets the iPhone, iPad, or Mac learn the inflection and cadence of the user's voice.
Because Personal Voice needs a voice to study and replicate, users who are already unable to speak can still use Live Speech to talk in real time on phone and FaceTime calls, but the output will likely be a standard computerized voice.
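Under the hood, this kind of text-to-speech is the same capability AVFoundation's speech-synthesis classes expose to developers. The Swift sketch below illustrates the idea of preferring a Personal Voice when one has been created and authorized, and falling back to a standard computerized voice otherwise; the SpeechHelper wrapper is hypothetical, not Apple's Live Speech implementation.

```swift
import AVFoundation

// Minimal sketch: speak text with a Personal Voice if one is available,
// otherwise fall back to a standard system voice. Assumes the user has
// already created a Personal Voice and granted the app access to it.
final class SpeechHelper {
    // Keep the synthesizer alive for the duration of speech.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)

        // Prefer a voice flagged as a Personal Voice; otherwise use a
        // default voice for the locale (the "computerized" fallback).
        if let personal = AVSpeechSynthesisVoice.speechVoices()
            .first(where: { $0.voiceTraits.contains(.isPersonalVoice) }) {
            utterance.voice = personal
        } else {
            utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        }

        synthesizer.speak(utterance)
    }
}
```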
Detection Mode makes interacting with physical objects easier for those with visual impairments. Point and Speak, built into the Magnifier app on iPhone and iPad, reads aloud the text a user points at, helping people who are blind or have low vision navigate the world around them.
Other accessibility features coming to Apple products include the ability to pair hearing devices directly with a Mac and adjust them for a person's hearing comfort. Voice Control improvements will offer suggestions to distinguish between homophones such as site, cite, and sight, based on context. Text Size will be easier to adjust within Mac apps such as Finder, Messages, Mail, and Calendar, and users will be able to adjust Siri's speaking speed from 0.8x to 2x.
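For a sense of what a speaking-speed setting corresponds to at the API level, AVFoundation's AVSpeechUtterance exposes a rate value bounded by system constants. The mapping from a user-facing multiplier like 0.8x or 2x to that rate in the sketch below is a hypothetical illustration, not how Apple implements the Siri setting.

```swift
import AVFoundation

// Sketch: scale the default speech rate by a user-chosen multiplier
// (e.g. 0.8 or 2.0), clamped to the range AVFoundation supports.
func utterance(for text: String, speedMultiplier: Float) -> AVSpeechUtterance {
    let utterance = AVSpeechUtterance(string: text)
    let scaled = AVSpeechUtteranceDefaultSpeechRate * speedMultiplier
    utterance.rate = min(max(scaled, AVSpeechUtteranceMinimumSpeechRate),
                         AVSpeechUtteranceMaximumSpeechRate)
    return utterance
}
```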