I often use Speak Screen and text to speech. In fact, Karen is someone I listen to around eight hours a day, every day, all year long. She helps me navigate posts and news, and she reads my messages and emails so I can better comprehend the written world around me. One of the biggest reasons I was able to finish school and operate in the world today is Karen.
Karen is one of Apple's built-in voices that reads and synthesizes text. Unlike WOPR from the movie WarGames, she actually sounds pretty amazing. But that's beside the point. When you listen to someone talk the same way for so long, you notice little differences.
Recently I have been using the AirPods Pro and have noticed how bass-heavy Karen sounds. She is stuffy and muffled, like she is speaking through a pillow, and the extra bass in her voice makes her genuinely hard to understand. Is there any way to work around the bass settings in the AirPods themselves? Most apps, like Spotify, let you adjust the EQ within the app.
But Speak Screen is used across many apps. For instance, I am using it right now to listen to this post and check for errors.
Anyway, I am just concerned that Apple has overlooked this, and I'm not sure where or how it can be addressed.
Please help if you have used Speak Screen or TTS before, or if you know how this has been addressed in the past. Thank you.