Imagine navigation apps like Waze and Google Maps recognizing the podcast or radio show you’re listening to and delivering directions in the voice of its host or artist.
Waze currently talks over my podcast listening. The podcast’s volume ducks whenever the navigator tells me where to turn next, and because of this I miss little details in the episodes. It would be great to integrate the directions more cleanly into whatever I’m listening to – pause the podcast entirely and replace it with the next set of directions in the voice of whomever I’m listening to.
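A rough sketch of what I mean, using Android’s standard MediaPlayer and TextToSpeech classes (the `SpokenDirections` wrapper, the player handoff, and the idea of a host-matched voice are all hypothetical assumptions, not anything Waze actually exposes):

```kotlin
import android.media.MediaPlayer
import android.speech.tts.TextToSpeech
import android.speech.tts.UtteranceProgressListener

// Hypothetical sketch: instead of ducking the podcast under a second audio
// stream, pause the episode, speak the turn instruction, then resume.
class SpokenDirections(
    private val podcastPlayer: MediaPlayer,   // assumed: handle to the podcast app's player
    private val tts: TextToSpeech             // assumed: a TTS voice matched to the host
) {
    fun announce(direction: String) {
        podcastPlayer.pause()                  // stop the episode cleanly
        tts.setOnUtteranceProgressListener(object : UtteranceProgressListener() {
            override fun onStart(utteranceId: String?) {}
            override fun onError(utteranceId: String?) { podcastPlayer.start() }
            override fun onDone(utteranceId: String?) {
                podcastPlayer.start()          // pick the episode back up afterward
            }
        })
        tts.speak(direction, TextToSpeech.QUEUE_FLUSH, null, "nav-prompt")
    }
}
```

The point of the sketch is the handoff: no overlapping audio, no missed sentences, just the episode yielding the floor for one instruction and resuming where it left off.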
The technology to do this has to exist. Google just updated Google Now to provide more contextual support for users: if you’re texting a friend about which movie you’ll see, you can call up Google Now without leaving the messaging app, and it will read the conversation and pull up information on the movies you and your friend mentioned. There’s got to be a way to port that kind of machine learning to how apps interact sonically.