IR Blaster: Right now, the Echo and Home can control a TV, but only through third-party devices. If either smart speaker had a top-mounted 360-degree IR blaster, it could natively control TVs, entertainment systems, and heating and cooling units. Echo and Home devices are typically placed out in the open, which makes them well suited to controlling anything with an infrared receiver. Saying "turn on the TV" or "turn on the AC" could trigger the speaker to broadcast the appropriate IR codes to the TV or wall-mounted AC unit. This would require Amazon and Google to build a complete universal-remote scheme into the Echo and Home, and that's no small task. Companies such as Logitech (with its Harmony line) and Universal Remote Control are dedicated to keeping their remotes compatible with everything on the market. It seems like an endless battle of discovering new IR codes, but it's one I wish Amazon and Google would tackle. I'd like to control my electric fireplace and powered window shades with my Echo without any hassle.

Amazon’s Alexa is getting a more natural-sounding voice

In addition to getting a generative AI-powered upgrade and the ability to continue conversations without repeating the wake word "Alexa," Amazon's voice assistant is going to gain a more natural-sounding voice. Today the company introduced an updated "speech-to-speech" engine that is more aware of the user's emotions and tone of voice, which allows Alexa to respond with similar emotional variation in its output.

The company demoed the new voice, which sounded less robotic and more expressive — something it noted is powered by large transformer models trained on a variety of languages and accents.

For example, if a customer asked for an update about their favorite sports team and they had won the latest game, Alexa would be able to respond with a joyful voice. If they had lost, however, Alexa would sound more empathetic.

“And we’re working on a new model — which we refer to as speech-to-speech — again powered by massive transformers. Instead of first converting a customer’s audio request into text using speech recognition, and then using an LLM to generate a text response or an action, and then text-to-speech to produce audio back — this new model will unify these tasks, creating a much richer conversational experience,” said SVP of Alexa Rohit Prasad.
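The contrast Prasad describes — a cascade of separate speech-recognition, language-model, and text-to-speech stages versus one unified model — can be sketched as follows. This is a hypothetical illustration using placeholder functions, not Amazon's actual APIs or models; the point is where information like tone and emotion gets lost in the cascaded design.

```python
def cascaded_pipeline(audio: bytes) -> bytes:
    """Classic three-stage flow: ASR -> LLM -> TTS.
    Prosody and emotion in the input audio are discarded at the ASR
    step, because only the transcribed text reaches the language model."""
    text = speech_recognition(audio)   # audio -> text (tone/emotion lost here)
    reply_text = llm_generate(text)    # text -> text
    return text_to_speech(reply_text)  # text -> audio (flat delivery)


def speech_to_speech(audio: bytes) -> bytes:
    """Unified flow: a single model maps input audio directly to output
    audio, so cues like tone and emotion can shape the spoken response."""
    return unified_model(audio)


# --- toy stand-ins so the sketch runs; real systems would use ML models ---
def speech_recognition(audio: bytes) -> str:
    return audio.decode()

def llm_generate(text: str) -> str:
    return f"reply to: {text}"

def text_to_speech(text: str) -> bytes:
    return text.encode()

def unified_model(audio: bytes) -> bytes:
    return f"expressive reply to: {audio.decode()}".encode()


if __name__ == "__main__":
    print(cascaded_pipeline(b"turn on the TV"))
    print(speech_to_speech(b"turn on the TV"))
```

The key design difference is that the unified model never flattens the request into text, which is what lets it condition its output on how something was said, not just what was said.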

Amazon said Alexa will be able to exhibit attributes like laughter, surprise, and even uh-huhs that encourage users to continue the conversation.

This is all powered by Amazon’s Large Text-to-Speech (LTTS) and Speech-to-Speech (S2S) technologies. The former enables Alexa to adapt its response using textual input, such as a user’s request or the topic being discussed, while the latter layers audio input on top of text, allowing Alexa to respond with more conversational richness, Amazon says.

Correction, 9/20/23 12:28 pm ET: The new engine is dubbed ‘speech-to-speech,’ not ‘text-to-speech.’ The article was updated to reflect this.

