---
layout: post
title: "Voice Chapter 11: multilingual assistants are here"
description: "Our assistant can now control more things and in multiple languages at the same time."
date: 2025-10-22 00:00:01
date_formatted: "October 22, 2025"
author: Mike Hansen
comments: true
categories: Assist
og_image: /images/blog/2025-10-voice-chapter-11/art.webp
---
Welcome to Voice Chapter 11, our [long-running series](/blog/categories/assist/) where we share all the key developments in Open Voice. In this chapter, we will tell you how our assistant can now control more things in the home, in multiple languages at the same time, all while not talking your ear off. What's more, our list of supported languages has grown again with several languages that big tech's voice assistants won't support. Join us for a deeper look at this voice chapter in our [livestream](https://www.youtube.com/watch?v=sIkguv0NEQI) on Wednesday, October 29. It's been a couple of months, we've been building up our voice, and now we have a lot to say, so let's get to it!
## Multilingual assistants
Our original goal for the [Year of Voice back in 2023](/blog/2022/12/20/year-of-voice/) was to "let users control Home Assistant in their own language". We've come a long way towards that goal, and have broadened our language support considerably. We've also provided options that allow users to customize voice assistant pipelines with the services that best support their language, whether run locally or in the cloud of their choice. But what if you speak two languages within your home?
For some time, users have been able to create [Assist](/voice_control/) voice assistant pipelines for different languages in Home Assistant, but interacting with the different pipelines has either required multiple voice satellite devices (one per language) or some kind of automation [trigger to switch languages](https://www.youtube.com/live/ZgoaoTpIhm8?t=3902).
Since even the tiniest voice satellite hardware we support can now run [multiple wake words](/blog/2024/06/26/voice-chapter-7/#3x-wake-words-and-2x-accuracy), we've added support in 2025.10 for configuring **up to two wake words** and voice assistant pipelines on each Assist satellite! This makes it straightforward to support dual-language households by assigning different wake words to different languages. For example, "Okay Nabu" could run an English voice assistant pipeline while "Hey Jarvis" is used for French.
Multiple wake words and pipelines can be used for other purposes as well. Want to keep your local and cloud-based voice assistants separate? Easy! Assign a wake word like "Okay Nabu" to a fully local pipeline using our own [Speech-to-Phrase](/blog/2025/02/13/voice-chapter-9-speech-to-phrase/) and [Piper](https://github.com/home-assistant/addons/blob/master/piper/DOCS.md). This pipeline would be limited to basic voice commands, but would not require anything to run outside of your Home Assistant server. Alongside this, "Hey Jarvis" could be assigned to a different pipeline that uses external services like Home Assistant Cloud and an LLM to answer questions or perform complex actions.
We'd love to hear feedback on how you plan to use multiple wake words and voice assistants in your home!
## Voice without AI
The whole world is engulfed in hype about AI and adding it to all the things, and [we're not exactly quiet about the cool stuff we're doing with AI.](/blog/2025/09/11/ai-in-home-assistant/) While powering your voice assistants with AI/LLMs makes them much more flexible and powerful, it comes at a cost: paying to use cloud-based services like OpenAI and Google, or pricey hardware and energy to run local models via systems like Ollama. We started building our voice assistant before AI was a thing, so it was designed to work without it. We continue to make great progress towards delivering a solid voice experience to users who want to keep their home AI-free; keeping [AI opt-in only and not required](https://newsletter.openhomefoundation.org/ai-is-optional-privacy-isnt/) is a guideline we follow.
[Assist](/voice_control/), our built-in voice assistant, can do a lot of cool things without the need for AI! This includes [a ton of voice commands in dozens of languages](/voice_control/builtin_sentences/) for:
* Turning lights and other devices on/off
* Opening/closing and locking/unlocking doors, windows, shades, etc.
* Adjusting the brightness and color of lights
* Running scripts and activating scenes
* Controlling media players and adjusting their volume
* Playing music on supported media players via [Music Assistant](/integrations/music_assistant/)
* Starting/stopping/pausing multiple timers, optionally with names
* Adding and completing items on to-do lists
* Delaying a command for later ("turn off lights in 5 minutes")
* …and more!
Want to include your own voice commands? You can quickly add [custom sentences](/voice_control/custom_sentences/) to an automation, allowing you to take any action and tailor the response.
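As a rough sketch of what that looks like (the trigger sentences, scene name, and response text below are made-up examples, and the exact schema may vary between Home Assistant versions), a custom sentence can be wired up as an automation with a conversation trigger:

```yaml
# Hypothetical automation: a custom "movie time" voice command.
# Entity and scene names are placeholders for illustration only.
automation:
  - alias: "Movie time voice command"
    triggers:
      # Fires when Assist hears one of these sentences
      - trigger: conversation
        command:
          - "movie time"
          - "let's watch a movie"
    actions:
      # Take any action you like...
      - action: scene.turn_on
        target:
          entity_id: scene.movie_night
      # ...and tailor what Assist says back
      - set_conversation_response: "Enjoy the show!"
```

The `set_conversation_response` action at the end is what lets you customize the spoken reply after the automation runs.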
The easiest way to get started is with [Home Assistant Voice Preview Edition](/voice-pe/), our small and easy-to-set-up voice assistant hardware. This, combined with a [Home Assistant Cloud subscription](/cloud/), allows any Home Assistant system to quickly handle voice commands, as our privacy-focused cloud processes the speech-to-text (turning your voice into text for Home Assistant) and text-to-speech (turning Home Assistant's response back into voice). This is all without the use of LLMs, and it supports the development of Home Assistant.
For users wanting to keep all voice processing local, we offer add-ons for both speech-to-text and text-to-speech:
* [Whisper](https://github.com/home-assistant/addons/blob/master/whisper/DOCS.md) is a powerful speech-to-text system that comes in [different sizes with varying hardware requirements](https://github.com/openai/whisper#available-models-and-languages)
* [Speech-to-Phrase](/blog/2025/02/13/voice-chapter-9-speech-to-phrase/) is our speech-to-text system that trades flexibility for speed
* [Piper](https://github.com/home-assistant/addons/blob/master/piper/DOCS.md) is our fast neural text-to-speech system with [broad language support](https://rhasspy.github.io/piper-samples/)
All of this together shows just how much can be done without needing to include AI, even though AI can do [some pretty amazing things](https://youtu.be/mLtFUG4YG1A). And we're continuing to close the gap with the features highlighted in this blog post, including multilingual assistants, improved sentence matching, and the ability to ask questions from automations.
### More intents
Intents are what connect a voice command to the right actions in Home Assistant to get something done. While the end result is often simple, such as turning on a light, intents are designed as a "do what I mean" layer above the level of basic actions. In the previous section, we listed the sorts of voice commands that intents enable, from turning on lights to adding items to your to-do list. Over the last three years, we've been progressively adding new and more complex intents.
Recently, we've added three new intents to make Assist even better. To control media players, you can now set the **relative** volume with voice commands like "turn up the volume" or "decrease TV volume by 25%". This adds to the existing volume intent, which allows you to set the absolute volume level, like "set TV volume to 50%".
Next, it's now possible to set the speed of a fan by percentage. For example, "set desk fan speed to 50%", or even "set fans to 50%" to target all fans in the current area. Make sure you [expose](/voice_control/voice_remote_expose_devices/) the fans you want Assist to be able to control.
Lastly, you can now tell the kids to "get off your lawn" because your robot is going to mow it! Making use of the [lawn_mower](/integrations/lawn_mower) integration, your voice assistant can now understand commands like "mow the lawn" and "stop the mower". Paired with the existing smart vacuum commands, you may never need to lift a finger again to keep things clean and tidy.
### Ask question
*Picture this:* you come home from work and, as you enter the living room, your voice assistant asks what type of music you'd like to hear while preparing dinner. As the music starts to play, it mentions you left the garage door open and wants to know if you'd like it closed. After dinner, as you're hanging out on the couch, your voice assistant informs you that the temperature outside is lower than your AC setting and asks for confirmation to turn it off and open the windows.
*Surely you'd need a powerful LLM to perform such wizardry, right?* With the [Ask Question action](/integrations/assist_satellite/#action-assist_satelliteask_question), this can all be done locally using Assist and a few automations!
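As a hedged sketch of how the garage door scenario might look (entity IDs, wording, and the exact `ask_question` data schema below are illustrative assumptions; consult the `assist_satellite` documentation for your Home Assistant version), an automation could notice the open door and ask before acting:

```yaml
# Hypothetical automation: ask before closing an open garage door.
# All entity IDs below are placeholders for illustration only.
automation:
  - alias: "Offer to close the garage door"
    triggers:
      # Door has been open for 10 minutes
      - trigger: state
        entity_id: binary_sensor.garage_door
        to: "on"
        for: "00:10:00"
    actions:
      # Ask the question on a specific satellite and
      # capture the matched answer in a variable
      - action: assist_satellite.ask_question
        data:
          entity_id: assist_satellite.living_room
          question: "The garage door is open. Should I close it?"
          answers:
            - id: "yes"
              sentences:
                - "yes"
                - "sure"
            - id: "no"
              sentences:
                - "no"
                - "leave it"
        response_variable: reply
      # Only close the door if the answer matched "yes"
      - if:
          - condition: template
            value_template: "{{ reply.id == 'yes' }}"
        then:
          - action: cover.close_cover
            target:
              entity_id: cover.garage_door
```

The key idea is that the satellite listens for one of the predefined answers and hands the matched answer's `id` back to the automation, so the follow-up logic stays simple and fully local.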