Home Assistant Guide

Simple tutorials for powerful automations

Guide to ExtendedOpenAI Conversation as a Voice Assistant in Home Assistant

ExtendedOpenAI Conversation is a powerful Home Assistant integration that uses OpenAI's latest language models (like GPT-4.1) to supercharge your smart home's voice assistant. It lets you speak to your home in natural language and have it understand complex requests, hold more natural conversations, and perform advanced automations - all without being limited to a fixed set of commands.

With ExtendedOpenAI, your assistant can control devices, manage lists, set reminders, create calendar events, answer questions, fetch web information, and more. Unlike basic voice agents, it can understand context, follow up on previous questions, and intelligently decide which Home Assistant functions to call.

This guide explains how to get started, create an OpenAI account, add credit, pick the best model for your needs, configure advanced options (like tokens and tools), understand billing, and install everything through HACS. By the end, you'll have a smart, flexible voice assistant that truly understands you - and your home.

Install ExtendedOpenAI Conversation via HACS

  1. Open HACS → Integrations.
  2. Click ⋯ → Custom repositories, add:
    https://github.com/jekalmin/extended_openai_conversation as type "Integration".
  3. Search for "Extended OpenAI Conversation" and install it.
  4. Restart Home Assistant.
  5. Settings → Devices & Services → Add Integration → choose "Extended OpenAI Conversation."

Create OpenAI Account & Top‑Up Credit

  1. Sign up at the OpenAI platform (platform.openai.com).
  2. Verify your email, then go to Billing → Add payment method.
  3. Link a card and optionally add prepaid credit (e.g., $5–10) to buffer usage.
  4. Enable auto‑recharge to prevent running low on funds.
  5. Credits expire after 12 months - monitor usage and remaining balance in billing tools.

OpenAI Models & Pricing

OpenAI has many models - costs vary. See full pricing at api/pricing.

I personally use gpt‑4.1‑mini, but feel free to experiment with different models and find what works best for you and your budget.

Configure the Integration

  1. Go to Settings: In Home Assistant, open Settings → Devices & Services and look for Extended OpenAI Conversation.

    This integration lets your Home Assistant talk to advanced AI models, so you can use natural language to control your smart home.

  2. Paste Your API Key: Enter the API key you received from your OpenAI (or compatible) provider.

    This key lets Home Assistant access the AI model securely. Never share your API key with anyone you don't trust!

  3. Select a Model: Choose your preferred AI model (e.g., gpt-4.1-mini).

    Tip: Larger models are smarter but may be slower or use more API credits. "Mini" models are faster and cheaper, but may not be as advanced.

  4. Adjust Advanced Options:
    • Temperature:

      Controls how "creative" or random the AI's responses are. Lower values (e.g., 0.2) make the assistant more predictable and focused. Higher values (e.g., 0.8) make answers more varied and creative, but sometimes less direct. For voice assistants, a value between 0.2 and 0.6 is usually best.

    • Top-p:

      Short for "nucleus sampling." Like temperature, it limits how much the AI can "branch out" in its responses. Lower values make the output safer and more focused, while higher values allow more variety. If you're not sure, leave it at the default (often 1).

    • Max Tokens:

      This sets the maximum length of each response. "Tokens" are chunks of words - the higher the number, the longer the assistant can speak, but it also uses more API credits. For most uses, 300-500 is enough.

    • Max Function Calls per Conversation:

      This controls how many times the assistant can call a Home Assistant function or service during a single conversation (for example, if you ask it to turn on multiple lights or run several automations). If you set this number low, the assistant will do fewer actions before stopping and waiting for you to say something else.

  5. Save: Click the Save button to apply your settings.
  6. Set as Voice Assistant: Go to Settings → Voice Assistants and choose this agent as your conversation agent.

    This will make Extended OpenAI handle your spoken commands.

  7. Ensure STT/TTS Providers Are Set: Make sure you have a Speech-to-Text (STT) provider and a Text-to-Speech (TTS) provider set up (such as Home Assistant Cloud).

    These handle converting your spoken words into text and reading responses out loud.

Once set up, you can ask your Home Assistant almost anything - and make it even smarter by enabling or creating your own custom functions!
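Under the hood, the options from step 4 map onto fields of the chat completion request the integration sends to OpenAI on your behalf. As a rough illustration (the field names follow OpenAI's API, but the exact payload the integration builds may differ), a single request might look like:

```json
{
  "model": "gpt-4.1-mini",
  "temperature": 0.4,
  "top_p": 1,
  "max_tokens": 400,
  "messages": [
    {"role": "system", "content": "You are a Home Assistant voice assistant."},
    {"role": "user", "content": "Turn off the kitchen lights"}
  ]
}
```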

Functions in ExtendedOpenAI

A function is simply a specification describing a task your assistant can perform, including what information (parameters) it needs and what it should do. When you ask your assistant to do something, ExtendedOpenAI will check if any functions match your request, collect the details it needs, and then run the corresponding action in Home Assistant. Functions can do things like:

  • Add items to your shopping list
  • Create or look up calendar events
  • Speak a message out loud (TTS)
  • Get sensor history or check travel time
  • And much more - you can create nearly any custom action!

Types of Functions: Native and Custom

ExtendedOpenAI includes a set of native functions that cover common tasks - like controlling devices, sending notifications, or managing lists. You can use these out-of-the-box, but the real power comes from adding your own custom functions to handle tasks unique to your home, your devices, or your routines.

How a Function Is Structured

A function has two main parts:

  • spec: This section describes what the function does and what information it needs (parameters). It includes a name, a description, and a detailed parameters section. Each parameter describes a single piece of info the assistant needs to collect from you.
  • function: This section tells Home Assistant what to do, using YAML - usually a script or automation. Here, you call Home Assistant services, and you can use any parameters you defined in the spec above.

By editing or creating new functions, you can make your voice assistant as capable and personal as you like.

Example: Calendar Event Creation Function

Let's walk through an example function that lets your assistant add a new event to your calendar by voice:


- spec:
    name: create_event
    description: Adds a new calendar event.
    parameters:
      type: object
      properties:
        summary:
          type: string
          description: Defines the short summary or subject for the event.
        description:
          type: string
          description: A more complete description of the event than the one provided by the summary.
        start_date_time:
          type: string
          description: The date and time the event should start.
        end_date_time:
          type: string
          description: The date and time the event should end.
        location:
          type: string
          description: The location
      required:
      - summary
  function:
    type: script
    sequence:
      - service: calendar.create_event
        data:
          summary: "{{summary}}"
          description: "{{description}}"
          start_date_time: "{{start_date_time}}"
          end_date_time: "{{end_date_time}}"
          location: "{{location}}"
        target:
          entity_id: calendar.household_calendar

How This Function Works

  • name: create_event - This is the name the assistant looks for when deciding which function to call.
  • description: A brief explanation so users know what this function does.
  • parameters: Lists all the details needed to create an event: a summary, description, start/end time, and location. The summary is marked as required, but you could add more required fields if needed.
  • function: This part actually does the work! Here, it calls the Home Assistant service calendar.create_event, filling in the details using the info you (or your assistant) provided.

This example shows how flexible functions can be: you can create new actions for any scenario by adjusting the spec and the sequence.

You can build your own functions by copying and editing examples like the one above. Just change the name, description, list of parameters, and function sequence to do what you want. If you want your assistant to turn on a group of lights, play music, set reminders, or control a custom device, you just need to describe the task and map out the data it needs. Try experimenting - if you can create a Home Assistant script, you can turn it into a voice-controllable function!

Tip: Most built-in (native) services are already supported, but for anything unique to your home or setup, a custom function is often the simplest solution.
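For instance, here is a minimal sketch of a custom function that turns on a group of lights. The entity ID light.living_room_lights is an assumption - substitute a light group or area from your own setup:

```yaml
- spec:
    name: turn_on_living_room_lights
    description: Turns on all the lights in the living room.
    parameters:
      type: object
      properties:
        brightness:
          type: integer
          description: Optional brightness percentage (1-100).
      required: []
  function:
    type: script
    sequence:
      - service: light.turn_on
        data:
          brightness_pct: "{{ brightness | default(100) }}"
        target:
          entity_id: light.living_room_lights
```

Say "turn on the living room lights at 50 percent" and the assistant should match this spec, fill in the brightness parameter, and call the service.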

Starting with Voice Commands

Use voice input to say:

  • "Add milk to my shopping list"
  • "Schedule a meeting for tomorrow at 3pm"
  • "In 15 minutes, turn off the kettle"

The assistant will parse your intent, call the appropriate function, and perform the action.

Tips & Best Practices

  • Enable only needed functions for privacy/security.
  • Monitor tool usage - especially web and file search costs.
  • Adjust temperature and token limits per scenario.
  • Be mindful of context window to control cost.

Summary

  1. Install via HACS (correct GitHub URL)
  2. Create OpenAI account, top up credit
  3. Choose your model (we recommend gpt‑4.1‑mini)
  4. Set advanced options (temp, tokens, tool limits)
  5. Define tool specs
  6. Configure integration and voice assistant settings
  7. Use voice to control and query your Home Assistant

Using GPT-5 and the "max_completion_tokens" Parameter

As of 2025, OpenAI's GPT-5 family includes several models such as gpt-5, gpt-5-mini, and gpt-5-nano. The smallest of these, gpt-5-nano, introduces a change to how token limits are defined - and this can cause an error if your ExtendedOpenAI integration still uses the old parameter name.

Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}

This error appears because gpt-5-nano no longer accepts the older max_tokens parameter. Instead, it uses max_completion_tokens, which defines the maximum number of tokens the model can generate in its response. If your integration hasn't been updated for this new parameter, the request will fail with the above error.

How to Fix or Work Around It

  • Use gpt-5-mini or gpt-5: These models remain compatible with max_tokens and work normally. They're slightly slower and more expensive than gpt-5-nano, but they avoid this specific issue.
  • Check for Updated Forks: "Forks" are community-maintained copies of a GitHub project. Developers can edit and improve them independently of the main repository. Visit the ExtendedOpenAI Conversation GitHub page and click the Forks link to view available versions - some may already include support for max_completion_tokens.
  • Wait for an Update: The main version of the integration may soon fix this automatically. If it's already resolved by the time you're reading this, please let me know so I can update this article!
  • For Advanced Users: If you're comfortable editing Python, you can manually modify the integration's code to replace max_tokens with max_completion_tokens. This is an advanced change and falls outside the scope of this beginner-friendly guide.

In short, if you encounter this specific max_tokens error, the simplest workaround is to switch to gpt-5-mini or gpt-5 until your integration (or a forked version) adds support for max_completion_tokens.
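For the curious, the fix those forks make boils down to renaming one field in the outgoing request. The sketch below is illustrative only - the helper name and kwargs-dict shape are assumptions, not the integration's actual code:

```python
# Models that reject the older parameter name, per the error above.
# Extend this set as OpenAI updates its model lineup.
MODELS_REQUIRING_NEW_NAME = {"gpt-5-nano"}

def adapt_token_param(request_kwargs: dict, model: str) -> dict:
    """Return a copy of the request kwargs, renaming 'max_tokens' to
    'max_completion_tokens' for models that require the newer name."""
    kwargs = dict(request_kwargs)
    if model in MODELS_REQUIRING_NEW_NAME and "max_tokens" in kwargs:
        kwargs["max_completion_tokens"] = kwargs.pop("max_tokens")
    return kwargs

# Older models keep max_tokens; gpt-5-nano gets the new parameter name.
print(adapt_token_param({"max_tokens": 400}, "gpt-4.1-mini"))
print(adapt_token_param({"max_tokens": 400}, "gpt-5-nano"))
```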

Some Useful Functions

These are some functions I use in my ExtendedOpenAI integration that you may find useful.
Check Travel Time With Google Maps API

Note: You will need a Google Maps API key for this to work. To get a Google Maps API key, sign in to the Google Cloud Console, create or select a project, and enable the desired Maps APIs (such as Directions, Places, or Distance Matrix). Then, go to "APIs & Services → Credentials" and create an API key - you may want to restrict its usage for security.

- spec:
    name: check_travel_time
    description: Check travel time between two locations using Google Maps Distance Matrix API.
    parameters:
      type: object
      properties:
        origins:
          type: string
          description: The starting location.
        destinations:
          type: string
          description: The destination location.
      required:
      - origins
      - destinations
  function:
    type: rest
    resource_template: "https://maps.googleapis.com/maps/api/distancematrix/json?origins={{ origins }}&destinations={{ destinations }}&key=AIzaSyDgqzUfmGHgRK915RD0zpk7w-PfbAM9lGg"
    value_template: >-
      {% if value_json.rows and value_json.rows[0].elements and value_json.rows[0].elements[0].duration %}
      {{ value_json.rows[0].elements[0].duration.text }}
      {% else %}
      No travel time found.
      {% endif %}
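To make the value_template above concrete, here is the same extraction logic sketched in Python, run against a payload shaped like Google's documented Distance Matrix response (the sample values are illustrative):

```python
# Sample response in the shape Google documents for the Distance Matrix API.
sample = {
    "rows": [
        {"elements": [{"status": "OK",
                       "duration": {"text": "23 mins", "value": 1380}}]}
    ]
}

def travel_time(payload: dict) -> str:
    """Mirror the value_template: return duration.text if present."""
    rows = payload.get("rows") or []
    elements = (rows[0].get("elements") or []) if rows else []
    if elements and "duration" in elements[0]:
        return elements[0]["duration"]["text"]
    return "No travel time found."

print(travel_time(sample))  # → 23 mins
print(travel_time({}))      # → No travel time found.
```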
Search Plex Library

Note: Replace the IP address and API key below with your own. To use the Plex API, you'll need your Plex server's local IP address (found in Plex settings under Network, or by checking your router's device list). For the API key (known as a Plex "token"), sign in to Plex in your browser, open the Web App, then view the network requests in your browser's developer tools while refreshing the page; look for a URL containing X-Plex-Token= - that's your key. Make sure your Plex server allows connections from your Home Assistant device, and always keep your token secure.

- spec:
    name: search_plex
    description: Use this function to search for media in Plex and return JSON.
    parameters:
      type: object
      properties:
        query:
          type: string
          description: The search query to look up media on Plex.
      required:
        - query
  function:
    type: rest
    resource_template: "http://http://127.0.0.1:32400/search?query={{query}}&X-Plex-Token=YOUR_PLEX_TOKEN_HERE"
    headers:
      Accept: "application/json"
    value_template: >-
      {{ value }}
Send Notification to Mobile Device

Note: Replace the name with your own, and "mobile_app_pixel_7" with your own phone's ID.

- spec:
    name: send_message_to_conors_phone
    description: Use this function to send a message to Conor's phone as a notification.
    parameters:
      type: object
      properties:
        message:
          type: string
          description: message you want to send
      required:
      - message
  function:
    type: script
    sequence:
    - service: notify.mobile_app_pixel_7
      data:
        message: "{{ message }}"
Set Alarm on Android Phone

Note: You will need to create a script to set the Android alarm - you can find one here.

- spec:
    name: set_android_alarm
    description: Set an alarm on the Android phone using the HA mobile app. Time format is "HH:MM:SS".
    parameters:
      type: object
      properties:
        alarm_time:
          type: string
          description: The time to set the alarm (24-hour, e.g., "07:30:00").
      required:
        - alarm_time
  function:
    type: script
    sequence:
      - service: script.set_android_alarm
        data:
          alarm_time: "{{ alarm_time }}"