Configure AI providers

Connect optional AI providers for summaries, cleanup, action items, and advanced routing.

Before you connect a provider

Decide what should stay local and what can use the cloud.

Echoic can transcribe locally by default, then use the AI provider you choose for summaries, cleanup, action items, templates, and transcript questions. Provider routing controls where transcript text is sent for those AI features.

Use local providers for the most private work

Ollama and Apple Intelligence keep AI work on-device when available. They are the right default for sensitive transcripts when local quality is good enough.

Use cloud providers for stronger or faster models

OpenAI, Anthropic, Groq, xAI, Gemini, DeepSeek, and OpenRouter can improve summary quality, latency, context length, or model choice, but transcript text is sent to the provider you select.

Route by workflow

A practical setup is local cleanup for dictation, a stronger cloud model for important meeting summaries, and a cheaper or faster model for titles, tags, and quick follow-ups.

Setup checklist

The same basic flow works for most providers.

  1. Confirm whether the provider is local, cloud, router, or custom.
  2. Create the account or local runtime before opening Echoic provider settings.
  3. Copy the API key once and store it in Echoic. Never paste keys into transcripts, prompts, or shared notes.
  4. Choose the exact model name shown by the provider.
  5. Test with a short transcript before routing long meetings.
  6. Set spend limits, usage alerts, or key restrictions in the provider console when available.
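
Most cloud providers in this guide (OpenAI, Groq, xAI, DeepSeek, OpenRouter, and custom endpoints) accept the same OpenAI-style chat request, so the checklist above can be exercised by hand before touching Echoic. The sketch below builds such a test request; the base URL, key, and model name are placeholders, and Echoic does all of this internally.

```python
# Manual smoke test for an OpenAI-compatible provider (illustrative sketch;
# the base URL, API key, and model name below are placeholders).
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, text: str):
    """Build a short chat-completions request you can send to verify access."""
    payload = {
        "model": model,  # must match the provider's exact model name
        "messages": [{"role": "user", "content": f"Summarize briefly: {text}"}],
    }
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
```

Send the request with urllib.request.urlopen(req) once a real key is in place: a valid key and model return a JSON completion, while a bad key or unknown model returns an error body that usually names the problem.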

Provider-by-provider setup

Connect each AI provider in Echoic.

Ollama

Local

Private summaries and cleanup that run on your Mac after you install a local model.

Requirements

  • Ollama installed and running on your Mac.
  • At least one local chat model pulled with Ollama.
  • Enough memory for the model you choose.

Setup steps

  1. Install Ollama, then open Terminal and run a model pull command such as ollama pull llama3.2 or another model that fits your Mac.
  2. Start Ollama if it is not already running. The default local server is http://localhost:11434.
  3. In Echoic, choose Ollama as the provider and select or enter the local model name exactly as Ollama reports it.
  4. Run a short summary test before using it on a long meeting, because local model quality and speed depend heavily on model size and Mac memory.

Model tip

Small models are faster for cleanup. Larger models are usually better for meeting summaries and action items.

Privacy note

Prompts stay on your Mac when you use a local Ollama model. No cloud API key is required.

Troubleshooting

If Echoic cannot connect, confirm Ollama is running and that the model name matches ollama list.
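
A mismatched model name is the most common failure, and you can check it in code as well as with ollama list: the local server exposes the same model list at /api/tags. This sketch assumes the default server address; the model name in the example is a placeholder.

```python
# Check that the Ollama server is up and that a model name matches what it
# reports (sketch; assumes the default server at http://localhost:11434).
import json
import urllib.request

def model_names(tags_response: dict) -> list[str]:
    """Extract model names from Ollama's /api/tags response."""
    return [m["name"] for m in tags_response.get("models", [])]

def ollama_has_model(name: str, base_url: str = "http://localhost:11434") -> bool:
    """True if the running Ollama server reports a model with this exact name."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return name in model_names(json.load(resp))
```

If ollama_has_model raises a connection error, the server is not running; if it returns False, the name entered in Echoic does not match what Ollama serves.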

OpenAI

Cloud

High-quality summaries, action items, rewriting, and general-purpose assistant workflows.

Requirements

  • An OpenAI API account.
  • An API key from the OpenAI platform.
  • Billing and usage limits configured in your OpenAI account if needed.

Setup steps

  1. Create or open an OpenAI platform account.
  2. Create an API key from the API keys page.
  3. In Echoic, choose OpenAI, paste the API key, and select the model you want Echoic to use.
  4. Use a short transcript to confirm the key, model access, and streaming response behavior.

Model tip

Use stronger models for long meeting reasoning and lower-cost models for quick dictation cleanup.

Privacy note

Transcript text sent to OpenAI leaves your Mac. Keep local transcription enabled if you only want AI notes, not cloud speech recognition.

Troubleshooting

If requests fail, check that the key is active, the account has billing enabled, and the selected model is available to your account.
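
One way to check the key and model access in a single call is to list the models the key can see, which the OpenAI API exposes at GET /v1/models. The sketch below is illustrative; the key is a placeholder, and an inactive key or missing billing surfaces as an HTTP error on this request too.

```python
# Verify an OpenAI key and model access by listing available models
# (GET https://api.openai.com/v1/models; the key is a placeholder).
import json
import urllib.request

def model_available(models_response: dict, model: str) -> bool:
    """True if `model` appears in an OpenAI-style model list response."""
    return any(m.get("id") == model for m in models_response.get("data", []))

def check_openai_model(api_key: str, model: str) -> bool:
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:  # raises on a bad/inactive key
        return model_available(json.load(resp), model)
```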

Codex Login

External login

Using your existing local Codex CLI authentication without pasting an OpenAI API key into Echoic.

Requirements

  • Codex CLI installed on the same Mac.
  • An active Codex login in Terminal.
  • A model available through that Codex account.

Setup steps

  1. Open Terminal and run codex login.
  2. Complete the browser or device-code authentication flow.
  3. Run codex login status to confirm credentials are present.
  4. In Echoic, choose Codex Login and select the model Echoic should request through the local Codex authentication path.

Model tip

Choose Codex Login when you want Echoic to follow the same account state you manage with the Codex CLI.

Privacy note

Echoic uses the local Codex login path, but generated summaries still use the remote model behind that account.

Troubleshooting

If Echoic reports that Codex is unavailable, run codex login status in Terminal and sign in again if needed.

Anthropic

Cloud

Careful long-context summarization, meeting synthesis, and polished follow-up writing.

Requirements

  • An Anthropic Console account.
  • An Anthropic API key.
  • Access to the Claude model you plan to use.

Setup steps

  1. Create an API key in the Anthropic Console.
  2. In Echoic, choose Anthropic and paste the API key.
  3. Select a Claude model suited to the length and complexity of your transcripts.
  4. Run a short transcript through summary and action-item generation to confirm access.
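
If you want to verify the key outside Echoic, note that Anthropic's API shape differs from the OpenAI-compatible providers: the key goes in an x-api-key header alongside an anthropic-version header, the route is /v1/messages, and max_tokens is required. The sketch below builds such a request; the key and model name are placeholders.

```python
# Short Anthropic Messages API test request (sketch; key and model are
# placeholders). Unlike OpenAI-compatible APIs, Anthropic uses the
# x-api-key and anthropic-version headers and the /v1/messages route.
import json
import urllib.request

def build_claude_request(api_key: str, model: str, text: str):
    payload = {
        "model": model,
        "max_tokens": 512,  # required by the Messages API
        "messages": [{"role": "user", "content": f"Summarize briefly: {text}"}],
    }
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
        },
    )
```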

Model tip

Prefer larger-context Claude models for long meetings and cheaper models for cleanup or title generation.

Privacy note

Transcript text sent to Anthropic is processed by Anthropic according to your account and API settings.

Troubleshooting

If a long transcript fails, try a larger-context model or reduce the amount of transcript text included in the request.
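
Reducing the transcript need not be elaborate: keeping the head and the tail within a budget preserves the opening context and the closing decisions, where most action items live. The sketch below uses a character budget as a rough stand-in for tokens (very roughly 4 characters per token for English text); it is illustrative, not what Echoic does internally.

```python
# Trim an over-long transcript to a character budget before requesting a
# summary, keeping the start and end (illustrative; characters are a rough
# proxy for tokens).
def trim_transcript(text: str, max_chars: int = 100_000) -> str:
    """Keep the start and end of an over-long transcript within max_chars."""
    if len(text) <= max_chars:
        return text
    marker = "\n[... transcript trimmed ...]\n"
    keep = max_chars - len(marker)
    head = keep // 2
    tail = keep - head
    return text[:head] + marker + text[-tail:]
```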

Groq

Cloud

Fast responses for lightweight summaries, cleanup, and interactive transcript questions.

Requirements

  • A GroqCloud account.
  • A Groq API key.
  • A selected Groq-hosted model.

Setup steps

  1. Create an API key in GroqCloud.
  2. In Echoic, choose Groq and paste the API key.
  3. Select a supported Groq model.
  4. Use a short transcript to verify that the selected model supports the Echoic feature you are testing.

Model tip

Groq is usually a strong choice when latency matters more than maximum reasoning depth.

Privacy note

Transcript text is sent to Groq for the AI features you route to this provider.

Troubleshooting

If you see model errors, check Groq model availability and rate limits for your account.

xAI / Grok

Cloud

Grok model access for summaries, reasoning, and alternative model routing.

Requirements

  • An xAI Console account.
  • An xAI API key.
  • Access to the Grok model you want to use.

Setup steps

  1. Create an API key from the API Keys page in the xAI Console.
  2. In Echoic, choose xAI / Grok and paste the API key.
  3. Select a Grok model available to your team or account.
  4. Run a test summary and confirm the output before routing important workflows to Grok.

Model tip

Use Grok for workflows where you specifically prefer xAI models or want to compare outputs against another provider.

Privacy note

Transcript text routed to xAI is processed by xAI. Use local providers for material that should not leave your Mac.

Troubleshooting

If authentication fails, confirm you copied an inference API key rather than a management key and that the model is available in your region.

Apple Intelligence

Local

On-device Apple AI features when supported by your Mac and macOS version.

Requirements

  • macOS 26 or newer.
  • A Mac that supports Apple Intelligence.
  • Apple Intelligence enabled in System Settings.

Setup steps

  1. Update your Mac to macOS 26 or newer if your hardware supports it.
  2. Open System Settings and enable Apple Intelligence.
  3. In Echoic, choose Apple Intelligence as the AI provider.
  4. Run a short cleanup or summary task to confirm the system provider is available.

Model tip

Use Apple Intelligence when you want the simplest local-first option and do not need to choose a third-party model.

Privacy note

Apple Intelligence is the most integrated local option, but exact processing behavior depends on Apple system settings and feature availability.

Troubleshooting

If the provider is unavailable, confirm macOS version, hardware support, regional availability, and Apple Intelligence settings.

OpenRouter

Router

Trying many hosted models through one API key and one OpenAI-compatible endpoint.

Requirements

  • An OpenRouter account.
  • An OpenRouter API key.
  • Credits or provider access for the model you choose.

Setup steps

  1. Create an API key in OpenRouter.
  2. In Echoic, choose OpenRouter and paste the API key.
  3. Select or enter the model identifier exactly as OpenRouter lists it.
  4. Use provider routing intentionally, because different models may have different privacy, context, and pricing behavior.

Model tip

OpenRouter is useful for comparing models without adding separate keys for every upstream provider.

Privacy note

Requests go through OpenRouter and may be routed to an upstream provider. Review the model and provider policy before using sensitive transcripts.

Troubleshooting

If a model fails, check whether the model requires credits, has provider outages, or uses a different model identifier.
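
The most common identifier mistake is using a bare model name: OpenRouter IDs include the upstream vendor as a prefix, in vendor/model form (for example, "openai/gpt-4o-mini" style IDs; the examples here are illustrative). A trivial format check catches this before the request fails:

```python
# OpenRouter model IDs take "vendor/model" form (e.g. "openai/gpt-4o-mini");
# a bare name that works on the vendor's own API will not resolve here.
# Illustrative check only.
def looks_like_openrouter_id(model: str) -> bool:
    vendor, sep, name = model.partition("/")
    return bool(vendor) and sep == "/" and bool(name)
```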

Google Gemini

Cloud

Google Gemini model access for summaries, transcript Q&A, and multimodal workflows as Echoic support expands.

Requirements

  • A Google AI Studio or Google Cloud project with Gemini API access.
  • A Gemini API key.
  • Billing, quota, and API restrictions reviewed for your account.

Setup steps

  1. Create or select a project in Google AI Studio.
  2. Create a Gemini API key and restrict it where your Google setup allows.
  3. In Echoic, choose Google Gemini and paste the key.
  4. Select a Gemini model and test with a short transcript.

Model tip

Use Gemini when you prefer Google models or want to compare summary style and action-item extraction against another provider.

Privacy note

Transcript text routed to Gemini is sent to Google. Keep local routing for content that should stay on-device.

Troubleshooting

If the key fails, confirm the Gemini API is enabled for the project and that key restrictions or billing settings are not blocking requests.
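
A quick way to test the key outside Echoic is the Gemini model-list endpoint: a key blocked by restrictions, or a project with the API disabled, returns an error on this call as well. The sketch below builds and sends that request; the key is a placeholder.

```python
# Test a Gemini API key outside Echoic by listing models
# (GET https://generativelanguage.googleapis.com/v1beta/models; the key
# is passed as a query parameter and is a placeholder here).
import json
import urllib.parse
import urllib.request

def gemini_models_url(api_key: str) -> str:
    """Build the model-list URL; restricted or disabled keys fail here too."""
    return (
        "https://generativelanguage.googleapis.com/v1beta/models?"
        + urllib.parse.urlencode({"key": api_key})
    )

def list_gemini_models(api_key: str) -> list[str]:
    with urllib.request.urlopen(gemini_models_url(api_key)) as resp:
        return [m["name"] for m in json.load(resp).get("models", [])]
```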

DeepSeek

Cloud

Cost-conscious reasoning, coding-adjacent summaries, and OpenAI-compatible routing.

Requirements

  • A DeepSeek API account.
  • A DeepSeek API key.
  • A selected DeepSeek model name.

Setup steps

  1. Create an API key in the DeepSeek platform.
  2. In Echoic, choose DeepSeek and paste the key.
  3. Select or enter a supported DeepSeek model.
  4. Run a short test and compare output quality before using it for important summaries.

Model tip

Use reasoning-oriented DeepSeek models when you want more deliberate synthesis, and faster models for cleanup.

Privacy note

Transcript text routed to DeepSeek is sent to DeepSeek. Do not use it for transcripts that must remain local.

Troubleshooting

If requests fail, verify the model identifier and account access. DeepSeek also supports OpenAI-compatible configuration with https://api.deepseek.com.
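
Because the endpoint is OpenAI-compatible, the whole setup reduces to a base URL, a key, and a model name. A configuration sketch (the split between a fast model for cleanup and a reasoning model for summaries mirrors the model tip above; verify the model names against DeepSeek's current documentation):

```python
# DeepSeek via its OpenAI-compatible endpoint (sketch; "deepseek-chat" and
# "deepseek-reasoner" are the commonly documented names -- verify yours).
DEEPSEEK = {
    "base_url": "https://api.deepseek.com",
    "chat_completions": "https://api.deepseek.com/chat/completions",
    "models": {
        "fast_cleanup": "deepseek-chat",
        "deliberate_summary": "deepseek-reasoner",
    },
}
```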

Custom OpenAI-compatible endpoints

Advanced

Connecting providers such as Mistral, Together AI, Fireworks, self-hosted vLLM, LM Studio, or an internal gateway.

Requirements

  • An OpenAI-compatible base URL.
  • An API key or local placeholder key if the server requires one.
  • The exact model identifier exposed by that endpoint.

Setup steps

  1. Choose Custom OpenAI-compatible in Echoic.
  2. Enter the provider base URL, such as https://api.together.xyz/v1, https://api.fireworks.ai/inference/v1, or http://localhost:8000/v1 for a local vLLM server.
  3. Enter the API key. For local servers that ignore authentication, use the placeholder value the server expects.
  4. Enter the model name exactly as the endpoint exposes it, then run a short transcript test.

Model tip

Custom endpoints work best when the provider closely follows OpenAI Chat Completions or Responses API semantics, including streaming if you enable streaming features.

Privacy note

Privacy depends entirely on the endpoint. A local vLLM server can stay on your machine or LAN; a hosted provider receives transcript text.

Troubleshooting

If the connection fails, test the base URL outside Echoic, confirm whether the URL should include /v1, and verify that the selected model supports chat-style generation.