Ollama
Private summaries and cleanup that run on your Mac after you install a local model.
Requirements
- Ollama installed and running on your Mac.
- At least one local chat model pulled with Ollama.
- Enough memory for the model you choose.
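You can confirm each requirement from Terminal. A quick sanity check, assuming a default Ollama install on the default port:

```sh
# Confirm the Ollama CLI is installed
ollama --version

# List the chat models you have pulled locally
ollama list

# Confirm the local server is up; it replies "Ollama is running"
curl http://localhost:11434
```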
Setup steps
1. Install Ollama, then open Terminal and pull a model, for example `ollama pull llama3.2`, or another model that fits your Mac.
2. Start Ollama if it is not already running. The default local server is http://localhost:11434.
3. In Echoic, choose Ollama as the provider and select or enter the local model name exactly as Ollama reports it.
4. Run a short summary test before using it on a long meeting, because local model quality and speed depend heavily on model size and Mac memory. A quick test is sketched below.
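Before running a real meeting through Echoic, you can confirm the server answers requests directly. A minimal sketch against Ollama's generate endpoint, assuming you pulled `llama3.2`:

```sh
# One-off completion; "stream": false returns a single JSON reply
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Summarize in one sentence: the team agreed to ship Friday and Dana owns QA.",
  "stream": false
}'
```

If this returns a JSON object with a "response" field, Echoic should be able to reach the same server.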
Model tip
Small models are faster and usually good enough for cleanup. Larger models are usually better for meeting summaries and action items.
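One way to follow this tip is to keep a small and a larger model side by side. The tags below are examples of sizes available through Ollama, not Echoic recommendations:

```sh
# Small and fast: suited to transcript cleanup
ollama pull llama3.2:1b

# Larger: usually stronger at meeting summaries and action items
ollama pull llama3.1:8b
```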
Privacy note
Prompts stay on your Mac when you use a local Ollama model. No cloud API key is required.
Troubleshooting
If Echoic cannot connect, confirm that Ollama is running and that the model name matches the output of `ollama list` exactly.
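A short diagnostic pass from Terminal, assuming the default port:

```sh
# Is the server answering? Expect "Ollama is running".
curl http://localhost:11434

# If not, start the server (or launch the Ollama app)
ollama serve

# Copy the model name exactly as shown in the NAME column
ollama list
```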