Ollama Setup Guide
Use Overlay with Ollama to run powerful AI models locally
Visit the official Ollama website and follow the installation instructions for your operating system:
https://ollama.ai/download
Use the following command to download a model. For example, to download Llama 3:
ollama pull llama3
Other popular models include: mistral, gemma, mixtral, phi, orca-mini, and many more.
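To download any of these, substitute the model name in the pull command. Many models also offer size-specific tags; gemma:2b is shown below as an example, but check the model library for the tags that actually exist:
ollama pull mistral
ollama pull gemma:2b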
View the full model library.
IMPORTANT: You must run Ollama with the OLLAMA_ORIGINS environment variable set for Chrome extension support!
Use this command to start Ollama with extension support:
OLLAMA_ORIGINS=chrome-extension://* ollama serve
This command allows the extension to connect to your local Ollama server. Keep this terminal window open while using Ollama with Overlay.
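The command above uses Unix shell syntax. On Windows, the PowerShell equivalent is to set the variable first and then start the server:
$env:OLLAMA_ORIGINS = "chrome-extension://*"
ollama serve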
For more help, visit the Ollama documentation at https://github.com/ollama/ollama/blob/main/docs/api.md
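The API documented there also provides a quick end-to-end test. Assuming the default port of 11434 and that you have already pulled llama3, this requests a single non-streamed completion:
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Say hello", "stream": false}'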
Troubleshooting
Connection Issues
Ensure Ollama is running with the correct OLLAMA_ORIGINS flag. Check that no firewall is blocking the connection.
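A quick way to confirm the server is reachable, assuming Ollama's default port of 11434:
curl http://localhost:11434
If the server is up, it replies with "Ollama is running"; if not, return to the serve command above.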
Models Not Appearing
Verify that you have successfully pulled the models using the ollama pull command. Try restarting the Ollama server.
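To see exactly which models are available locally:
ollama list
Any model missing from this output needs to be pulled again before it will appear in the extension.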
Slow Performance
Larger models require more computational resources. Consider using a smaller model if your hardware is limited.
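For example, if a large model such as mixtral is sluggish on your machine, a compact model like phi (already listed above) typically responds much faster:
ollama pull phi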