Ollama Setup Guide

Use Overlay with Ollama to run powerful AI models locally

Installing Ollama
First, install Ollama on your computer

Visit the official Ollama website and follow the installation instructions for your operating system:

https://ollama.ai/download
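
For Linux, the download page also offers a one-line install script. At the time of writing it looks like the following, but verify it against the official page before running:

curl -fsSL https://ollama.com/install.sh | sh

On macOS and Windows, the site provides a standard installer instead.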

Pulling Models
Download models you want to use

Use the following command to download a model. For example, to download Llama 3:

ollama pull llama3

Other popular models include: mistral, gemma, mixtral, phi, orca-mini, and many more.
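
Once a model is downloaded, you can sanity-check it from the terminal before connecting the extension. For example, assuming you pulled llama3 as above:

ollama run llama3

This opens an interactive chat with the model; type /bye to exit.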

View the full model library at https://ollama.com/library.

Running Ollama with Chrome Extension Support
Critical step: start Ollama with the OLLAMA_ORIGINS environment variable set so the extension can connect

IMPORTANT: You must run Ollama with the OLLAMA_ORIGINS environment variable for Chrome extension support! By default, the server rejects requests from browser origins it does not recognize, which includes Chrome extensions.

Use this command to start Ollama with extension support:

OLLAMA_ORIGINS=chrome-extension://* ollama serve

This command allows the extension to connect to your local Ollama server. Keep this terminal window open while using Ollama with Overlay.
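
The command above is for macOS and Linux shells. On Windows, the PowerShell equivalent is to set the variable first and then start the server:

$env:OLLAMA_ORIGINS = "chrome-extension://*"
ollama serve

To confirm the server is reachable, query its API from another terminal. Ollama listens on port 11434 by default:

curl http://localhost:11434/api/tags

This should return a JSON list of your downloaded models; if it fails, the server is not running or is listening on a different address.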

For more help, visit the Ollama documentation at https://github.com/ollama/ollama/blob/main/docs/api.md

Troubleshooting
Common issues and solutions

Connection Issues

Ensure Ollama is running and was started with the OLLAMA_ORIGINS environment variable described above. Check that no firewall is blocking connections to localhost on port 11434.
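
One way to check the origin configuration is to send a request with an Origin header, roughly as a browser extension would. The extension ID below is a placeholder, not a real one:

curl -i -H "Origin: chrome-extension://abcdefghijklmnop" http://localhost:11434/api/tags

If OLLAMA_ORIGINS is set correctly, the response headers should typically include Access-Control-Allow-Origin; if that header is missing, the server is likely rejecting extension traffic.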

Models Not Appearing

Verify that you have successfully pulled the models using the ollama pull command. Try restarting the Ollama server.
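
You can list everything Ollama has stored locally with:

ollama list

If a model you expect is missing from this list, pull it again; if it is listed but still does not appear in Overlay, restart the server and reconnect the extension.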

Slow Performance

Larger models require more computational resources. Consider using a smaller model if your hardware is limited.
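
Many models come in several sizes, selectable by tag. For example, at the time of writing the library lists both 8b and 70b tags for Llama 3 (tags change over time, so check the model library for current options):

ollama pull llama3:8b

The 8-billion-parameter variant is far lighter than the 70b tag of the same model and runs acceptably on much more modest hardware.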