Formula
Stage 4 – The Recipe · Step-by-step guides and implementation logic
⚡ Quick Start
MacBook step-by-step guide
1. Install Ollama: download and install via Homebrew or the curl installer.

   ```shell
   # via Homebrew
   brew install ollama

   # or via curl
   curl -fsSL https://ollama.com/install.sh | sh
   ```

2. Verify the installation: confirm Ollama is installed and check the version.

   ```shell
   ollama --version
   ```

3. Start the Ollama service: launch the local inference server on port 11434.

   ```shell
   ollama serve
   ```

4. Pull a model: download a language model (in a new terminal window).

   ```shell
   ollama pull llama3.2
   ollama pull nomic-embed-text  # for embeddings
   ```

5. Test the installation: run a test inference to confirm everything works.

   ```shell
   ollama run llama3.2 "Hello, how are you?"
   ```
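Once the server is running, the same test inference can be exercised over Ollama's HTTP API on port 11434. A minimal sketch, assuming the documented `/api/generate` endpoint and that `ollama serve` has been started; the guard avoids a hanging request when the server is down:

```shell
# Request payload for Ollama's /api/generate endpoint.
# "llama3.2" matches the model pulled above; "stream": false returns one JSON object.
PAYLOAD='{"model": "llama3.2", "prompt": "Hello, how are you?", "stream": false}'

# Only send the request if something answers on port 11434.
if curl -s -o /dev/null http://localhost:11434; then
    curl -s http://localhost:11434/api/generate -d "$PAYLOAD"
else
    echo "Ollama is not running on port 11434; start it with: ollama serve"
fi
```

The same endpoint serves both CLI and API clients, so anything that works with `ollama run` should respond here as well.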
Common Issues

- Port 11434 in use: another process is using port 11434. Run `lsof -i :11434` to find and kill it before starting Ollama.
- Permission denied: use `sudo` for the installation command, or ensure your user has the necessary permissions.
- Not enough RAM: a minimum of 8 GB is required. Use smaller models such as `phi3` or `mistral` on memory-constrained systems.
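The port conflict above can be checked for before launching the server. A small sketch, assuming `lsof` is available (it ships with macOS):

```shell
PORT=11434  # Ollama's default port

# If lsof exists and reports a listener on the port, show the offending
# process; otherwise the port is free and `ollama serve` can bind to it.
if command -v lsof >/dev/null 2>&1 && lsof -i :"$PORT" >/dev/null 2>&1; then
    echo "Port $PORT is already in use by:"
    lsof -i :"$PORT"
else
    echo "Port $PORT is free"
fi
```

If the port is taken, the PID column of the `lsof` output identifies the process to stop before retrying `ollama serve`.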