Environment
Stage 2 – The Context · Setup guides, constraints, and AI stack configuration
System Requirements
RAM (Min)
8 GB
RAM (Recommended)
16 GB
Disk Space
10 GB free
macOS
12.0+
GPU (Optional)
NVIDIA / Apple M-series
Docker
Required for Qdrant
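The requirements above can be checked from a terminal before installing anything. This is a minimal sketch that assumes a Linux host (on macOS, substitute `sysctl -n hw.memsize` for the `/proc/meminfo` read and `df -g` for the disk check):

```shell
# Preflight check against the minimums above (Linux; macOS needs sysctl instead)
mem_gb=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') / 1024 / 1024 ))
disk_gb=$(df -BG --output=avail . | tail -1 | tr -dc '0-9')
echo "RAM: ${mem_gb} GB (need 8+), free disk: ${disk_gb} GB (need 10+)"
[ "$mem_gb" -ge 8 ]  || echo "WARNING: below minimum RAM"
[ "$disk_gb" -ge 10 ] || echo "WARNING: below minimum free disk"
```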
Ollama Installation
Select your platform
# macOS – via Homebrew
brew install ollama
# Or via curl installer
curl -fsSL https://ollama.com/install.sh | sh
# Start Ollama service
ollama serve
# Pull a model
ollama pull llama3.2
# Linux – one-line installer
curl -fsSL https://ollama.com/install.sh | sh
# Start as systemd service
sudo systemctl enable ollama
sudo systemctl start ollama
# Pull a model
ollama pull llama3.2
# Windows – download from:
# https://ollama.com/download/windows
# Then run the .exe installer
# Or via winget:
winget install Ollama.Ollama
# Pull a model
ollama pull llama3.2
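Whichever platform you installed on, you can confirm that the service and the pulled model are available. This assumes Ollama is running on its default port, 11434:

```shell
# List locally installed models
ollama list
# The HTTP API should answer on the default port
curl -s http://localhost:11434/api/tags
# Quick smoke test of the pulled model
ollama run llama3.2 "Say hello in five words."
```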
Qdrant Vector Database
Containerized setup with Docker
# Pull and run Qdrant container
docker pull qdrant/qdrant
docker run -d \
--name qdrant \
-p 6333:6333 \
-p 6334:6334 \
-v $(pwd)/qdrant_storage:/qdrant/storage \
qdrant/qdrant
# Qdrant UI available at http://localhost:6333/dashboard
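Once the container is up, you can create a collection sized for the embedding model used below. `notes` is a placeholder collection name, and the vector size (768) matches nomic-embed-text's output dimensions:

```shell
# Create a 768-dim cosine-distance collection ("notes" is a placeholder name)
curl -X PUT http://localhost:6333/collections/notes \
  -H 'Content-Type: application/json' \
  -d '{"vectors": {"size": 768, "distance": "Cosine"}}'
# Confirm it exists
curl http://localhost:6333/collections/notes
```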
nomic-embed-text Setup
768-dimension embedding model
# Pull embedding model via Ollama
ollama pull nomic-embed-text
# Generate embeddings (768 dimensions)
curl http://localhost:11434/api/embeddings \
-d '{"model": "nomic-embed-text", "prompt": "Your text here"}'
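The embedding response can be fed straight into Qdrant. A rough sketch, assuming Ollama (port 11434) and Qdrant (port 6333) are both running and a 768-dim collection named `notes` (a placeholder name) already exists:

```shell
# Embed a text and upsert it into Qdrant as point id 1
vec=$(curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "Your text here"}' \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["embedding"])')
curl -X PUT http://localhost:6333/collections/notes/points \
  -H 'Content-Type: application/json' \
  -d "{\"points\": [{\"id\": 1, \"vector\": ${vec}, \"payload\": {\"text\": \"Your text here\"}}]}"
```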
AI Client Configuration
Ollama
Default endpoint:
http://localhost:11434. Configure models via ollama pull <model>.
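A minimal request against the default endpoint looks like this (assumes the `llama3.2` model pulled earlier; `"stream": false` returns a single JSON object instead of a token stream):

```shell
# One-shot generation request to the local Ollama API
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}'
```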
Claude (AI)
Used for local AI tasks such as populating Qdrant and generating Obsidian shortcut titles. See claude.md for persona rules.