ollama
Run large language models locally with easy model management, GPU acceleration, and simple CLI/API access.
Recent Mentions
"https://ollama.com/blog/claude"