Installation

How to Install Ollama: Step-by-Step Guide

2 min read · Apr 11, 2026

What is Ollama?

Ollama is the easiest way to run large language models locally. It’s like an app store for AI models: download, install, and chat with powerful AI right on your own computer. No cloud, no API keys, no monthly fees.

Why Ollama?

  • Free and open source
  • One-click installation on Windows, Mac, and Linux
  • Huge model library: Llama, Mistral, Phi, Gemma, and dozens more
  • Runs entirely on your hardware, so your data stays private
  • API compatible with OpenAI’s format, so it works with most AI tools
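The last point is easy to try for yourself: Ollama serves an OpenAI-style chat endpoint on its default local port, 11434. A minimal sketch, assuming the Ollama server is running and the llama3.2 model is already downloaded (the command prints a fallback message if the server is unreachable):

```shell
# Query Ollama's OpenAI-compatible chat endpoint on the default port 11434.
# If no local server is running, curl fails and the fallback message prints.
curl -s --max-time 5 http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }' || echo "Ollama server is not running"
```

Because the endpoint follows OpenAI’s request shape, most tools that speak the OpenAI API can be pointed at this URL instead.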

System Requirements

Before you start, make sure your system meets these minimum requirements:

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| RAM | 8 GB | 16 GB+ |
| Storage | 10 GB free | 20 GB+ |
| OS | Windows 10+, macOS 12+, Ubuntu 20.04+ | Latest versions |
| GPU | None (CPU works) | NVIDIA/AMD GPU with 8 GB+ VRAM |

💡 Tip: Even without a dedicated GPU, Ollama can run small models on your CPU. It’ll be slower, but it works. The guide covers which models to pick based on your hardware.

Installation Steps

Step 1: Download Ollama

Visit ollama.com and download the installer for your operating system.

Step 2: Install

Windows: Run the downloaded .exe file and follow the installer.

macOS: Open the .dmg and drag Ollama to your Applications folder.

Linux: Run the install script:

curl -fsSL https://ollama.com/install.sh | sh

Step 3: Verify Installation

Open your terminal (or Command Prompt on Windows) and run:

ollama --version

You should see the version number printed. If you see an error instead, make sure the Ollama binary is on your system PATH.
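The check above can be scripted as a small guard that prints a hint when the binary is missing — a sketch for convenience, not part of the installer:

```shell
# Print the Ollama version if the binary is on PATH; otherwise print a hint.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "ollama not found on PATH; restart your terminal or re-run the installer"
fi
```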

Step 4: Run Your First Model

ollama run llama3.2

This downloads the Llama 3.2 model (about 2 GB) and starts an interactive chat. That’s it: you’re running AI locally!
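Two follow-up commands are worth knowing once the first model is in place: `ollama list` shows what you have downloaded, and `ollama run` with a quoted prompt answers once and exits instead of opening a chat. The guard below is only so the sketch degrades gracefully on machines without Ollama:

```shell
if command -v ollama >/dev/null 2>&1; then
  ollama list                                   # show every model you have downloaded
  ollama run llama3.2 "Say hello in one line."  # one-shot prompt: answers, then exits
else
  echo "Install Ollama first (see Step 2)"
fi
```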

Common Issues

“Ollama is not recognized”: Restart your terminal after installation so the updated PATH takes effect.

Download is slow: Models are several gigabytes, so be patient on the first download.

Out of memory errors: You’re trying to run a model too large for your hardware. Try a smaller model like llama3.2:3b.
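The `:3b` tag selects the 3-billion-parameter build of Llama 3.2, which needs far less RAM and disk than larger variants. A sketch, guarded so it only runs where Ollama is installed:

```shell
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3.2:3b   # smaller 3B-parameter build: lower RAM and disk use
else
  echo "Install Ollama first"
fi
```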

Next Steps

Now that you have Ollama running, check out our guides on Choosing the Right Model for Your Hardware and Best Local LLMs in 2026.

Want the complete guide?

Get the Local AI Starter Kit: everything in one professional PDF.

Get the Kit →
