
Run a Local AI Agent in Minutes

Apr 21, 2025

Want to run your own AI agent entirely offline? With smolagents and Ollama, it's easier than ever, even on older hardware.

Here's how I set it up on a 2017 MacBook Pro (2.9 GHz i7, 16 GB RAM, no GPU):

🔧 Prerequisites

  • Install Ollama → https://ollama.com/download

  • Install uv (Python package manager) → https://docs.astral.sh/uv/getting-started/installation/

🚀 Step-by-Step Setup

1. Pull a local model (like Qwen2):

ollama pull qwen2:7b

2. Start the Ollama server:

ollama serve
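
Note that ollama serve occupies the terminal, so run it in a separate window. Before moving on, you can confirm the server is reachable by querying its /api/tags endpoint, which lists the models you've pulled (a quick sanity check, assuming the default port):

# Sanity check: list the models the local Ollama server knows about.
import json, urllib.request

with urllib.request.urlopen("http://127.0.0.1:11434/api/tags") as resp:
    names = [m["name"] for m in json.load(resp)["models"]]
print(names)  # should include "qwen2:7b"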

3. Clone the repo and set up your environment:

git clone https://github.com/formalmind/local-agents-template.git

cd local-agents-template

uv venv

source .venv/bin/activate
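
The template is assumed to declare its own dependencies. If smolagents is missing from your environment, install it with the LiteLLM extra, which the Ollama backend needs:

uv pip install "smolagents[litellm]"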

4. Initialize the model with smolagents in src/agents/__init__.py:

from smolagents import LiteLLMModel

# LiteLLM routes requests to the local Ollama server; the "ollama_chat/"
# prefix selects Ollama's chat endpoint.
model = LiteLLMModel(
    model_id="ollama_chat/qwen2:7b",
    api_base="http://127.0.0.1:11434",  # default Ollama address
    num_ctx=8192,                       # context window in tokens
)
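
The repo presumably wraps this model in an agent for you. To see the shape of that wiring, here is a minimal sketch using smolagents' CodeAgent; the tool-less agent and the example prompt are illustrative, not taken from the template:

from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(
    model_id="ollama_chat/qwen2:7b",
    api_base="http://127.0.0.1:11434",
    num_ctx=8192,
)

# A CodeAgent with no extra tools can still plan and execute Python snippets.
agent = CodeAgent(tools=[], model=model)
print(agent.run("What is the 10th Fibonacci number?"))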

5. Run the agent:

uv run agents

Output (truncated):

(agents) 🐇 uv run agents
ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>

🎉 That’s it! You now have a fully local AI agent running on your machine—no API keys or cloud access required.

Try it out and customize the agent to your needs.
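
One way to customize it is to register your own tool. The sketch below adds a hypothetical local_time tool (not part of the template); smolagents reads the type hints and the docstring's Args section to describe the tool to the model. It reuses the model object from step 4:

from smolagents import CodeAgent, tool

@tool
def local_time(timezone: str) -> str:
    """Returns the current time in the given timezone.

    Args:
        timezone: An IANA timezone name, e.g. "Asia/Tokyo".
    """
    from datetime import datetime
    from zoneinfo import ZoneInfo
    return datetime.now(ZoneInfo(timezone)).isoformat()

agent = CodeAgent(tools=[local_time], model=model)
print(agent.run("What time is it in Tokyo right now?"))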