
Sajiron

10 min read · Published on Feb 04, 2025

How to Set Up DeepSeek with Ollama and Docker (Step-by-Step Guide)

Image: a futuristic AI-powered workstation with multiple monitors displaying DeepSeek running on Ollama.

With the increasing demand for efficient and powerful AI models, DeepSeek has gained popularity for its advanced language processing capabilities. Ollama provides an easy way to run and manage AI models locally, making it an excellent choice for deploying DeepSeek on your machine.

In this guide, we’ll walk through setting up DeepSeek with Ollama, from installation to running the model locally and inside Docker.

🚀 What is DeepSeek?

DeepSeek is an open-source large language model (LLM) optimized for high-performance inference and fine-tuning. It is a powerful alternative to models like LLaMA, GPT, and Mistral. DeepSeek AI offers fast processing and scalable performance, making it ideal for research and development.

🏗️ What is Ollama?

Ollama is a simple and efficient framework for running AI models locally. It allows you to pull, run, and interact with LLMs without dealing with complex configurations. Ollama simplifies DeepSeek AI deployment for developers and researchers.

🛠️ Prerequisites

Before getting started, make sure you have the following:

A system with sufficient CPU/GPU resources (the 1.5B model used here runs fine on CPU; larger variants benefit from a GPU and more RAM)

Docker (optional, if running inside a container)
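The model download also needs a few gigabytes of free disk space. A rough pre-flight check in Python (the helper name is my own, and the exact requirements depend on which model size you pull):

```python
import os
import shutil

def resource_summary(path="."):
    """Rough pre-install check: CPU count and free disk space in GB."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return {"cpus": os.cpu_count(), "free_disk_gb": round(free_gb, 1)}

print(resource_summary())
```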

🔧 How to Install Ollama?

For macOS & Linux

curl -fsSL https://ollama.ai/install.sh | sh

For Windows

Download and install from Ollama.ai.

Once installed, verify Ollama is working:

ollama --version

📦 How to Pull the DeepSeek Model in Ollama?

Ollama allows you to run DeepSeek AI with a simple command:

ollama pull deepseek-r1:1.5b

This downloads and installs the DeepSeek LLM (1.5B) on your system. If you want to try different model sizes or configurations, check out the official library: DeepSeek on Ollama.

🚀 How to Run DeepSeek AI Locally?

Once the model is downloaded, you can start using it:

ollama run deepseek-r1:1.5b

This will start an interactive chat session with DeepSeek AI.
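Beyond the interactive session, `ollama run` also accepts a prompt argument, which makes one-off queries scriptable. A minimal Python wrapper around that (the helper names here are mine, not part of Ollama):

```python
import subprocess

def ollama_cmd(prompt, model="deepseek-r1:1.5b"):
    """Build the one-shot `ollama run` command for a given prompt."""
    return ["ollama", "run", model, prompt]

def ask_deepseek(prompt, model="deepseek-r1:1.5b"):
    """Run a single prompt through the local model and return its reply text.

    Requires Ollama to be installed and the model pulled (see above).
    """
    result = subprocess.run(
        ollama_cmd(prompt, model),
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

For example, `ask_deepseek("Explain recursion in one sentence.")` returns the model’s answer as a plain string.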

🖥️ How to Use DeepSeek AI with the Ollama API?

Ollama provides an API to integrate DeepSeek AI into your applications.

Run the Ollama server:

ollama serve

Then, send a request using curl:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "What is DeepSeek?",
  "stream": false
}'
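The same endpoint is easy to call from application code. A small sketch using only the Python standard library (the function names are mine); with `stream` set to false, the generated text comes back in the `response` field, just as with the curl call:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="deepseek-r1:1.5b"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def generate(prompt, model="deepseek-r1:1.5b"):
    """Send a prompt to the local Ollama server and return the generated text.

    Requires `ollama serve` to be running.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```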

🏗️ Running DeepSeek AI in a Docker Container

If you prefer using Docker, create a Dockerfile:

FROM ubuntu:latest

RUN apt update && apt install -y curl

# Install Ollama inside the image
RUN curl -fsSL https://ollama.ai/install.sh | sh

# Start the server just long enough to pull the model,
# baking the weights into the image layer
RUN ollama serve & \
    sleep 5 && \
    ollama pull deepseek-r1:1.5b

COPY ./entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]

Next, create an entrypoint.sh file:

#!/bin/bash
# Start the Ollama server in the background, give it a few seconds
# to come up, then open an interactive DeepSeek session.
ollama serve &
sleep 5
ollama run deepseek-r1:1.5b

Then, build and run the container:

docker build -t deepseek-ollama .
docker run -it -p 11434:11434 deepseek-ollama

The -p 11434:11434 flag publishes the Ollama API port to the host; omit it if you only need the interactive session inside the container.

🎯 Conclusion

You have now successfully set up DeepSeek AI using Ollama! Whether you're using it for research, chat applications, or AI-based automation, Ollama makes DeepSeek deployment easy.

If you want to extend this setup, consider fine-tuning DeepSeek or integrating it into web applications using frameworks like Next.js or FastAPI.