1. Download and install Ollama

Download Ollama from the official website and run the installer for your platform.
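Once the installer finishes, you can confirm the CLI is on your PATH and the background service responds (a quick sanity check; your exact version string will differ):

```shell
# Verify the Ollama CLI is installed; prints something like "ollama version 0.x.y"
ollama --version

# List locally installed models (empty on a fresh install)
ollama list
```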

2. Check whether Docker is installed

If Docker is installed, you will see the version number. If not, install Docker first.

docker --version
3. Pull the latest image of Open WebUI

docker pull ghcr.io/open-webui/open-webui:main
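To confirm the pull succeeded, you can list your local images filtered to the Open WebUI repository (the exact columns depend on your Docker version):

```shell
# Shows the image tag, ID, and size if the pull completed
docker images ghcr.io/open-webui/open-webui
```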
4. Create a volume and run Open WebUI

The following command starts the Open WebUI container; the -v flag creates a named volume (open-webui) so your data persists across restarts:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
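For reference, here is the same command with each flag annotated (the behavior is unchanged):

```shell
# -d                                run detached, in the background
# -p 3000:8080                      map host port 3000 to the container's port 8080
# --add-host=host.docker.internal:host-gateway
#                                   lets the container reach services on the host,
#                                   such as Ollama listening on its default port 11434
# -v open-webui:/app/backend/data   named volume so chats and settings persist
# --name open-webui                 container name for later docker commands
# --restart always                  restart automatically after reboots or crashes
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

The --add-host flag is what lets Open WebUI inside the container talk to the Ollama server running on your host.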
5. Wait a minute, then access Open WebUI

Go to http://localhost:3000 in your browser to access Open WebUI. You will need to create an admin account by providing an email and password. Then you will see the chat interface, but there won't be any model available yet.
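If the page doesn't load right away, you can check that the container is up and watch its startup logs:

```shell
# Should show the open-webui container with a status of "Up"
docker ps --filter name=open-webui

# Follow the startup logs; press Ctrl+C to stop following
docker logs -f open-webui
```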

6. Install a DeepSeek model

There are different model sizes available. You can install the 8B DeepSeek-R1 distill (based on Llama) by running the following command, and then use the chat interface to interact with it. It worked for me, but it was slow: a response took about 6 minutes.

ollama run deepseek-r1:8b
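If the 8B model is too slow on your hardware, the Ollama library also offers smaller distilled variants that trade answer quality for speed (the tags below come from the Ollama model library):

```shell
# Smallest distill, fastest responses on modest hardware
ollama run deepseek-r1:1.5b

# A middle ground between speed and quality
ollama run deepseek-r1:7b

# Confirm which models are installed locally
ollama list
```

Any model pulled this way appears automatically in the Open WebUI model selector, since the container is pointed at the host's Ollama server.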