How to run Ollama on Docker
AI Summary
Summary: Installing and Running Ollama with Docker
- Installation Methods:
  - Primary method: Use the installer from Ollama.com.
  - Alternative method: Use Docker for a self-contained environment.
- Docker Advantages:
  - Isolates dependencies.
  - Clean removal of programs.
- Performance Considerations:
  - Minor performance impact on Linux.
  - More significant impact on Mac and Windows due to virtual machines.
  - No GPU pass-through on Mac.
- Model Storage:
  - Models are stored separately due to their large size.
  - Docker allows flexibility in model storage location.
- Docker Setup:
  - Assumes Docker is installed.
  - Command (a concrete example follows below): `docker run -d --gpus=all -v <host_dir>:<container_dir> -p 11434:11434 --name ollama <image_name>`
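For instance, a typical invocation might look like the sketch below. It assumes the official `ollama/ollama` image, which keeps its models under `/root/.ollama` inside the container, and uses a named volume called `ollama`; any host directory works in its place.

```bash
# Start Ollama in the background with GPU pass-through enabled,
# persisting downloaded models in a named volume called "ollama".
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```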
- Docker Commands:
  - `docker run`: Starts a new container.
  - `-d`: Runs container in the background.
  - `--gpus=all`: Allows GPU pass-through (not on Mac).
  - `-v`: Mounts a volume from host to container.
  - `-p`: Maps container port to host port.
  - `--name`: Sets a container name.
  - Image updates: Use `docker pull` to get the latest image (see the example after this list).
- Container vs. Image:
  - Image: Blueprint on Docker Hub (e.g., `ollama/ollama`).
  - Container: Running instance of an image.
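As a sketch, an update might look like this, assuming the container from the setup step is named `ollama`:

```bash
# Fetch the newest ollama/ollama image from Docker Hub.
docker pull ollama/ollama

# A running container keeps using the old image, so recreate it.
docker stop ollama && docker rm ollama
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

Because the models live in the `ollama` volume rather than in the container itself, recreating the container does not delete them.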
- Using Ollama:
  - Run the Ollama client inside the container with `docker exec`.
  - Create aliases for convenience.
  - Add aliases to shell RC files for persistence (see the example below).
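For example, assuming the container is named `ollama` as above (the model name `llama3` is just a placeholder):

```bash
# Run a model interactively inside the container.
docker exec -it ollama ollama run llama3

# Alias so typing "ollama ..." on the host runs the client inside the container.
alias ollama='docker exec -it ollama ollama'

# Persist the alias across sessions by appending it to your shell RC file.
echo "alias ollama='docker exec -it ollama ollama'" >> ~/.bashrc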
- Accessing Ollama Remotely:
  - On the same network: Expose the container by setting `OLLAMA_HOST=0.0.0.0` (see the sketch below).
  - On different networks: Use solutions like Tailscale for secure access.
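A minimal sketch of the same-network case, assuming the setup from earlier; the IP address is a placeholder for your host's LAN address:

```bash
# Bind the server to all interfaces so other machines on the LAN can reach it.
docker run -d -e OLLAMA_HOST=0.0.0.0 --gpus=all \
  -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# From another machine on the same network, list the available models.
curl http://192.168.1.50:11434/api/tags
```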
- Logs and Cleanup:
  - Use `docker logs` to view container logs.
  - Use `docker stop` and `docker rm` to stop and remove containers.
  - Use `docker rmi` to remove images (example commands below).
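Concretely, assuming the container name and image used throughout:

```bash
docker logs -f ollama       # follow the server logs (Ctrl-C to stop)
docker stop ollama          # stop the running container
docker rm ollama            # remove the stopped container
docker rmi ollama/ollama    # remove the image itself
```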
- Final Notes:
  - Native install is preferred, but Docker is an option.
  - Questions and video suggestions are welcomed in the comments.