Harbor - Containerized LLM Toolkit for Frontend, Backend, and APIs
AI Nuggets
Harbor Installation and Usage Instructions
Prerequisites
- Ensure Docker is installed on your system. If you need help installing Docker, refer to other videos on the channel.
Installation Steps
Create and activate a conda environment named Harbor to keep everything separate.
conda create --name Harbor
conda activate Harbor
Clone the Harbor repository and change directory into it.
git clone <Harbor-repo-url>
cd <Harbor-repo-directory>
(Replace <Harbor-repo-url> and <Harbor-repo-directory> with the actual URL and directory name from the GitHub repo link in the video description.)
Create a symbolic link for Harbor’s CLI.
ln -s <path-to-harbor-cli> harbor
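The linking step above can be sketched end-to-end with throwaway paths (the file names here are stand-ins, not Harbor's actual layout); the point is that the symlink must land in a directory on your PATH for a bare "harbor" command to work:

```shell
# Create a user-level bin directory and a stand-in for the Harbor CLI script.
mkdir -p "$HOME/.local/bin"                       # a common user-level bin dir
printf '#!/bin/sh\necho harbor-ok\n' > /tmp/fake-harbor.sh
chmod +x /tmp/fake-harbor.sh                      # stand-in for the real CLI

# Link it under the name "harbor", the same pattern as the step above.
ln -sf /tmp/fake-harbor.sh "$HOME/.local/bin/harbor"

# Invoke via the symlink.
"$HOME/.local/bin/harbor"                         # prints: harbor-ok
```

If ~/.local/bin is not already on your PATH, add export PATH="$HOME/.local/bin:$PATH" to your shell configuration file; the source step that follows will pick that up.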
(Replace <path-to-harbor-cli> with the actual path to the Harbor CLI.)
Source your shell configuration to activate the settings.
source <shell-config-file>
(Replace <shell-config-file> with the name of your shell configuration file, such as .bashrc or .zshrc.)
Running Harbor
Start Harbor with the following command:
harbor up
This command spins up Ollama (the default LLM backend) and Open WebUI (the default frontend), both configured to work together.
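Conceptually, a single "harbor up" stands in for a small Docker Compose stack along these lines. The sketch below is hypothetical, to illustrate what Harbor wires up for you; the service names, images, and ports are illustrative and are not Harbor's actual files:

```yaml
# Hypothetical sketch of what `harbor up` manages; not Harbor's real compose file.
services:
  ollama:
    image: ollama/ollama                          # default LLM backend
    ports:
      - "11434:11434"
  webui:
    image: ghcr.io/open-webui/open-webui:main     # default frontend
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434       # point the frontend at the backend
    ports:
      - "3000:8080"
    depends_on:
      - ollama
```

Harbor's value is that it maintains and combines configurations like this for many services, so you never edit them by hand.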
After Harbor has started, give it a few seconds to become ready.
Open the Harbor interface in your browser:
harbor open
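Instead of guessing how many seconds to wait, you can poll the UI's address until it responds. The URL is a placeholder here; use whatever address "harbor open" launches in your browser:

```shell
# Poll a URL until it answers, rather than sleeping a fixed amount of time.
wait_ready() {
  url=$1
  tries=${2:-30}                      # default: up to 30 attempts, 1s apart
  i=0
  while [ "$i" -lt "$tries" ]; do
    # curl exits non-zero until the server responds successfully
    curl -fsS "$url" >/dev/null 2>&1 && { echo ready; return 0; }
    i=$((i + 1))
    sleep 1
  done
  echo timeout
  return 1
}

# Example (placeholder URL -- use the one `harbor open` targets):
# wait_ready "http://localhost:3000"
```

Because curl exits non-zero until the server answers, the loop doubles as a simple health check before running harbor open.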
Setting Up Open WebUI
Create an account on the Open WebUI interface.
- Click on “Sign up”.
- Enter a username, email, and password.
- Click on “Create account”.
Once logged in, select a model (if available) or download a new one.
- Go to the admin panel by clicking on your username in the top right or the bottom left.
- Click on “Admin settings” > “Models”.
- Enter the model name in Ollama's name:tag format (e.g., mistral:7b) and click the download icon to fetch the model.
After the model is downloaded, you can select it from the dropdown menu.
To start a chat with the model, click on “New chat” and select the model from the “Select a model” dropdown.
Stopping Harbor
- To stop all Harbor containers and clean up, use the following command:
harbor down
Additional Services
- Harbor also supports other backends, frontends, and satellite services, such as SearXNG, LiteLLM, vLLM, and more, which can be managed with similar ease.
Video Description Links
- M compute website: (URL provided in the video’s description)
- Harbor GitHub repo: (URL provided in the video’s description)
- Coupon code for a 50% discount on a range of GPUs: (Code provided in the video’s description)
Tips
- Harbor is not designed as a deployment solution but as a helper for local LLM development environments.
- Harbor simplifies the process of running LLM backends and frontends with just one or two commands.
- It’s a good starting point for experimenting with LLMs and related services.
(Note: The actual URLs and commands are to be taken from the video description and the GitHub repo linked there. The placeholders <Harbor-repo-url>, <Harbor-repo-directory>, and <path-to-harbor-cli> should be replaced with the exact details from those resources.)