How To Connect Llama3 to CrewAI [Groq + Ollama]



AI Nuggets

CLI Commands

  1. Install Ollama:

    # For macOS and Windows, download the installer from https://ollama.com/download

    # For Linux
    curl -fsSL https://ollama.com/install.sh | sh
  2. Pull down the Llama 3 language model and run it:

    # Pulls the 8B model on first use, then starts an interactive session
    ollama run llama3:8b
  3. Create a custom model for CrewAI from a Modelfile (a sample Modelfile is sketched after this list):

    ollama create crew-ai-llama-3-8b -f Modelfile
  4. List installed models:

    ollama list
  5. Set up a Python environment and install dependencies using Poetry:

    poetry install --no-root
    poetry shell  
  6. Run the main Python script (a minimal sketch of wiring CrewAI to the local model appears after this list):

    python main.py  
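
The `ollama create` step above reads its configuration from a plain-text Modelfile rather than a YAML file. The sketch below assumes the stock `llama3:8b` base model; the temperature and stop token are illustrative values, not settings from the video, so adjust them to your own setup.

    FROM llama3:8b

    # Assumed value; tune for your tasks
    PARAMETER temperature 0.8

    # Example stop token so the agent loop halts at its final answer
    PARAMETER stop "Result"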
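For step 6, the actual main.py ships with the downloadable source code; the snippet below is only a minimal sketch of how a CrewAI agent can be pointed at the custom Ollama model created above. The model name, agent role, and task text are illustrative assumptions, and depending on your CrewAI version the LangChain `Ollama` wrapper shown here may be replaced by CrewAI's own LLM class.

    # main.py -- minimal sketch, not the video's source code
    from crewai import Agent, Task, Crew
    from langchain_community.llms import Ollama

    # Point at the model built with `ollama create` (name is an assumption)
    llama3 = Ollama(model="crew-ai-llama-3-8b")

    researcher = Agent(
        role="Researcher",  # illustrative agent
        goal="Summarise a topic in three bullet points",
        backstory="A concise research assistant.",
        llm=llama3,  # the single place the LLM is specified
        verbose=True,
    )

    task = Task(
        description="Summarise what CrewAI is in three bullet points.",
        expected_output="Three short bullet points.",
        agent=researcher,
    )

    crew = Crew(agents=[researcher], tasks=[task])
    print(crew.kickoff())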

Website URLs

  1. Ollama website for downloading and setting up Ollama: https://ollama.com
  2. GroqCloud console for testing out different large language models running on Groq: https://console.groq.com

Tips

  • When using Ollama, make sure it’s running by checking the toolbar icon.
  • Store API keys in an .env file and ensure it’s ignored by version control to keep them private.
  • When using Groq, create API keys through the GroqCloud console.
  • To avoid rate limiting with Groq, set the crew’s max RPM (requests per minute) to a low number, such as 2 (see the sketch after these tips).
  • For complex tasks, it’s recommended to use the 70 billion parameter model of Llama 3 for better results.
  • If you’re using CrewAI and want to switch to a different LLM, you only need to change one place in the code.
  • When working with CrewAI, it’s important to specify the LLM you want to use so it doesn’t default to a paid model such as GPT-4.
  • For smaller tasks, the 8 billion parameter model of Llama 3 is more appropriate due to its speed and efficiency.
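
To make the Groq tips concrete, here is one way to run the same crew against a Llama 3 model hosted on GroqCloud. It is a sketch assuming the `langchain_groq` package is installed, a `GROQ_API_KEY` entry exists in your .env file, and Groq's `llama3-70b-8192` model ID; swapping this LLM for the local Ollama one is the single-line change mentioned above.

    import os
    from dotenv import load_dotenv
    from crewai import Agent, Task, Crew
    from langchain_groq import ChatGroq

    load_dotenv()  # expects GROQ_API_KEY in a git-ignored .env file

    # 70B model for complex tasks; "llama3-8b-8192" is the faster 8B option
    groq_llm = ChatGroq(
        model_name="llama3-70b-8192",
        groq_api_key=os.environ["GROQ_API_KEY"],
    )

    writer = Agent(
        role="Writer",  # illustrative agent
        goal="Draft a short paragraph on a given topic",
        backstory="A concise technical writer.",
        llm=groq_llm,  # swap in the Ollama LLM here to run fully locally
    )

    task = Task(
        description="Write one paragraph about CrewAI.",
        expected_output="A single paragraph.",
        agent=writer,
    )

    # max_rpm=2 throttles the crew to stay under Groq's free-tier rate limits
    crew = Crew(agents=[writer], tasks=[task], max_rpm=2)
    print(crew.kickoff())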

Additional Information from the Video Description

  • The source code mentioned in the video is available for free via the link in the video description.
  • For community support, there’s a Skool community where developers can get help with their code; the link to join is also in the video description.