When using Ollama, make sure the Ollama server is running by checking for its toolbar icon.
Store API keys in a .env file and make sure it's ignored by version control (e.g., listed in .gitignore) so the keys stay private.
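As a minimal sketch of that workflow, the loader below parses a .env file and exports its keys as environment variables using only the standard library. In a real project you would typically use the python-dotenv package instead; the function name here is my own.

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> dict:
    """Minimal .env loader: parses KEY=VALUE lines and exports them.

    A standard-library sketch; python-dotenv is the usual choice in practice.
    """
    values = {}
    env_file = Path(path)
    if not env_file.exists():
        return values
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blank lines, comments, and malformed lines
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"').strip("'")
    # Export so SDKs that read environment variables can see the keys.
    os.environ.update(values)
    return values
```

Remember to add a `.env` line to your .gitignore so the file is never committed.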
When using Groq, create API keys through the GroqCloud console.
To avoid rate limiting with Groq, set the crew's max_rpm (maximum requests per minute) to a low number, such as 2.
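To make the setting concrete, here is a sketch of what a requests-per-minute cap does under the hood: before each call it counts the requests made in the last 60 seconds and waits until a slot frees up. The class name and the injectable clock/sleep parameters are my own for testability; they are not part of CrewAI, which handles this internally when you pass max_rpm.

```python
import time
from collections import deque

class RpmThrottle:
    """Client-side requests-per-minute throttle (illustrative only)."""

    def __init__(self, max_rpm: int, clock=time.monotonic, sleep=time.sleep):
        self.max_rpm = max_rpm
        self.clock = clock    # injectable for testing
        self.sleep = sleep
        self.calls = deque()  # timestamps of recent requests

    def acquire(self) -> None:
        """Block until another request is allowed, then record it."""
        now = self.clock()
        # Drop timestamps that have aged out of the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        if len(self.calls) >= self.max_rpm:
            # Wait until the oldest call leaves the window.
            self.sleep(60 - (now - self.calls[0]))
            now = self.clock()
            while self.calls and now - self.calls[0] >= 60:
                self.calls.popleft()
        self.calls.append(now)
```

With max_rpm set to 2, the third call in the same minute waits until a full 60 seconds have passed since the first.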
For complex tasks, the 70-billion-parameter Llama 3 model is recommended for better results.
If you're using CrewAI and want to switch to a different LLM, you only need to change the code in one place.
When working with CrewAI, it's important to specify the LLM you want to use; otherwise it defaults to a paid OpenAI model (GPT-4).
For smaller tasks, the 8-billion-parameter Llama 3 model is more appropriate because it's faster and more efficient.
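One way to get the "change it in one place" behavior described above is to centralize the model choice in a single function and pass its result to every agent. The sketch below uses Groq-style Llama 3 model identifiers; the exact names are assumptions, so check your provider's model list, and note that in real CrewAI code you would build an LLM object from this string and pass it to each Agent's llm argument.

```python
import os

# Assumed Groq-style identifiers for the two Llama 3 sizes discussed above;
# verify the exact names against your provider's documentation.
MODELS = {
    "small": "llama3-8b-8192",   # fast and efficient for simple tasks
    "large": "llama3-70b-8192",  # slower but stronger for complex tasks
}

def pick_model(task_complexity: str = "small") -> str:
    """Single place where the whole crew's LLM is decided.

    Swapping providers or sizes means editing only this function (or the
    MODEL_SIZE environment variable), not every agent definition.
    """
    size = os.environ.get("MODEL_SIZE", task_complexity)
    return MODELS.get(size, MODELS["small"])
```

Every agent then calls pick_model() instead of hard-coding a model name, so switching the entire crew to the 70B model is a one-line change.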
Additional Information from the Video Description
The source code from the video is available for free via the link in the video description.
For community support, there's an online community where developers can get help with their code; the link to join is also in the video description.