🦾 The Cleanest way to write GenAI applications (it’s NOT Langchain)



AI Nuggets

Instructions and Tips from the Video Transcript

Setting Up the Environment

  • Import necessary libraries:
    • Use dotenv to load environment variables (API keys).
    • From magentic, import the prompt decorator and, for the LiteLLM backend, the LitellmChatModel chat model.
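  • A minimal setup sketch (assuming the API keys live in a .env file; the LitellmChatModel import path below matches current magentic versions but may differ in older ones):
    from dotenv import load_dotenv
    from magentic import prompt
    from magentic.chat_model.litellm_chat_model import LitellmChatModel

    load_dotenv()  # pulls OPENAI_API_KEY, ANTHROPIC_API_KEY, etc. into the environment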

Using the prompt Decorator

  • Decorate a stub Python function with the prompt decorator; magentic generates the body by sending the filled-in template to the LLM and returning its text reply.
  • Example function:
    from magentic import prompt
    from magentic.chat_model.litellm_chat_model import LitellmChatModel

    # The template is the first positional argument; model takes a ChatModel
    # instance. Any LiteLLM-supported model ID works, e.g. a Claude 3 model.
    @prompt("Say hello to {name}", model=LitellmChatModel("claude-3-haiku-20240307"))
    def hello(name: str) -> str: ...
  • Call the function like any regular Python function; the decorator fills the template, queries the model, and returns the reply (see the usage sketch below).
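  • Usage (a sketch; the exact reply text will vary by model):
    print(hello("Anna"))  # e.g. "Hello, Anna! How can I help you today?"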

Changing the Model

  • Specify a different model by passing a ChatModel instance (e.g. OpenaiChatModel or LitellmChatModel) to the model argument of the prompt decorator.
  • Example:
    @prompt(prompt="Say hello to {name}", model="gpt-3.5-turbo")  
    def hello_openai(name):  
        pass  
  • Alternatively, set the default backend and model through environment variables instead of hardcoding them (see the sketch below).
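  • A sketch of the environment-variable route, assuming magentic's documented MAGENTIC_BACKEND and MAGENTIC_LITELLM_MODEL settings (these can also live in the .env file):
    import os

    os.environ["MAGENTIC_BACKEND"] = "litellm"
    os.environ["MAGENTIC_LITELLM_MODEL"] = "gpt-3.5-turbo"

    # With the defaults set, @prompt no longer needs a model argument:
    from magentic import prompt

    @prompt("Say hello to {name}")
    def hello(name: str) -> str: ...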

Using the chatprompt Decorator

  • Import the chatprompt decorator together with the message types SystemMessage and UserMessage.
  • Define a stub function whose conversation is given as a sequence of system and user messages; template placeholders inside the messages are filled from the function's arguments.
  • Example:
    from magentic import chatprompt, SystemMessage, UserMessage

    @chatprompt(
        SystemMessage("You are a movie buff."),
        UserMessage("What is your favorite quote from {movie}?"),
    )
    def get_movie_quote(movie: str) -> str: ...
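  • Usage (a sketch; the quote returned will vary from run to run):
    get_movie_quote("Iron Man")  # e.g. '"I am Iron Man."'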

Structured Output with Pydantic

  • Define a Pydantic class to constrain the output to a specific schema.
  • Annotate the decorated function's return type with that class; magentic parses the LLM's reply into an instance of it.
  • Example:
    from magentic import prompt
    from pydantic import BaseModel

    class Review(BaseModel):
        sentiment: str
        grade: int
        summary: str
        date: str

    # The return annotation, not a decorator argument, sets the output schema.
    @prompt("Extract a review from this text: {text}")
    def extract(text: str) -> Review: ...
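  • Usage sketch (the input text is made up; field values depend on the model):
    review = extract("Great phone, battery lasts two days. 9/10. Reviewed on 2024-03-01.")
    print(review.grade)      # e.g. 9
    print(review.sentiment)  # e.g. "positive"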

Function Calling

  • Define a plain Python function that the LLM can call to perform external actions (e.g. fetching the weather).
  • Pass it in the functions list of the prompt decorator and annotate the return type as FunctionCall[...]; the decorated function returns a FunctionCall object that you invoke to actually run it (see the self-contained sketch after the example).
  • Example:
    @prompt(prompt="Use the appropriate function to answer the question", functions=[get_weather], model="...")  
    def answer(question):  
        pass  
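  • A self-contained sketch (get_weather is a dummy stand-in for a real lookup; calling the returned FunctionCall executes it):
    from magentic import FunctionCall, prompt

    def get_weather(city: str) -> str:
        """Dummy weather lookup used for the demo."""
        return f"It is sunny in {city}."

    @prompt(
        "Use the appropriate function to answer the question: {question}",
        functions=[get_weather],
    )
    def answer(question: str) -> FunctionCall[str]: ...

    call = answer("What is the weather in Paris?")
    print(call())  # runs get_weather with the arguments the LLM chose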

Asynchronous Execution

  • Declare the stub with async def; the prompt and chatprompt decorators then produce awaitable coroutines, allowing several LLM calls to run concurrently (see the sketch after the example).
  • Example:
    @prompt(prompt="Tell me more about {topic}", model="...")  
    async def answer_async(topic):  
        pass  
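  • Concurrency sketch, reusing answer_async from above (asyncio.gather runs the prompts in parallel instead of one after another):
    import asyncio

    async def main() -> None:
        results = await asyncio.gather(
            answer_async("quasars"),
            answer_async("pulsars"),
        )
        print(results)

    asyncio.run(main())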

Streaming and Object Streaming

  • Annotate the return type as StreamedStr to receive partial text while the LLM is still generating.
  • Annotate it as Iterable[YourModel] for object streaming: each object is yielded as soon as it is complete (see the usage sketch after the example).
  • Example for object streaming:
    from collections.abc import Iterable
    from magentic import prompt
    from pydantic import BaseModel

    class Superhero(BaseModel):
        name: str

    # An Iterable return annotation turns on object streaming.
    @prompt("Create a superhero team named {name}")
    def create_superhero_team(name: str) -> Iterable[Superhero]: ...
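  • Usage sketch for both modes (StreamedStr is magentic's streaming string type; the topic and team name are made up):
    from magentic import StreamedStr, prompt

    @prompt("Tell me about {topic}")
    def describe(topic: str) -> StreamedStr: ...

    # Text streaming: chunks print as soon as they arrive.
    for chunk in describe("black holes"):
        print(chunk, end="")

    # Object streaming: each Superhero prints as soon as it is complete.
    for hero in create_superhero_team("The Night Owls"):
        print(hero.name)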

Conclusion

  • Magentic and LiteLLM reduce boilerplate and provide powerful features such as structured output, object streaming, and function calling.
  • Through LiteLLM, the same code gains access to 100+ LLMs from different providers.

Additional Information from the Video Description

  • Links to the Magentic and LiteLLM GitHub repositories and documentation, plus any other resources mentioned in the video, can be found in the video description on YouTube.