Define a Pydantic class to constrain the output to a specific schema.
Use the prompt decorator and annotate the function's return type with the Pydantic class; Magentic parses the LLM's response into an instance of that class.
Example:
from magentic import prompt
from pydantic import BaseModel

class Review(BaseModel):
    sentiment: str
    grade: int
    summary: str
    date: str

@prompt("Extract a review from {review}", model="...")
def extract(review: str) -> Review: ...
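The value of the schema class is that Pydantic validates and coerces the LLM's raw output into typed fields. A standalone sketch of that behavior, independent of any LLM call (the raw dict here is made-up illustration data):

```python
from pydantic import BaseModel

class Review(BaseModel):
    sentiment: str
    grade: int
    summary: str
    date: str

# Simulated raw LLM output: note that grade arrives as a string.
raw = {"sentiment": "positive", "grade": "5", "summary": "Great product", "date": "2024-01-01"}

# Pydantic coerces "5" to the int 5 and rejects data that does not fit the schema.
review = Review(**raw)
print(review.grade + 1)
```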
Function Calling
Define a function that can be called by the LLM to perform external actions.
Use the prompt decorator with a functions list to enable function calling.
Example:
from magentic import prompt

@prompt(
    "Use the appropriate function to answer the question: {question}",
    functions=[get_weather],
    model="...",
)
def answer(question: str) -> str: ...
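The example references get_weather, which is never defined in these notes. A minimal sketch of such a function (the city data here is fabricated for illustration; a real implementation would call a weather API):

```python
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    # The LLM sees this function's name, signature, and docstring,
    # and decides when to call it with which arguments.
    fake_data = {"London": "12C, cloudy", "Cairo": "30C, sunny"}
    return fake_data.get(city, "unknown")

print(get_weather("London"))
```

Plain type-annotated Python functions are enough: the framework derives the tool description from the signature and docstring, so no extra JSON schema needs to be written by hand.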
Asynchronous Execution
Use async and await for asynchronous execution with the prompt and chatprompt decorators.
Example:
from magentic import prompt

@prompt("Tell me more about {topic}", model="...")
async def answer_async(topic: str) -> str: ...
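The payoff of an async prompt function is running several LLM calls concurrently instead of one after another. A sketch using a stub in place of the real LLM-backed function (the sleep stands in for network latency):

```python
import asyncio

# Stand-in for an async LLM-backed function like answer_async above.
async def answer_async(topic: str) -> str:
    await asyncio.sleep(0.01)  # simulates waiting on the LLM response
    return f"Facts about {topic}"

async def main() -> list[str]:
    # Both calls are in flight at the same time; total wait is roughly
    # the slowest call, not the sum of all calls.
    return await asyncio.gather(
        answer_async("Python"),
        answer_async("Rust"),
    )

print(asyncio.run(main()))
```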
Streaming and Object Streaming
Use streaming to receive partial answers while the LLM is generating.
Use object streaming to receive objects one by one as they are generated.
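Text streaming can be pictured with a plain generator standing in for the LLM (the chunks below are made up; a real streaming call yields tokens as they are generated):

```python
def stream_answer(prompt: str):
    # Stand-in for a streaming LLM call: the caller consumes chunks
    # as they arrive instead of blocking until the full answer is ready.
    for chunk in ("The answer ", "arrives ", "in pieces."):
        yield chunk

full = ""
for chunk in stream_answer("Tell me something"):
    full += chunk  # partial text is usable before generation finishes
print(full)
```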
Example for object streaming:
from typing import Iterable

from magentic import prompt
from pydantic import BaseModel

class Superhero(BaseModel):
    name: str

@prompt("Create a superhero team named {name}")
def create_superhero_team(name: str) -> Iterable[Superhero]: ...
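A plain-Python simulation of consuming an object stream (the hero names are fabricated; in the real case each object is yielded as soon as the LLM finishes generating it):

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class Superhero:
    name: str

def create_superhero_team(name: str) -> Iterator[Superhero]:
    # Generator stand-in for the streaming LLM call: each parsed object
    # is available immediately, without waiting for the full list.
    for hero in ("Falcon", "Nova", "Tempest"):
        yield Superhero(name=hero)

for hero in create_superhero_team("The Guardians"):
    print(hero.name)  # process each hero as soon as it is produced
```

This matters when each object feeds further work (rendering, database writes, follow-up calls): downstream processing overlaps with generation instead of waiting for it.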
Conclusion
Magentic and LiteLLM reduce boilerplate and provide powerful features like object streaming and function calling.
LiteLLM provides access to 100+ LLMs through a single interface.
Additional Information from the Video Description
The video description may include links to the GitHub repositories and documentation for Magentic and LiteLLM, along with any additional resources or tutorials mentioned in the video.