LangGraph - The Agent Orchestrator



AI Summary

Summary: LangGraph - An Agent Orchestrator

  • Introduction
    • LangGraph is a package from LangChain, developed by Harrison Chase and his team.
    • It serves as an agent orchestrator.
    • A Medium blog post explaining orchestration is recommended reading before watching the recording.
  • Background
    • Orchestration and choreography are service interaction patterns from microservices.
    • Orchestration can be applied to micro agents to create multi-agent interaction systems.
  • Motivation for LangGraph
    • Existing multi-agent frameworks are often monolithic.
    • Traditional frameworks define agents and orchestration within the same component.
    • This is problematic because changes to the orchestration can ripple into agent definitions.
  • LangGraph’s Approach
    • Decouples agent definition from orchestration.
    • Allows agents to be developed separately, with orchestration layered on top.
    • Similar to microservices, where atomic services are orchestrated by tools like Camunda, Pega, or Flowable.
  • Capabilities of LangGraph
    • Combines traditional microservices with micro agents.
    • Supports agents from different LLM providers and frameworks (e.g., Gemini, Bedrock, OpenAI GPT, AutoGen, CrewAI, Langroid).
    • Enables hybrid techniques for service interaction.
  • Use Case Example
    • An application that presents regional news to geographically distributed users.
    • Atomic agents for BBC, CNN, Fox, and NDTV collect news.
    • LangGraph orchestrates these agents to filter news for users from India, the USA, and EMEA.
  • Demonstration
    • For simplicity, responses from agents are hardcoded instead of calling actual services.
    • Four wrappers created for BBC, CNN, NDTV, and Fox agents.
    • LangGraph maintains a shared state (a messages list) accessible to all agents.
    • Workflow defined with entry points, edges, and a finish point.
    • Compiling the workflow generates a runnable interface with an invoke function.
    • Example run collects and prints news from all four agents.
  • Conclusion
    • The demonstration simplifies LangGraph’s concept without using real LLM agents.
    • Future recordings will explore more complex use cases and integration with traditional microservices.