When we first considered developing a system using LLMs to analyze communications, the term “AI agent” wasn’t as common as it is today. However, the foundational ideas were already in place. This article explains why we built Postmaster as an Inference Engine AI Agent.
An AI agent is software designed to perform tasks or make decisions autonomously using AI techniques. These agents interact with their environment by perceiving inputs and taking actions to achieve specific goals.
Key Characteristics of an AI Agent
- Autonomy: AI agents operate independently without human intervention.
- Perception: They interpret information from their environment through sensors or data inputs.
- Action: Based on perceptions, AI agents take actions to influence their environment or achieve objectives. These actions often involve interacting with external tools to enhance effectiveness.
- Goal-Oriented: AI agents are designed to achieve specific goals, from simple tasks like navigating a maze to complex ones like managing a financial portfolio.
The main difference in AI agent implementations is how they achieve their goals, especially in planning actions and decision-making.
How to achieve goals
There are two approaches, illustrated by examples:
Deterministic Approach Example
- Upon receiving an email,
- the system should identify the customer using their email address,
- search for customer data in the CRM,
- retrieve the customer’s history from the ERP,
- and then prepare a response.
In this approach, the steps to achieve the goal are clearly outlined and predetermined.
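The deterministic steps above can be sketched as a fixed pipeline. This is an illustrative sketch only: the CRM and ERP lookups are stubbed with dictionaries, and all names (`handle_email`, the sample records) are invented for the example, not Postmaster's actual API.

```python
# Deterministic pipeline: the steps and their order are hard-coded.
# CRM/ERP lookups are stubbed with dictionaries; in a real system they
# would be API calls to the back-office systems.

CRM = {"alice@example.com": {"customer_id": "C-001", "name": "Alice"}}
ERP = {"C-001": ["2023-01: opened contract", "2024-06: upgraded plan"]}

def handle_email(sender: str, body: str) -> str:
    customer = CRM.get(sender)                      # step 1: identify the customer
    if customer is None:
        return "Unknown sender: route to manual handling."
    history = ERP.get(customer["customer_id"], [])  # step 2: retrieve history
    summary = "; ".join(history)                    # step 3: prepare the response
    return f"Dear {customer['name']}, regarding your request: {summary}"

print(handle_email("alice@example.com", "Please review my account."))
```

Changing the process (say, adding a fraud check between steps 1 and 2) means editing and redeploying this code, which is exactly the rigidity discussed below.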
AI Approach Example
- Objective:
- When an email is received, prepare a response.
- Constraints:
- Identify the customer using the sender’s email address.
- Ensure the response includes customer history.
- Accessing customer history requires customer data from the CRM.
- Accessing the CRM requires the customer ID.
In this approach, the system is given the objectives and constraints but autonomously determines the optimal path to achieve the desired outcome.
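One way to picture the difference: the goal and its constraints become data, and a small resolver derives the order of steps instead of having it hard-coded. The sketch below is a minimal illustration of that idea (the step names and stubbed results are invented for the example), not a description of how Postmaster is implemented.

```python
# Constraints declared as data: each datum lists what it needs before it
# can be produced. The resolver works backward from the goal, satisfying
# prerequisites first, so no step ordering is written into the code.

STEPS = {
    "customer_id": {"needs": [], "run": lambda ctx: "C-001"},  # from sender address
    "crm_record":  {"needs": ["customer_id"], "run": lambda ctx: {"name": "Alice"}},
    "history":     {"needs": ["crm_record"], "run": lambda ctx: ["2024-06: upgrade"]},
    "response":    {"needs": ["history"], "run": lambda ctx: f"Reply citing {ctx['history']}"},
}

def resolve(goal: str, ctx: dict) -> dict:
    """Recursively satisfy prerequisites, then produce `goal`."""
    if goal in ctx:
        return ctx
    for need in STEPS[goal]["needs"]:
        resolve(need, ctx)
    ctx[goal] = STEPS[goal]["run"](ctx)
    return ctx

ctx = resolve("response", {})
print(ctx["response"])
```

Adding a new constraint here means adding one entry to `STEPS`; the resolution order adapts automatically.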
The deterministic approach is straightforward and easier to implement. However, updating or modifying it can be difficult, as it requires redefining steps and reprogramming. It may also be less efficient, as it follows a fixed path without optimizing for context or data.
How to build an AI Agent
Large Language Models (LLMs) offer several key capabilities:
- Comprehension of User Input (Natural Language Understanding, NLU): LLMs can interpret complex, unstructured language inputs from users, understanding the intent behind questions, commands, or statements. This ability allows the agent to accurately process and respond to a wide range of user requests.
- Generating Human-Like Responses: LLMs can produce fluent and natural-sounding text, making interactions with the agent feel more intuitive and human-like. This is critical for tasks like customer service, where the quality of interaction impacts user satisfaction.
- Decision-Making and Reasoning / Interpreting Instructions: LLMs can understand and act upon complex instructions, breaking down tasks into actionable steps. This allows the agent to handle multi-step processes or make decisions based on nuanced criteria.
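The last capability, breaking a task into actionable steps, is typically exercised through prompting. The sketch below illustrates the pattern only: the `llm` function is a stand-in returning a canned answer, where a real agent would call a chat-completion API.

```python
# Illustrative only: `llm` is stubbed so the example is self-contained.
# In practice it would be a call to an LLM provider's chat API.
def llm(prompt: str) -> str:
    return ("1. Identify customer\n2. Fetch CRM data\n"
            "3. Fetch history\n4. Draft reply")

def plan(task: str) -> list:
    """Ask the model to decompose a task, then parse the numbered lines."""
    prompt = f"Break this task into numbered steps:\n{task}"
    return [line.split(". ", 1)[1] for line in llm(prompt).splitlines()]

print(plan("Answer the incoming customer email."))
```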
All AI agents use LLMs; the key difference is how they achieve their goals.
- Workflow-Based Agents: In this approach, the steps to achieve a goal are explicitly defined in a workflow that outlines the tasks and decision-making rules required to complete the process. LLMs are used extensively to analyze the environment or situation (perception) and, sometimes, to select the appropriate process branch based on that analysis. However, these pathways are predefined, meaning the possible routes the agent can take are already established within the workflow.
- LLM-Based Agents: In this approach, the decision-making and reasoning capabilities of the LLM are used to define the steps to achieve the goal. The context of the request is maintained in a “conversation” that is enriched by the consequences of each action.
- Inference Engine Agents: Inference engines are AI systems that apply logical rules to a set of facts or data to derive conclusions or make decisions. They are typically used in Expert Systems, where they use a predefined knowledge base containing rules and relationships to analyze information and solve problems. Inference engines operate by processing these rules systematically, either through forward chaining (starting with known facts and applying rules to infer new facts) or backward chaining (starting with a goal and working backward to determine the necessary conditions to achieve it).
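Forward chaining can be shown in a few lines. The sketch below is a minimal, generic illustration (the facts and rules are invented for the email scenario; a production engine would use a richer rule language), not Postmaster's actual rule format.

```python
# Minimal forward chaining: a rule fires when all its conditions are in
# the fact base, adding its conclusion; repeat until a fixed point.

RULES = [
    ({"email_received"}, "sender_known"),           # identify the customer
    ({"sender_known"}, "crm_data_available"),       # CRM lookup now possible
    ({"crm_data_available"}, "history_available"),  # ERP history reachable
    ({"history_available"}, "response_ready"),
]

def forward_chain(facts: set) -> set:
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = forward_chain({"email_received"})
print("response_ready" in facts)  # prints True: the goal is derivable
```

Backward chaining inverts this: it starts from `response_ready` and asks which conditions must be established first.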
Managing Complexity with Inference Engine
Synapse Postmaster analyzes incoming communications and directs them to the correct back-office system based on the sender’s request. This task involves:
- A wide variety of potential requests.
- Large amounts of data to determine the appropriate action.
Though each rule is simple in isolation, complexity arises from combining different requests and accumulating information. Effective management requires a clear understanding of the data, making a comprehensive data dictionary essential.
An inference engine is well-suited to handle this complexity. It describes the data model, tools for data retrieval, and associated constraints (e.g., needing a contract ID to retrieve contract balance). It also defines the business rules that drive objectives (e.g., if the request is for contract early termination, check cancellation rules and prepare necessary back-office operations).
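The contract-balance constraint above can be made concrete with a small backward-chaining sketch. The tool names and the `get_<datum>` naming convention are assumptions made for this example; they are not Synapse Postmaster's real schema.

```python
# Tools declared with their prerequisites: you cannot call a tool until
# every datum it requires is known. Illustrative names only.
TOOLS = {
    "get_contract_balance": {"requires": ["contract_id"]},
    "get_contract_id":      {"requires": ["customer_id"]},
    "get_customer_id":      {"requires": ["sender_email"]},
}

def tool_plan(goal_tool: str, known: set, tools=TOOLS) -> list:
    """Backward-chain: to call a tool, first obtain each required datum
    via the tool that produces it (datum 'x' is produced by 'get_x')."""
    plan = []
    for req in tools[goal_tool]["requires"]:
        if req not in known:
            producer = "get_" + req
            if producer in tools:
                plan += tool_plan(producer, known, tools)
                known.add(req)
    plan.append(goal_tool)
    return plan

print(tool_plan("get_contract_balance", {"sender_email"}))
# ['get_customer_id', 'get_contract_id', 'get_contract_balance']
```

Business rules (e.g., "early termination requests must trigger a cancellation-rule check") would be declared in the same fashion and chained by the engine, rather than encoded as branching logic.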
Why Not LLM-Based AI?
We chose an Inference Engine for several reasons:
- Reliability: Inference engines avoid hallucinations. Unlike LLMs, which can generate incorrect outputs, inference engines operate on well-defined rules and logic. This ensures consistent and reliable outputs, reducing decision-making errors.
- Cost-Effectiveness: LLMs capable of processing complex requests can be expensive, requiring significant computational resources. Inference engines, relying on predefined rules and logic, are more cost-effective, especially for tasks needing consistent and precise decisions.
- Auditability: Inference engines provide an audit trail of their decision-making. Every decision can be traced back, showing how and why it was made. This transparency is crucial for compliance, troubleshooting, and improving processes, offering clear, verifiable records.
These factors (reliability, cost-effectiveness, and auditability) make inference engines a robust and practical choice for managing complex, data-driven tasks where consistency and transparency are critical.