Synapse Postmaster is an A.I. Agent

Why we built Postmaster as an Inference Engine AI Agent rather than other AI approaches.

When we first considered developing a system using LLMs to analyze communications, we had to make a fundamental architectural decision. This article explains why we built Postmaster as an Inference Engine AI Agent rather than using other approaches.

What is an AI Agent?

An AI Agent is software designed to perform tasks or make decisions autonomously using AI techniques. It perceives inputs from its environment and takes actions toward specific goals. But not all agents are created equal: there are several ways to achieve autonomous, goal-directed behavior.

Three Approaches to Goal Achievement

1. Deterministic Approach

In a deterministic approach, steps are predetermined in advance. For example:

  1. Receive email
  2. Identify customer
  3. Search CRM
  4. Retrieve ERP data
  5. Respond

This approach is predictable and easy to debug, but inflexible when facing novel situations.
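The five steps above can be sketched as a fixed pipeline. This is a minimal illustration, not Postmaster's actual code; the helper functions and their return shapes are hypothetical stand-ins for real CRM/ERP lookups.

```python
def identify_customer(sender: str) -> str:
    # Hypothetical lookup: derive a customer id from the sender address.
    return sender.split("@")[0]

def search_crm(customer_id: str) -> dict:
    # Hypothetical CRM lookup returning a customer record.
    return {"id": customer_id, "tier": "standard"}

def retrieve_erp_data(record: dict) -> dict:
    # Hypothetical ERP lookup returning open orders for the customer.
    return {"customer": record["id"], "open_orders": []}

def handle_email(email: dict) -> str:
    # The steps are fixed in advance and always run in the same order.
    customer = identify_customer(email["sender"])          # 2. identify customer
    crm = search_crm(customer)                             # 3. search CRM
    erp = retrieve_erp_data(crm)                           # 4. retrieve ERP data
    return f"Hello {crm['id']}, you have {len(erp['open_orders'])} open orders."  # 5. respond

print(handle_email({"sender": "alice@example.com", "body": "Order status?"}))
# → Hello alice, you have 0 open orders.
```

Because the control flow is hard-coded, the same input always takes the same path, which is exactly what makes this style easy to debug and hard to adapt.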

2. LLM-Based Agents

LLM-based agents use language models' reasoning to determine steps dynamically, maintaining context through conversation. They're flexible and can handle unexpected inputs, but come with significant drawbacks for business-critical applications.
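The defining difference is who picks the next step. In the sketch below, `call_llm()` is a stub standing in for a real model call; the point is that the model, not the programmer, chooses each action at runtime based on accumulated context.

```python
def call_llm(history: list) -> str:
    # Stand-in for a real model call: in a real agent, the LLM would read
    # the conversation history and propose the next action.
    if not any(step.startswith("lookup") for step in history):
        return "lookup:customer"
    return "respond"

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = [f"goal:{goal}"]
    for _ in range(max_steps):
        action = call_llm(history)   # the model decides the next step
        history.append(action)
        if action == "respond":      # the model also decides when it is done
            break
    return history
```

The flexibility comes from that open-ended loop, and so do the drawbacks: the sequence of actions is not known in advance, and every iteration is a model call you pay for.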

3. Inference Engine Agents

Inference Engine agents apply logical rules to facts and data to derive conclusions. They're ideal for expert systems with predefined knowledge bases, combining the flexibility of AI with the reliability of rule-based systems.
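A toy forward-chaining engine makes this concrete: rules fire when their conditions match known facts, adding conclusions as new facts until nothing more can be derived. The rules and fact labels here are illustrative, not Postmaster's actual rulebase.

```python
# Each rule is (set of required facts, conclusion to add when they all hold).
RULES = [
    ({"intent:invoice_query", "sender:known_customer"}, "route:accounting"),
    ({"intent:order_change"}, "route:sales"),
]

def infer(facts: set) -> set:
    facts = set(facts)
    changed = True
    while changed:              # repeat until a fixed point is reached
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires; conclusion becomes a fact
                changed = True
    return facts

print(sorted(infer({"intent:invoice_query", "sender:known_customer"})))
# → ['intent:invoice_query', 'route:accounting', 'sender:known_customer']
```

Given the same facts, the engine always derives the same conclusions, and every conclusion is traceable to a specific rule.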

Postmaster's Use Case

Postmaster routes incoming communications to the correct back-office systems based on sender requests. It handles diverse request types and large data volumes, requiring:

  • Reliable, consistent decision-making
  • Complete auditability for compliance
  • Cost-effective operation at scale
  • Integration with existing business rules

Why We Chose Inference Engine Over LLMs

1. Avoiding Hallucinations

Unlike LLMs, which can hallucinate plausible but incorrect outputs, an inference engine only derives conclusions its rules actually support. When routing financial communications, we cannot afford errors caused by AI "creativity."

2. Cost-Effectiveness

Evaluating predefined rules is far cheaper than LLM inference. Processing thousands of documents daily through an LLM would be prohibitively expensive, while rule evaluation scales efficiently.

3. Full Auditability

The inference engine shows complete decision-making rationale for compliance and troubleshooting. Every decision can be traced back to specific rules and facts, essential for regulated industries.
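Exposing that rationale can be as simple as recording each rule that fires together with the facts that triggered it. This extends the toy engine idea above with named rules and a trace; the rule names and facts are illustrative.

```python
# Named rules: name -> (set of required facts, conclusion).
RULES = {
    "R1": ({"intent:invoice_query"}, "route:accounting"),
    "R2": ({"route:accounting", "amount:over_10k"}, "flag:manual_review"),
}

def infer_with_trace(facts: set):
    facts, trace = set(facts), []
    changed = True
    while changed:
        changed = False
        for name, (conditions, conclusion) in RULES.items():
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                # Record which rule fired, on which facts, with what result.
                trace.append((name, sorted(conditions), conclusion))
                changed = True
    return facts, trace

_, trace = infer_with_trace({"intent:invoice_query", "amount:over_10k"})
for name, conds, concl in trace:
    print(f"{name}: {conds} -> {concl}")
# → R1: ['intent:invoice_query'] -> route:accounting
# → R2: ['amount:over_10k', 'route:accounting'] -> flag:manual_review
```

The trace is the audit log: an auditor can replay exactly why a given communication was flagged for manual review.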

The Hybrid Approach

This doesn't mean we abandon LLMs entirely. Postmaster uses LLMs for what they're best at:

  • Natural language understanding
  • Entity extraction
  • Intent classification
  • Content summarization

The inference engine then takes these extracted insights and applies business rules to make decisions. This gives us the best of both worlds: the language understanding of LLMs with the reliability and auditability of rule-based systems.
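The hybrid flow can be sketched in a few lines: an extraction step (here a stub standing in for an LLM call) turns raw text into structured facts, and a deterministic rule layer makes the routing decision. The fact labels and routing targets are hypothetical.

```python
def extract_facts(text: str) -> set:
    # Stand-in for LLM entity extraction and intent classification.
    facts = set()
    if "invoice" in text.lower():
        facts.add("intent:invoice_query")
    if "urgent" in text.lower():
        facts.add("priority:high")
    return facts

def route(facts: set) -> str:
    # Deterministic business rules make the final, auditable decision.
    if "intent:invoice_query" in facts:
        return "accounting" + ("/expedite" if "priority:high" in facts else "")
    return "triage"

print(route(extract_facts("URGENT: question about invoice #4711")))
# → accounting/expedite
```

The LLM's output is confined to producing facts; the decision itself is made by rules, so a wrong extraction can misroute a message but can never invent a routing target that the rules don't define.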

Conclusion

For business-critical document processing, we believe the inference engine approach provides the optimal balance of capability and reliability. It allows us to leverage AI's language understanding while maintaining the traceability and consistency that enterprises require.
