THE COMPANY
THE IDEATION
We used to operate in the IT sector, focusing on the less glamorous back-office software that supports company operations within the banking industry.
When large language models (LLMs) gained popularity, we explored how they could be applied in our industry. We recognized that, despite their impressive language-understanding capabilities, LLMs lack reliable reasoning skills. In addition, they work as a black box, with no traceability in their decision-making, making them unsuitable for managing operations on their own.
This led us back to an AI technique we studied 35 years ago in school, known for its strong reasoning capabilities, its ability to follow instructions, and its adherence to established rules, all ideal for making auditable decisions: inference engines. Inference engines are powerful, but they lack a fluent interface with the real world: their inputs must be structured.
And the idea was born: combining an inference engine with an LLM could be a game-changing solution for boosting operational efficiency.
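To make the pairing concrete, here is a minimal, hypothetical sketch of the idea: a stub stands in for the LLM (a real system would prompt a model to extract this structure from free text), and a tiny forward-chaining inference engine then applies auditable rules to the extracted facts. All names, rules, and values are invented for illustration only.

```python
def llm_extract(email_text):
    # Stand-in for an LLM call: turns unstructured text into structured facts.
    # Hard-coded here so the sketch runs without a model.
    return {"request": "wire_transfer", "amount": 15000, "customer_verified": True}

# Each rule: (name, condition over the facts, fact asserted when the rule fires)
RULES = [
    ("high_value", lambda f: f.get("amount", 0) > 10000, ("needs_review", True)),
    ("verified_ok", lambda f: f.get("customer_verified") and not f.get("needs_review"),
     ("approved", True)),
    ("reviewed_path", lambda f: f.get("needs_review"), ("route_to", "compliance")),
]

def infer(facts):
    """Forward-chain until no rule adds a new fact; record each firing for audit."""
    trace = []
    changed = True
    while changed:
        changed = False
        for name, cond, (key, value) in RULES:
            if key not in facts and cond(facts):
                facts[key] = value
                trace.append(name)  # every decision step is traceable
                changed = True
    return facts, trace

facts, trace = infer(llm_extract("Please wire 15,000 EUR to the usual account."))
# A 15,000 transfer exceeds the review threshold, so it is routed to compliance,
# and the trace records exactly which rules fired, in order.
```

The LLM handles the messy, unstructured input; the rule engine makes the actual decision, and the `trace` list is what makes that decision auditable.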
Our Vision on AI
While LLMs are exceptional at generating text and understanding unstructured content such as emails and complex documents, and inference engines excel at logical reasoning and decision-making, they are just two components in a broader ecosystem of AI technologies. Other elements, such as image recognition, speech-to-text, and deep learning, also play crucial roles. By integrating these diverse capabilities, we can create a powerful “new interface” that enables dynamic interactions.
AI is not designed to replace existing systems or handle every task. Just as you wouldn’t ask an engineer to perform a doctor’s job, AI should be viewed as a complementary tool, enhancing rather than substituting human capabilities.
To maximize its effectiveness, AI must have access to both structured data (like databases) and unstructured data (such as internal documents or emails). This comprehensive access is essential for improving AI-driven decision-making and automating processes.
Future AI architectures should be built to handle multi-modal inputs, including text, voice, images, and potentially other data types. This approach would support a more integrated and flexible method for managing diverse user interactions and requests.
Our Philosophy on Software Deployment
Just as you don’t need to be an expert in database engine development to use databases effectively, you shouldn’t need to be an AI expert to leverage our solutions. Our goal is to make our products as accessible and user-friendly as the tools you use every day. Our APIs are designed to be intuitive from a business perspective, allowing seamless integration into your existing workflows so that your usual system-integration processes still apply.
Services—often referred to as microservices—are the ideal approach for delivering software. This architecture promotes modularity, scalability, and flexibility, enabling continuous delivery and easier maintenance by breaking down applications into smaller, independent components that can be developed, deployed, and scaled individually.
The initial implementation of software is inherently complex: critical structuring decisions must be made while the provider does not yet fully understand the customer’s context and the customer is still unfamiliar with the software. Developing a Minimum Viable Product (MVP) is therefore essential, allowing rapid experimentation so the consequences of early decisions become clear before changes grow too costly.
The Meaning of Our Name
A synapse is a junction between two neurons where signals are transmitted, enabling communication within the nervous system.
We like to imagine neurons as partners in a process, whether with humans or IT systems, with our solutions enabling a dynamic exchange of information between them.