Show HN: KBAI – Build Hybrid AI with Deterministic Reasoning



KBAI is a knowledge-based AI engine that uses deterministic rules to help LLMs, AI agents, and other applications with fact-based decision making and planning.

KBAI improves AI accuracy and explainability.

It can be taught from a single example and managed by non-technical users.

Making an API call

  1. Make an API Call

    Use a simple HTTP request to query your knowledge base. Example using curl:

    curl -X POST https://<your-knowledge-base-url> \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer <your-token>" \
      -d '{"fact": "screenshot.applicationCompliant"}'

    Response:

    {
      "stopReason": "FACT_NEEDED",
      "facts": {},
      "log": [
        {
          "code": "RULE_STARTED",
          "message": "Inferring screenshot.applicationCompliant using rule 'Is the system compliant based on the visibility and approval status of applications?'",
          "fact": "screenshot.applicationCompliant",
          "dependencies": {
            "type": "object",
            "properties": {
              "application.isApproved": {
                "type": "boolean",
                "parameters": {
                  "type": "object",
                  "required": ["application"],
                  "properties": {
                    "application": { "type": "string" }
                  }
                }
              },
              "screenshot.visibleApplications": {
                "type": "array",
                "items": { "type": "string" }
              }
            }
          }
        },
        {
          "code": "FACT_NEEDED",
          "message": "Inference stopped due to unknown fact, re-run with fact screenshot.visibleApplications provided",
          "fact": "screenshot.visibleApplications"
        }
      ]
    }
  2. Request parameters:
    • fact – the name of the fact to infer (required)
    • facts – the initial set of facts to use for inference (optional)
  3. Response fields:
    • stopReason – COMPLETED if the answer was obtained (it will be contained in facts, along with the intermediate reasoning outcomes), or FACT_NEEDED if inference stopped because a fact is missing. To re-run inference, call the endpoint again, providing the facts from the output as well as the missing fact.
    • facts – all facts that were provided or inferred
    • log – a step-by-step reasoning log, listing rules in order of execution. If inference stopped with the FACT_NEEDED code, the last log entry of type FACT_NEEDED contains the name of the missing fact. Each rule in the log includes a description and a dependencies JSON Schema defining the expected data types for its input facts.

Interfacing with LLMs

KBAI can be interfaced with LLMs in two directions.

First, KBAI can provide a set of facts and a completed reasoning chain to an LLM. This can be done either in the initial prompt, by adding the facts output of KBAI to it, or by offering KBAI inference as a function to the LLM.

Usually it's sufficient to provide the facts alone: LLMs can infer their meaning from their names. In some cases, however, you may want to add descriptions of the reasoning steps and of the rule definitions that were used to reach the conclusion.
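As a rough illustration of this direction, the facts output can be folded into a prompt. The helper and prompt wording below are hypothetical; only the shape of the facts dict mirrors the API response.

```python
# Hypothetical sketch: turning a KBAI facts output into LLM prompt text.
def facts_to_prompt(question, facts):
    lines = ["- {}: {}".format(name, value) for name, value in sorted(facts.items())]
    return (
        "Answer the user's question using only the facts below,\n"
        "which were inferred by a deterministic rule engine.\n\n"
        "Facts:\n" + "\n".join(lines) +
        "\n\nQuestion: " + question
    )

prompt = facts_to_prompt(
    "Is this screenshot compliant?",
    {"screenshot.applicationCompliant": True,
     "screenshot.visibleApplications": ["Slack"]},
)
```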

Another way to offer KBAI output to an LLM is to expose KBAI rules to the LLM as functions. For this purpose, KBAI exports ready-to-use function-parameter JSON Schemas.

Simply click the "Export parameters" link next to the rule you want to expose and use the exported parameters to tell the LLM what information the function expects.
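For example, assuming an OpenAI-style tools array, an exported parameters schema (here, the application.isApproved parameters from the earlier response) could be wrapped like this; the function name and description are illustrative, not exported by KBAI:

```python
# The parameters schema comes from the rule's "Export parameters" link;
# the surrounding tool definition follows the common function-calling shape.
exported_parameters = {
    "type": "object",
    "required": ["application"],
    "properties": {"application": {"type": "string"}},
}

tool = {
    "type": "function",
    "function": {
        "name": "application_is_approved",          # illustrative name
        "description": "Check whether an application is approved (KBAI rule).",
        "parameters": exported_parameters,
    },
}
```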

Second, in many practical applications, executing the rules may require answers that an LLM can obtain from unstructured source data (texts, files, document images, etc.). In such cases it makes sense to configure the LLM to provide answers whenever KBAI inference stops with the FACT_NEEDED code.
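A minimal sketch of this direction, with a stand-in for the model call (no real LLM API is assumed): the resolver phrases a targeted question about the one missing fact and hands it to the model together with the source document.

```python
# Hypothetical FACT_NEEDED resolver; ask(question, text) stands in for an LLM call.
def make_question(missing_fact):
    # One narrow question per missing fact keeps the LLM's job simple.
    return "Based on the document, what is the value of '{}'?".format(missing_fact)

def resolve_from_document(missing_fact, source_text, ask):
    return ask(make_question(missing_fact), source_text)
```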

Combining KBAI and LLMs in this way enables an agentic approach: KBAI reasoning guides the LLM to make step-by-step decisions based on the source data, while the more complex reasoning stays in explainable KBAI rules. The resulting hybrid AI agents are more powerful and predictable than agents built with the traditional all-LLM approach.

FAQ

What is KBAI?

KBAI is a tool for building hybrid AI agents with fact-based reasoning. It uses a deterministic reasoning engine to provide accurate and reliable responses, originally developed for onboarding and support automation in Able CDP. KBAI enables the creation of agents and applications by handling logical reasoning outside of large language models (LLMs), resulting in more precise and faster responses compared to traditional prompt engineering.

How does KBAI work?

KBAI operates as a deterministic reasoning engine, similar in spirit to traditional solvers but engineered for this application without relying on legacy technologies. It converts natural-language rules into a logical framework, eliminating ambiguities. Users can review and refine these interpreted rules through KBAI’s interface, which displays them in human-readable language while hiding the underlying machine logic. This iterative, WYSIWYG-like process lets users clarify rules and ensure the system behaves as intended without needing extensive test cases.

How does KBAI handle evaluations and accuracy?

KBAI’s deterministic nature ensures consistent outputs for given inputs, reducing uncertainty in AI reasoning. Instead of relying on tools like LangSmith for evaluations, KBAI focuses on resolving ambiguities in the initial rule interpretation. Users manually review and refine rules in the interface to confirm they align with intended outcomes. This approach minimizes the need for complex test frameworks, though general-purpose testing can be applied if needed.

How does KBAI integrate with LLMs?

KBAI interoperates with LLMs in two primary ways:

KBAI-to-LLM: KBAI processes user activity or data (e.g., evaluating dozens of parameters like business type or funnel setup in Able CDP) and generates a reasoning chain (final conclusions, intermediary facts, and source data). This output is sent to an LLM, which translates it into a user-friendly response within the context of the user’s question. LLMs excel at this text transformation, making the process reliable.

LLM-to-KBAI: When KBAI needs a fact it cannot derive from its knowledge base, it pauses and queries an LLM (or a custom model) for specific information, such as extracting details from a document (e.g., “What’s the hourly wage in the contract?”). KBAI then uses these answers for further reasoning, ensuring consistency by handling complex logic itself.

What are some example use cases for KBAI?

Support Automation: In Able CDP, KBAI evaluates user activity (e.g., business type, funnel setup, and system status) to provide step-by-step guidance, which an LLM then translates into user-friendly responses.

Legal Contract Analysis: KBAI breaks down complex documents by asking LLMs simple, targeted questions (e.g., about specific contract terms), then uses the answers to perform reliable reasoning, avoiding the inconsistency of LLMs processing lengthy documents directly.

How does KBAI improve over traditional LLM-based systems?

KBAI reduces variability by handling logical reasoning outside of LLMs, which often struggle with consistent outputs for complex tasks. By converting ambiguous natural language rules into a deterministic framework, KBAI ensures reliable results. It also simplifies the process of refining AI behavior, as users can iteratively clarify rules without rewriting prompts or running extensive tests.

Can KBAI be tested like traditional AI systems?

While KBAI can be tested using general-purpose frameworks, its design minimizes the need for extensive testing. The iterative rule-refinement process allows users to directly validate and adjust the system’s behavior. For systems integrating KBAI with LLMs, testing may focus on the LLM’s ability to provide accurate facts from unstructured data, depending on the specific use case and backend model.
