Non-hallucinating AI
for self-served BI.

Selfr is a self-served BI platform powered by a new class of AI model that completely eliminates hallucinations, making conversational BI 100% factual and trustworthy.

Agentic architecture with 3 separate agents

"Data is only useful if it's trustworthy, which is why hallucinations are the biggest hindrance to the adoption of AI for decision making."

Nour Lake - CEO and co-founder @Selfr

"Data is only useful if it's trustworthy, which is why hallucinations are the biggest hindrance to the adoption of AI for decision making."

Nour Lake - CEO and co-founder @Selfr

"Data is only useful if it's trustworthy, which is why hallucinations are the biggest hindrance to the adoption of AI for decision making."

Nour Lake - CEO and co-founder @Selfr

SELF-SERVED BUSINESS INTELLIGENCE

Making self-served BI a reality.

By fully eliminating LLM hallucinations, our breakthrough agentic AI technology clears the final barrier to widespread adoption of natural language BI.

All the benefits of natural language BI…

Zero skills required

Accessible to anyone who can ask questions in plain English, no special skills required.

Instant insights

Empower employees to answer their data-related business questions in real time.

No more BI backlogs

Eliminate BI backlogs and empower BI/data teams to work on value-added projects.

…with none of the LLMs' flaws.

Zero hallucinations

Our unique AI agents completely eliminate hallucinations, finally making chat-based BI work.

Relevant answers

Our agentic architecture leverages your entire data context to deliver highly relevant answers.

OUR PROPRIETARY TECHNOLOGY

The most advanced AI.
Built specifically for BI.

Our unique data intelligence platform uses advanced AI agents specifically designed to outperform any existing text2SQL solution and guarantee zero hallucinations.

"Our goal was not to reduce, but to completely eliminate hallucinations. Fine-tuning, RAG, and LLM-as-a-judge didn't cut it. We had to design our own new models and agents."

Naim Kosayyer - CTO and co-founder @Selfr

"Our goal was not to reduce, but to completely eliminate hallucinations. Fine-tuning, RAG, and LLM-as-a-judge didn't cut it. We had to design our own new models and agents."

Naim Kosayyer - CTO and co-founder @Selfr

"Our goal was not to reduce, but to completely eliminate hallucinations. Fine-tuning, RAG, and LLM-as-a-judge didn't cut it. We had to design our own new models and agents."

Naim Kosayyer - CTO and co-founder @Selfr

MAPPING AVAILABLE DATA

Context agent

An instruction-tuned Llama 3 model that uses a novel MoE adapter architecture to understand the semantics of your data and inform the intent agent of which data is available. It meticulously analyzes data, metadata, documentation, SQL models, and permissions for all authorized schemas in your warehouse.
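
To make this concrete, here is a minimal sketch of the first step, mapping which tables and columns exist, using an in-memory SQLite database as a stand-in for a warehouse. The table names, helper function, and output format are illustrative only; they are not Selfr's implementation.

```python
import sqlite3

# Stand-in "warehouse": an in-memory SQLite database with two toy tables.
# A real deployment would read information_schema / catalog APIs instead.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders    (order_id INTEGER, customer_id INTEGER, amount REAL, ordered_at TEXT);
    CREATE TABLE customers (customer_id INTEGER, country TEXT, signed_up_at TEXT);
""")

def map_available_data(conn: sqlite3.Connection) -> dict[str, list[tuple[str, str]]]:
    """Return {table: [(column, type), ...]} for every table the caller can see."""
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return {
        table: [(row[1], row[2]) for row in conn.execute(f"PRAGMA table_info({table})")]
        for table in tables
    }

# Render a compact context block that a downstream (intent) agent could condition on.
for table, columns in map_available_data(conn).items():
    cols = ", ".join(f"{name} {ctype}" for name, ctype in columns)
    print(f"{table}({cols})")
# orders(order_id INTEGER, customer_id INTEGER, amount REAL, ordered_at TEXT)
# customers(customer_id INTEGER, country TEXT, signed_up_at TEXT)
```

In a real warehouse the same map would come from the catalog or information_schema, enriched with documentation, SQL models, and permissions as described above.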

Diagram: how the context agent pulls metadata from a modern data stack

Learns from models

It learns data semantics by reading the SQL models and rebuilding the lineage instead of relying on docs alone.

Column-level

It keeps track of transformations at the table and column level, building column-level lineage to map your data more precisely.

Better than RAG

For real-world data schemas, our context agent reaches 95% relevance, far above the 50% typically obtained with RAG.
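
A rough sketch of the "Learns from models" and "Column-level" ideas above: parse a SQL model and record which upstream columns feed each output column. The example uses the open-source sqlglot parser purely as a stand-in; the page does not say how Selfr builds its lineage.

```python
# pip install sqlglot  -- open-source SQL parser, used here only for illustration
import sqlglot
from sqlglot import exp

# A toy dbt-style model: a derived view defined on top of two raw tables.
model_sql = """
SELECT
    o.order_id,
    o.amount * 1.2 AS gross_amount,
    c.country      AS customer_country
FROM orders AS o
JOIN customers AS c ON c.customer_id = o.customer_id
"""

def column_lineage(sql: str) -> dict[str, list[str]]:
    """Map each output column of a SELECT to the upstream columns it is built from."""
    select = sqlglot.parse_one(sql).find(exp.Select)
    # Resolve table aliases (o -> orders, c -> customers) so lineage uses real names.
    aliases = {t.alias_or_name: t.name for t in select.find_all(exp.Table)}
    lineage: dict[str, list[str]] = {}
    for projection in select.expressions:
        sources = sorted({
            f"{aliases.get(col.table, col.table)}.{col.name}"
            for col in projection.find_all(exp.Column)
        })
        lineage[projection.alias_or_name] = sources
    return lineage

for out_col, sources in column_lineage(model_sql).items():
    print(out_col, "<-", sources)
# order_id         <- ['orders.order_id']
# gross_amount     <- ['orders.amount']
# customer_country <- ['customers.country']
```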

UNDERSTANDING USER INTENT

Intent agent

A Llama 3 model fine-tuned to understand the user's question and analyze it in light of the context passed by the context agent (available data schemas, permissions, business rules, etc.). It infers the most likely intent and returns a well-formulated question, or suggests alternatives to eliminate ambiguity when the intent is unclear.

Diagram: how the intent agent combines the user query with the context agent's output to derive user intent

BI-specific

It is trained on tens of thousands of BI questions and answers to outperform general-purpose AI models on BI questions.

Organization-specific

It leverages the context agent to take all of the organization's data and docs into account when determining user intent.

Usage-specific

It is continuously fine-tuned on actual user input to learn the specific usage and conventions of your organization.
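
One way to picture the intent agent's contract: it takes the user's question plus the context agent's map and returns either a single well-formulated question bound to real columns, or a set of candidate interpretations when the question is ambiguous. The sketch below is a hypothetical output shape, not Selfr's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical output contract for an intent agent; the field names are
# illustrative and are not Selfr's actual schema.
@dataclass
class Interpretation:
    question: str            # reformulated, unambiguous question
    tables: list[str]        # tables it would draw from
    columns: list[str]       # columns it would touch
    confidence: float        # the model's own estimate, 0..1

@dataclass
class IntentResult:
    resolved: Interpretation | None                                   # set when the intent is clear
    alternatives: list[Interpretation] = field(default_factory=list)  # offered when it is not

# "Revenue last quarter" is ambiguous in a schema that has both gross and net
# amounts, so instead of guessing, the agent returns alternatives to choose from.
ambiguous = IntentResult(
    resolved=None,
    alternatives=[
        Interpretation("Total gross order amount for the last completed quarter",
                       ["orders"], ["orders.amount"], 0.55),
        Interpretation("Total net revenue (gross minus refunds) for the last completed quarter",
                       ["orders", "refunds"], ["orders.amount", "refunds.amount"], 0.40),
    ],
)
```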

BUILDING RELEVANT AND TRUSTED ANSWERS

Query agent

A new type of transformer re-engineered to generate a spec for a built-in abstraction layer. This layer was designed to encode the fundamental structure of BI questions, including notions like aggregations, filters, and time operations. The abstraction layer deterministically compiles the spec into a SQL query, a chart, and a text explanation with 100% accuracy (no hallucinations).

Diagram: how the query agent's built-in abstraction layer guarantees zero hallucinations

Unambiguous

It explicitly displays how the question was understood and suggests alternatives when the intent is unclear.

Strictly data-based

If no data matches the user's intent, the query agent declines to answer rather than fabricating a hallucinated one.

100% trusted answers

Answers are compiled deterministically by the abstraction layer, eliminating all possible hallucinations.
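
The pattern behind these guarantees is that the model never writes SQL or prose directly: it only fills in a constrained spec, which ordinary deterministic code validates against the schema and compiles. A minimal sketch of that idea, with hypothetical spec fields and a toy compiler (not Selfr's internal format):

```python
from dataclasses import dataclass, field

# Hypothetical query spec: the only artifact the model is allowed to produce.
# Everything shown to the user (SQL, chart, explanation) is derived from it
# by deterministic code, never free-form generation.
@dataclass
class QuerySpec:
    table: str
    metric: str                                                        # column to aggregate
    agg: str                                                           # one of a closed set
    group_by: list[str] = field(default_factory=list)
    filters: list[tuple[str, str, str]] = field(default_factory=list)  # (column, op, literal)

ALLOWED_AGGS = {"sum", "avg", "min", "max", "count"}

def compile_to_sql(spec: QuerySpec, known_columns: set[str]) -> str:
    """Deterministically turn a validated spec into SQL; refuse anything off-schema."""
    if spec.agg not in ALLOWED_AGGS:
        raise ValueError(f"unsupported aggregation: {spec.agg}")
    for col in [spec.metric, *spec.group_by, *(c for c, _, _ in spec.filters)]:
        if col not in known_columns:
            # Mirrors "strictly data-based": no matching data means no answer.
            raise ValueError(f"unknown column: {col}")
    select = spec.group_by + [f"{spec.agg.upper()}({spec.metric}) AS {spec.agg}_{spec.metric}"]
    sql = f"SELECT {', '.join(select)} FROM {spec.table}"
    if spec.filters:
        sql += " WHERE " + " AND ".join(f"{c} {op} {v}" for c, op, v in spec.filters)
    if spec.group_by:
        sql += " GROUP BY " + ", ".join(spec.group_by)
    return sql

spec = QuerySpec(table="orders", metric="amount", agg="sum",
                 group_by=["country"], filters=[("ordered_at", ">=", "'2024-01-01'")])
print(compile_to_sql(spec, known_columns={"amount", "country", "ordered_at"}))
# -> SELECT country, SUM(amount) AS sum_amount FROM orders WHERE ordered_at >= '2024-01-01' GROUP BY country
```

Because the compiler only accepts columns and aggregations it already knows, a malformed spec fails loudly instead of producing a plausible-looking but fabricated answer.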

"Zero hallucinations was a game changer for us: we deployed to non-techies who can't check the SQL code themselves."

Kim Mazzilli - Co-founder and CEO @Horace

"Zero hallucinations was a game changer for us: we deployed to non-techies who can't check the SQL code themselves."

Kim Mazzilli - Co-founder and CEO @Horace

"Zero hallucinations was a game changer for us: we deployed to non-techies who can't check the SQL code themselves."

Kim Mazzilli - Co-founder and CEO @Horace

Embrace hallucination-free AI-powered BI

© Selfr Inc. 2024

Ready to become data-driven?