July 12, 2024 · Product · 6 minute read
Don’t build ChatGPT for X. Focus on where ChatGPT doesn’t solve X

Rishi Raman · Co-Founder at Quill
A lot of AI products are essentially “ChatGPT for X,” but often ChatGPT is not the ideal user experience for X. Instead of trying to build a better chatbot than OpenAI, focusing on solving X specifically will often lead to a very different UX (usually not a chat interface).
For example, we were initially excited by the promise of “Your AI Data Analyst” and “ChatGPT for Data Analytics” products that would enable non-technical business users to work with data. The demos were awesome, but we became frustrated that they only seemed to work with carefully selected inputs and outputs on unrealistically clean and small schemas. Real-world schemas and business use cases ultimately made these products unusable.
We broke down every user problem we experienced in order to avoid making the same mistakes:
| User Problem | Product Mistake | Our Solution |
| --- | --- | --- |
| Can’t see the exact query being run | Thinking that hiding SQL adds value, without considering that taking it away removes the ability to validate or modify the query. This can lead to the user building incorrect dashboards that look “close enough” without realizing it (compounding errors in LLMs). | Surface the exact query in the UI |
| Can’t directly modify the query | Trying to achieve 100% accuracy. Without the SQL query as the source of truth, and without the ability to modify it, the AI “black box” must be 100% accurate before users will trust it. | Allow the end user to modify the query via a simple filtering, sorting, and pivoting UI |
| No support for complex schemas | Trying to get the LLM to understand the raw database schema, whose naming is often not in business language. | We use our semantic layer (a cleaned, customer-facing schema) that maps directly to business language, instead of the raw database schema |
| No support for self-hosting | Expecting customers to build or train their own model and run it in a container is too much friction. | We only send the AI the user’s text prompt and the relevant semantic layer metadata. Our server SDK runs the query in your cloud |
| Slow load time | Overly complex prompts and outputs, because they assume the design pattern should mirror ChatGPT. The value of the LLM here is not a written explanation for the end user (as in other use cases); it is just the query itself. | Generating a SQL query by itself is incredibly fast |
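To illustrate the self-hosting and semantic-layer rows above: only the user’s question and the semantic-layer metadata ever reach the LLM, never raw data or the raw schema. Here is a minimal sketch in TypeScript; the `SemanticField` shape and `buildPrompt` helper are hypothetical illustrations, not Quill’s actual API:

```typescript
// Hypothetical shape of semantic-layer metadata: a cleaned,
// business-language view of the raw schema (names are illustrative).
interface SemanticField {
  businessName: string; // what the user sees, e.g. "Monthly Revenue"
  table: string;        // the raw table it lives in, e.g. "orders"
  column: string;       // the raw column it maps to, e.g. "rev_usd_m"
}

// Build the (small) prompt sent to the LLM: just the user's question
// plus the relevant semantic-layer metadata -- no customer data.
function buildPrompt(question: string, fields: SemanticField[]): string {
  const schema = fields
    .map((f) => `${f.businessName} -> ${f.table}.${f.column}`)
    .join("\n");
  return `Schema:\n${schema}\n\nQuestion: ${question}\n\nRespond with a single SQL query.`;
}
```

Because the prompt is this small and the expected output is a single query rather than a narrated explanation, generation stays fast, and the query the LLM returns can then run inside the customer’s own cloud.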
LLM generating an answer ≠ Properly traversing a decision tree
When a human does any kind of work or solves a problem, they are moving along a decision tree. For very simple tasks, the tree may be a single branch with one input and one output, like a 19th-century factory worker performing the same task over and over. As the complexity of the task increases, so does the complexity of the decision tree, and the value of the work is directly related to the difficulty of traversing that tree well. For a complex task, discovering a valid path often requires going down “wrong” paths, backtracking, and using that updated context to eventually find the right one. It’s usually too hard (if not impossible) to perfectly foresee everything that needs to be done, and how to do it, in a vacuum.
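That kind of traversal can be loosely sketched as a depth-first search with backtracking (a toy illustration; the tree shape and goal check here are made up):

```typescript
// A node in a decision tree: a choice plus the options it opens up.
interface Node {
  choice: string;
  children: Node[];
}

// Depth-first search with backtracking: go down a branch, and if it
// dead-ends without satisfying `isGoal`, back up and try a sibling.
function findPath(
  node: Node,
  isGoal: (path: string[]) => boolean,
  path: string[] = []
): string[] | null {
  const next = [...path, node.choice];
  if (isGoal(next)) return next;
  for (const child of node.children) {
    const found = findPath(child, isGoal, next); // descend one branch
    if (found) return found;                     // valid path discovered
    // otherwise: backtrack and try the next sibling
  }
  return null;
}
```

The point of the analogy: the “work” is not any single step, it is knowing when a branch is wrong and using that to redirect the search.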
One branch at a time
The best AI products don’t expect an LLM to solve an entire decision tree perfectly; they help the user solve each step down the tree. The LLM adds value by making each discrete step easier and faster, instead of pretending to understand complex technical things and AI-splaining them to you. We know AIs will make mistakes, so let’s design for that.
What we built
Quill is a full-stack SDK that SaaS companies use to add native, customer-facing dashboards & reporting features to their product. We’ve released a custom reporting component with AI query generation on top. Check out our docs here. If you’re curious to learn more, you can book a demo here.
```jsx
import { ReportBuilder } from "@quillsql/react";

// Render the report builder with AI query generation enabled
<ReportBuilder
  aiEnabled={true}
  containerStyle={{ height: 600, width: "100%" }}
/>
```