Fabi.ai August update: Boosting SQL generation accuracy

August has been a month of building here at Fabi.ai. Our big focus has been on improving SQL-generation accuracy, and we're extremely pleased with the results. We've also started chipping away at the collaboration experience between the person asking a question and the data analyst, for those cases where answering it requires a bit of extra support.

Quantum leap in SQL-generation accuracy 🎯

The art (and science) of generating accurate code lies in providing just the right context and accurate examples relevant to the question being asked. We’ve developed our own proprietary prompt pipeline to do just that. The AI can now create complex SQL queries with CTEs, window functions and multi-way joins in the blink of an eye.

AI crushing complex SQL queries
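To give a sense of the shape of query this unlocks, here is an illustrative example of our own (the orders, customers and regions tables and their columns are hypothetical, not pulled from any real workspace) that combines a CTE, a window function and a multi-table join in a single statement:

```sql
-- Illustrative only: hypothetical orders, customers and regions tables.
WITH monthly_revenue AS (
    SELECT
        c.region_id,
        DATE_TRUNC('month', o.created_at) AS month,
        SUM(o.amount) AS revenue
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY 1, 2
)
SELECT
    r.name AS region,
    m.month,
    m.revenue,
    -- Window function: month-over-month revenue change within each region
    m.revenue - LAG(m.revenue) OVER (
        PARTITION BY m.region_id ORDER BY m.month
    ) AS revenue_change
FROM monthly_revenue m
JOIN regions r ON r.id = m.region_id
ORDER BY region, m.month;
```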

Write SQL with a touch of magic 🪄

We’ve gotten feedback that data, engineering and other semi-technical teams would like to use Fabi.ai to jumpstart their SQL analysis and share out their results. This is great for anyone who wants to build on SQL queries written by their teammates without having to tap them on the shoulder.

You can now ask a question in Fabi.ai, then jump into SQL editing. From there you can quickly format your code and even ask the AI to help debug a query.

Quickly correct your query with AI
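As a hedged illustration of the kind of fix the AI can help with (again using a hypothetical orders table), here is a common aggregation mistake and its correction:

```sql
-- Before: fails because "status" is neither grouped nor aggregated
SELECT customer_id, status, COUNT(*) AS order_count
FROM orders
GROUP BY customer_id;

-- After: group by every non-aggregated column in the SELECT list
SELECT customer_id, status, COUNT(*) AS order_count
FROM orders
GROUP BY customer_id, status;
```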

Other goodies 🎁

  • Gmail login: We now support 1-click login for Gmail accounts
  • PostGIS extension: If you’re using this extension, you can now ask Fabi.ai geo-based questions (see the sketch below)
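As an illustrative sketch (assuming a hypothetical stores table with a geography column named location), a question like “which stores are within 5 km of downtown San Francisco?” might translate into a PostGIS query along these lines:

```sql
-- Illustrative only: assumes a stores table with a geography column "location".
SELECT
    name,
    ST_Distance(
        location,
        ST_MakePoint(-122.4194, 37.7749)::geography
    ) AS meters_away
FROM stores
WHERE ST_DWithin(
    location,
    ST_MakePoint(-122.4194, 37.7749)::geography,  -- downtown San Francisco (lon, lat)
    5000                                          -- radius in meters
)
ORDER BY meters_away;
```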

Quick LLM & AI learnings

  • Llama 2 vs OpenAI: We haven’t yet tried Code Llama, which looks promising, but when Llama 2 first came out we ran some quick tests and found that it performed materially worse than OpenAI’s models for code generation.
  • Location in prompt matters: Larger prompts aren’t necessarily better. Information buried in the middle of a prompt tends to get ignored, so favor smaller, more relevant prompts over overly inclusive ones.

"I was able to get insights in 1/10th of the time it normally would have"

Don't take our word for it, give it a try!