Guaranteed quality and structure in LLM outputs - with Shreya Rajpal of Guardrails AI

Published 2 years, 10 months ago
Description

Tomorrow, 5/16, we’re hosting Latent Space Liftoff Day in San Francisco. We have some amazing demos from founders at 5:30pm, and we’ll have an open co-working starting at 2pm. Spaces are limited, so please RSVP here!

One of the biggest criticisms of large language models is their inability to tightly follow requirements without extensive prompt engineering. You might have seen examples of ChatGPT playing a game of chess and making many invalid moves, or adding new pieces to the board.

Guardrails AI aims to solve these issues by adding a formalized structure around inference calls, which validates both the structure and quality of the output. In this episode, Shreya Rajpal, creator of Guardrails AI, walks us through the inspiration behind the project, why it’s so important for models’ outputs to be predictable, and why she went with an XML-like syntax.

Guardrails TLDR

Guardrails AI rules are created as RAILs, which have three main “atomic objects”:

* Output: what should the output look like?

* Prompt: template for requests that can be interpolated

* Script: custom rules for validation and correction
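
Putting those three objects together, a RAIL spec is an XML-like file. Here is an illustrative sketch of one for text-to-SQL; the element and attribute names follow the early Guardrails docs from memory, so treat them as approximate rather than authoritative:

```xml
<rail version="0.1">
<output>
    <!-- Output: what should the output look like? -->
    <string
        name="generated_sql"
        description="A SQL query that answers the user's question."
        format="bug-free-sql"
        on-fail-bug-free-sql="reask"
    />
</output>
<prompt>
<!-- Prompt: a template whose {{variables}} are interpolated at call time -->
Generate a SQL query that answers the following question: {{question}}
</prompt>
</rail>
```

A `<script>` element can additionally register custom validators, covering the third atomic object.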

Each RAIL can then be used as a “guard” when calling an LLM. You can think of a guard as a wrapper around the API call: before returning the output, the guard validates it, and if validation fails it re-asks the model.
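
The validate-then-re-ask loop can be sketched in plain Python. Note that `guard`, `validate`, and `call_llm` below are illustrative names for the pattern, not the actual Guardrails AI API:

```python
# Minimal sketch of the "guard as wrapper" pattern: validate the LLM's
# output, and re-ask on failure. Names are illustrative, not the real API.

def validate(output: str) -> list[str]:
    """Return a list of validation errors (empty means the output passed)."""
    errors = []
    if not output.strip().upper().startswith("SELECT"):
        errors.append("Output must be a SELECT query.")
    return errors

def guard(call_llm, prompt: str, max_reasks: int = 2) -> str:
    """Wrap an LLM call: validate the output, re-ask the model on failure."""
    output = call_llm(prompt)
    for _ in range(max_reasks):
        errors = validate(output)
        if not errors:
            return output
        # Re-ask: feed the failing output and the errors back to the model.
        reask_prompt = (
            f"{prompt}\n\nYour previous answer was:\n{output}\n"
            f"It failed validation: {'; '.join(errors)}\nPlease correct it."
        )
        output = call_llm(reask_prompt)
    return output

# Stub model: fails once, then returns a valid query.
responses = iter(["DROP TABLE users;", "SELECT * FROM users;"])
result = guard(lambda p: next(responses), "Write a query listing all users.")
```

The key design point is that re-asking is bounded (`max_reasks`), so a persistently misbehaving model can't loop forever.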

Here’s an example of a bad SQL query being returned, and what the ReAsk query looks like:
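
The original post showed this as an image. As a rough stand-in, the essential ingredients of a re-ask prompt are the model's failing output plus the validator's error; the exact template Guardrails uses differs, but a hypothetical sketch of assembling one looks like:

```python
# Hypothetical sketch of what a ReAsk prompt contains: the bad output
# and a description of what failed, so the model can self-correct.
def build_reask_prompt(bad_output: str, error: str) -> str:
    return (
        "Your previous response failed validation.\n\n"
        f"Response:\n{bad_output}\n\n"
        f"Problem: {error}\n\n"
        "Please return a corrected response."
    )

reask = build_reask_prompt(
    "SELECT * FROM user LIMIT;",        # malformed SQL from the model
    "syntax error at or near 'LIMIT'",  # validator's error message
)
```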

Each RAIL is also model-agnostic. This allows for output consistency across different models, even if they have slight differences in how they are prompted. Guardrails can easily be used with LangChain and other tools to structure your outputs!

Show Notes

* Guardrails AI

* Text2SQL

* Use Guardrails and GPT to play valid chess

* Shreya’s AI Tinkerers demo

* Hazy Research Lab

* AutoPR

* Ian Goodfellow

* GANs (Generative Adversarial Networks)

Timestamps

* [00:00:00] Shreya's Intro

* [00:02:30] What's Guardrails AI?

* [00:05:50] Why XML instead of YAML or JSON?

* [00:10:00] SQL as a validation language?

* [00:14:00] RAIL composability and package manager?

* [00:16:00] Using Guardrails for agents

* [00:23:50] Guardrails "contracts" and guarantees

* [00:31:30] SLAs for LLMs

* [00:40:00] How to prioritize as a solo founder in open source

* [00:43:00] Guardrails open source community involvement

* [00:46:00] Working with Ian Goodfellow

* [00:50:00] Research coming out of Stanford

* [00:52:00] Lightning Round

Transcript

Alessio: [00:00:00] Hey everyone. Welcome to the Latent Space Podcast. This is Alessio, Partner and CTO-in-Residence at Decibel Partners. I'm joined by my co-host Swyx, writer and editor of Latent Space.

Swyx: And today we have Shreya Rajpal in the studio. Welcome Shreya.

Shreya: Hi. Hi. Excited to be here.

Swyx: Excited to have you too.

This has been a
