Presented by:

Shaun Thomas

Tembo

Shaun has spent decades working in the Postgres ecosystem, specializing in architecture and high availability. His "PostgreSQL High Availability Cookbook" serves as a treatise on the lessons he has learned over that time. Perhaps you've read something from his PG Phriday blog series over the years?

Currently he serves as a Senior Software Engineer at Tembo, striving to help make Postgres the distributed cluster-aware platform he knows it can be!

No video of the event yet, sorry!
Download the Slides

AI is a hot topic right now, and for good reason! Natural Language Search and Retrieval Augmented Generation (RAG) are two great ways to leverage data stored in Postgres in an immediately useful way. Why use Full Text Search when we can search for intent and related topics?

Actually doing it, on the other hand, is a huge pain. We need to choose an embedding model to vectorize the data, then produce, maintain, and index the embeddings. We then need to keep the embedding model around to vectorize search parameters the same way, and build queries for the appropriate similarity searches. If RAG is involved, we also need to choose a public Large Language Model API, juggle references fed into the system and user prompts, and relate all of that back to our original data.
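As a rough illustration, the manual pgvector route looks something like this. Table, column, and dimension choices are hypothetical, and the embeddings themselves must be generated by a process outside the database:

```sql
-- A minimal sketch, assuming the pgvector extension and a
-- 384-dimension embedding model; all names are illustrative.
CREATE EXTENSION IF NOT EXISTS vector;

ALTER TABLE articles ADD COLUMN embedding vector(384);

-- Every row must be vectorized by an external job, and re-vectorized
-- whenever the source text changes:
-- UPDATE articles SET embedding = <model output> WHERE id = <id>;

-- Index for approximate nearest-neighbor search by cosine distance.
CREATE INDEX ON articles USING hnsw (embedding vector_cosine_ops);

-- Searching means vectorizing the query text with the *same* model,
-- then ordering by the <=> (cosine distance) operator.
SELECT id, title
  FROM articles
 ORDER BY embedding <=> '[...]'::vector  -- query embedding goes here
 LIMIT 5;
```

Every step here is the application's responsibility: generating embeddings, keeping them fresh, and matching the query-time model to the indexing-time model.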

Or... we can just use the pg_vectorize extension. Come to this talk and we'll show you how to build a rudimentary self-maintaining RAG application with just a few Postgres queries. We'll also discuss a bit of the theory behind modern AI and how Postgres plays an integral part in that ecosystem thanks to pgvector and related extensions.
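For comparison, a sketch of what that same workflow looks like with pg_vectorize. The function names follow the project's `vectorize.table()` and `vectorize.search()` API, but the arguments here are illustrative; check the current documentation for exact signatures and defaults:

```sql
-- pg_vectorize creates and refreshes the embeddings for you;
-- job, table, and column names below are hypothetical.
SELECT vectorize.table(
    job_name    => 'article_search',
    "table"     => 'articles',
    primary_key => 'id',
    columns     => ARRAY['title', 'body']
);

-- Natural-language search is a single function call; the extension
-- vectorizes the query text with the same model automatically.
SELECT * FROM vectorize.search(
    job_name       => 'article_search',
    query          => 'distributed Postgres clusters',
    return_columns => ARRAY['id', 'title'],
    num_results    => 5
);
```

The embedding model choice, background refresh, and query-time vectorization all live inside the extension, which is what makes the application "self-maintaining."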

Mainly, we aim to demystify AI so anyone can use it thanks to Postgres.

Date:
November 6, 2024, 17:00 PST
Duration:
20 min
Room:
Dev: 422
Conference:
Seattle 2024
Language:
Track:
Dev
Difficulty:
Medium