Beyond “Spicy Autocomplete”: What AI Models Actually Do
Matt Stauffer sits down with Sam Rose, developer educator at ngrok, to unpack what’s actually happening inside the AI models we use every day. Sam breaks down ideas like parameters, gradient descent, embeddings, and vector search in a way that makes them intuitive, using simple analogies such as converting Celsius to Fahrenheit. Along the way, they explore Ethan Mollick’s concept of the “jagged frontier” of AI capability and discuss why benchmarks like SWE-bench don’t always tell the full story.
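To make the Celsius-to-Fahrenheit analogy from the episode concrete: gradient descent can "learn" the two parameters of F = wC + b from example pairs, nudging each parameter to reduce error, which mirrors (at tiny scale) how a model's billions of parameters are tuned. This is a minimal illustrative sketch, not code from the episode:

```python
# Training data: (celsius, fahrenheit) pairs from the true formula F = 1.8C + 32.
data = [(c, 1.8 * c + 32) for c in range(-40, 101, 10)]

w, b = 0.0, 0.0  # start with arbitrary parameter values
lr = 3e-4        # learning rate: how big each correction step is

for _ in range(100_000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * c + b - f) * c for c, f in data) / len(data)
    grad_b = sum(2 * (w * c + b - f) for c, f in data) / len(data)
    # Step each parameter downhill, against its gradient.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward 1.8 and 32
```

The loop never "knows" the formula; it only sees examples and an error signal, yet the parameters settle on the familiar conversion constants.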
The conversation wraps with a thoughtful look at the ethical tensions around modern AI: intellectual property concerns, the massive financial momentum behind these systems, and what it means to keep using these tools thoughtfully even when you’re still wrestling with their implications.
- Matt Stauffer on Twitter
- Tighten Website
- Sam Rose's Website
- Sam Rose on Twitter
- Prompt Caching Blog Post
- Co-Intelligence Book
- Benchmark Blog Post
- Distributed Representations of Words and Phrases and their Compositionality
- Stephen Welch on Twitter
- Welch Labs on YouTube
- Julia Turc on YouTube
- Build a Large Language Model (From Scratch) Book
- Nathan Lambert
- Aaron Francis Website
- Faster.dev
- Mitchell Hashimoto
- Armin Ronacher
- My AI Skeptic Friends Are All Nuts Blog Post
-----
Editing and transcription sponsored by Tighten.
Creators and Guests
Host
Matt Stauffer
CEO of Tighten, where we write Laravel and more w/some of the best devs alive. "Worst twerker ever, best Dad ever" –My daughter
