Episode Details


The hidden costs of pre-computing data | Chalk's Elliot Marx

Season 5, Episode 49 · Published 3 months, 2 weeks ago
Description

Is your engineering team wasting budget and sacrificing latency by pre-computing data that most users never see? Chalk co-founder Elliot Marx joins Andrew Zigler to explain why the future of AI relies on real-time pipelines rather than traditional storage. They dive into solving compute challenges for major fintechs, the value of incrementalism, and Elliot’s thoughts on why strong fundamental problem-solving skills still beat specific language expertise in the age of AI assistants.

Join our AI Productivity roundtable: 2026 Benchmarks Insights

*This episode was recorded live at the Engineering Leadership Conference.


OFFERS

  • Start Free Trial: Get started with LinearB's AI productivity platform for free.
  • Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era.

LEARN ABOUT LINEARB

  • AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production.
  • AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance.
  • AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil.
  • MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.
