Managing Large Python Data Science Projects With Dask

Episode 112 · Published 3 years, 10 months ago
Description

What do you do when your data science project doesn’t fit within your computer’s memory? One solution is to distribute it across multiple worker machines. This week on the show, Guido Imperiale from Coiled talks about Dask and managing large data science projects through distributed computing.

We talk about the kinds of projects where an orchestration system like Dask helps. Dask is designed to take advantage of parallel computing, spreading work and data across multiple machines, and many familiar pandas and NumPy operations have Dask equivalents.

We also discuss the differences between managed and unmanaged memory. Guido shares advice on how to tackle memory issues while working with Dask.
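One concrete knob related to this discussion: Dask's distributed workers spill managed memory to disk and throttle work based on configurable memory thresholds. A minimal sketch (the threshold values here are illustrative, not recommendations) of adjusting them via `dask.config`:

```python
import dask

# Fractions of the worker's memory limit at which the distributed
# scheduler's workers take action on managed memory.
dask.config.set({
    "distributed.worker.memory.target": 0.60,     # start spilling to disk
    "distributed.worker.memory.spill": 0.70,      # spill more aggressively
    "distributed.worker.memory.pause": 0.80,      # pause accepting new tasks
    "distributed.worker.memory.terminate": 0.95,  # restart the worker
})

print(dask.config.get("distributed.worker.memory.target"))
```

Unmanaged memory (allocations Dask can't account for, such as library-internal buffers) is not covered by spilling, which is why the dashboard breaks the two apart when you diagnose memory pressure.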

This week we also talk briefly with Jodie Burchell, who will be a guest host on upcoming episodes. As a data scientist, Jodie will be bringing new topics, projects, and discussions to the show.

Topics:

  • 00:00:00 – Introduction
  • 00:01:56 – Guido at PyCon DE 2022
  • 00:02:14 – Working on Dask for Coiled
  • 00:03:27 – Dask project history
  • 00:04:00 – How would someone start to use Dask?
  • 00:10:28 – Managing distributed data
  • 00:11:18 – Data files CSV vs Parquet
  • 00:15:02 – Managed vs unmanaged memory
  • 00:22:42 – Video Course Spotlight
  • 00:24:01 – Dask active memory manager
  • 00:28:36 – Learning best practices and Dask tutorials
  • 00:33:06 – Where is Dask being used?
  • 00:35:45 – What are you excited about in the world of Python?
  • 00:37:55 – What do you want to learn next?
  • 00:40:31 – Thanks, Guido
  • 00:40:40 – Introduction to Jodie Burchell
  • 00:45:28 – Goodbye
