How Do We Protect the Software Supply Chain?

Episode 1363 · Published 3 years, 2 months ago
Description

DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to Aeva Black, an open source veteran of 25 years.

“And now we're playing catch up,” said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less-than-ideal practices have taken root in the past five years. We're trying to help educate everybody now.”

Chris Short, senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody's job, it's nobody's job,” said Short, founder of the DevOps-ish newsletter.

“We've gone through this evolution: just develop secure code, and you'll be fine,” he said. “There's no such thing as secure code. There are errors in the underlying languages sometimes …. There's no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.”

Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast.

Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City.

This podcast episode was sponsored by AWS.

‘Trust, but Verify’

For our podcast guests, “trust, but verify” is a slogan more organizations need to live by.

A lot of the security problems that plague the software supply chain, Black said, stem from companies — especially smaller organizations — “just pulling software directly from upstream. They trust a build someone's published, they don't verify, they don't check the hash, they don't check a signature, they just download a Docker image or binary from somewhere and run it in production.”

That practice, Black said, “exposes them to anything that's changed upstream. If upstream has a bug or a network error in that repository, then they can't update as well.” Organizations, they said, should maintain an internal staging environment where they can verify code retrieved from upstream before pushing it to production — or rebuild it, in case a vulnerability is found, and push it back upstream.

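Black's “verify, don't just trust” step can be made concrete with a minimal sketch. Assuming an artifact published alongside a checksums file (the URL and expected digest below are placeholders, not real values), a build pipeline might refuse to promote anything whose hash doesn't match:

```python
# Minimal sketch: compare a downloaded artifact's SHA-256 digest against the
# checksum its maintainers publish before promoting it toward production.
# The URL and expected digest are placeholders, not real values.

import hashlib
import sys
import urllib.request

ARTIFACT_URL = "https://example.com/releases/tool-1.2.3-linux-amd64.tar.gz"  # hypothetical
EXPECTED_SHA256 = "0" * 64  # taken from the project's published (ideally signed) checksums file


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large artifacts aren't read into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    local_path, _ = urllib.request.urlretrieve(ARTIFACT_URL)
    actual = sha256_of(local_path)
    if actual != EXPECTED_SHA256:
        sys.exit(f"checksum mismatch: expected {EXPECTED_SHA256}, got {actual}")
    print("checksum verified; safe to promote to an internal staging registry")
```

For container images, the same idea extends to verifying signatures (for example, with Sigstore's cosign) rather than a hard-coded digest.
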
That build environment should also be firewalled, Short added: “Create those safeguards of, ‘Oh, you want to pull a package from not an approved source or not a trusted source? Sorry, not gonna happen.’”

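One place that guardrail can live is CI. The following hypothetical sketch scans a pip requirements file and fails the build if any dependency is fetched from a host outside an internal allowlist; the file name and mirror hostname are assumptions for illustration, and in practice the policy is usually also enforced at the network or artifact-repository layer, as Short suggests.

```python
# Hypothetical CI check in the spirit of "not an approved source? Sorry, not
# gonna happen": reject requirement lines that point at unapproved hosts.

from urllib.parse import urlparse

ALLOWED_HOSTS = {"packages.internal.example.com"}  # hypothetical internal mirror


def find_unapproved_sources(requirements_path: str) -> list[str]:
    """Return requirement lines whose URL points at a host that isn't allowlisted."""
    violations = []
    with open(requirements_path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#") or "://" not in line:
                continue  # plain specifiers resolve via the configured (approved) index
            host = urlparse(line.split()[-1]).hostname or ""
            if host not in ALLOWED_HOSTS:
                violations.append(line)
    return violations


if __name__ == "__main__":
    bad = find_unapproved_sources("requirements.txt")
    if bad:
        raise SystemExit("unapproved package sources:\n" + "\n".join(bad))
```
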
Being able to rebuild code that has vulnerabilities to make it more secure — or even being able to identify what’s wrong, and quickly — are skills that not enough developers have, the podcast guests noted.

More automation is part of the solution, Short said. But, he added, by itself it's not enough. “Continuous learning is what we do here as a job,” he said. “If you're kind of like, this is my skill set, this is my toolbox and I'm not willing to grow past that, you’re setting yourself up for failure, right? So you have to be able to say, almost at a moment's notice, ‘I need to change something across my entire environment. How do I do that?’”

GitBOM and the ‘Signal-to-Noise Ratio’

As both Black and Short said during our conversation, there’s no such thing as perfectly secure code. And even such highly touted tools as software bills of materials, or SBOMs, fall short of giving teams all the information they need to determine code’s safety.

“Many projects have dependencies 10, 20, 30 layers deep,” Black said. “And so if your SBOM only goes one or two layers, you just don't have …

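Black's point about depth is easy to test against your own tooling's output. As a rough illustration, and assuming a CycloneDX-style JSON SBOM with a dependencies graph (SPDX documents would need different parsing), a short script can report how many layers the document actually records:

```python
# Rough sketch: walk the dependency graph in a CycloneDX-style JSON SBOM and
# report the deepest layer it records. Format assumptions noted above.

import json
from collections import deque


def sbom_depth(sbom_path: str, root_ref: str) -> int:
    """Breadth-first walk from the root component, returning the deepest layer reached."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    graph = {d["ref"]: d.get("dependsOn", []) for d in sbom.get("dependencies", [])}

    deepest = 0
    seen = {root_ref}
    queue = deque([(root_ref, 0)])
    while queue:
        ref, level = queue.popleft()
        deepest = max(deepest, level)
        for child in graph.get(ref, []):
            if child not in seen:
                seen.add(child)
                queue.append((child, level + 1))
    return deepest


# Example usage (file name and component ref are hypothetical):
# print(sbom_depth("app.cdx.json", "pkg:npm/my-app@1.0.0"))
```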