Last week (week 6, 2021), seven data breaches were announced. In this episode, we discuss possible ways of preventing attackers from getting hold of your data, whether private or company data, and share tips on how to mitigate the consequences of data leaks in cases where you have no control over data management (think of a breach of a 3rd-party service).
How do you run Kubernetes in the cloud? Still using Kops? Or is it time to jump to the managed offerings? We go through the list of things you might be missing out on if not yet using a managed solution. Also, in this episode - what do you always configure in the k8s cluster? CNI, Ingress, IAM, and even more!
It’s been almost a year since we started the podcast, but we never took time to explain who we are and what problems we solve for our customers/employers. So in this episode, you will find more details about us and, as usual, references to useful tools, talks, and techniques.
AWS had a severe incident at the end of November. Kinesis in us-east-1 went dark for quite some time, and a ripple effect caused degradation of other services like CloudWatch, ECS, and others. As a Cloud Engineering practitioner, how do you get yourself and your organization ready for such a turn of events?
Andrey wants monitoring to be more magical, or is he asking for the wrong thing? What are the sane defaults? And why do we have to set up boilerplate monitoring again and again?
Mattias shares what he does for monitoring security events.
Julien explains why using logs to debug in a microservices architecture is costly and inefficient.
Initially, we planned this episode as a discussion about HashiCorp Nomad and invited Jacob Lärfors. He recently published a great article about his experience working with Nomad (see link in the show notes). However, because of a few postponements, and with HashiConf having happened just a week ago, we decided to extend the podcast's scope to cover all of the announcements made during the conference. So here it is - a HashiConf special: all you need to know about everything HashiCorp announced during the conference, plus a discussion about Nomad!
This is the first episode in the new format - short and crisp episodes: less filler and fewer side discussions, a tight focus on the topic, and a duration under (well, almost under) 30 minutes. We hope you like it!
The topic of this episode is building Docker images - automation, security, and best practices.
In this episode, we discuss:

- Saving money with the T3a family
- Building Docker images locally and in CI
- Setting up daemonless Docker builds for CI and k8s
- Using multistage builds to keep your images nice and clean, as well as to encapsulate the build environment and make it portable
- Passing secrets to Docker build and inspecting image layers for secrets (ssh-agent and many more)
- Keeping Docker images updated with dependencies and updates
- Scanning Docker images for vulnerabilities
- Docker image layer caching - doing it right
- Docker Hub is about to delete old images stored on free accounts, and GitHub is ready to host them for you
- Docker image naming, so you can quickly find everything you need to debug
Some of the information overlaps with episode #3 but greatly extends what was covered before: https://devsecops.fm/episodes/docker-secure-build/
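As a taste of two of the topics above, here is a minimal sketch of a multistage build combined with a BuildKit secret mount. The base images, stage names, and the netrc secret are illustrative assumptions, not something prescribed in the episode:

```dockerfile
# syntax=docker/dockerfile:1
# Build stage: has the full toolchain, never shipped.
FROM golang:1.16 AS build
WORKDIR /src
COPY . .
# BuildKit secret mount: the credentials file exists only during this
# RUN step and never lands in an image layer.
RUN --mount=type=secret,id=netrc,target=/root/.netrc go build -o /out/app .

# Final stage: only the compiled binary, no compilers, no secrets.
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

Built with something like `DOCKER_BUILDKIT=1 docker build --secret id=netrc,src=$HOME/.netrc -t myapp .` - the final image contains a single binary, and inspecting its layers reveals no trace of the credentials.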
In this episode, we discuss options for splitting your deployment stages. We hear people coming up with all possible types of environments: dev, test/QA, integration, stage, prod, etc. How many do you actually need? What is the reason for having all those stages? Maybe you need fewer? Why not deploy directly to production using some fancy technique?
To put it simply: to stage or not to stage?
Let’s talk about security in the era of remote work. Most of us have experienced a flaky VPN connection. What are the alternatives? SSH certificates? Yubikey? We discuss various security topics, both inside the cluster and outside.
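For the curious, here is a minimal sketch of the SSH certificate approach mentioned above, using plain OpenSSH. The key file names and the "alice" principal are illustrative assumptions:

```shell
# Create a CA key pair (in practice, keep the private key well protected).
ssh-keygen -t ed25519 -f ca_key -N '' -C 'internal CA'
# Create the user's key pair.
ssh-keygen -t ed25519 -f id_ed25519 -N '' -C 'user key'
# Sign the user's public key: identity "alice", valid for 8 hours only.
ssh-keygen -s ca_key -I alice -n alice -V +8h id_ed25519.pub
# Inspect the issued certificate (id_ed25519-cert.pub).
ssh-keygen -L -f id_ed25519-cert.pub
```

Servers trust the CA via `TrustedUserCAKeys` in sshd_config, so access expires on its own instead of stale keys piling up in authorized_keys forever.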