There are 277 sessions at SCALE 23x this year. I know this because I extracted all of them from the schedule webarchive files and scored every single one.
I’m not proud of how long this took. But it surfaced some genuinely interesting tradeoffs — and the pattern of what conflicted with what tells you something real about where platform engineering is right now.
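For the curious, the extraction step is less exotic than it sounds: a Safari `.webarchive` is just a binary plist, and the page HTML sits under `WebMainResource -> WebResourceData`. A minimal sketch (the actual session parsing depends on the schedule page's markup, which I won't reproduce here):

```python
import plistlib
import io

def webarchive_html(f):
    """Extract the page HTML from a Safari .webarchive.

    A .webarchive is a binary plist; the main page's raw bytes live
    under WebMainResource -> WebResourceData.
    """
    archive = plistlib.load(f)
    return archive["WebMainResource"]["WebResourceData"].decode("utf-8")

# Round-trip demo with an in-memory archive (the real files came from
# saving the SCALE schedule pages in Safari):
fake = io.BytesIO()
plistlib.dump({"WebMainResource": {"WebResourceData": b"<html>...</html>"}}, fake)
fake.seek(0)
html = webarchive_html(fake)
```

From there it's ordinary HTML scraping to get titles, speakers, rooms, and time slots.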
The scheduling problem is different when you manage a team #
When I was an IC, conference scheduling was mostly about depth. Find the three talks that will blow your mind and plan the rest around them. Everything else is hallway track.
Managing a platform team changes the calculus. I’m still optimizing for my own learning, but I’m also scouting for ideas to bring back to the team, watching for trends that will inform our 12-month roadmap, and — honestly — looking for external validation I can use in internal conversations. “Kelsey Hightower thinks Nix is the right move for reproducible builds” lands differently in a planning meeting than “I think Nix is the right move.”
There’s also the network dimension. The hallway conversations at SCALE aren’t incidental — for a platform team, the right connection to a Chainguard or Grafana Labs engineer can directly unlock technical help you’d otherwise spend weeks getting through support tickets.
So I needed a real schedule, not just a vague list of “sessions that sound interesting.”
How I scored everything #
For each of the 277 sessions I weighed four things:
Topic relevance got the most weight. My team owns K8s operations, observability pipelines, CI/CD, IaC, developer experience, supply chain security, and increasingly AI/ML infrastructure. Sessions that touch those directly scored high; adjacent topics scored lower; “Introduction to Kubernetes” scored near zero regardless of who was presenting.
Speaker prestige got significant weight. This is a heuristic I’ve come to trust more as I’ve gotten more senior: a known speaker at a respected company has more to lose from giving a bad talk. That doesn’t mean unknown speakers are bad — some of the best talks I’ve seen came from engineers I’d never heard of — but when you’re choosing between two relevant talks, the speaker signal matters.
Talk depth I scored from title signals. “How we” and “Lessons from” and specific production numbers (“1 million incidents”, “1,300 repos”) are green flags. “Introduction to”, “101”, “Brief Tour”, “Getting Started” are skips regardless of topic. After seven years in this role, foundational content is usually a poor use of conference time.
Uniqueness got the least weight but still mattered. A talk where a Meta engineer describes production containers-in-containers at Meta’s scale is giving me something I literally cannot get from a blog post. A general overview of OpenTelemetry concepts is not.
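The four factors above can be sketched as a weighted score. The specific weights and keyword lists here are an illustrative reconstruction, not the exact ones I used; relevance, prestige, and uniqueness are 0–10 judgment calls, while depth is inferred from the title:

```python
# Illustrative reconstruction of the four-factor scoring.
# Weights and keyword lists are approximations, not the exact values used.

INTRO_FLAGS = ("introduction to", "101", "brief tour", "getting started")
DEPTH_FLAGS = ("how we", "lessons from")

def score_session(title, topic_relevance, speaker_prestige, uniqueness):
    """Weighted session score. Inputs are 0-10 judgment calls."""
    t = title.lower()
    if any(flag in t for flag in INTRO_FLAGS):
        return 0.0  # foundational content is a skip regardless of topic
    # Depth inferred from title signals: war-story phrasing scores high
    depth = 10.0 if any(flag in t for flag in DEPTH_FLAGS) else 5.0
    # Weights in descending order: relevance > prestige > depth > uniqueness
    return (0.4 * topic_relevance + 0.3 * speaker_prestige
            + 0.2 * depth + 0.1 * uniqueness)
```

The hard zero for intro titles matters more than any weight: it prunes a large fraction of 277 sessions before the judgment calls even start.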
The schedule that came out of it #
Thursday is the PlanetNix pre-conference day. Kelsey Hightower is giving a 45-minute talk on whether now is actually the time for Nix, and I’m treating that as unmissable on speaker signal alone. The rest of Thursday afternoon lines up nicely in the same room: Stormy Peters on reproducibility as a social contract, then two short PlanetNix sessions of Nix+K8s integration war stories. That leaves a 2.5-hour gap in the middle, which I’d rather spend on the expo hall and the hallway track than on workshops I’d only half attend.
Friday opens with John Willis at 9am — he co-authored The DevOps Handbook and his conference batting average is genuinely high. Then Kat Morgan doing a live demo of a platform stack that includes devcontainers, Nix, Docker-in-Docker, K8s-in-containers, KubeVirt, Ceph, Cilium, Dagger, and Gitea. The sheer scope of that list is either an ambitious talk or a 60-minute incident, and either way I want to be there. The afternoon anchors on Dustin Kirkland (SVP at Chainguard, formerly Google Cloud Distinguished Engineer) talking about agentic pipelines for OS supply chains. That’s supply chain + AI from someone with serious operational credibility.
Saturday ends with a panel that’s the clearest must-attend of the entire conference: Kelsey Hightower, Stormy Peters, James Bayer, and Ron Efroni on how AI is reshaping infra and engineering. Four of the most thoughtful people in cloud-native and open source on one stage. I’ll be in that room early.
Sunday has the Mark Russinovich keynote on OSS supply chain security. He’s CTO of Azure and created Sysinternals. For supply chain content specifically, hearing what Microsoft is actually doing at scale beats any number of framework talks.
The tradeoffs are the interesting part #
Three conflicts are worth naming because they reveal something about the current state of the field.
The Friday 2pm problem. Dustin Kirkland’s supply chain talk runs 2:00–3:00 in Ballroom F. At the same time, Leigh Capili is doing a deep technical dive into Flux internals — git push to etcd: An Anatomy of Flux — in Ballroom B. Leigh Capili is one of the people who actually knows how GitOps works at the implementation level, and the talk looks architecturally dense.
I’m choosing Kirkland because supply chain security is more strategically urgent for my team right now. But the fact that these two are competing tells you something: the “how do we manage K8s the right way” question is now multi-dimensional. It’s not just GitOps vs. not-GitOps. It’s GitOps and supply chain provenance and agentic pipelines, and you can’t cover all of it in one afternoon.
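Surfacing conflicts like this from the scored list is just an interval-overlap check. A minimal sketch (session tuples are hand-built here; in practice they came from the parsed schedule):

```python
from datetime import datetime
from itertools import combinations

def overlaps(a, b):
    """Two (title, start, end) sessions conflict if their time ranges intersect."""
    return a[1] < b[2] and b[1] < a[2]

def find_conflicts(sessions):
    """Return every pair of sessions that run at the same time."""
    return [(a[0], b[0]) for a, b in combinations(sessions, 2) if overlaps(a, b)]

day = datetime(2026, 3, 6)  # Friday of SCALE 23x
sessions = [
    ("Agentic Pipelines for OS Supply Chains", day.replace(hour=14), day.replace(hour=15)),
    ("git push to etcd: An Anatomy of Flux",  day.replace(hour=14), day.replace(hour=15)),
    ("Some Other Talk",                        day.replace(hour=16), day.replace(hour=17)),
]
conflicts = find_conflicts(sessions)  # the Friday 2pm problem falls out immediately
```

Running this over all 277 sessions, filtered to the high scorers, is what produced the conflict list below.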
Saturday 11:15am. I’m going to Renovate Your Life: How We Automated Dependency Updates for 1,300 Repos by Dimitrios Sotirakis and Philip Hope. Then at 12:30 I’m staying in the same room for We Migrated to Loki and Survived: Lessons from the Trenches — presented by Vinh Nguyen, a member of my team, in his first-ever conference talk.
The talk covers ZipRecruiter’s migration from Logz.io to self-managed Grafana Loki — cost-driven, which means the architecture decisions weren’t just technical, they were financial. The abstract promises cardinality challenges, production incidents, and an honest before/after cost comparison. That’s exactly the kind of content I’d have picked on merit anyway.
The competing session is GPU Sharing Done Right, which is directly relevant to our current AI workloads. On pure content value, that’s a real tradeoff. But other folks from my team will be at the conference and can brief me afterward, and if it’s recorded I’ll watch it.
What actually made this easy: showing up for your people matters. Vinh has never spoken at a conference before. Being in that room isn’t about the content; it’s about being the kind of manager who shows up for the moments that count to the people on your team. The debrief I get from Vinh afterward will be worth more than any talk anyway.
The 11:15 Renovate slot has its own competition: Zero Trust for Linux Admins with Open-Source IAM (Thomas Cameron, Room 101) and Rage Against the Machine: Fighting AI Complexity with Kubernetes Simplicity (Paul Yu, Ballroom A). Renovate wins because dependency automation across 1,300 repos is a production war story with a specific scale number, which is exactly the format I trust. But I’m a little annoyed at myself about skipping Thomas Cameron. Zero Trust IAM for Linux admins is something my team keeps deferring because it feels like “future work”, and I suspect a conference session is the thing that would make it feel concrete enough to actually schedule.
The Sunday 11:45am bloodbath. This is genuinely brutal. These all run simultaneously:
- Engin Diri (Pulumi) on building AI platforms without losing engineering principles
- Justin Garrison on the state of immutable Linux
- Noam Levy on profiling as the fourth observability signal
- Hrittik Roy on taming LLM resource usage with K8s
- Nathan Handler on building a unified cloud inventory
- Dawn Foster (Linux Foundation / CHAOSS) on OSS sustainability and corporate power dynamics
I’m going to Engin Diri because the topic is the closest match to what I’m actively trying to figure out. My team is under pressure to move faster on AI platform capabilities, and I’m trying to hold the line on platform quality. That tension is real and I want to hear someone reason through it carefully.
But Justin Garrison’s immutable Linux talk is the one I’ll regret most. He’s a former AWS EKS engineer with a track record of substantive, opinionated talks rather than surveys. And immutable OS infrastructure is the thing I keep saying “we’ll get to that” about. That’s a bad sign.
What the conflicts actually tell you #
There’s a pattern here. In previous years, the platform engineering conference schedule conflict was usually “which K8s operations talk” or “which observability vendor talk.” This year the conflicts are across dimensions:
- Supply chain provenance vs. GitOps depth
- AI platform architecture vs. immutable infrastructure
- Dependency automation vs. zero trust IAM
The field has gotten wide enough that a platform team manager can no longer track it all. That’s not a complaint — it’s a sign that platform engineering has matured into something with real breadth. But it does mean that individual learning from conferences has diminishing returns unless you’re selective about which sub-problems you’re trying to make progress on.
For my team, the through-line is: supply chain integrity, AI workload operations, and developer experience (in that order). The schedule I built reflects that, which means I’m systematically under-investing in security depth (the SunSecCon track) and over-indexing on strategic talks that give me ammunition for internal conversations.
That’s a defensible tradeoff for a manager. It might be the wrong tradeoff for an IC on my team.
The thing I’m most uncertain about #
I made a lot of calls based on “this is likely to be recorded” as justification for skipping something good. Leigh Capili’s Flux talk, Christian Hernandez’s AI readiness talk, Justin Garrison’s immutable Linux talk — all of those I’ve essentially punted to “watch the recording.”
But I never actually watch the recordings. I have a folder of “conference recordings to watch” that I haven’t opened since 2023.
So either I should stop using that as a justification, or I should build a real system for doing the post-conference review. I haven’t figured out which.
SCALE 23x runs March 5–8, 2026 at the Pasadena Convention Center.