Optical Cables and Multicast: Why Your Network Hardware Matters More Than You Think

The Problem: You’ve optimized your multicast application. The code is clean. The algorithm is sound. Yet your latency numbers lag behind competitors, and you can’t figure out why. The bottleneck isn’t in your logic—it’s in the cable running under the floor. This isn’t flashy. Nobody gives conference talks or writes blog posts about fiber optics. But here’s the reality: in high-frequency trading, real-time video delivery, and large-scale sensor networks, the choice between a generic fiber cable and an optimized one can mean the difference between 500ns and 5µs of latency. That’s a 10x spread, and it starts at the physical layer. ...

January 17, 2025 · 7 min · Ryan J Hamby

Cache Eviction: 30 Years of Improvements

Introduction: Cache hierarchies are everywhere—from L1/L2/L3 on your CPU to Redis clusters backing your microservices. But the way we organize and manage these tiers has fundamentally changed over the past three decades. What started as simple in-memory buffers has evolved into a sophisticated science of eviction policies, write-through strategies, and predictive prefetching. This post explores why caches are structured the way they are, how eviction strategies have improved, and what the actual data tells us about their effectiveness over time. ...
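As a point of reference (not code from the post itself), least-recently-used eviction is the textbook baseline that later policies are measured against. A minimal Python sketch, with the capacity and `OrderedDict`-based layout chosen purely for illustration:

```python
from collections import OrderedDict

class LRUCache:
    """Illustrative LRU eviction: evict the entry touched least recently."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        # Mark the entry as most recently used.
        self._store.move_to_end(key)
        return self._store[key]

    def put(self, key, value) -> None:
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            # Evict the least recently used entry.
            self._store.popitem(last=False)
```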

January 11, 2025 · 7 min · Ryan J Hamby