Season 1 Episode 1

URL Shortener

URL shorteners like bit.ly and tinyurl.com appear simple but involve complex system design challenges, including distributed ID generation, caching strategies, and handling heavily skewed read-to-write ratios at massive scale.

A concise written overview — no audio required

URL Shortener: The "Simple" System That's Anything But

Everyone thinks a URL shortener is the "Hello World" of system design interviews. Hash a URL, store the mapping, redirect on lookup — done, right? Not even close. Bitly handles 600 million clicks per month. TinyURL processes millions of URLs daily. When you dig in, you're designing a globally distributed, read-heavy, analytics-powered system that needs to respond in single-digit milliseconds.

Why It's Harder Than It Looks

The core challenge is the traffic pattern. For every URL you shorten, it might get clicked 100 times. That means you're building a system that's 99% reads — but still needs to handle millions of writes per day. Add analytics (clicks by country, device, time), abuse prevention, and global latency requirements, and your "simple" project is suddenly a real engineering challenge.

Quick math at scale: 100M URLs shortened daily × 100 clicks each = 10 billion redirects/day. That's ~115,000 reads per second and ~1,200 writes per second. Storage for 5 years? About 90TB just for URL mappings, assuming roughly 500 bytes per record.
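
Spelled out in a few lines of Python so the arithmetic is easy to check (a throwaway sketch, not production code):

```python
# Back-of-envelope numbers from the paragraph above.
urls_per_day = 100_000_000
clicks_per_url = 100
seconds_per_day = 86_400

writes_per_sec = urls_per_day / seconds_per_day                  # ~1,157
reads_per_sec = urls_per_day * clicks_per_url / seconds_per_day  # ~115,741

bytes_per_mapping = 500  # assumed average record size
years = 5
storage_tb = urls_per_day * 365 * years * bytes_per_mapping / 1e12  # ~91 TB
```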

The Encoding Problem: How to Generate Short Codes

This is where most candidates stumble. Here are your options:

| Approach | Pros | Cons | Verdict |
| --- | --- | --- | --- |
| MD5/SHA + truncate | Simple | Unpredictable collisions; no fix without retry | ❌ Avoid |
| Auto-increment counter | No collisions, simple | Predictable (scraping risk), single point of failure | ⚠️ Risky alone |
| Base62 + counter | Compact, no collisions | Still predictable unless shuffled | ✅ With randomization |
| Snowflake-style ID | Unique, distributed, not predictable | More complex, longer codes | ✅ Best at scale |

The Instagram approach is elegant: combine a timestamp, machine ID, and sequence number, then base62 encode the whole thing. You get uniqueness without a single point of coordination, and the codes aren't predictable enough to scrape.
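
Here's a minimal sketch of that pattern in Python. The bit widths below are the classic Snowflake layout (41-bit millisecond timestamp, 10-bit machine ID, 12-bit sequence); Instagram's actual split differs slightly, so treat the constants as illustrative:

```python
import threading
import time

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62(n: int) -> str:
    """Encode a non-negative integer as a base62 string."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))

class IdGenerator:
    """Snowflake-style ID: 41-bit ms timestamp | 10-bit machine ID | 12-bit sequence."""

    def __init__(self, machine_id: int):
        self.machine_id = machine_id & 0x3FF  # 10 bits
        self.sequence = 0
        self.last_ms = -1
        self.lock = threading.Lock()

    def next_id(self) -> int:
        with self.lock:
            now = int(time.time() * 1000)
            if now == self.last_ms:
                self.sequence = (self.sequence + 1) & 0xFFF  # 12 bits
                if self.sequence == 0:
                    # Sequence exhausted for this millisecond: wait for the next one.
                    while now <= self.last_ms:
                        now = int(time.time() * 1000)
            else:
                self.sequence = 0
            self.last_ms = now
            return (now << 22) | (self.machine_id << 12) | self.sequence

# Each app server gets its own machine_id; no per-ID coordination needed.
gen = IdGenerator(machine_id=7)
short_code = base62(gen.next_id())  # roughly an 11-character code
```

Base62 over a 63-bit ID comes out around 11 characters, which is exactly the "longer codes" trade-off the table flags.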

Key insight: Hashing is a trap. You can't efficiently detect or resolve collisions at scale. Counter-based approaches with distributed coordination (like Snowflake IDs) are what production systems actually use.

The Interview Framework

When you get this question, follow this path:

  1. Functional requirements — Shorten URLs, redirect (301 vs 302?), custom aliases, expiration, analytics
  2. Scale estimation — Write QPS, read QPS, storage over N years, bandwidth
  3. High-level architecture — Load balancer → Web servers → Encoding service → Database + Cache
  4. Deep dive on encoding — How to generate short codes without collisions
  5. Caching strategy — The 100:1 read ratio demands aggressive caching
  6. Extensions — Analytics, rate limiting, geographic distribution

The 301 vs 302 redirect choice matters more than you'd think: 301 (permanent) is better for SEO but makes analytics harder because browsers cache it. 302 (temporary) ensures every click hits your server — critical for accurate tracking.
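
To make the choice concrete, here's a hypothetical redirect handler. Flask is an arbitrary pick, and `resolve` is the cache-aside helper sketched in the caching section below:

```python
from flask import Flask, abort, redirect

app = Flask(__name__)

@app.route("/<short_code>")
def follow(short_code: str):
    long_url = resolve(short_code)  # cache-aside lookup, sketched below
    if long_url is None:
        abort(404)
    # 302: every click round-trips through our servers, so analytics stay
    # accurate. A 301 would be cached by browsers and bypass us on repeats.
    return redirect(long_url, code=302)
```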

Caching: Your Most Important Decision

With a 100:1 read-to-write ratio, your caching layer is arguably more important than your database choice. The pattern is straightforward:

Read path: Check cache → hit? redirect immediately. Miss? query DB → populate cache → redirect.
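
Here's a minimal cache-aside sketch of that read path, assuming a local Redis and a hypothetical `db_lookup` helper (sketched in the database section below):

```python
import redis

cache = redis.Redis()   # assumed local Redis instance
CACHE_TTL = 24 * 3600   # seconds; lets cold and expired entries age out

def resolve(short_code: str) -> str | None:
    # 1. Cache hit: the common case with a 100:1 read ratio.
    cached = cache.get(short_code)
    if cached is not None:
        return cached.decode()
    # 2. Cache miss: fall back to the database.
    long_url = db_lookup(short_code)
    if long_url is not None:
        # 3. Populate the cache so the next read for this code is a hit.
        cache.set(short_code, long_url, ex=CACHE_TTL)
    return long_url
```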

What to cache: The top 20% of URLs (by popularity) will cover 80%+ of your traffic. Use an LRU eviction policy with TTLs. A few hundred GB of Redis handles billions of active mappings.

Cache invalidation: Mostly a non-issue for URL shorteners — mappings rarely change. Expired URLs get evicted naturally. This is one of the rare systems where caching is simple because the data is essentially immutable.

Database Design: SQL vs NoSQL

For the URL mapping itself (short_code → long_url), you need:

  • Fast point lookups by short_code (the redirect hot path)
  • Fast writes when creating new URLs
  • Horizontal scalability to billions of rows

NoSQL (DynamoDB, Cassandra) is the natural fit: key-value lookups are the primary access pattern, and sharding by short_code distributes load evenly. No joins needed.
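
To show how thin that access layer can be, here's a hypothetical DynamoDB version of the two core operations; the table name and key schema are assumptions for illustration:

```python
import boto3

# Hypothetical table "url_mappings" with short_code as the partition key.
table = boto3.resource("dynamodb").Table("url_mappings")

def db_lookup(short_code: str) -> str | None:
    # The redirect hot path: a single point lookup on the partition key.
    item = table.get_item(Key={"short_code": short_code}).get("Item")
    return item["long_url"] if item else None

def db_insert(short_code: str, long_url: str) -> None:
    # Conditional write: refuse to silently overwrite an existing code.
    table.put_item(
        Item={"short_code": short_code, "long_url": long_url},
        ConditionExpression="attribute_not_exists(short_code)",
    )
```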

SQL (MySQL, PostgreSQL) works too with proper indexing and sharding, especially if you want ACID guarantees for deduplication or custom alias reservation.

Sharding strategy: Hash the short_code to determine the shard. Since codes are already pseudo-random (if using Snowflake-style IDs), you get even distribution for free.
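
A sketch of that shard selection, with an arbitrary shard count. MD5 is fine here, where all we need is even spread; it was only a trap for generating the codes themselves:

```python
import hashlib

NUM_SHARDS = 16  # arbitrary; pick based on capacity planning

def shard_for(short_code: str) -> int:
    # Hash the code so keys spread evenly across shards no matter how
    # the codes were generated.
    digest = hashlib.md5(short_code.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS
```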

Real-World Implementations

Bitly evolved from a single MySQL instance to a distributed architecture. Their key insight: separate the write path (URL creation) from the read path (redirects) entirely. Writes go to a primary database; reads are served from replicas and cache.

TinyURL kept it simple — a focused architecture that proves you don't always need microservices. Their success shows that a well-tuned monolith with good caching can handle enormous scale.

Instagram's ID generation solved the distributed uniqueness problem without coordination: each shard generates its own IDs using (timestamp, shard_id, sequence). This pattern is directly applicable to URL shortener code generation.

Key Takeaways

  1. It's a read-heavy system — Design for the 100:1 ratio from day one. Cache aggressively.
  2. Don't hash for short codes — Use counter-based or Snowflake-style distributed ID generation
  3. 301 vs 302 is a product decision — Analytics needs vs SEO benefits
  4. Separate read and write paths — They have completely different scale characteristics
  5. Caching is your biggest lever — A few hundred GB of Redis eliminates most database load
  6. Shard by short_code — Even distribution, fast lookups, scales horizontally

🏗 Architecture Diagram

Segment 1: The Problem (~2 min)

Alex So Blake, you know what's hilarious? Everyone thinks URL shorteners are like the "Hello World" of system design—

Blake Oh god, here we go. Let me guess, someone in your interview loop said "it's just a hash table, right?"

Alex *laughs* Worse! They said they'd "just use MD5 and call it a day." I'm like, buddy, Bitly processes 600 million clicks per month. You think they're just running `md5sum` on a Linux box somewhere?

Blake Wait, 600 million? That's... that's actually terrifying when you think about it. Like, every time someone clicks a bit.ly link, there's this whole distributed system that has to—

Alex —wake up, figure out where that maps to, log the click for analytics, check if it's not malware, redirect you, all in like 10 milliseconds or people rage quit.

Blake Right! And here's what gets me - the traffic pattern is completely bonkers. For every URL I shorten, it might get clicked 100 times. So you're building this thing that's like 99% reads, but you still need to handle millions of writes per day without everything falling over.

Alex It's like... okay you know those viral TikToks that get shared everywhere? Imagine if every share required your database to not die.

Blake *laughs* "Database death by TikTok" should be a monitoring alert category. But seriously, then you add analytics on top - everyone wants to know clicks by country, by time, by device...

Alex Suddenly your "simple" URL shortener is also a real-time analytics platform. Because of course it is.


Segment 2: Interview Framework (~3 min)

Blake Okay but let's say I'm in an interview and they throw this at me. Where do I even start without looking like an idiot?

Alex Requirements first, always. Functional stuff - shorten URLs, redirect fast, maybe custom aliases if you're feeling fancy. Non-functional - it needs to be stupid fast, never go down, and handle billions of URLs.

Blake Billions? Okay, math time. Let's say... 100 million URLs shortened daily?

Alex Sure, and if each gets clicked 100 times on average, that's 10 billion redirects daily. So like 1,200 writes per second and... *pause* ...115,000 reads per second.

Blake Jesus. That's a lot of reads. Storage-wise, if each mapping is maybe 500 bytes and we keep 5 years of data...

Alex You're looking at like 90TB. And that's just the URL mappings, not the click analytics.

Blake Wait, but here's what I always mess up in interviews - the bandwidth calculation. Each redirect is tiny, but 115K per second...

Alex Yeah, that's like 23 megabytes per second just for the redirect responses. Doesn't sound like much until you realize you need this globally distributed because nobody's waiting 200ms for a redirect.

Blake So architecture-wise... load balancer, web servers, some encoding service to generate short codes, database for mappings, definitely caching because of that insane read ratio—

Alex Don't forget rate limiting! Trust me, the first thing that'll happen is someone will try to shorten the entire internet.

Blake *laughs* "Sir, why did you try to shorten 50,000 URLs in the last minute?" "I was bored."


Segment 3: Deep Dive (~3 min)

Alex Alright, here's where it gets spicy. How do you actually generate those short codes without everything exploding?

Blake Oh, I know this one! You hash the original URL with MD5 and take the first—

Alex NO. Stop. Bad Blake.

Blake *laughs* I'm kidding! But seriously, why is hashing terrible here?

Alex Collisions, my friend. You have no idea when they'll happen, and when they do, you're screwed. It's like playing Russian roulette with your database.

Blake Right, so what, base62 encoding with a counter? Like abc123, then abc124...

Alex Better, but now your URLs are predictable. Some script kiddie can just increment through all your URLs and scrape everything you've ever shortened.

Blake Ugh, security. Always ruining the fun. So what does Instagram do? I remember reading something...

Alex They're clever about it - they combine timestamp, machine ID, and sequence number, then base62 encode the whole thing. You get uniqueness without predictability.

Blake That's actually... wait, but what about the database? You can't just throw billions of rows into MySQL and hope for the best.

Alex Definitely need sharding. But here's the gotcha - if you range-shard on raw sequential codes, all the newest URLs - which are usually the hottest - pile onto the same shard.

Blake Oh, that's nasty. Hot spot city. What about sharding by hash of the short code?

Alex Now you're thinking! Distributes load evenly. Or honestly, just go NoSQL. DynamoDB or Cassandra are perfect for this - you mostly just need key-value lookups anyway.

Blake And caching! With that 100:1 read ratio, Redis is basically mandatory, right?

Alex Oh absolutely. Cache the popular URLs, use LRU eviction, suddenly 90% of reads never touch your database. It's beautiful.

Blake Until you have to deal with cache invalidation for custom URLs and expiration dates and... *groans* why is everything so complicated?

Alex Because distributed systems hate us personally.


Segment 4: How They Actually Built It (~2 min)

Blake So what does Bitly actually do in production? Please tell me it's not just "MySQL and prayers."

Alex *laughs* They started with MySQL but quickly realized that wasn't gonna scale. Now they're all distributed with multiple data centers and aggressive caching everywhere.

Blake Their encoding is pretty clever too, right? It's not just sequential IDs?

Alex Yeah, they have this base62 scheme that gives short, non-predictable URLs while avoiding collisions entirely. It's like... elegant engineering.

Blake Meanwhile TinyURL is over there like "we use MySQL and memcached and it works fine for millions of URLs." Sometimes boring wins.

Alex Right? But here's what's interesting - both of them are obsessed with rate limiting and abuse detection. Turns out when you can hide destination URLs, bad actors love you.

Blake Makes sense. You become a vector for phishing and malware. That's a whole other engineering problem on top of the performance stuff.

Alex Exactly! Bitly's engineering blog talks about how much effort goes into detecting spam and malicious redirects. It's not just a scaling problem, it's a security problem.

Blake So you need machine learning for abuse detection, real-time threat intelligence, probably manual review queues... this "simple" URL shortener keeps getting more complex.

Alex Welcome to production systems! Everything is harder than it looks.


Segment 5: Interview Tips (~1 min)

Blake Okay, rapid-fire interview tips before we wrap up?

Alex Start simple! Don't immediately jump to "I'll use Kafka and seventeen microservices." Show you understand the problem first.

Blake And please talk about the read-heavy nature. So many people try to optimize for writes when 99% of traffic is reads.

Alex Oh, and if you say "just MD5 it," I will personally come to your interview and flip the table.

Blake *laughs* Harsh but fair. Also bring up rate limiting and abuse prevention. Shows you're thinking about real operational concerns.

Alex Here's a pro tip - mention analytics requirements. Most people forget URL shorteners aren't just redirect services, they're analytics platforms. Interviewers love when you think about the full product.

Blake And remember, sometimes the boring solution is the right solution. Don't over-engineer just to show off your knowledge of distributed consensus algorithms.

Alex Exactly. URL shorteners are perfect interview problems because they look simple but they're actually masterclasses in read-heavy distributed systems with global performance requirements.

Blake Plus there's always another layer of complexity to explore if you have time left over.

Alex Alright, that's a wrap on Episode 1 of Byte by Design! URL shorteners — simple on the surface, fascinating once you dig in.

Blake If this got your gears turning, subscribe and share it with a friend who's prepping for system design interviews. And next week, we're tackling chat systems — which are somehow even more deceptively complex.

Alex I can already hear you thinking — "just use WebSockets, right?" We'll see about that. See you next week!

Blake See you then!

