Why System Design Is Critical for FAANG L5+
2024 data shows system design interviews account for 65% of Google L5+ hiring decisions. Unlike coding (can you solve it?), system design tests how you think and make trade-offs.
This article reveals 7 real examples (Netflix, Twitter, Uber) from actual interviews, with complete design steps and interviewer communication.
FAANG Interview Expert lets you practice these designs in 45-minute sessions with real-time feedback on bottlenecks.
Concrete Skills You Will Gain
- 7 common systems fully designed (user counts, QPS, data volumes)
- 4-step framework (Requirements → Capacity → Components → Deep Dive)
- Database selection criteria (SQL vs NoSQL vs NewSQL)
- Caching strategies (Write-Through vs Write-Back)
- CAP theorem practical application
4-Step System Design Framework
Step 1: Clarify Requirements (5 min)
# Functional Requirements
- "What are the core features? (post, like, follow)"
- "Real-time required? Latency tolerance?"
- "Mobile/Web priority?"
# Non-functional (get numbers)
- "Monthly active users?"
- "Daily posts/requests?"
- "Data retention? (e.g., 3 years)"
- "Availability target? (99.9% = 8.76h downtime/year)"
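The availability bullet is plain arithmetic worth internalizing; a quick helper to turn a target into allowed downtime (function name is my own, for illustration):

```python
def downtime_hours_per_year(availability_pct):
    # Allowed downtime for a given availability target (non-leap year)
    hours_per_year = 365 * 24  # 8,760
    return hours_per_year * (1 - availability_pct / 100)

# 99.9% ("three nines") leaves about 8.76 hours of downtime per year;
# 99.99% leaves about 53 minutes.
```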
Step 2: Back-of-the-Envelope (5 min)
# Twitter-like System
- MAU: 300M, DAU/MAU: 50% → DAU = 150M
- Daily tweets/user: 2 → 300M tweets/day
# QPS
- Write: 300M / 86,400s ≈ 3,500 QPS
- Read:Write = 100:1 → Read ≈ 350,000 QPS
- Peak: avg × 3 → Write 10,500, Read 1,050,000 QPS
# Storage
- Tweet size: 300B × 300M = 90GB/day
- 5 years: 90GB × 365 × 5 ≈ 164TB
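The arithmetic above generalizes into a reusable estimator. A sketch under the stated Twitter-like assumptions (the function and its parameter names are my own, not a standard API):

```python
SECONDS_PER_DAY = 86_400

def capacity_estimate(mau, dau_ratio, writes_per_user_day,
                      read_write_ratio, item_bytes, peak_factor=3):
    # Back-of-the-envelope QPS and storage for a read-heavy feed system
    dau = mau * dau_ratio
    writes_per_day = dau * writes_per_user_day
    write_qps = writes_per_day / SECONDS_PER_DAY
    return {
        "write_qps": write_qps,
        "read_qps": write_qps * read_write_ratio,
        "peak_write_qps": write_qps * peak_factor,
        "storage_gb_per_day": writes_per_day * item_bytes / 1e9,
    }

# Twitter-like: 300M MAU, 50% DAU ratio, 2 tweets/day, 100:1 reads, 300B/tweet
est = capacity_estimate(300e6, 0.5, 2, 100, 300)
```

Plugging in the numbers reproduces the figures above: roughly 3,500 write QPS, 350,000 read QPS, and 90GB/day.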
Twitter Design (300M MAU)
Architecture
Client → Load Balancer → API Gateway → Services
    ├── Tweet Service → Cassandra (writes)
    ├── Fan-out Service → Redis (timelines)
    └── User Service → PostgreSQL (users)
Database Selection
Tweets: Cassandra - Write-optimized (10,500 QPS), horizontal scaling
CREATE TABLE tweets (
    user_id    UUID,
    created_at TIMESTAMP,
    tweet_id   TIMEUUID,
    content    TEXT,
    PRIMARY KEY ((user_id), created_at, tweet_id)
) WITH CLUSTERING ORDER BY (created_at DESC, tweet_id DESC);
Users: PostgreSQL - ACID transactions, complex queries
Fan-out Strategy
# Push model (fan-out on write, users with < 1,000 followers)
def post_tweet_push(user_id, content):
    tweet_id, created_at = create_tweet(user_id, content)
    for follower in get_followers(user_id):
        # Each timeline is a Redis sorted set scored by creation time
        redis.zadd(f"timeline:{follower}", {tweet_id: created_at})

# Pull model (fan-out on read, celebrities with > 1,000 followers)
def get_timeline_pull(user_id):
    following = get_following(user_id)
    # get_latest() returns a time-sorted list; merge the k sorted lists
    tweets = [get_latest(account, 10) for account in following]
    return merge_sort_by_time(tweets)
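In practice the two models are combined: the read path merges the precomputed push timeline with tweets pulled on demand from followed celebrities. A minimal sketch of the merge step (a self-contained helper of my own, taking (tweet_id, timestamp) pairs):

```python
def merge_timelines(pushed, pulled, limit=10):
    # pushed: (tweet_id, unix_ts) pairs from the Redis sorted set
    # pulled: (tweet_id, unix_ts) pairs fetched from celebrity accounts
    merged = sorted(pushed + pulled, key=lambda t: t[1], reverse=True)
    return [tweet_id for tweet_id, _ in merged[:limit]]
```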
URL Shortener (100M URLs/day)
Capacity
# QPS: Write ≈ 1,160, Read ≈ 116,000
# Storage: 100M × 365 × 5 × 500B ≈ 91TB
# URL length: 62^7 ≈ 3.5T URLs (7 chars)
Key Generation
def generate_short_url(long_url, server_id):
    # Per-server Redis counter; the server_id offset keeps ID ranges disjoint
    counter = redis.incr(f"counter:server_{server_id}")
    short_code = base62_encode(server_id * 1_000_000 + counter)
    db.insert(short_code, long_url)
    return short_code  # at most 7 chars for any ID below 62^7
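The snippet above assumes a base-62 encoder. A self-contained version (the alphabet ordering is a convention, not a standard):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62_encode(n):
    # Encode a non-negative integer as a base-62 string
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))
```

Any ID below 62^7 encodes to at most 7 characters, which is where the 3.5T-URL capacity figure comes from.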
Uber (2M Rides/Day)
Geospatial Index
# Geohash cells as Redis GEO sets
def update_location(driver_id, lat, lon):
    cell = geohash2.encode(lat, lon, precision=5)  # ~5 km per cell side
    redis.geoadd(f"drivers:{cell}", (lon, lat, driver_id))
    redis.expire(f"drivers:{cell}", 30)  # 30s TTL evicts stale cells

def find_nearby(lat, lon, radius_km=5):
    cell = geohash2.encode(lat, lon, precision=5)
    # Search the rider's cell plus its 8 neighbors to avoid edge misses
    drivers = []
    for g in geohash2.neighbors(cell) + [cell]:
        drivers.extend(
            redis.georadius(f"drivers:{g}", lon, lat, radius_km, unit="km"))
    return drivers[:10]
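GEORADIUS filters by distance, but candidates gathered from several cells arrive unordered, so the client should rank them before picking the closest ten. A pure-Python ranking step using the haversine formula (helper names are my own):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def rank_drivers(rider_lat, rider_lon, candidates, limit=10):
    # candidates: (driver_id, lat, lon) tuples gathered from the cells
    ranked = sorted(candidates,
                    key=lambda c: haversine_km(rider_lat, rider_lon, c[1], c[2]))
    return [driver_id for driver_id, _, _ in ranked[:limit]]
```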