ยฉ 2025 AId band. All rights reserved.
    Complete System Design Guide | Netflix, Uber Real Examples with 300M Users

    Real strategies from Google L5 engineer: 7 design patterns with concrete numbers. Twitter with 300M MAU, Uber with 2M daily rides, URL shortener handling 100K QPS. Complete guide to CAP theorem application, database selection, caching strategies.

    Published: October 6, 2025
    Read Time: 19min
    5,800 chars

    Why System Design Is Critical for FAANG L5+

    2024 data shows system design interviews account for 65% of Google L5+ hiring decisions. Unlike coding interviews, which ask whether you can solve a problem, system design interviews test how you think and how you weigh trade-offs.

    This article walks through 7 real examples (Netflix, Twitter, Uber) from actual interviews, with complete design steps and guidance on communicating with the interviewer.

    💼 FAANG Interview Expert lets you practice these designs in 45-minute sessions with real-time feedback on bottlenecks.

    Concrete Skills You Will Gain

    • ✅ 7 common systems fully designed (user counts, QPS, data volumes)
    • ✅ 4-step framework (Requirements → Capacity → Components → Deep Dive)
    • ✅ Database selection criteria (SQL vs NoSQL vs NewSQL)
    • ✅ Caching strategies (Write-Through vs Write-Back)
    • ✅ CAP theorem practical application

    4-Step System Design Framework

    Step 1: Clarify Requirements (5 min)

    # Functional Requirements
    - "What are the core features? (post, like, follow)"
    - "Real-time required? Latency tolerance?"
    - "Mobile/Web priority?"
    
    # Non-functional (get numbers)
    - "Monthly active users?"
    - "Daily posts/requests?"
    - "Data retention? (e.g., 3 years)"
    - "Availability target? (99.9% = 8.76h downtime/year)"
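    The availability question hides arithmetic worth doing out loud in the interview: each extra "nine" cuts the allowed downtime by 10×. A minimal sketch:

    ```python
    # Downtime budget implied by an availability target ("nines")
    def downtime_hours_per_year(availability_pct):
        return (1 - availability_pct / 100) * 365 * 24

    downtime_hours_per_year(99.9)   # 8.76 hours/year
    downtime_hours_per_year(99.99)  # 0.876 hours ≈ 53 minutes/year
    ```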
    

    Step 2: Back-of-the-Envelope (5 min)

    # Twitter-like System
    - MAU: 300M, DAU/MAU: 50% → DAU = 150M
    - Daily tweets/user: 2 → 300M tweets/day
    
    # QPS
    - Write: 300M / 86400s ≈ 3,500 QPS
    - Read:Write = 100:1 → Read: 350,000 QPS
    - Peak: Avg × 3 → Write 10,500, Read 1,050,000
    
    # Storage
    - Tweet size: 300B × 300M = 90GB/day
    - 5 years: 90GB × 365 × 5 ≈ 164TB
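    These estimates can be sanity-checked as a few lines of arithmetic (the rounding matches the figures above):

    ```python
    # Back-of-the-envelope numbers for the Twitter-like system
    MAU = 300_000_000
    DAU = MAU // 2                           # 50% DAU/MAU ratio
    tweets_per_day = DAU * 2                 # 2 tweets/user/day = 300M
    write_qps = tweets_per_day / 86_400      # ≈ 3,472 → round to 3,500
    read_qps = write_qps * 100               # 100:1 read:write ratio
    peak_write_qps = write_qps * 3           # peak = 3× average
    storage_per_day_gb = tweets_per_day * 300 / 1e9    # 300B/tweet ≈ 90 GB/day
    five_year_tb = storage_per_day_gb * 365 * 5 / 1e3  # ≈ 164 TB
    ```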
    

    Twitter Design (300M MAU)

    Architecture

    Client → Load Balancer → API Gateway → Services
                    ↓
             Tweet Service → Cassandra (writes)
                    ↓
             Fan-out Service → Redis (timelines)
                    ↓
             User Service → PostgreSQL (users)
    

    Database Selection

    Tweets: Cassandra - Write-optimized (10,500 QPS), horizontal scaling

    -- Partition by author; clustering on created_at keeps newest tweets first
    CREATE TABLE tweets (
      tweet_id UUID,
      user_id UUID,
      content TEXT,
      created_at TIMESTAMP,
      PRIMARY KEY ((user_id), created_at, tweet_id)
    ) WITH CLUSTERING ORDER BY (created_at DESC, tweet_id DESC);
    

    Users: PostgreSQL - ACID transactions, complex queries

    Fan-out Strategy

    # Push model (fan-out on write, users with < 1000 followers)
    def post_tweet_push(user_id, content):
        tweet_id = create_tweet(user_id, content)
        ts = time.time()  # score timeline entries by post time
        for follower in get_followers(user_id):
            redis.zadd(f"timeline:{follower}", {tweet_id: ts})
    
    # Pull model (fan-out on read, for accounts with > 1000 followers)
    def get_timeline_pull(user_id):
        following = get_following(user_id)
        tweets = [get_latest(account, 10) for account in following]
        return merge_sort_by_time(tweets)
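    merge_sort_by_time is left abstract above; assuming each per-user timeline is already sorted newest-first, it reduces to a k-way merge, sketched here with heapq:

    ```python
    import heapq

    # Hypothetical merge step for the pull model: each input timeline is a
    # list of (timestamp, tweet_id) pairs already sorted newest-first, so a
    # k-way merge yields the combined feed without a full re-sort.
    def merge_sort_by_time(timelines):
        return list(heapq.merge(*timelines, key=lambda t: t[0], reverse=True))

    feed = merge_sort_by_time([
        [(300, "t3"), (100, "t1")],
        [(250, "t2"), (50, "t0")],
    ])
    # feed == [(300, "t3"), (250, "t2"), (100, "t1"), (50, "t0")]
    ```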
    

    URL Shortener (100M URLs/day)

    Capacity

    # QPS: Write 1,160, Read 116,000
    # Storage: 100M × 365 × 5 × 500B ≈ 91TB
    # URL length: 62^7 ≈ 3.5T URLs (7 chars)
    

    Key Generation

    def generate_short_url(long_url, server_id):
        # Each server owns a disjoint ID range, so counters never collide
        counter = redis.incr(f"counter:server_{server_id}")
        short_url = base62.encode(server_id * 1_000_000 + counter)
        db.insert(short_url, long_url)
        return short_url
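    The base62 module used here is assumed; a minimal encoder is only a few lines (62^7 ≈ 3.5 trillion keys at 7 characters):

    ```python
    # Minimal base62 encoder (the base62.encode call above is assumed
    # to behave like this)
    ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def base62_encode(n):
        if n == 0:
            return ALPHABET[0]
        digits = []
        while n:
            n, rem = divmod(n, 62)      # peel off base-62 digits, least significant first
            digits.append(ALPHABET[rem])
        return "".join(reversed(digits))

    base62_encode(1_000_001)  # "4c93"
    ```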
    

    Uber (2M Rides/Day)

    Geospatial Index

    # Geohash buckets + Redis GEO commands
    def update_location(driver_id, lat, lon):
        geo = geohash2.encode(lat, lon, precision=5)  # ~5km × 5km cell
        redis.geoadd(f"drivers:{geo}", (lon, lat, driver_id))
        redis.expire(f"drivers:{geo}", 30)  # drop drivers that stop reporting (30s TTL)
    
    def find_nearby(lat, lon, radius_km=5):
        geo = geohash2.encode(lat, lon, precision=5)
        # Include the 8 neighboring cells so matches near a boundary aren't missed
        cells = geohash2.neighbors(geo) + [geo]
        drivers = []
        for g in cells:
            drivers.extend(redis.georadius(f"drivers:{g}", lon, lat, radius_km, unit="km"))
        return drivers[:10]
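    Since candidates come back from several cells, a final ranking pass by true distance is useful before returning the top matches. A hypothetical great-circle helper for that ranking:

    ```python
    import math

    # Hypothetical post-filter: rank candidate drivers by great-circle
    # distance (haversine formula, Earth radius ≈ 6371 km)
    def haversine_km(lat1, lon1, lat2, lon2):
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    # One degree of longitude at the equator is ≈ 111 km
    haversine_km(0.0, 0.0, 0.0, 1.0)
    ```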
    


    Related Articles

    STAR Method Mastery | 12 Answer Templates That Get 10/10 Scores in Google & Amazon Behavioral Interviews (28min)

    Master the STAR method with 12 real-world answer examples (10/10 scores), 50 common questions, and company-specific evaluation criteria for Google, Amazon, and Meta behavioral interviews.

    Complete FAANG Interview Guide | 175 LeetCode Problems That Got Me Into Google L4 (18min)

    Real strategies from a Google L4 engineer: 175 curated LeetCode problems, 14 must-know patterns with actual Python code, and a 3-month roadmap that covers 98% of FAANG interviews. From Two Sum (#1) to Median of Two Sorted Arrays (#4).