Discover the Art of Algorithms
Exploring the fascinating world of algorithms, data structures, and software engineering through clear explanations and practical examples.
New to System Design Interview Prep?
Follow this curated path: each post builds on the previous one, helping you master system design interview prep step by step.
1. The Ultimate Guide to Acing the System Design Interview (Start Here · 14 min)
   Don't panic. System Design interviews are open-ended discussions. This framework (Requirements, API, DB, Scale) will help you structure your answer.
2. System Design Core Concepts: Scalability, CAP, and Consistency (Core Concept · 13 min)
   The building blocks of distributed systems. Learn about Vertical vs Horizontal scaling, the CAP Theorem, and ACID vs BASE.
3. System Design Networking: DNS, CDNs, and Load Balancers (Core Concept · 16 min)
   The internet's traffic control system. We explain how DNS resolves names, CDNs cache content, and Load Balancers distribute traffic.
4. System Design Protocols: REST, RPC, and TCP/UDP (Core Concept · 17 min)
   How do servers talk to each other? This guide explains the key protocols: REST vs RPC for APIs, TCP vs UDP for transport.
5. System Design Databases: SQL vs NoSQL and Scaling (Core Concept · 14 min)
   The eternal debate: SQL or NoSQL? We break down ACID vs BASE, Sharding vs Replication, and when to use MongoDB vs PostgreSQL.
Featured Articles

Machine Learning Fundamentals: A Beginner-Friendly Guide to AI Concepts
What is the difference between AI, ML, and Deep Learning? We break down the jargon and explain Supervised vs. Unsupervised learning.

RAG vs Fine-Tuning: When to Use Each (and When to Combine Them)
A practical decision guide with Python code for both paths — choose the right approach before you spend weeks building the wrong one.
Fine-Tuning LLMs with LoRA and QLoRA: A Practical Deep-Dive
From the math of low-rank decomposition to running QLoRA on a single A100 — everything you need to fine-tune a 70B model without a supercomputer.
Recent Articles
Latest posts in publish order.
RAG vs Fine-Tuning: When to Use Each (and When to Combine Them)
TLDR: RAG gives LLMs access to current knowledge at inference time; fine-tuning changes how they reason and write. Use RAG when your data changes. Use fine-tuning when you need consistent style, tone, or domain reasoning. Use both for production assi...
Fine-Tuning LLMs with LoRA and QLoRA: A Practical Deep-Dive
TLDR: LoRA freezes the base model and trains two tiny matrices per layer — 0.1% of parameters, 70% less GPU memory, near-identical quality. QLoRA adds 4-bit NF4 quantization of the frozen base, enabling 70B fine-tuning on 2× A100 80 GB instead of 8...
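The core LoRA idea from the teaser fits in a few lines of NumPy. This is a minimal sketch with toy dimensions (d=1024, rank 8, so the trainable fraction is larger than the 0.1% the post quotes for full-size models); the real method trains these factors with gradient descent inside a transformer layer.

```python
import numpy as np

d, r = 1024, 8                          # hidden size and LoRA rank (toy values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))         # frozen pretrained weight, never updated
A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # trainable; zero-init so the update starts at 0

def lora_forward(x, alpha=16):
    # frozen path plus scaled low-rank update; only A and B receive gradients
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

trainable = A.size + B.size             # 2 * r * d = 16,384 parameters
frozen = W.size                         # d * d = 1,048,576 parameters
print(f"trainable fraction: {trainable / frozen:.2%}")
```

Because B starts at zero, the adapted model is exactly the base model before training begins, which is what makes LoRA safe to bolt onto a pretrained network.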
Build vs Buy: Deploying Your Own LLM vs Using ChatGPT, Gemini, and Claude APIs
TLDR: Use the API until you hit $10K/month or a hard data privacy requirement. Then add a semantic cache. Then evaluate hybrid routing. Self-hosting full model serving is only cost-effective at > 50M tokens/day with a dedicated MLOps team. The build ...
All refreshed posts sorted by last update.
Watermarking and Late Data Handling in Spark Structured Streaming
TLDR: A watermark tells Spark Structured Streaming: "I will accept events up to N minutes late, and then I am done waiting." Spark tracks the maximum event time seen per partition, takes the global minimum across all partitions, subtracts the thresho...
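The watermark rule in the teaser (per-partition maximum event time, global minimum, minus the threshold) can be modeled in plain Python. This is a toy illustration of the arithmetic, not Spark code; on a real cluster you would call `withWatermark` on a streaming DataFrame.

```python
from datetime import datetime, timedelta

def global_watermark(max_event_time_per_partition, threshold):
    """Toy model of the rule above: take the minimum of the per-partition
    maximum event times, then subtract the lateness threshold."""
    return min(max_event_time_per_partition) - threshold

partition_maxes = [
    datetime(2024, 1, 1, 12, 10),   # partition 0 is ahead
    datetime(2024, 1, 1, 12, 4),    # partition 1 lags behind
]
wm = global_watermark(partition_maxes, timedelta(minutes=5))
# wm is 2024-01-01 11:59:00; an event stamped 11:58 arrives too late and is dropped
```

Taking the global minimum is the conservative choice: the slowest partition decides how long every partition waits.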
Spark Structured Streaming: Micro-Batch vs Continuous Processing
📖 The 15-Minute Gap: How a Fraud Team Discovered They Needed Real-Time Streaming. A fintech team runs payment fraud detection with a well-tuned Spark batch job. Every 15 minutes it reads a day's worth of transaction events from S3, scores them agains...
Stateful Aggregations in Spark Structured Streaming: mapGroupsWithState
TLDR: mapGroupsWithState gives each streaming key its own mutable state object, persisted in a fault-tolerant state store that checkpoints to object storage on every micro-batch. Where window aggregations assume fixed time boundaries, mapGroupsWithSt...
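The per-key mutable state described in the teaser can be sketched without a cluster. This toy Python model (hypothetical `update_session` helper, not the Spark API) shows the semantics: each key owns a state object that survives across micro-batches, whereas real `mapGroupsWithState` persists it in a checkpointed state store.

```python
def update_session(state: dict, key: str, events: list[int]) -> dict:
    """Each key gets its own mutable state that carries over between batches."""
    s = state.setdefault(key, {"count": 0, "total": 0})
    s["count"] += len(events)
    s["total"] += sum(events)
    return state

state = {}
update_session(state, "user-1", [10, 20])   # micro-batch 1
update_session(state, "user-1", [5])        # micro-batch 2: prior state is reused
# state["user-1"] is now {"count": 3, "total": 35}
```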
Browse every post by popularity.

LLM Skills vs Tools: The Missing Layer in Agent Design
TLDR: A tool is a single callable capability (search, SQL, calculator). A skill is a reusable mini-workflow that coordinates multiple tool calls with policy, guardrails, retries, and output structure. If you model everything as "just tools," your age...
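The tool/skill distinction from the teaser can be made concrete with a small sketch. The tool stubs and the `research_skill` workflow below are hypothetical illustrations, not any agent framework's API: tools are single callables, while the skill coordinates them with a retry loop, a guardrail, and a structured result.

```python
from dataclasses import dataclass

# "Tools": single callable capabilities (hypothetical stubs).
def search(query: str) -> list[str]:
    return [f"doc about {query}"]

def summarize(docs: list[str]) -> str:
    return " / ".join(docs)

@dataclass
class SkillResult:
    answer: str
    sources: list[str]

def research_skill(topic: str, max_retries: int = 2) -> SkillResult:
    """A "skill": coordinates tool calls with retries, a guardrail,
    and a typed output structure."""
    for attempt in range(max_retries + 1):
        docs = search(topic)
        if docs:                     # guardrail: require evidence before answering
            return SkillResult(answer=summarize(docs), sources=docs)
    return SkillResult(answer="no evidence found", sources=[])
```

Modeling the workflow as a skill keeps the retry policy and output contract in one place instead of hoping the LLM improvises them on every call.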
Little's Law: The Secret Formula for System Performance
TLDR: Little's Law ($L = \lambda W$) connects three metrics every system designer measures: $L$ = concurrent requests in flight, $\lambda$ = throughput (RPS), $W$ = average response time. If latency spikes, your concurrency requirement explodes with ...
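The formula in the teaser is simple enough to check directly. A minimal sketch, using hypothetical numbers (500 RPS, 200 ms average latency) to show how a latency spike inflates the concurrency requirement:

```python
def concurrent_requests(throughput_rps: float, avg_latency_s: float) -> float:
    """Little's Law: L = lambda * W."""
    return throughput_rps * avg_latency_s

# 500 requests/sec at 200 ms average latency means 100 requests
# in flight at any moment.
baseline = concurrent_requests(500, 0.200)   # 100.0

# If latency spikes 5x to 1 second at the same throughput,
# the system must now hold 500 concurrent requests.
spike = concurrent_requests(500, 1.0)        # 500.0
```

The law holds for any stable system regardless of arrival distribution, which is why it works as a back-of-the-envelope sizing check.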

Machine Learning Fundamentals: A Beginner-Friendly Guide to AI Concepts
TLDR: 🤖 AI is the big umbrella, ML is the practical engine inside it, and Deep Learning is the turbo-charged rocket inside that. This guide explains -- in plain English -- how machines learn from data, the difference between supervised and unsupervi...

Written by
Abstract Algorithms
@abstractalgorithms
