# Transformer Math โ€” Complete Module Collection Tier-1 ML/AI engineering interview prep. 83 modules covering Transformers, Training, Inference, Architectures, Applications, Trust & Evaluation, AI Engineering, and Production Design Reviews. Source: https://personal-wiki.pages.dev Last updated: 2026-05-07 --- --- title: "High-Level Overview" part: "The Transformer" number: 0 emoji: "๐Ÿ—๏ธ" subtitle: "The complete Transformer pipeline โ€” from raw text to next-token prediction" tags: ["transformer", "ml", "ai-engineering", "interview-prep"] --- # ๐Ÿ—๏ธ High-Level Overview > The complete Transformer pipeline โ€” from raw text to next-token prediction > [!question] Key Question > See the full machine before zooming into the parts โ†’ Tokenization ## Key Insights > [!tip] Insight > The entire architecture can be summarized as: tokenize โ†’ embed โ†’ (normalize โ†’ attend โ†’ add โ†’ normalize โ†’ FFN โ†’ add) ร— N โ†’ project โ†’ softmax. Every component is a matrix multiplication โ€” the Transformer is fundamentally a composition of linear maps with nonlinearities. > [!tip] Insight > Llama 3 trains 8B params on 15T tokens โ€”{" "} 7.5ร— more tokens than Llama 2 . The Chinchilla scaling law (Hoffmann et al., 2022) says the optimal token count is ~20ร— the parameter count. At 8B params that is 160B tokens โ€” Llama 3 goes far beyond that, trading compute budget for a smaller, higher-quality model that runs cheaply at inference. ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Anthropic, OpenAI)_ **Q:** Walk through the full forward pass of a decoder-only Transformer. What happens at each stage from raw text input to next-token probability?
Answer 1) Tokenizer converts text to integer IDs via BPE. 2) Embedding table maps IDs to d-dimensional vectors. 3) Positional encoding (sinusoidal or RoPE) adds position information. 4) N identical blocks, each: LayerNorm โ†’ Multi-Head Attention (with causal mask) โ†’ residual add โ†’ LayerNorm โ†’ FFN (SwiGLU) โ†’ residual add. 5) Final LayerNorm. 6) Linear projection to vocabulary size. 7) Softmax to get probability distribution. 8) Sample or argmax for next token.
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** Why do modern Transformers use Pre-Norm (LayerNorm before sublayer) instead of Post-Norm (after)?
Answer Pre-Norm places LayerNorm before each sublayer: y = x + Sublayer(LN(x)). This keeps the residual path clean โ€” gradients flow through the identity shortcut without passing through normalization. Post-Norm (y = LN(x + Sublayer(x))) forces gradients through LN at every layer, causing training instability in deep models. Pre-Norm enables stable training without warmup, which is why every model since GPT-2 uses it.
### โ˜…โ˜†โ˜† _(Google, OpenAI, Meta)_ **Q:** What is the causal mask in self-attention and why is it necessary for autoregressive generation?
Answer The causal mask is an upper-triangular matrix of -infinity values applied to attention scores before softmax. It prevents position i from attending to positions j > i (future tokens). Without it, the model could
### โ˜…โ˜…โ˜… _(Google, Meta, Anthropic)_ **Q:** Compare the parameter count and compute distribution across the main components of a Transformer. Where do most parameters live?
Answer For a typical LLM: ~65% of parameters are in FFN layers (two large weight matrices per block: dโ†’4d and 4dโ†’d, or 8d/3 for SwiGLU). ~30% in attention (Q/K/V/O projections, reduced with GQA). ~5% in embeddings. The embedding and unembedding matrices are often tied (shared weights). Compute is dominated by attention (O(n^2 d) for sequence length n) at long contexts, but FFN dominates at short contexts since it
## Related Tokenization ยท Embeddings ยท Positional Encoding ยท MLP & Matmul ยท Self-Attention --- --- title: "Tokenization" part: "The Transformer" number: 1 emoji: "๐Ÿ”ค" subtitle: "BPE, vocabulary size, and why GPT can't count letters" tags: ["transformer", "ml", "ai-engineering", "interview-prep"] --- # ๐Ÿ”ค Tokenization > BPE, vocabulary size, and why GPT can't count letters > [!question] Key Question > Why can't GPT count letters in "strawberry"? โ† High-Level Overview | โ†’ Embeddings ## Key Insights > [!tip] Insight > Think of BPE as a compression algorithm for language. Frequent patterns get short codes (single tokens), rare patterns get longer codes (multiple tokens). Just like how Huffman coding assigns shorter bit sequences to more frequent characters. > [!tip] Insight > SentencePiece operates directly on raw text, removing the need for pre-tokenization (whitespace splitting). This makes it truly language-agnostic โ€” essential for multilingual models like mT5 that cover 100+ languages (Kudo & Richardson 2018). > [!tip] Insight > The embedding lookup is not a matrix multiplication โ€” it is a table lookup (indexing rows). This is{" "} , not{" "} . The embedding matrix is learned during training: similar tokens end up with similar vectors. > [!tip] Insight > Notice the trend: larger models have a tiny embedding %. GPT-3's embedding is only 0.35% of 175B params. But GPT-2 (124M) spends 31% on embeddings! This is why small models use smaller vocabs โ€” and why Llama-3 could afford to jump to 128K vocab with 70B params. > [!tip] Insight > Llama-3 doubled vocab from Llama-2's 32K to 128K, mainly for better multilingual and code coverage. Larger vocab = shorter sequences = faster inference, at the cost of a larger embedding table. ## Code Examples ```python # BPE tokenization with tiktoken (real GPT-4 tokenizer) import tiktoken enc = tiktoken.get_encoding("cl100k_base") # GPT-4 / Claude encoding text = "Hello, world! How are you today?" token_ids = enc.encode(text) print(token_ids) # [9906, 11, 1917, 0, 2650, 527, 499, 3432, 30] print(len(token_ids)) # 9 tokens (not 8 words โ€” BPE is subword) # Decode back to text decoded = enc.decode(token_ids) print(decoded) # "Hello, world! How are you today?" # BPE merge step (simplified): # 1. Start with character-level tokens # 2. Count all adjacent pairs # 3. Merge the most frequent pair into a new token # 4. Repeat until vocab_size reached (GPT-4: ~100K merges) ``` ## Interview Questions ### โ˜…โ˜†โ˜† _(Google, OpenAI)_ **Q:** Why BPE over word-level or character-level?
Answer Word-level: vocab explodes (170K+ English words, plus morphology, misspellings, multilingual). Most words are rare โ€” embeddings undertrained. Character-level: tiny vocab but sequences become 4-5x longer, requiring much more compute (attention is O(nยฒ)). BPE: start with characters, greedily merge frequent pairs. Common words โ†’ single tokens, rare words โ†’ subword pieces. Best of both: manageable vocab (32K-128K), reasonable sequence length, handles unseen words via decomposition.
### โ˜…โ˜†โ˜† _(Google, Meta)_ **Q:** Why can\
Answer BPE tokenizes
### โ˜…โ˜…โ˜† _(Google, Databricks)_ **Q:** Embedding param count โ€” what % of total model?
Answer Embedding matrix E โˆˆ R^{|V|ร—d}. GPT-3: |V|=50,257, d=12,288 โ†’ 50,257 ร— 12,288 โ‰ˆ 617M params. Total model: 175B. Embedding = 617M / 175B โ‰ˆ 0.35%. Tiny! Most params are in the 96 transformer layers (attention + FFN). But for small models the ratio is much higher: a 125M param model with 50K vocab and d=768 already has 38M embedding params = 31% of total. This is why small models use smaller vocabs.
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** Weight tying: share embedding and output projection?
Answer The input embedding matrix E โˆˆ R^{|V|ร—d} maps token IDs โ†’ vectors. The output projection W_out โˆˆ R^{dร—|V|} maps final hidden states โ†’ logits over vocab. Weight tying sets W_out = E^T. Benefits: (1) halves the embedding parameter count, (2) acts as regularization โ€” forces the model to use consistent representations for input and output, (3) empirically improves performance on smaller models. Used in GPT-2, T5, LLaMA. Less common in very large models where the cost is negligible and untied weights give more capacity.
### โ˜…โ˜†โ˜† _(Google, Meta)_ **Q:** How does the tokenizer handle out-of-vocabulary (OOV) words? Why is this important?
Answer BPE never encounters OOV โ€” any text can be decomposed into known subword pieces, down to individual bytes. This is a major advantage over word-level tokenization. Even made-up words like
### โ˜…โ˜…โ˜† _(Databricks, OpenAI)_ **Q:** What is the relationship between vocabulary size and model performance? What
Answer Larger vocab -> shorter sequences (fewer tokens) -> faster attention (O(n^2)), but larger embedding table -> more parameters. Smaller vocab -> longer sequences -> slower, but smaller model. Sweet spot is 32K-128K. Llama-3 increased from 32K to 128K vocab mainly for better multilingual and code support. For small models, embedding params dominate โ€” GPT-2 (124M) spends 31% on embeddings.
### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** How would you handle a new language that wasn
Answer Options: (1) Train a new tokenizer on the new language corpus (best quality but requires retraining model), (2) Use byte-level fallback (works but inefficient โ€” many tokens per character), (3) Extend the existing vocab with new language tokens and continue pre-training (common in practice, e.g., Chinese LLaMA). Option 3 is most practical: add new merge rules, initialize new embeddings (e.g., average of subcomponent embeddings), then continue pre-training on the new language.
## Further Reading - [Neural Machine Translation of Rare Words with Subword Units (BPE)](https://arxiv.org/abs/1508.07909) Sennrich et al. 2016 โ€” the paper that introduced Byte Pair Encoding for NLP tokenization. - [SentencePiece: A simple and language independent subword tokenizer](https://arxiv.org/abs/1808.06226) Kudo & Richardson 2018 โ€” unigram language model tokenizer used by T5, mT5, XLNet, and many multilingual models. - [OpenAI Tiktoken (cl100k_base)](https://github.com/openai/tiktoken) Production BPE tokenizer powering GPT-4. Fast Rust implementation with Python bindings. - [Andrej Karpathy โ€” Let](https://www.youtube.com/watch?v=kCc8FmEb1nY) Builds a GPT from scratch including the tokenization step โ€” great for seeing BPE in context of the full pipeline. - [Andrej Karpathy โ€” Let](https://www.youtube.com/watch?v=zduSFxRajkE) 2-hour deep-dive building the GPT-4 BPE tokenizer from scratch โ€” covers byte-level BPE, special tokens, and tiktoken internals. - [Tokenizer Summary โ€” Hugging Face docs](https://huggingface.co/docs/transformers/tokenizer_summary) HuggingFace reference covering every major algorithm โ€” WordPiece, Unigram, BPE, byte-level BPE โ€” with concrete examples of merge rules. - [Lilian Weng โ€” Large Transformer Model Inference Optimization](https://lilianweng.github.io/posts/2023-01-10-inference-optimization/) Covers vocabulary size tradeoffs and how tokenization choices affect inference throughput and model capacity. ## Related High-Level Overview ยท Embeddings ยท Positional Encoding ยท MLP & Matmul ยท Self-Attention --- --- title: "Embeddings" part: "The Transformer" number: 2 emoji: "๐Ÿ“Š" subtitle: "Turning token IDs into meaningful vectors" tags: ["transformer", "ml", "ai-engineering", "interview-prep"] --- # ๐Ÿ“Š Embeddings > Turning token IDs into meaningful vectors > [!question] Key Question > Why is 'king' - 'man' + 'woman' = 'queen'? โ† Tokenization | โ†’ Positional Encoding ## Key Insights > [!tip] Insight > Notice "cat", "sat", and "mat" have similar vectors (high cosine similarity) because they appear in similar contexts. "on" looks very different — it's a function word, not a noun/verb. > [!tip] Insight > Think of it this way: one-hot encoding puts each word on its own axis (50K dimensions, no similarity between any pair). Embeddings compress those 50K axes down to 768-4096 dimensions where similar words end up near each other. It's learned dimensionality reduction. > [!tip] Insight > The logit for token is the dot product{" "} — literally measuring how close the hidden state is to that token's embedding. Higher similarity = higher probability after softmax. > [!tip] Insight > Llama-3 expanded its vocab from 32K to 128K tokens, increasing embedding parameters from 131M to 525M. {" "} Larger vocab = fewer tokens per text = faster inference, but the embedding table becomes a bigger fraction of total params. This is a real engineering tradeoff. > [!tip] Insight > The famous "king โˆ’ man + woman โ‰ˆ queen" result works in{" "} Word2Vec because the static vectors directly encode semantic relationships. In GPT-style models, this arithmetic is weaker at the embedding table level โ€” instead, the model distributes semantic reasoning across attention and FFN layers rather than compressing it all into a single lookup table.{" "} Reimers & Gurevych (SBERT, 2019) {" "} extended BERT with contrastive fine-tuning to produce sentence embeddings well-calibrated for cosine similarity โ€” the foundation of modern embedding APIs. ## Code Examples ```python import torch.nn as nn class TransformerWithTiedEmbeddings(nn.Module): def __init__(self, vocab_size, d_model): super().__init__() # Embedding lookup table: vocab_size rows, d_model columns self.embedding = nn.Embedding(vocab_size, d_model) # Output projection reuses embedding weights (weight tying) self.output_proj = nn.Linear(d_model, vocab_size, bias=False) self.output_proj.weight = self.embedding.weight # TIED! def forward(self, token_ids): # token_ids: (batch, seq_len) -> integers x = self.embedding(token_ids) # (batch, seq_len, d_model) logits = self.output_proj(x) # (batch, seq_len, vocab_size) return logits # GPT-2 small: 50257 vocab, 768 dim model = TransformerWithTiedEmbeddings(vocab_size=50257, d_model=768) print(f"Embedding params: {50257 * 768:,}") # 38,597,376 ``` ## Interview Questions ### โ˜…โ˜†โ˜† _(Google, OpenAI)_ **Q:** Why do large language models use embedding dimensions of 768-4096 instead of, say, 64 or 16384?
Answer Embedding dimension is a capacity-compute tradeoff. Too small (64): the vector space can
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Explain weight tying between input embeddings and the output projection layer. Why does it help?
Answer Weight tying means the embedding matrix W_embed (vocab_size x d_model) is reused as the output projection matrix W_output (transposed: d_model x vocab_size). Benefits: (1) Cuts parameters significantly โ€” for GPT-2, the embedding matrix is 50257 x 768 = ~38M params, so tying saves 38M params. (2) Creates a shared semantic space โ€” a token
### โ˜…โ˜†โ˜† _(OpenAI, Google)_ **Q:** How do subword tokenizers like BPE handle out-of-vocabulary (OOV) words at the embedding level?
Answer BPE and similar subword tokenizers eliminate OOV entirely at the tokenization stage โ€” any word is decomposed into known subword tokens. For example,
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** How would you measure whether trained embeddings capture semantic similarity? What would you expect?
Answer Use cosine similarity between embedding vectors. Expected: semantically related words (king/queen, cat/dog) have high cosine similarity (0.6-0.9), while unrelated words (king/banana) have low similarity (near 0 or negative). Classic tests: (1) word analogy tasks โ€” king - man + woman = queen, (2) clustering โ€” plot embeddings with t-SNE/UMAP and check if semantic groups form clusters, (3) word similarity benchmarks (SimLex-999, WordSim-353). Important caveat: raw embeddings from LLMs are contextual only after attention layers โ€” the initial embedding table captures mostly syntactic/frequency patterns, not deep semantics.
### โ˜…โ˜†โ˜† _(Google, Anthropic)_ **Q:** Why are learned embeddings superior to one-hot encodings for representing tokens?
Answer One-hot vectors are sparse, high-dimensional (vocab_size, e.g., 50K), and encode zero semantic information โ€” every pair of tokens is equidistant (cosine similarity = 0). Learned embeddings are dense, low-dimensional (768-4096), and encode semantic relationships โ€” similar tokens have similar vectors. One-hot: 50K dims, no similarity. Embedding: 768 dims, rich similarity. One-hot also wastes memory: a matrix multiply with a one-hot vector is equivalent to a table lookup, so nn.Embedding is literally the efficient version of (one-hot @ weight_matrix). The embedding IS the learned dense replacement for one-hot.
### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** How do positional encodings interact with token embeddings, and why can
Answer Token embeddings are position-agnostic: the embedding for
## Further Reading - [Efficient Estimation of Word Representations in Vector Space (Word2Vec)](https://arxiv.org/abs/1301.3781) Mikolov et al. 2013 โ€” introduced Skip-gram and CBOW, the foundation of modern word embeddings. - [GloVe: Global Vectors for Word Representation](https://nlp.stanford.edu/pubs/glove.pdf) Pennington et al. 2014 โ€” combines count-based and predictive methods for learning word vectors. - [Using the Output Embedding to Improve Language Models (Weight Tying)](https://arxiv.org/abs/1608.05859) Press & Wolf 2017 โ€” shows tying input and output embedding weights improves perplexity and saves parameters. - [The Illustrated Transformer โ€” Jay Alammar](https://jalammar.github.io/illustrated-transformer/) Visual explanation of how token embeddings are constructed and combined with positional encodings. - [LLM Visualization โ€” Brendan Bycroft](https://bbycroft.net/llm) 3D walkthrough showing exactly how the embedding matrix maps token IDs to vectors in a real GPT model. - [Andrej Karpathy โ€” Let](https://www.youtube.com/watch?v=kCc8FmEb1nY) Builds the token embedding table and positional encoding from scratch in PyTorch โ€” best hands-on introduction to the embedding layer. - [3Blue1Brown โ€” But what is a GPT? Visual intro to Transformers](https://www.youtube.com/watch?v=wjZofJX0v4M) Visual explanation of how token embeddings work and how meaning is encoded in high-dimensional vector space โ€” geometric intuition. - [The Illustrated BERT](https://jalammar.github.io/illustrated-bert/) Jay Alammar โ€” how BERT reuses the Transformer encoder with bidirectional attention and masked language modeling, producing contextual embeddings that outperform static word vectors. - [Chris Olah โ€” Deep Learning, NLP, and Representations](https://colah.github.io/posts/2014-07-NLP-RNNs-Representations/) Olah 2014 โ€” visual intuition for how neural networks learn representations of language, connecting word embeddings to deeper network features. - [Chris Olah โ€” Neural Networks, Manifolds, and Topology](https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/) Olah 2014 โ€” how neural networks warp data manifolds to make them linearly separable, foundational for understanding embedding geometry. ## Related High-Level Overview ยท Tokenization ยท Positional Encoding ยท MLP & Matmul ยท Self-Attention --- --- title: "Positional Encoding" part: "The Transformer" number: 3 emoji: "๐Ÿ“" subtitle: "Attention has no sense of order โ€” how do we fix that?" tags: ["transformer", "ml", "ai-engineering", "interview-prep"] --- # ๐Ÿ“ Positional Encoding > Attention has no sense of order โ€” how do we fix that? > [!question] Key Question > "cat ate fish" = "fish ate cat" without this โ† Embeddings | โ†’ MLP & Matmul ## Key Insights > [!tip] Insight > Try clicking two different positions above โ€” you'll see their PE vector dot product. Closer positions have larger dot products; farther positions have smaller ones. This is how the model perceives "distance". > [!tip] Insight > Because the rotation matrix depends only on{" "} and not on , is purely a function of k. The model can learn relative position relationships like "3 tokens ago" regardless of the absolute position. > [!tip] Insight > Almost no modern model still uses the original sinusoidal PE. RoPE has become the de facto standard (Llama, Mistral, Qwen, DeepSeek all use RoPE). The value of sinusoidal PE lies in understanding the core idea โ€” position information encoded through frequencies, relative position achieved through rotation. RoPE is essentially this idea applied directly to Q/K vectors. > [!tip] Insight > RoPE naturally decays attention scores with distance: as{" "} grows, high-frequency dimensions rotate rapidly and their contribution to the dot product averages to zero. This gives RoPE a built-in locality bias without any explicit masking โ€” distant tokens are automatically de-emphasized (Su et al. 2021). ## Code Examples ```python import math import torch import torch.nn as nn class SinusoidalPE(nn.Module): def __init__(self, d_model, max_len=5000): super().__init__() pe = torch.zeros(max_len, d_model) position = torch.arange(0, max_len).unsqueeze(1).float() div_term = torch.exp( torch.arange(0, d_model, 2).float() * -(math.log(10000.0) / d_model) ) pe[:, 0::2] = torch.sin(position * div_term) # even dims pe[:, 1::2] = torch.cos(position * div_term) # odd dims self.register_buffer('pe', pe.unsqueeze(0)) # (1, max_len, d_model) def forward(self, x): return x + self.pe[:, :x.size(1)] ``` ```python def apply_rotary_emb(x, freqs): """Apply RoPE to query or key tensors. x: (batch, seq_len, n_heads, d_k) freqs: (seq_len, d_k // 2) """ # Split into pairs and rotate x_complex = torch.view_as_complex(x.float().reshape(*x.shape[:-1], -1, 2)) freqs_complex = torch.polar(torch.ones_like(freqs), freqs) # Broadcast freqs to match x: (seq_len, d/2) โ†’ (1, seq_len, 1, d/2) freqs_complex = freqs_complex[None, :, None, :] x_rotated = torch.view_as_real(x_complex * freqs_complex).flatten(-2) return x_rotated.type_as(x) ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** Sin/cos vs learned PE โ€” tradeoffs?
Answer Sinusoidal PE: no learned parameters, generalizes to unseen lengths in theory (but poorly in practice), deterministic. Learned PE: more expressive for fixed-length tasks, can capture task-specific position patterns, but cannot extrapolate beyond training length. In practice, most modern models (GPT, Llama) use learned absolute PE or RoPE โ€” sinusoidal PE is mainly historical. The original Transformer paper found no significant difference between the two for their task.
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** How does sin/cos encode RELATIVE position? Derive using product-to-sum.
Answer Key insight: PE(pos+k) can be expressed as a linear transformation of PE(pos). For a single frequency: sin(a)cos(b) + cos(a)sin(b) = sin(a+b) and cos(a)cos(b) - sin(a)sin(b) = cos(a+b). So PE(pos+k, 2i) = sin(w_i(pos+k)) = sin(w_iยทpos)cos(w_iยทk) + cos(w_iยทpos)sin(w_iยทk). This means [PE(pos+k, 2i), PE(pos+k, 2i+1)] is a rotation matrix applied to [PE(pos, 2i), PE(pos, 2i+1)]. The rotation depends only on k (the offset), not pos โ€” so the model can learn to attend to
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** Why different frequencies for different dimensions?
Answer Each dimension pair (2i, 2i+1) uses wavelength ฮป = 2ฯ€ ยท 10000^(2i/d). Low dimensions: short wavelength (high freq) โ†’ fine-grained position discrimination for nearby tokens. High dimensions: long wavelength (low freq) โ†’ coarse position signal spanning the whole sequence. Think of it as a
### โ˜…โ˜…โ˜† _(Google, OpenAI, Anthropic, Meta, Databricks)_ **Q:** Length extrapolation problem โ†’ RoPE / ALiBi
Answer Sinusoidal/learned PE fail at lengths beyond training: attention scores become unreliable for unseen positions. RoPE (Rotary Position Embedding): encodes position by rotating Q and K vectors โ€” the dot product QK^T naturally depends on relative position. Supports length extrapolation via NTK-aware scaling or YaRN. ALiBi (Attention with Linear Biases): no PE at all โ€” instead adds a linear bias -mยท|i-j| to attention scores, where m differs per head. Extrapolates perfectly because the bias formula works for any distance. Trade-off: RoPE is more expressive but needs scaling tricks; ALiBi is simpler but slightly less performant on some benchmarks.
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** How does RoPE (Rotary Position Embedding) differ from sinusoidal PE? Why has it become standard?
Answer Sinusoidal PE is added to the input embedding once. RoPE applies rotation directly to Q and K vectors at every attention layer. Key advantage: the dot product qยทk naturally becomes a function of relative position (m-n) only, not absolute positions. This makes length extrapolation more natural. Used in Llama, Mistral, Qwen, DeepSeek. RoPE also decays attention for distant tokens because high-frequency rotations decorrelate.
### โ˜…โ˜…โ˜† _(Databricks, Anthropic)_ **Q:** What is ALiBi (Attention with Linear Biases)? How does it compare to RoPE?
Answer ALiBi adds a linear bias to attention scores: score -= m * |i-j| where m is a head-specific slope. No position embeddings at all. Advantages: simpler, no extra parameters, good length extrapolation. Disadvantage: the linear decay may be too rigid for complex position patterns. MPT used ALiBi; most others chose RoPE. The different slopes per head let some heads focus locally and others globally.
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** If you had to design a position encoding from scratch for very long sequences (1M+ tokens), what would you consider?
Answer Key challenges: (1) Position info must not vanish or explode at extreme positions, (2) Relative position should matter more than absolute, (3) Must compose well with attention. Current approaches: NTK-aware RoPE scaling, YaRN, or hybrid approaches combining local windows (sliding window attention) with global position info. The key insight is that at 1M tokens, most positions are
## Further Reading - [Attention Is All You Need](https://arxiv.org/abs/1706.03762) Vaswani et al. 2017 โ€” introduced sinusoidal positional encodings in the original Transformer. - [RoFormer: Enhanced Transformer with Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) Su et al. 2021 โ€” rotary embeddings used by LLaMA, Mistral, and most modern LLMs. Encodes relative position via rotation matrices. - [Train Short, Test Long: Attention with Linear Biases (ALiBi)](https://arxiv.org/abs/2108.12409) Press et al. 2022 โ€” adds linear bias to attention scores instead of positional embeddings. Enables length extrapolation. - [The Illustrated Transformer โ€” Jay Alammar](https://jalammar.github.io/illustrated-transformer/) Visual walkthrough of sinusoidal positional encodings โ€” shows how sine/cosine patterns encode position. - [Rotary Embeddings: A Relative Revolution (EleutherAI blog)](https://blog.eleuther.ai/rotary-embeddings/) Intuitive introduction to RoPE with derivations โ€” explains why rotation in complex space encodes relative position elegantly. - [YaRN: Efficient Context Window Extension of Large Language Models](https://arxiv.org/abs/2309.00071) Peng et al. 2023 โ€” extends RoPE context windows without full fine-tuning via interpolation. Technique used by Llama models for context extension. ## Related High-Level Overview ยท Tokenization ยท Embeddings ยท MLP & Matmul ยท Self-Attention --- --- title: "MLP & Matmul" part: "The Transformer" number: 4 emoji: "๐Ÿงฎ" subtitle: "Matrix multiplication, weight initialization, and the universal approximation theorem" tags: ["transformer", "ml", "ai-engineering", "interview-prep"] --- # ๐Ÿงฎ MLP & Matmul > Matrix multiplication, weight initialization, and the universal approximation theorem > [!question] Key Question > A 2-layer MLP can approximate any function โ€” so why do we need 96 layers? โ† Positional Encoding | โ†’ Self-Attention ## Key Insights > [!tip] Insight > Matrix multiplication is the single most important operation in deep learning. Everything โ€” attention, FFN, embedding lookup, output projection โ€” is matmul. Modern GPUs are essentially matmul engines with memory. > [!tip] Insight > Interview pattern: "Why is batch size important?" โ€” larger batches amortize fixed GPU launch overhead and improve arithmetic intensity (ratio of FLOPs to memory bytes). Below ~32, the GPU is mostly idle waiting on memory. > [!tip] Insight > Bad initialization can make learning{" "} literally impossible โ€” not just slow. If pre-activations saturate (sigmoid outputs โ‰ˆ 1 or 0), gradients are zero and the network cannot learn from the first batch. > [!tip] Insight > PyTorch's nn.Linear uses Kaiming uniform by default. Call{" "} torch.nn.init.xavier_uniform_(layer.weight) {" "} when using tanh or sigmoid activations. Forget this and your 10-layer network may not train at all. > [!tip] Insight > UAT guarantees existence, not learnability. {" "} Gradient descent may never find the solution. The network needed could require exponentially many neurons. UAT is a lower bound on expressibility, not a recipe for architecture design. ## Code Examples ```python import torch import torch.nn as nn class MLP(nn.Module): def __init__(self, in_dim: int, hidden_dim: int, out_dim: int): super().__init__() self.net = nn.Sequential( nn.Linear(in_dim, hidden_dim), # W1, b1 โ€” Kaiming uniform init nn.ReLU(), # ฯƒ: kills negatives, keeps positives nn.Linear(hidden_dim, out_dim), # W2, b2 ) # For tanh activations, override with Xavier: # nn.init.xavier_uniform_(self.net[0].weight) def forward(self, x: torch.Tensor) -> torch.Tensor: return self.net(x) # h = ReLU(W1 x + b1), y_hat = W2 h + b2 model = MLP(in_dim=512, hidden_dim=2048, out_dim=256) # Count parameters total = sum(p.numel() for p in model.parameters()) print(f"Parameters: {total:,}") # 1,050,624 + 524,544 = 1,575,168 # Layer 0: 512*2048 + 2048 = 1,050,624 # Layer 2: 2048*256 + 256 = 524,544 # Total: 1,575,168 x = torch.randn(32, 512) # batch of 32 y = model(x) print(y.shape) # (32, 256) ``` ```python import torch import torch.nn as nn def build_mlp_with_init(in_dim, hidden_dim, out_dim, activation="relu"): layer1 = nn.Linear(in_dim, hidden_dim) layer2 = nn.Linear(hidden_dim, out_dim) if activation == "relu": # He/Kaiming: std = sqrt(2 / fan_in) nn.init.kaiming_normal_(layer1.weight, nonlinearity="relu") else: # Xavier: std = sqrt(2 / (fan_in + fan_out)) nn.init.xavier_uniform_(layer1.weight) nn.init.zeros_(layer1.bias) nn.init.zeros_(layer2.bias) act = nn.ReLU() if activation == "relu" else nn.Tanh() return nn.Sequential(layer1, act, layer2) model = build_mlp_with_init(512, 2048, 256) # Check activation stats after first layer x = torch.randn(1000, 512) with torch.no_grad(): h = torch.relu(model[0](x)) # post-activation print(f"Mean: {h.mean():.4f}") # should be ~0.5-0.8 with good init print(f"Std: {h.std():.4f}") # should be ~1.0 print(f"Dead neurons: {(h == 0).float().mean():.1%}") # ideally ~50% ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Meta, Anthropic)_ **Q:** Why does Xavier init use 1/โˆšfan_in? What problem does it solve, and when do you use Kaiming instead?
Answer Xavier (Glorot & Bengio 2010) initializes weights as Uniform(โˆ’1/โˆšfan_in, 1/โˆšfan_in) so that the variance of activations stays constant across layers. The derivation: if each weight is iid with variance ฯƒยฒ and fan_in inputs are summed, the output variance scales by fan_in ยท ฯƒยฒ. Setting ฯƒยฒ = 1/fan_in keeps variance โ‰ˆ 1. This prevents activations from exploding or vanishing through depth. However, ReLU kills ~half the neurons (negative outputs become 0), halving the effective variance. He/Kaiming init doubles the scale to 2/fan_in to compensate. Rule: Xavier for tanh/sigmoid, Kaiming for ReLU/LeakyReLU.
### โ˜…โ˜†โ˜† _(Meta, OpenAI)_ **Q:** What is the difference between nn.Linear and torch.matmul? When would you prefer one over the other?
Answer nn.Linear is a module wrapping a weight matrix W and bias b, applying y = xWแต€ + b. It holds parameters as nn.Parameter (tracked by autograd and the optimizer), initializes them with Kaiming uniform by default, and handles batched inputs automatically. torch.matmul is a raw operation โ€” no parameters, no bias, no initialization. You
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** Explain the dying ReLU problem. What causes it, how do you detect it, and what are the fixes?
Answer A ReLU neuron
### โ˜…โ˜…โ˜… _(Google, Anthropic, OpenAI)_ **Q:** What does the Universal Approximation Theorem actually guarantee? What does it NOT tell you?
Answer The UAT (Hornik et al. 1989) states: a single hidden-layer MLP with a non-polynomial activation can approximate any continuous function on a compact set to arbitrary precision, given enough neurons. What it guarantees: existence of such a network. What it does NOT guarantee: (1) the number of neurons needed (could be exponential), (2) that gradient descent will find it, (3) generalization โ€” you can approximate the training set perfectly and still overfit, (4) efficiency โ€” deep networks can represent the same function exponentially more compactly than shallow ones (expressive efficiency argument). In practice, UAT is a theoretical sanity check, not a design guide.
### โ˜…โ˜…โ˜† _(Meta, Google)_ **Q:** A 3-layer MLP has input dim 512, hidden dim 2048, output dim 256. How many parameters? How does batch size affect compute?
Answer Layer 1 (512โ†’2048): 512ร—2048 + 2048 bias = 1,050,624. Layer 2 (2048โ†’2048): 2048ร—2048 + 2048 = 4,196,352. Layer 3 (2048โ†’256): 2048ร—256 + 256 = 524,544. Total: ~5.77M parameters. Compute (FLOPs): each linear layer is a matrix multiply โ€” for batch size B, layer 1 costs 2ยทBยท512ยท2048 = 2,097,152ยทB FLOPs. Batch size scales compute linearly but does NOT change parameter count. This is why batching is efficient: fixed parameter memory, fully parallelized matmul across batch. GPU utilization improves with batch size until memory bandwidth saturates.
## Further Reading - [Andrej Karpathy โ€” makemore Part 3: MLP](https://www.youtube.com/watch?v=TCH_1BHY58I) Karpathy builds an MLP character-level language model from scratch โ€” covers weight init, batch norm, learning rate tuning, and the vanishing gradient problem. - [3Blue1Brown โ€” Neural Networks series](https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi) Visual intuition for how neurons, layers, and matrix multiplication combine to form a neural network. The gold standard for building geometric intuition. - [Understanding the Difficulty of Training Deep Feedforward Neural Networks (Xavier Init)](https://proceedings.mlr.press/v9/glorot10a.html) Glorot & Bengio 2010 โ€” derives the โˆš(2/(fan_in+fan_out)) initialization by analyzing variance flow through layers. The theoretical foundation for Xavier init. - [Delving Deep into Rectifiers (He/Kaiming Init)](https://arxiv.org/abs/1502.01852) He et al. 2015 โ€” extends Xavier init for ReLU activations by accounting for the halved variance from the rectification. Standard init for modern deep nets. - [Approximation by Superpositions of a Sigmoidal Function (Universal Approximation)](https://cognitivemedium.com/magic_paper/assets/Hornik1989.pdf) Hornik, Stinchcombe, White 1989 โ€” proves that a single hidden-layer MLP can approximate any continuous function on a compact domain given enough neurons. ## Related High-Level Overview ยท Tokenization ยท Embeddings ยท Positional Encoding ยท Self-Attention --- --- title: "Self-Attention" part: "The Transformer" number: 5 emoji: "๐ŸŽฏ" subtitle: "The core of Transformers โ€” derive this on a whiteboard" tags: ["transformer", "ml", "ai-engineering", "interview-prep"] --- # ๐ŸŽฏ Self-Attention > The core of Transformers โ€” derive this on a whiteboard > [!question] Key Question > Why does dividing by โˆšd_k prevent attention from collapsing? โ† MLP & Matmul | โ†’ Multi-Head Attention ## Key Insights > [!tip] Insight > What are Q, K, V? Query = "what am I looking for?" Key = "what do I contain?" Value = "what information do I carry?" Think of it as a soft dictionary lookup: Q is the search query, K is the index, V is the content. > [!tip] Insight > Why must we divide by ? {" "} Assume each component of{" "} , then{" "} . When{" "} , dot products can reach +/-15, pushing softmax into saturation (near one-hot) and gradients approach 0, stalling training. Click "Remove scaling" above to see the effect yourself. > [!tip] Insight > Flash Attention does not reduce FLOPs โ€” the number of multiply-adds is identical. The speedup is entirely from fewer HBM round-trips. On A100 GPUs, HBM bandwidth is{" "} roughly 2 TB/s {" "} while compute throughput is{" "} roughly 312 TFLOP/s , so memory access is the bottleneck for attention, not arithmetic.{" "} FlashAttention-2 (Dao 2023) reports about 50โ€“73% of theoretical A100 throughput. > [!tip] Insight > Ring Attention is exact rather than approximate. The communication pattern overlaps with compute, so a large part of the inter-GPU cost can be hidden behind the attention work itself. > [!tip] Insight > PagedAttention also enables efficient KV cache sharing across requests that reuse the same prefix, such as a shared system prompt. > [!tip] Insight > In Llama-3 70B, the attention matrix for each head per layer is seq_len x seq_len. At seq_len=4096, that is 4096x4096 = 16M floats โ€” multiply by 64 heads x 80 layers, and you see why Flash Attention is so important. ## Code Examples ```python # Scaled Dot-Product Attention import torch import torch.nn.functional as F def attention(Q, K, V, d_k): scores = Q @ K.transpose(-2, -1) / d_k**0.5 # [n, n] weights = F.softmax(scores, dim=-1) # [n, n] return weights @ V # [n, d_v] # In practice: Q, K, V come from linear projections d_model, d_k = 768, 64 W_q = torch.nn.Linear(d_model, d_k, bias=False) W_k = torch.nn.Linear(d_model, d_k, bias=False) W_v = torch.nn.Linear(d_model, d_k, bias=False) x = torch.randn(10, d_model) # 10 tokens, 768-dim Q, K, V = W_q(x), W_k(x), W_v(x) # each [10, 64] out = attention(Q, K, V, d_k) # [10, 64] ``` ```python def scaled_dot_product_attention(Q, K, V, mask=None): """ Q, K, V: (batch, seq_len, d_k) mask: (batch, 1, 1, seq_len) or None """ d_k = Q.size(-1) scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(d_k) if mask is not None: scores = scores.masked_fill(mask == 0, float('-inf')) attn_weights = F.softmax(scores, dim=-1) output = torch.matmul(attn_weights, V) return output, attn_weights # Usage with PyTorch's built-in (may dispatch to FlashAttention backend): # output = F.scaled_dot_product_attention(Q, K, V, attn_mask=mask) ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Google, OpenAI, Anthropic, Meta, Databricks)_ **Q:** Derive the attention mechanism from scratch on a whiteboard.
Answer Start with:
### โ˜…โ˜…โ˜… _(Google, OpenAI, Anthropic)_ **Q:** Why do we scale by โˆšd_k? Walk through the variance argument.
Answer If q_i, k_j ~ N(0,1) independently, then qยทk = ฮฃ q_mยทk_m is a sum of d_k terms each with E=0, Var=1. So Var(qยทk) = d_k. When d_k is large (e.g. 128), dot products can be ยฑ15+, pushing softmax into saturation (near one-hot). Gradients โ†’ 0 โ†’ training stalls. Dividing by โˆšd_k normalizes variance back to 1.
### โ˜…โ˜…โ˜† _(Google, OpenAI, Anthropic, Meta, Databricks)_ **Q:** What is the time and space complexity of self-attention?
Answer Time: O(nยฒd) โ€” the QK^T matrix multiplication is nร—d times dร—n = O(nยฒd). Space: O(nยฒ) โ€” we must store the full nร—n attention matrix. This is why long-context models are challenging, and why Flash Attention (O(n) memory) is crucial.
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** If d_k = 1, what does attention degrade to?
Answer When d_k=1, Q and K are nร—1 vectors. QK^T is an outer product โ€” each entry is q_iยทk_j, a simple scalar multiplication. Softmax of a rank-1 matrix gives very limited attention patterns. Essentially degrades to a simple content-based addressing with no capacity for complex matching.
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Explain attention as a
Answer Traditional dictionary: exact key match โ†’ return value. Attention: compute similarity between query and ALL keys โ†’ return weighted average of all values. When temperature โ†’ 0, softmax approaches argmax and attention becomes a hard lookup. The
### โ˜…โ˜…โ˜† _(Google, Databricks)_ **Q:** What does each row of softmax(QK^T/โˆšd_k) summing to 1 mean semantically?
Answer Each row represents one token
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** What are the limitations of standard attention? What alternatives exist?
Answer Limitations: O(nยฒ) complexity limits context length. Memory-bound at inference. All tokens attend to all tokens (no inductive bias for locality). Alternatives: Linear attention (O(n)), sparse attention (local windows + global tokens), Mamba/state-space models (O(n) recurrent), sliding window attention (Mistral). Flash Attention doesn
### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** How does causal masking work in decoder-only models? Why is it needed?
Answer Add -inf to the upper triangle of QK^T before softmax. Without it, tokens can
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** What is the difference between additive attention (Bahdanau) and dot-product attention? Why did Transformers choose dot-product?
Answer Additive: score = v^T tanh(W_1 q + W_2 k) โ€” uses a small MLP. Dot-product: score = qยทk โ€” just a dot product. Dot-product is much faster because it
### โ˜…โ˜…โ˜† _(Anthropic, Meta)_ **Q:** Can you apply attention to non-sequence data? Give an example.
Answer Yes. Attention works on any set of vectors. Vision Transformers (ViT) apply attention to image patches. Graph attention networks apply it to node features. Point cloud transformers apply it to 3D points. The key insight is that attention is permutation-equivariant โ€” it works on sets, not just sequences. Positional encoding is what imposes order when needed.
### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** What is cross-attention and how does it differ from self-attention?
Answer In self-attention, Q, K, V all come from the same input. In cross-attention, Q comes from one source (e.g., decoder) while K and V come from another (e.g., encoder output). Used in encoder-decoder models (T5, BART, original Transformer) for tasks like translation where the decoder needs to attend to the input. Also used in multimodal models where text attends to image features.
## Further Reading - [Attention Is All You Need](https://arxiv.org/abs/1706.03762) Vaswani et al. 2017 โ€” the paper that introduced scaled dot-product attention and the Transformer architecture. - [The Illustrated Transformer](https://jalammar.github.io/illustrated-transformer/) Jay Alammar - [The Illustrated BERT](https://jalammar.github.io/illustrated-bert/) Jay Alammar โ€” how BERT reuses the Transformer encoder with bidirectional attention and masked language modeling for pretraining. - [3Blue1Brown โ€” Attention in Transformers](https://www.3blue1brown.com/lessons/attention) Grant Sanderson - [Lilian Weng โ€” Attention? Attention!](https://lilianweng.github.io/posts/2018-06-24-attention/) Lilian Weng - [Transformer Explainer (Georgia Tech)](https://poloclub.github.io/transformer-explainer/) Interactive visual explanation of GPT-2 running live in the browser โ€” great for seeing attention weights. - [LLM Visualization โ€” Brendan Bycroft](https://bbycroft.net/llm) Step-by-step 3D walkthrough of a GPT model โ€” trace every tensor through the forward pass. - [Andrej Karpathy โ€” Let](https://www.youtube.com/watch?v=kCc8FmEb1nY) Codes scaled dot-product self-attention from scratch in ~50 lines of PyTorch โ€” essential companion for internalizing Q, K, V. - [A Mathematical Framework for Transformer Circuits (Elhage et al.)](https://transformer-circuits.pub/2021/framework/index.html) Anthropic interpretability team - [In-Context Learning and Induction Heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) Olsson et al. 2022 โ€” induction heads as the mechanism behind in-context learning, emerging as a phase change during training. - [Chris Olah โ€” Attention and Augmented Recurrent Neural Networks](https://distill.pub/2016/augmented-rnns/) Olah & Carter 2016 โ€” the original visual explainer of attention mechanisms before transformers. ## Related High-Level Overview ยท Tokenization ยท Embeddings ยท Positional Encoding ยท MLP & Matmul --- --- title: "Multi-Head Attention" part: "The Transformer" number: 6 emoji: "๐Ÿง " subtitle: "One head isn't enough โ€” each head learns different patterns" tags: ["transformer", "ml", "ai-engineering", "interview-prep"] --- # ๐Ÿง  Multi-Head Attention > One head isn't enough โ€” each head learns different patterns > [!question] Key Question > One head looks at syntax, another at meaning โ€” how? โ† Self-Attention | โ†’ FFN & Activations ## Key Insights > [!tip] Insight > is not just a dimension-reduction after concatenation โ€” it is the only channel for cross-head information interaction. Without it, each head's output is confined to its own subspace with no way to merge. > [!tip] Insight > FFN accounts for roughly 67% of each layer's parameters in a standard dense transformer. GQA shrinks the KV cache by 8x, which is key to making 70B model inference feasible on a single machine. > [!tip] Insight > GQA can be retrofitted from a trained MHA checkpoint by mean-pooling the KV head weights within each group, then fine-tuning briefly โ€” this is the "GQA from MHA checkpoint" technique from Ainslie et al. 2023, which avoids training from scratch. ## Code Examples ```python # Multi-Head Attention import torch.nn as nn class MultiHeadAttention(nn.Module): def __init__(self, d_model=768, n_heads=12): super().__init__() self.n_heads = n_heads self.d_k = d_model // n_heads self.W_qkv = nn.Linear(d_model, 3 * d_model, bias=False) self.W_o = nn.Linear(d_model, d_model, bias=False) def forward(self, x): B, T, C = x.shape qkv = self.W_qkv(x).reshape(B, T, 3, self.n_heads, self.d_k) q, k, v = qkv.permute(2, 0, 3, 1, 4) # each [B, h, T, d_k] att = (q @ k.transpose(-2, -1)) / self.d_k**0.5 att = att.softmax(dim=-1) out = (att @ v).transpose(1, 2).reshape(B, T, C) # [B, T, d_model] return self.W_o(out) ``` ```python # SwiGLU FFN (used in Llama) import torch.nn.functional as F class SwiGLU_FFN(nn.Module): def __init__(self, d_model=768, d_ff=2048): super().__init__() self.w1 = nn.Linear(d_model, d_ff, bias=False) self.w3 = nn.Linear(d_model, d_ff, bias=False) # gate self.w2 = nn.Linear(d_ff, d_model, bias=False) def forward(self, x): return self.w2(F.silu(self.w1(x)) * self.w3(x)) ``` ```python import math import torch import torch.nn as nn import torch.nn.functional as F class MultiHeadAttention(nn.Module): def __init__(self, d_model=512, n_heads=8): super().__init__() self.n_heads = n_heads self.d_k = d_model // n_heads self.W_q = nn.Linear(d_model, d_model) self.W_k = nn.Linear(d_model, d_model) self.W_v = nn.Linear(d_model, d_model) self.W_o = nn.Linear(d_model, d_model) def forward(self, x, mask=None): B, T, C = x.shape # Project and reshape: (B, T, C) -> (B, n_heads, T, d_k) Q = self.W_q(x).view(B, T, self.n_heads, self.d_k).transpose(1, 2) K = self.W_k(x).view(B, T, self.n_heads, self.d_k).transpose(1, 2) V = self.W_v(x).view(B, T, self.n_heads, self.d_k).transpose(1, 2) # Scaled dot-product attention per head scores = (Q @ K.transpose(-2, -1)) / math.sqrt(self.d_k) if mask is not None: scores = scores.masked_fill(mask == 0, float('-inf')) attn = F.softmax(scores, dim=-1) # Combine heads: (B, n_heads, T, d_k) -> (B, T, C) out = (attn @ V).transpose(1, 2).contiguous().view(B, T, C) return self.W_o(out) ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** MHA vs single-head attention with the same total parameters โ€” what changes?
Answer With h heads and d_k = d/h, total param count for Q/K/V projections is identical: 3dยฒ either way. But MHA lets each head learn a different attention pattern (syntactic, semantic, positional, etc.). Single-head must compress all patterns into one set of weights. Empirically, MHA converges faster and generalizes better because the heads specialize.
### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** What is W^O
Answer W^O โˆˆ R^{dร—d} is the output projection that mixes information across heads. Without it, each head
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic, Databricks)_ **Q:** Grouped Query Attention (GQA) โ€” why is it the new standard in Llama-2 70B and beyond?
Answer GQA shares K/V heads across groups of Q heads. With h=64 query heads and g=8 KV groups, KV cache shrinks 8x while quality barely drops (<0.5% on benchmarks). MQA (g=1) saves more memory but loses quality. GQA is the sweet spot: Llama-2 70B, Mistral, Gemma all use it. The key insight: K/V patterns are more redundant across heads than Q patterns.
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** Why is the output projection W^O necessary? What would happen without it?
Answer W^O mixes information across heads. Without it, each head
### โ˜…โ˜…โ˜† _(Google)_ **Q:** How does GQA reduce memory bandwidth while preserving quality?
Answer Grouped Query Attention (GQA) shares K/V heads across groups of Q heads. With h=64 query heads and g=8 KV groups, the KV cache shrinks 8ร— (one K/V pair per group vs. one per head). This cuts memory bandwidth during decoding proportionally. Quality stays within <0.5% of full MHA on most benchmarks because K/V patterns are more redundant across heads than Q patterns. LLaMA-2 70B, Mistral, and Gemma all use GQA as the default.
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** What patterns do different attention heads learn to specialize in?
Answer Empirical analysis (Clark et al. 2019, Voita et al. 2019) identifies distinct specializations: positional heads attend to adjacent tokens (n-1 or n+1); syntactic heads track subject-verb or noun-modifier dependencies; coreference heads link pronouns to their antecedents (
### โ˜…โ˜…โ˜… _(Meta)_ **Q:** Derive the memory savings of MQA vs MHA for a model with 32 heads.
Answer MHA: each of the 32 heads has its own K and V matrices (each d_head ร— seq_len). Total KV cache = 2 ร— 32 ร— d_head ร— seq_len = 2 ร— d ร— seq_len (since 32 ร— d_head = d). MQA: one shared K and one shared V. Total KV cache = 2 ร— d_head ร— seq_len = 2 ร— (d/32) ร— seq_len. Memory saving = 32ร—. For LLaMA-2 7B (d=4096, seq=4096, float16): MHA KV cache โ‰ˆ 512MB per batch item vs MQA โ‰ˆ 16MB. GQA with g=4 groups: 8ร— saving, landing between MQA quality and MHA quality.
## Further Reading - [Attention Is All You Need](https://arxiv.org/abs/1706.03762) Vaswani et al. 2017 โ€” introduced multi-head attention with 8 heads, each projecting to d_k = 64. - [GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints](https://arxiv.org/abs/2305.13245) Ainslie et al. 2023 โ€” grouped-query attention used by LLaMA 2 70B. Balances MHA quality with MQA speed. - [Fast Transformer Decoding: One Write-Head is All You Need (MQA)](https://arxiv.org/abs/1911.02150) Shazeer 2019 โ€” multi-query attention shares a single KV head across all query heads. Used by PaLM and Falcon. - [The Illustrated Transformer โ€” Jay Alammar](https://jalammar.github.io/illustrated-transformer/) Visual walkthrough of multi-head attention โ€” shows how heads split dimensions and recombine outputs. - [3Blue1Brown โ€” Attention in Transformers](https://www.3blue1brown.com/lessons/attention) Grant Sanderson - [Transformer Explainer (Georgia Tech)](https://poloclub.github.io/transformer-explainer/) Interactive visualization โ€” see all attention heads running simultaneously on real text. - [Are Sixteen Heads Really Better than One?](https://arxiv.org/abs/1905.10650) Michel et al. 2019 โ€” shows most attention heads are redundant and can be pruned with minimal quality loss. Challenges assumptions about head count. - [In-context Learning and Induction Heads (Olsson et al.)](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) Anthropic research showing a specific two-head circuit is the mechanism behind in-context learning โ€” the clearest example of head specialization. ## Related High-Level Overview ยท Tokenization ยท Embeddings ยท Positional Encoding ยท MLP & Matmul --- --- title: "FFN & Activations" part: "The Transformer" number: 7 emoji: "โš™๏ธ" subtitle: "Where 67% of parameters live โ€” and what they memorize" tags: ["transformer", "ml", "ai-engineering", "interview-prep"] --- # โš™๏ธ FFN & Activations > Where 67% of parameters live โ€” and what they memorize > [!question] Key Question > Where does GPT store the fact that Paris is in France? โ† Multi-Head Attention | โ†’ LayerNorm & Residuals ## Key Insights > [!tip] Insight > Think of the FFN as a giant lookup table. Attention figures out WHAT to look at (routing). The FFN then processes each token through its memory bank โ€” retrieving facts, applying transformations, and updating the representation. Attention moves information between tokens; FFN transforms information within each token. > [!tip] Insight > Three matrices (W1, V, W2) instead of two. To keep parameter count constant, reduce from{" "} to{" "} . Llama-2 uses{" "} 11008 for d=4096 (ratio = 2.69x) . > [!tip] Insight > Notice the trend: the original 4x expansion ratio (GPT-2) gave way to ~2.7x with SwiGLU (Llama-2 7B) to keep parameter count constant with three matrices. Larger models like Llama-2 70B and PaLM use higher ratios because they can afford the extra parameters. Mixtral replaces each dense FFN with 8 sparse expert FFNs โ€”{" "} 46.7B total parameters, 12.9B active per token {" "} โ€” same active compute as a 13B dense model,{" "} matching Llama-2 70B quality at ~6ร— lower inference cost . > [!tip] Insight > The key-value memory view explains a counterintuitive observation: larger FFN width improves factual recall more than it improves reasoning. More memory slots = more facts stored. This is distinct from attention, which improves relational reasoning (who attends to whom) but attention primarily routes information between positions, while FFN layers are a major locus of factual knowledge storage. ## Code Examples ```python import torch import torch.nn as nn import torch.nn.functional as F class SwiGLU_FFN(nn.Module): """FFN with SwiGLU activation (Llama-2, Mistral, PaLM).""" def __init__(self, d_model: int, d_ff: int | None = None): super().__init__() if d_ff is None: d_ff = int(2 * (4 * d_model) / 3) d_ff = 256 * ((d_ff + 255) // 256) # round up to 256 self.w1 = nn.Linear(d_model, d_ff, bias=False) # gate proj self.v = nn.Linear(d_model, d_ff, bias=False) # up proj self.w2 = nn.Linear(d_ff, d_model, bias=False) # down proj def forward(self, x: torch.Tensor) -> torch.Tensor: # SwiGLU: (Swish(xW1) โŠ™ xV) W2 return self.w2(F.silu(self.w1(x)) * self.v(x)) # Example: Llama-2 7B dimensions ffn = SwiGLU_FFN(d_model=4096, d_ff=11008) x = torch.randn(1, 128, 4096) # (batch, seq_len, d_model) out = ffn(x) # (1, 128, 4096) โ€” same shape as input ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** Why do FFN layers contain ~67% of a Transformer
Answer In a standard Transformer block, the FFN has two weight matrices: W1 (d_model x 4*d_model) and W2 (4*d_model x d_model), totaling 8*d_model^2 parameters. Attention has four projections (Q, K, V, O), each d_model x d_model, totaling 4*d_model^2. So FFN has 2x the parameters of attention. Attention handles token-to-token routing โ€” deciding WHAT information to move WHERE. FFN processes each token independently, acting as a learned key-value memory that stores and retrieves factual knowledge. Attention is the communication layer; FFN is the computation/memory layer.
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Explain SwiGLU and why it replaced GELU/ReLU in modern Transformers. What is the parameter cost?
Answer SwiGLU (Shazeer, 2020) combines the Swish activation with a Gated Linear Unit: SwiGLU(x) = (Swish(xW1) โŠ™ xV) W2, where โŠ™ is element-wise multiplication. It introduces a third weight matrix V (the gate), giving the network a multiplicative gating mechanism that can selectively suppress or amplify features. Empirically, SwiGLU achieves better loss than GELU or ReLU at the same compute budget. The tradeoff: three matrices instead of two. To keep total parameters constant, the hidden dimension is reduced from 4*d_model to (8/3)*d_model โ‰ˆ 2.67*d_model. Llama-2 uses 11008 hidden for d_model=4096, which is ~2.69x (close to 8/3).
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** What is the
Answer Geva et al. (2021) showed that FFN layers act as key-value memories. W1 rows are
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** What is the expansion ratio in FFN and why is 4x standard? How does it change with SwiGLU?
Answer The expansion ratio is d_ff / d_model. The original Transformer used 4x (d_model=512, d_ff=2048). The intuition: expanding to a higher dimension lets the network represent more features in a sparse, disentangled way โ€” most neurons are inactive for any given input (especially with ReLU). With SwiGLU
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** How does Mixture of Experts (MoE) relate to FFN layers? Why are experts always FFN blocks?
Answer In MoE architectures (Mixtral, DeepSeek), each Transformer layer replaces the single dense FFN with multiple parallel FFN
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** If you remove the FFN layers entirely from a Transformer, what happens? What about replacing them with linear layers (no activation)?
Answer Without FFN: the model retains attention (token mixing) but loses per-token computation. Empirically, performance drops catastrophically โ€” the model can still learn some positional and syntactic patterns but fails at factual knowledge and complex reasoning. It
## Further Reading - [GLU Variants Improve Transformer](https://arxiv.org/abs/2002.05202) Shazeer 2020 โ€” shows SwiGLU and GeGLU outperform standard ReLU FFNs. SwiGLU is now the default in LLaMA and PaLM. - [Transformer Feed-Forward Layers Are Key-Value Memories](https://arxiv.org/abs/2012.14913) Geva et al. 2021 โ€” interprets FFN layers as implicit key-value stores where keys match input patterns and values store output distributions. - [LLM Visualization โ€” Brendan Bycroft](https://bbycroft.net/llm) 3D walkthrough showing the FFN block - [The Illustrated Transformer โ€” Jay Alammar](https://jalammar.github.io/illustrated-transformer/) Visual walkthrough of the FFN sublayer and how it complements the attention block within each Transformer layer. - [Mixture of Experts Explained (Hugging Face blog)](https://huggingface.co/blog/moe) How Mixtral replaces dense FFNs with sparse MoE layers โ€” concrete explanation of routing, expert selection, and capacity factors. - [3Blue1Brown โ€” Transformers (What they are and what they do)](https://www.youtube.com/watch?v=eMlx5fFNoYc) Visual breakdown of the MLP (FFN) layers in Transformers โ€” intuition for what the expansion and contraction matrices compute. - [3Blue1Brown โ€” How might LLMs store facts (Chapter 7)](https://www.youtube.com/watch?v=9-Jl0dxWQs8) Grant Sanderson 2024 โ€” visual explanation of how MLP layers act as key-value memories storing factual associations, with the connection to superposition. ## Related High-Level Overview ยท Tokenization ยท Embeddings ยท Positional Encoding ยท MLP & Matmul --- --- title: "LayerNorm & Residuals" part: "The Transformer" number: 8 emoji: "๐Ÿ”—" subtitle: "The glue that makes deep transformers trainable" tags: ["transformer", "ml", "ai-engineering", "interview-prep"] --- # ๐Ÿ”— LayerNorm & Residuals > The glue that makes deep transformers trainable > [!question] Key Question > Delete one line and a 96-layer model becomes untrainable โ† FFN & Activations | โ†’ The Full Forward Pass ## Key Insights > [!tip] Insight > Think of residual connections as express lanes on a highway. Without them, every car (gradient) must exit at every toll booth (layer) -- most get stuck. With skip connections, gradients can take the express lane straight to early layers. > [!tip] Insight > RMSNorm removes both mean subtraction and bias, reducing parameters and compute by{" "} ~7โ€“10%. Experiments show negligible accuracy difference. > [!tip] Insight > In Pre-Norm, the residual path has no LN -- gradients backpropagate freely. In Post-Norm, gradients must pass through LN at every layer. This is why almost every model since GPT-2 uses Pre-Norm. > [!tip] Insight > The trend: LayerNorm (2017) to RMSNorm (2023+). Pre-Norm became the practical default for most deep LLMs, though alternatives like DeepNorm (Wang et al. 2022) can stabilize very deep Post-LN models up to 1000 layers. The field converged because clean residual paths beat the marginal quality gain from Post-Norm in most settings. > [!tip] Insight > DeepNorm (Wang et al. 2022) is a hybrid: scale the residual branch down by and initialize weights scaled by , then apply Post-Norm. This keeps the expected gradient norm at{" "} while retaining Post-Norm's final-quality advantage. It enables stable training of 1000-layer Transformers, but the tuning complexity means most practitioners still prefer Pre-RMSNorm. ## Code Examples ```python import torch import torch.nn as nn # Built-in LayerNorm ln = nn.LayerNorm(d_model) # learnable gamma, beta output = ln(x) # x: [batch, seq, d_model] # Built-in RMSNorm (PyTorch 2.4+) rms = nn.RMSNorm(d_model) # learnable gamma only output = rms(x) # Manual RMSNorm (for older PyTorch) class RMSNorm(nn.Module): def __init__(self, d_model, eps=1e-6): super().__init__() self.weight = nn.Parameter(torch.ones(d_model)) self.eps = eps def forward(self, x): rms = torch.sqrt(torch.mean(x ** 2, dim=-1, keepdim=True) + self.eps) return x / rms * self.weight # Pre-Norm residual pattern x = x + self.attn(self.ln1(x)) # norm before sublayer x = x + self.ffn(self.ln2(x)) # residual adds back original ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, OpenAI, Anthropic)_ **Q:** LayerNorm vs BatchNorm -- why do Transformers use LayerNorm?
Answer BatchNorm normalizes across the batch dimension -- statistics depend on other samples in the batch. Problems for Transformers: (1) variable sequence lengths mean batch statistics are unstable, (2) at inference with batch=1, BN degrades to running stats which may not match. LayerNorm normalizes across the feature dimension of each individual token -- no batch dependency. This also enables trivial parallelism since each token is independent.
### โ˜…โ˜…โ˜… _(Google, OpenAI, Anthropic)_ **Q:** Pre-Norm vs Post-Norm -- what are the training dynamics differences?
Answer Post-Norm (original Transformer): y = LN(x + Sublayer(x)). Gradient flows through LN, which can cause instability in deep models -- requires careful warmup. Pre-Norm (GPT-2+): y = x + Sublayer(LN(x)). The residual path is clean (no LN in the gradient highway), so gradients flow freely. Pre-Norm trains more stably but may converge to slightly worse final quality. Virtually all modern LLMs use Pre-Norm. Some recent work (DeepNorm) combines both.
### โ˜…โ˜…โ˜† _(Google, Meta, Anthropic)_ **Q:** RMSNorm -- what is it and why do Llama/Gemma use it?
Answer RMSNorm removes the mean-centering step from LayerNorm: RMSNorm(x) = x / RMS(x) * gamma, where RMS(x) = sqrt(mean(x^2)). Saves ~7-10% compute per norm operation by skipping the mean subtraction. Empirically, re-centering contributes little to performance. Llama, Gemma, and most modern models use RMSNorm. The key insight: normalization
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Why are residual connections critical for gradient flow?
Answer Without residual connections, gradients must flow through every sublayer sequentially: dL/dx0 = dL/dxn * product(dxi+1/dxi) -- a product of many Jacobians that tends to vanish or explode. With residual connections: xi+1 = xi + f(xi), so dxi+1/dxi = I + df/dxi. The identity matrix I provides a gradient highway -- gradients can flow directly from layer 80 to layer 1 without attenuation.
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** What happens if you apply LayerNorm after the residual connection instead of before?
Answer This is Post-Norm: y = LN(x + Sublayer(x)). The residual path passes through LN before reaching subsequent layers. During backprop, gradients must pass through the LN Jacobian at every layer, which can distort gradient magnitude and direction. In Pre-Norm: y = x + Sublayer(LN(x)), the residual path is clean -- gradients flow through I (identity) on the skip connection. Post-Norm can sometimes yield slightly better final quality but is much harder to train at depth (60+ layers). This is why GPT-2, Llama, Gemma, and Mistral all use Pre-Norm.
## Further Reading - [Layer Normalization](https://arxiv.org/abs/1607.06450) Ba et al. 2016 โ€” the original LayerNorm paper. Normalizes across features instead of batch dimension. - [Root Mean Square Layer Normalization (RMSNorm)](https://arxiv.org/abs/1910.07467) Zhang & Sennrich 2019 โ€” drops the mean-centering step for a simpler, faster norm. Used by LLaMA and Mistral. - [On Layer Normalization in the Transformer Architecture](https://arxiv.org/abs/2002.04745) Xiong et al. 2020 โ€” analyzes Pre-LN vs Post-LN placement. Pre-LN enables stable training without warmup. - [The Illustrated Transformer โ€” Jay Alammar](https://jalammar.github.io/illustrated-transformer/) Visual Transformer walkthrough โ€” shows where layer norm fits in the residual stream around attention and FFN blocks. - [Batch Normalization: Accelerating Deep Network Training (Ioffe & Szegedy 2015)](https://arxiv.org/abs/1502.03167) The original BatchNorm paper โ€” understanding why BN works for images clarifies why LayerNorm is needed for variable-length sequences. - [DeepNorm: Scaling Transformers to 1,000 Layers](https://arxiv.org/abs/2203.00555) Wang et al. 2022 โ€” combines Pre-LN and Post-LN with scaled initialization to train 1000-layer transformers stably. Used in GLM-130B. ## Related High-Level Overview ยท Tokenization ยท Embeddings ยท Positional Encoding ยท MLP & Matmul --- --- title: "The Full Forward Pass" part: "The Transformer" number: 9 emoji: "๐Ÿ—๏ธ" subtitle: "Watch a token travel through a complete Transformer block" tags: ["transformer", "ml", "ai-engineering", "interview-prep"] --- # ๐Ÿ—๏ธ The Full Forward Pass > Watch a token travel through a complete Transformer block > [!question] Key Question > What happens to the word "cat" in 0.003 seconds? โ† LayerNorm & Residuals ## Key Insights > [!tip] Insight > Training sees the whole movie at once (parallel). Inference watches one frame at a time (sequential). The causal mask is what makes parallel training equivalent to sequential generation -- each position can only see the past. > [!tip] Insight > Common interview question: "Why can training run in parallel but inference must be sequential?" Key answer: During training, all target tokens are known, and the causal mask simulates autoregressive constraints while allowing parallel computation. During inference, each token depends on the previous output. ## Code Examples ```python import torch import torch.nn as nn import torch.nn.functional as F class TransformerBlock(nn.Module): def __init__(self, d_model=4096, n_heads=32, d_ff=11008): super().__init__() self.ln1 = nn.RMSNorm(d_model) self.attn = MultiHeadAttention(d_model, n_heads) self.ln2 = nn.RMSNorm(d_model) self.ffn = SwiGLU_FFN(d_model, d_ff) def forward(self, x, kv_cache=None): # Pre-Norm + Attention + Residual h = x + self.attn(self.ln1(x), kv_cache=kv_cache) # Pre-Norm + FFN + Residual return h + self.ffn(self.ln2(h)) # Full decoder language model class DecoderLM(nn.Module): def __init__(self, vocab_size=32000, d_model=4096, n_layers=32, n_heads=32): super().__init__() self.embed = nn.Embedding(vocab_size, d_model) self.layers = nn.ModuleList([TransformerBlock(d_model, n_heads) for _ in range(n_layers)]) self.final_ln = nn.RMSNorm(d_model) self.lm_head = nn.Linear(d_model, vocab_size, bias=False) def forward(self, input_ids): x = self.embed(input_ids) # [B, T, d] for layer in self.layers: x = layer(x) # N transformer blocks logits = self.lm_head(self.final_ln(x)) # [B, T, vocab] return logits ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Google, OpenAI, Anthropic, Meta)_ **Q:** Training is parallel but inference is sequential -- why?
Answer Training (teacher forcing): all target tokens are known, so we can compute loss for all positions in parallel using a causal mask. The mask prevents attending to future tokens while allowing parallel computation. Inference (autoregressive): each token depends on the previous output -- we must generate token i before we can feed it as input to generate token i+1. This is why inference latency scales with sequence length, and why KV cache (avoiding recomputation) is critical.
### โ˜…โ˜…โ˜† _(Google, OpenAI, Anthropic, Meta)_ **Q:** Walk through the complete forward pass of a single token through one Transformer block.
Answer Input x enters the block. Step 1: LayerNorm (or RMSNorm) normalizes x. Step 2: Multi-head attention computes Q, K, V projections, applies causal mask, computes weighted sum. Step 3: Residual add -- output = x + Attention(LN(x)). Step 4: Another LayerNorm on the result. Step 5: FFN (two linear layers with activation, or SwiGLU). Step 6: Another residual add -- output = prev + FFN(LN(prev)). This repeats for N layers. After all layers, a final LayerNorm and linear projection to vocabulary logits.
### โ˜…โ˜…โ˜† _(Google, OpenAI, Anthropic)_ **Q:** What is teacher forcing and what is its main drawback?
Answer Teacher forcing: during training, always feed the ground-truth previous token as input, even if the model would have predicted wrong. This allows parallel computation (all tokens known) and stable training. The drawback is exposure bias -- the model never sees its own mistakes during training, so at inference when it feeds its own (possibly wrong) outputs, errors can compound. Mitigations: scheduled sampling (gradually replace ground-truth inputs with model predictions during training), and sequence-level training (optimize full sequences with RL objectives like REINFORCE). In practice, most LLM pre-training just uses teacher forcing -- the exposure bias is small enough at scale that these mitigations are rarely applied.
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic, Meta)_ **Q:** Explain KV cache and why it matters for inference performance.
Answer During autoregressive inference, generating token t requires attending to all tokens 1..t-1. Without caching, we
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** What is the residual stream view of Transformers and why is it useful?
Answer The residual stream view (Elhage et al., Anthropic) sees the residual connection as the main communication channel. Each layer reads from and writes to a shared
## Further Reading - [Attention Is All You Need](https://arxiv.org/abs/1706.03762) Vaswani et al. 2017 โ€” defines the full encoder-decoder Transformer forward pass with residual connections and layer norms. - [Language Models are Unsupervised Multitask Learners (GPT-2)](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) Radford et al. 2019 โ€” decoder-only forward pass. Demonstrates that a single left-to-right pass can perform many NLP tasks. - [The Illustrated Transformer โ€” Jay Alammar](https://jalammar.github.io/illustrated-transformer/) Step-by-step visual walkthrough of every operation in the forward pass, from embedding to output logits. - [Andrej Karpathy โ€” Let](https://www.youtube.com/watch?v=kCc8FmEb1nY) 2-hour video coding a GPT from scratch โ€” best way to internalize the full forward pass end-to-end. - [Transformer Explainer (Georgia Tech)](https://poloclub.github.io/transformer-explainer/) Interactive GPT-2 running live in the browser โ€” click any token to trace the full forward pass. - [LLM Visualization โ€” Brendan Bycroft](https://bbycroft.net/llm) 3D step-by-step tensor visualization of the entire forward pass through a GPT model. - [3Blue1Brown โ€” But what is a GPT? Visual intro to Transformers](https://www.youtube.com/watch?v=wjZofJX0v4M) Animated walkthrough of the complete forward pass from token embeddings to output logits โ€” geometric intuition for each operation. - [Andrej Karpathy โ€” nanoGPT](https://github.com/karpathy/nanoGPT) The cleanest GPT implementation in ~300 lines of PyTorch โ€” read model.py to see the exact forward pass used in practice. ## Related High-Level Overview ยท Tokenization ยท Embeddings ยท Positional Encoding ยท MLP & Matmul --- --- title: "Backpropagation" part: "Training" number: 10 emoji: "๐Ÿ”™" subtitle: "Chain rule, computation graphs, and autograd โ€” how gradients flow backward" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ”™ Backpropagation > Chain rule, computation graphs, and autograd โ€” how gradients flow backward > [!question] Key Question > One forward pass to predict, one backward pass to learn โ€” that's all of deep learning โ†’ Optimizers ## Key Insights > [!tip] Insight > Key insight: Each node multiplies the upstream gradient by its local derivative โ€” this IS the chain rule. No special “backprop algorithm” exists beyond applying the chain rule once per node, in reverse topological order. > [!tip] Insight > What you're seeing: Each node stores its output value (forward pass) and will receive a gradient (backward pass). Edges represent data flow โ€” and gradient flow goes{" "} in the opposite direction. > [!tip] Insight > Key result: To decrease the loss, nudge{" "} a by +1 and it shifts L by{" "} โˆ’24. Nudge c by +1 and{" "} L shifts by +8. This is exactly what gradient descent uses โ€” the negative gradient points toward lower loss. > [!tip] Insight > Backprop is just the chain rule applied to a computation graph โ€” nothing more. The “magic” is that intermediate results computed during the forward pass can be reused to compute gradients in the backward pass, so each node is visited exactly once in each direction.{" "} Rumelhart, Hinton & Williams (1986) popularized this algorithm for training multi-layer networks in their landmark Nature paper. > [!tip] Insight > Interview insight: Reverse-mode AD (backprop) works from the output backward, accumulating a{" "} row vector times Jacobian at each step. This is efficient when outputs are scalar (losses). Forward-mode AD accumulates a{" "} Jacobian times column vector โ€” efficient when inputs are few. Most ML frameworks use reverse-mode because loss functions map R^n โ†’ R.{" "} Gradient norm at initialization is O(1) for Pre-LN transformers but O(dโˆš(ln d)) for Post-LN (Xiong et al., 2020) โ€” the reason Post-LN training requires careful learning-rate warmup to avoid divergence. > [!tip] Insight > Common pitfall: Always call{" "} optimizer.zero_grad() before loss.backward() . PyTorch accumulates gradients (+=) into .grad{" "} โ€” if you forget to zero them, gradients from previous batches corrupt the update. This is intentional design for gradient accumulation across micro-batches, but a common source of bugs.{" "} Adam defaults (Kingma & Ba, 2014) are ฮฒโ‚=0.9, ฮฒโ‚‚=0.999, ฮต=1e-8; LLM recipes commonly lower ฮฒโ‚‚ to 0.95 for faster second- moment adaptation. > [!tip] Insight > Gradient clipping in practice:{" "} The standard gradient clipping threshold is max-norm 1.0 โ€” if the global gradient norm exceeds 1.0, all gradients are rescaled so the norm equals 1.0 exactly. This convention is used in GPT-2, LLaMA, and most LLM training recipes. {" "} In fp16 mixed-precision training, gradient values above 65,504 (the fp16 max) overflow to Inf; loss scaling (e.g., dynamic scale starting at 2ยนโถ) keeps gradients in range before the optimizer step. > [!tip] Insight > Why backward โ‰ˆ 2ร— forward:{" "} The backward pass traverses the same graph as the forward pass, but at each node it must (1) look up the saved activation from the forward pass and (2) multiply by the upstream gradient. Two operations per node vs one in the forward pass โ€” hence ~2ร—. {" "} Gradient checkpointing (Chen et al., 2016) trades some of this back by recomputing activations on the fly; in their โˆšn-checkpoint schedule for an n-layer network, memory drops to O(โˆšn) with roughly one extra forward pass (~30% overhead). {" "} Mixed-precision training (bf16/fp16 compute with fp32 master weights) delivers roughly 2ร— end-to-end throughput over pure fp32 on A100/H100, because tensor-core FLOP/s for bf16 is ~16ร— faster than fp32. ## Code Examples ```python import torch # Define inputs with gradient tracking a = torch.tensor(2.0, requires_grad=True) b = torch.tensor(-3.0, requires_grad=True) c = torch.tensor(10.0, requires_grad=True) # Forward pass โ€” PyTorch builds the computation graph dynamically d = a * b # d = -6 e = d + c # e = 4 L = e ** 2 # L = 16 # Backward pass โ€” one call computes ALL gradients via reverse-mode AD L.backward() print(f"dL/da = {a.grad.item()}") # -24.0 (chain rule: 2e * b = 2*4*-3) print(f"dL/db = {b.grad.item()}") # 16.0 (2e * a = 2*4*2) print(f"dL/dc = {c.grad.item()}") # 8.0 (2e * 1 = 2*4) # Training loop: zero โ†’ forward โ†’ loss โ†’ backward โ†’ step optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4) for x_batch, y_batch in dataloader: optimizer.zero_grad() # MUST clear before each step logits = model(x_batch) loss = torch.nn.functional.cross_entropy(logits, y_batch) loss.backward() # populate .grad for every parameter torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() # w = w - lr * grad ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Meta, OpenAI)_ **Q:** Why is backpropagation O(n) in the number of parameters, not O(nยฒ)?
Answer Each parameter
### โ˜…โ˜…โ˜† _(Google, Anthropic, Meta, OpenAI)_ **Q:** What is the vanishing gradient problem and when does it occur?
Answer When gradients are multiplied together through many layers (chain rule), they can shrink exponentially if each local gradient is < 1. With sigmoid activations, the max gradient is 0.25, so a 10-layer network can see gradients shrink by 0.25^10 โ‰ˆ 10^-6. This means early layers barely update. Solutions: ReLU activations (gradient is 1 in positive range), residual connections (gradient highway that bypasses multiplication), gradient clipping, and batch/layer normalization. Transformers avoid this primarily through residual streams and careful initialization.
### โ˜…โ˜…โ˜… _(Google, Anthropic, OpenAI)_ **Q:** Compare forward-mode vs reverse-mode automatic differentiation. When would you use each?
Answer Forward-mode AD computes one column of the Jacobian per pass (derivative of all outputs w.r.t. one input). It
### โ˜…โ˜…โ˜… _(Meta, Anthropic, Google)_ **Q:** Explain gradient checkpointing and the tradeoff it makes.
Answer During training, the forward pass must store all intermediate activations for use in the backward pass. For a deep network, this memory cost is O(n) in the number of layers. Gradient checkpointing (also called activation recomputation) discards some intermediate activations after the forward pass and recomputes them on the fly during backprop. The tradeoff: memory drops from O(n) to O(โˆšn), at the cost of ~33% extra compute (one additional forward pass per backward pass). Used when fine-tuning very large models where GPU memory is the bottleneck โ€” common in LLAMA fine-tuning setups.
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** What does it mean for a gradient to be numerically stable, and why do log-space operations help?
Answer Numerical instability occurs when intermediate values overflow (e.g., exp(1000) = Inf) or underflow (e.g., exp(-1000) = 0), making gradient computations return NaN or 0. Log-space operations help by keeping values in a range where floats have precision. For example, log-softmax = x_i - log(sum(exp(x_j))) is computed as x_i - (max_x + log(sum(exp(x_j - max_x)))) โ€” subtracting the max before exp prevents overflow. PyTorch
## Further Reading - [Andrej Karpathy โ€” micrograd (GitHub)](https://github.com/karpathy/micrograd) 100-line autodiff engine and neural network library โ€” the clearest possible implementation of backprop from scratch. - [Andrej Karpathy โ€” The spelled-out intro to neural networks (YouTube)](https://www.youtube.com/watch?v=VMj-3S1tku0) 2.5-hour walkthrough building micrograd step by step โ€” every chain rule application shown explicitly. - [3Blue1Brown โ€” Backpropagation, visually explained](https://www.youtube.com/watch?v=Ilg3gGewQ5U) Animated geometric intuition for why the chain rule distributes gradient across a computation graph. - [Rumelhart, Hinton & Williams (1986) โ€” Learning representations by back-propagating errors](https://www.nature.com/articles/323533a0) The original 1986 Nature paper that popularized backprop for training multi-layer networks. - [Lilian Weng โ€” A Peek Into the Math of Neural Networks](https://lilianweng.github.io/posts/2017-06-21-overview/) Rigorous treatment of the chain rule, Jacobians, and multivariate gradient flow. - [Chris Olah โ€” Calculus on Computational Graphs: Backpropagation](https://colah.github.io/posts/2015-08-Backprop/) Olah 2015 โ€” the clearest visual explanation of backprop as reverse-mode autodiff on computational graphs, with step-by-step chain rule diagrams. ## Related Optimizers ยท Pre-training & Loss ยท Data Curation ยท Scaling Laws ยท GPU & Mixed Precision --- --- title: "Optimizers" part: "Training" number: 11 emoji: "๐Ÿ“" subtitle: "SGD โ†’ Momentum โ†’ Adam โ†’ AdamW, learning rate schedules, and weight decay" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ“ Optimizers > SGD โ†’ Momentum โ†’ Adam โ†’ AdamW, learning rate schedules, and weight decay > [!question] Key Question > AdamW fixes a 5-year-old bug in Adam that silently hurts generalization โ† Backpropagation | โ†’ Pre-training & Loss ## Key Insights > [!tip] Insight > Interview hook: The interviewer says “just use Adam”. You say “actually we use AdamW โ€” Adam's weight decay is broken because the adaptive scaling shrinks it inconsistently across parameters. AdamW applies weight decay directly to weights, outside the adaptive term.” That's the answer that gets you the job. > [!tip] Insight > LLM defaults:{" "} AdamW with{" "} (not 0.999) {" "} is standard for LLMs. Lower ฮฒโ‚‚ makes the second moment adapt faster to gradient changes โ€” important when training on diverse token distributions where gradient statistics shift rapidly. > [!tip] Insight > Pattern to memorize: Many modern decoder-only LLM training recipes use{" "} ฮฒโ‚‚ = 0.95 {" "} instead of the Adam default 0.999, because a lower ฮฒโ‚‚ lets the second moment adapt more quickly to changing gradient statistics. PaLM is the notable contrast here: it uses Adafactor to cut optimizer-state memory at very large scale. ## Interview Questions ### โ˜…โ˜…โ˜† _(OpenAI, Google, Anthropic)_ **Q:** Why is AdamW preferred over Adam for training large language models? What bug does it fix?
Answer Adam has a subtle weight decay bug. In Adam, the standard way to add weight decay is L2 regularization: you add ฮปฮธ to the gradient before computing the adaptive update. But the adaptive learning rate then shrinks this regularization too โ€” dimensions with large gradient variance get a small effective weight decay, and dimensions with small gradient variance get large weight decay. This is inconsistent and wrong. AdamW (Loshchilov & Hutter 2019) fixes this by decoupling weight decay from the gradient update: the adaptive step is computed from gradients alone, then weight decay is applied directly to the parameters (ฮธ = ฮธ - lrยทmฬ‚/(โˆšvฬ‚+ฮต) - lrยทฮปฮธ). This makes weight decay uniform and predictable across all parameters. In practice, AdamW consistently outperforms Adam for LLMs and is now the standard optimizer.
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** What is the learning rate warmup phase and why is it necessary for transformer training?
Answer Warmup linearly ramps the learning rate from near-zero to the target LR over the first N steps (typically 1โ€“4% of total training). It is necessary because at initialization, the parameter estimates in Adam
### โ˜…โ˜…โ˜… _(Meta, Google)_ **Q:** Derive the bias correction terms in Adam. Why are mฬ‚ and vฬ‚ needed?
Answer Adam maintains exponential moving averages: m_t = ฮฒโ‚ยทm_{t-1} + (1-ฮฒโ‚)ยทg_t and v_t = ฮฒโ‚‚ยทv_{t-1} + (1-ฮฒโ‚‚)ยทg_tยฒ. Both are initialized to zero, so they are biased toward zero early in training. Unrolling m_t: m_t = (1-ฮฒโ‚)ยทโˆ‘_{i=1}^{t} ฮฒโ‚^{t-i}ยทg_i. Taking expectation: E[m_t] = E[g_t]ยท(1-ฮฒโ‚^t) (assuming stationary gradients). So m_t underestimates E[g_t] by a factor of (1-ฮฒโ‚^t). The bias-corrected estimate is mฬ‚_t = m_t/(1-ฮฒโ‚^t). Same logic for vฬ‚_t = v_t/(1-ฮฒโ‚‚^t). As tโ†’โˆž, both correction factorsโ†’1 so they matter only in early training. Without bias correction, early steps use a nearly-zero vฬ‚ in the denominator, causing enormous initial steps. This is one reason Adam without warmup can still diverge.
### โ˜…โ˜…โ˜† _(Meta, OpenAI)_ **Q:** When would you prefer SGD with momentum over Adam for training? What are the tradeoffs?
Answer SGD with momentum is preferred for: (1) Computer vision CNNs โ€” empirically achieves better generalization than Adam, likely because Adam
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic, Google)_ **Q:** What determines the optimal learning rate? How do practitioners find it?
Answer The optimal LR balances convergence speed against stability โ€” too high causes divergence or loss spikes, too low wastes compute. Key determinants: (1) Model scale: larger models generally need lower LR; (2) Batch size: LR often scales with sqrt(batch_size) (linear scaling rule for SGD, sqrt for Adam); (3) Optimizer: Adam tolerates higher LR than SGD; (4) Loss landscape curvature: steeper landscapes need smaller steps. Practical finding methods: (1) LR range test (fast.ai / Smith 2015): sweep LR logarithmically over one epoch, plot loss vs LR, pick LR just before loss rises; (2) Grid search over {1e-5, 3e-5, 1e-4, 3e-4, 1e-3}; (3) Theory-based: max LR โ‰ˆ 0.1ยทโˆš(1/fan_out) per Kaiming; (4) Karpathy
## Further Reading - [Andrej Karpathy โ€” Neural Networks: Zero to Hero (Optimization lecture)](https://www.youtube.com/watch?v=P6sfmUTpUmc) Karpathy - [Decoupled Weight Decay Regularization (Loshchilov & Hutter, 2019)](https://arxiv.org/abs/1711.05101) The paper that introduced AdamW โ€” shows why L2 regularization and weight decay are NOT equivalent in adaptive optimizers. - [An Overview of Gradient Descent Optimization Algorithms โ€” Sebastian Ruder](https://ruder.io/optimizing-gradient-descent/) The definitive pedagogical guide to SGD, momentum, AdaGrad, RMSProp, Adam, and their variants with clear math. - [Cyclical Learning Rates for Training Neural Networks (Smith, 2017)](https://arxiv.org/abs/1506.01186) Introduces the LR range test โ€” the fastest practical method to find a good learning rate. ## Related Backpropagation ยท Pre-training & Loss ยท Data Curation ยท Scaling Laws ยท GPU & Mixed Precision --- --- title: "Pre-training & Loss" part: "Training" number: 12 emoji: "๐Ÿ“‰" subtitle: "Next-token prediction, cross-entropy, and perplexity" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ“‰ Pre-training & Loss > Next-token prediction, cross-entropy, and perplexity > [!question] Key Question > Predict the next word โ€” that's literally the entire training objective โ† Optimizers | โ†’ Data Curation ## Key Insights > [!tip] Insight > Why does predicting the next token teach everything? Because language is a compression of human knowledge. To predict well, you must model the data-generating process โ€” which includes physics, psychology, logic, and every other pattern humans write about. > [!tip] Insight > GPT-2 achieves 8.63 perplexity on LAMBADA (zero-shot) . This means at each token, the model is effectively choosing from ~8โ€“9 plausible candidates. Human perplexity on English text is estimated around 12โ€“20 (varies by task and methodology). > [!tip] Insight > Perplexity keeps improving with scale, but the gains are logarithmic. Going from PPL 100 to 20 is "easy" (10x more compute). Going from 20 to 5 takes orders of magnitude more. This is the scaling laws story (Module 11).{" "} Chinchilla (Hoffmann et al., 2022) showed the compute-optimal token count is ~20× the parameter count ; models like Llama-3 deliberately exceed this to reduce per-query inference cost.{" "} Kaplan et al. (2020) estimated ~6N FLOPs per token per gradient step for a model with N non-embedding parameters , giving a closed-form handle on total training compute. ## Code Examples ```python import torch import torch.nn.functional as F def train_step(model, batch, optimizer): """Causal LM pretraining step with teacher forcing.""" input_ids = batch["input_ids"] # (B, T) # Teacher forcing: input = [0..T-1], target = [1..T] x = input_ids[:, :-1] # (B, T-1) targets = input_ids[:, 1:] # (B, T-1) # Forward pass logits = model(x) # (B, T-1, V) # Cross-entropy: predict next token at every position loss = F.cross_entropy( logits.reshape(-1, logits.size(-1)), # (B*(T-1), V) targets.reshape(-1), # (B*(T-1),) ignore_index=-100, # skip padding ) # Backward + gradient clip + step optimizer.zero_grad() loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0) optimizer.step() return {"loss": loss.item(), "ppl": torch.exp(loss).item()} ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** Why does next-token prediction work as a universal training objective? What linguistic and world knowledge does it force the model to learn?
Answer To predict the next token well, the model must learn syntax (grammar rules), semantics (word meanings), world knowledge (facts), reasoning (causal chains), and even sentiment. For example, predicting
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Why do we use cross-entropy loss instead of MSE (mean squared error) for language modeling?
Answer Language modeling is a classification problem over a discrete vocabulary, not regression. Cross-entropy measures how well the predicted probability distribution matches the true (one-hot) distribution. MSE on probabilities would (1) not properly penalize confident wrong answers (assigning 0.01 vs 0.001 to the correct token matters a lot), (2) not correspond to the log-likelihood objective we actually want to maximize, (3) have worse gradient properties โ€” cross-entropy gradients are proportional to (predicted - target), giving clean learning signals. MSE would also require choosing how to represent the target (one-hot vectors), and the gradients would be scaled by the predicted probability, weakening the signal when the model is very wrong.
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** A model has perplexity 10 on a test set. What does this mean intuitively? How does perplexity relate to bits-per-character?
Answer Perplexity 10 means that, on average, the model is as confused as if it were choosing uniformly among 10 equally likely options at each step. Lower is better โ€” a perfect model would have perplexity 1 (always assigns probability 1.0 to the correct token). Relationship to bits: perplexity = 2^(bits-per-token), so PPL 10 = 2^3.32, meaning ~3.32 bits per token. For bits-per-character, divide by the average characters per token (~4 for English with BPE). GPT-2 had ~20 PPL on WebText, meaning at each position it was effectively choosing among ~20 candidates.
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Explain teacher forcing. What is the exposure bias problem, and what are alternatives?
Answer Teacher forcing: during training, the model always receives the ground truth token as input for the next step, regardless of its own prediction. This makes training stable and fast (the model never drifts from the data distribution). Exposure bias: at inference time, the model uses its own predictions, not ground truth. If it makes an error, subsequent predictions condition on that error, leading to compounding mistakes (error accumulation). Alternatives: (1) Scheduled sampling โ€” gradually replace ground truth with model predictions during training, (2) Sequence-level training โ€” use REINFORCE/RL objectives that optimize full sequences, (3) Curriculum learning โ€” start with teacher forcing and gradually reduce it. In practice, teacher forcing works well enough for large models because they become accurate enough that the distribution mismatch is small.
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** What is curriculum learning in the context of pre-training? Does the order of training data matter?
Answer Curriculum learning presents training data in a structured order โ€” typically starting with
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** What causes loss spikes during pre-training and how are they handled in practice?
Answer Loss spikes are sudden increases in training loss, common in large-scale pre-training. Causes: (1) bad data batches โ€” corrupted or extremely out-of-distribution samples, (2) learning rate too high โ€” especially near the beginning or during warmup, (3) numerical instability โ€” fp16/bf16 overflow, especially in attention logits or layer norms, (4) hardware failures โ€” GPU errors causing NaN gradients. Mitigations: (1) gradient clipping (typically max norm 1.0), (2) skipping batches with anomalous loss, (3) bf16 instead of fp16 (larger dynamic range), (4) restarting from a checkpoint before the spike with a lower learning rate, (5) z-loss regularization (PaLM) that penalizes large logits. Google
## Further Reading - [Language Models are Unsupervised Multitask Learners (GPT-2)](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) Radford et al. 2019 โ€” showed large-scale autoregressive pretraining produces strong zero-shot performance. - [Training Compute-Optimal Large Language Models (Chinchilla)](https://arxiv.org/abs/2203.15556) Hoffmann et al. 2022 โ€” proved most LLMs were undertrained. Optimal ratio: ~20 tokens per parameter. - [Scaling Laws for Neural Language Models](https://arxiv.org/abs/2001.08361) Kaplan et al. 2020 โ€” empirical power laws relating compute, data, and parameters to loss. Foundation of modern training budgets. - [Andrej Karpathy โ€” Let](https://www.youtube.com/watch?v=kCc8FmEb1nY) Implements GPT pretraining end-to-end on Shakespeare โ€” best hands-on companion to understanding the training loop. - [Lilian Weng](https://lilianweng.github.io/) Deep technical posts on LLM pretraining, scaling, and optimization โ€” essential reference for training infrastructure. - [Language Models are Few-Shot Learners (GPT-3)](https://arxiv.org/abs/2005.14165) Brown et al. 2020 โ€” the GPT-3 paper showing that large-scale pretraining enables strong few-shot task performance without fine-tuning. - [The Llama-3 Herd of Models](https://arxiv.org/abs/2407.21783) Meta 2024 โ€” detailed pretraining recipe for Llama-3, including data curation, 15T tokens, and the multi-stage training pipeline. ## Related Backpropagation ยท Optimizers ยท Data Curation ยท Scaling Laws ยท GPU & Mixed Precision --- --- title: "Data Curation" part: "Training" number: 13 emoji: "๐Ÿ—ƒ๏ธ" subtitle: "FineWeb, filtering, dedup โ€” data quality beats data quantity" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ—ƒ๏ธ Data Curation > FineWeb, filtering, dedup โ€” data quality beats data quantity > [!question] Key Question > LIMA trained on 1,000 examples and matched GPT-3.5 โ† Pre-training & Loss | โ†’ Scaling Laws ## Key Insights > [!tip] Insight > LIMA (2023) showed that 1,000 carefully chosen SFT examples {" "} can be competitive with much larger low-quality instruction sets. Pre-training already teaches most of the knowledge; SFT mainly teaches format and style. Quality of alignment data can matter more than raw example count. > [!tip] Insight > Typical settings: 5-gram shingles, 128 hash functions, Jaccard threshold 0.8. {" "} This catches template pages, mirrors, and scraped content while preserving legitimate similar-but-different documents. > [!tip] Insight > The trend is clear: datasets are growing (300B to 15T tokens in 3 years), but filtering is also getting stricter. More raw data in does not automatically mean better pretraining data out. The pipeline โ€” not just the crawl โ€” is a major competitive advantage. ## Code Examples ```typescript type RawDocument = { id: string; text: string; langConfidence: number; symbolRatio: number; repetitionRatio: number; domain: "web" | "code" | "books" | "wiki" | "other"; }; function curateDataset(rawDocuments: RawDocument[]) { // Stage 1: language detection let docs = rawDocuments.filter( (doc) => fastTextLangDetect(doc.text) === "en" && doc.langConfidence > 0.65, ); // Stage 2: quality filtering const qualityModel = loadClassifier("quality-scorer"); docs = docs.filter( (doc) => qualityModel.score(doc.text) > 0.5 && doc.text.split(/\s+/).length > 50 && doc.symbolRatio < 0.1 && doc.repetitionRatio < 0.3, ); // Stage 3: deduplication (MinHash + LSH) const minhashIndex = new MinHashLSH({ threshold: 0.8, numPerm: 128 }); const uniqueDocs: RawDocument[] = []; for (const doc of docs) { const signature = computeMinhash(doc.text, { ngram: 5 }); if (!minhashIndex.query(signature).length) { minhashIndex.insert(doc.id, signature); uniqueDocs.push(doc); } } // Stage 4: benchmark decontamination docs = uniqueDocs.filter( (doc) => !has13GramOverlap(doc.text, BENCHMARK_SET), ); // Stage 5: domain mixing return sampleByDomain(docs, { web: 0.5, code: 0.15, books: 0.15, wiki: 0.1, other: 0.1, }); } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** Design a data pipeline for pre-training a 70B parameter LLM. What stages would you include and why?
Answer A production pipeline: (1) Crawl โ€” Common Crawl or custom scraper, (2) Extraction โ€” HTML to text with boilerplate removal (trafilatura, resiliparse), (3) Language filtering โ€” fasttext lid model, keep target languages, (4) Quality filtering โ€” perplexity filter (small LM trained on Wikipedia), heuristics (text length, symbol ratio, repetition), classifier trained on curated vs. random web data, (5) Deduplication โ€” exact dedup (URL + hash) then fuzzy dedup (MinHash LSH at document level, n-gram dedup at paragraph level), (6) PII removal โ€” regex + NER for emails, phone numbers, (7) Toxicity filtering โ€” classifier, but careful not to remove all discussion of sensitive topics, (8) Domain mixing โ€” blend web, code, books, papers, Wikipedia at target ratios. The key insight: each stage can be done independently and parallelized with MapReduce. FineWeb processes 15T tokens this way.
### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** Why is deduplication critical for LLM training? What happens without it?
Answer Without dedup, (1) the model memorizes duplicated content, reducing generalization, (2) training loss is artificially low on duplicates (the model
### โ˜…โ˜…โ˜† _(Meta, OpenAI)_ **Q:** Can synthetic data replace real data for pre-training? What are the limits?
Answer Partially. Microsoft
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** How do data mixing ratios affect model performance? How would you determine the optimal mix?
Answer Mixing ratios dramatically impact downstream capabilities. More code data improves reasoning and code generation but can hurt conversational ability. More books/papers improve factual knowledge. Key findings: (1) Llama-1 used ~67% web, 15% code, rest books/Wikipedia/papers. Llama-3 significantly increased code and math data and scaled total tokens to 15T, but exact ratios are not publicly documented. (2) DoReMi (Xie et al., 2023) uses a small proxy model to automatically optimize domain weights โ€” upweight domains where the model is worse, downweight easy domains. (3) The Pile used 22 diverse sources with manually tuned weights. (4) You can do small-scale ablations: train 1B models with different mixes, evaluate on target benchmarks, then scale up the best mixture. The optimal mix depends on your target use case.
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** What is benchmark contamination and how do you detect and prevent it?
Answer Benchmark contamination occurs when test set examples appear in training data, inflating evaluation scores. Detection: (1) n-gram overlap โ€” check if 13-gram or longer matches exist between training data and benchmarks (GPT-3 paper method), (2) canary strings โ€” embed unique strings in test sets and check if the model memorizes them, (3) perplexity analysis โ€” if the model has suspiciously low perplexity on specific benchmark examples but not similar non-benchmark examples. Prevention: (1) decontamination step โ€” remove training documents that overlap with major benchmarks, (2) hold-out decontamination โ€” remove any document with significant overlap, (3) use dynamic benchmarks that post-date the training data. This is a growing problem: many public datasets contain benchmark data, and as training corpora grow, contamination becomes harder to avoid.
### โ˜…โ˜…โ˜† _(Meta, Google)_ **Q:** Explain LIMA
Answer LIMA (Less Is More for Alignment, 2023) showed that a 65B Llama model fine-tuned on just 1,000 high-quality examples performed comparably to models trained on 52K+ examples (Alpaca, Databricks-dolly). The key insight: pre-training already teaches the model almost everything โ€” SFT just teaches the format and style of interaction. With 1,000 diverse, high-quality examples covering different tasks and response styles, the model learns the
## Further Reading - [The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale](https://arxiv.org/abs/2406.17557) Penedo et al. 2024 โ€” HuggingFace - [LIMA: Less Is More for Alignment](https://arxiv.org/abs/2305.11206) Zhou et al. 2023 โ€” 1,000 carefully curated examples match far larger datasets. Quality over quantity for SFT. - [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027) Gao et al. 2020 โ€” influential open dataset combining 22 diverse sources. Used to train GPT-Neo and GPT-J. - [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) Gunasekar et al. 2023 โ€” Microsoft - [Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research](https://arxiv.org/abs/2402.00159) AI2 2024 โ€” describes the full curation pipeline for OLMo - [Deduplicating Training Data Makes Language Models Better](https://arxiv.org/abs/2107.06499) Lee et al. 2022 โ€” rigorous study showing deduplication improves perplexity and reduces verbatim memorization risk across multiple LMs. ## Related Backpropagation ยท Optimizers ยท Pre-training & Loss ยท Scaling Laws ยท GPU & Mixed Precision --- --- title: "Scaling Laws" part: "Training" number: 14 emoji: "๐Ÿ“ˆ" subtitle: "How big? How much data? Chinchilla has the answer" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ“ˆ Scaling Laws > How big? How much data? Chinchilla has the answer > [!question] Key Question > Why Llama-2 beats GPT-3 with half the parameters โ† Data Curation | โ†’ GPU & Mixed Precision ## Key Insights > [!tip] Insight > Scaling laws turned LLM training from alchemy into engineering. You can now predict the final loss โ€” and therefore the compute budget โ€” before training a single step. This is why labs run small-scale experiments first and extrapolate. > [!tip] Insight > Rule of thumb: train on ~20 tokens per parameter. A 7B model needs ~140B tokens. A 70B model needs ~1.4T tokens. Modern practice pushes beyond this for inference efficiency. > [!tip] Insight > The Chinchilla insight changed the industry: GPT-3 was 3ร— too large for its data budget. For the same compute, a 70B model trained on 1.4T tokens beats a 175B model trained on 300B tokens. > [!tip] Insight > Notice the trend: tokens/param ratio keeps increasing. Labs are intentionally over-training relative to Chinchilla because inference cost dominates. A smaller, longer-trained model is cheaper to serve at scale. ## Code Examples ```python # Chinchilla scaling law: C = 6ND, D_opt = 20 * N_opt # Substituting: C = 6 * N * 20N = 120Nยฒ โ†’ N_opt = sqrt(C / 120) def chinchilla_optimal(flops_budget): """Given a FLOP budget, compute optimal model size and data.""" N = (flops_budget / 120) ** 0.5 # optimal params D = 20 * N # optimal tokens (20 tokens per param) return {"params": N, "tokens": D, "flops": flops_budget} # Example: 1e24 FLOPs (roughly Chinchilla's budget) result = chinchilla_optimal(1e24) # โ†’ ~91B params, ~1.8T tokens # GPT-3 budget: 3.15e23 FLOPs gpt3 = chinchilla_optimal(3.15e23) # โ†’ ~51B params, ~1.0T tokens # GPT-3 used 175B params + 300B tokens โ€” model was ~3.4ร— too large for the data budget ``` ```python import math def get_lr(step, warmup_steps, total_steps, max_lr, min_lr): """Linear warmup + cosine decay schedule.""" if step < warmup_steps: # Linear warmup return max_lr * step / warmup_steps # Cosine decay progress = (step - warmup_steps) / (total_steps - warmup_steps) return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress)) # Example: GPT-3 style schedule (units are steps, not tokens) # max_lr=6e-5, min_lr=6e-6 # warmup_steps=375, total_steps=300_000 # (assumes batch_size=1M tokens: 375M-token warmup / 1M = 375 steps) ``` ```python # Chinchilla scaling loss: L(N, D) = E + A/N^alpha + B/D^beta # Hoffmann et al. 2022 fitted constants E = 1.69 # irreducible entropy loss A = 406.4 B = 410.7 alpha = 0.34 beta = 0.28 def chinchilla_loss(N, D): """Predict cross-entropy loss given model params N and data tokens D.""" return E + A / (N ** alpha) + B / (D ** beta) def optimal_allocation(C): """Chinchilla-optimal N and D for a fixed FLOP budget C = 6ND.""" N = (A * alpha / (B * beta)) ** (1 / (alpha + beta)) * (C / 6) ** (beta / (alpha + beta)) D = C / (6 * N) return N, D N_opt, D_opt = optimal_allocation(3.15e23) # GPT-3 compute budget print(f"Optimal: {N_opt/1e9:.1f}B params, {D_opt/1e12:.1f}T tokens") print(f"Predicted loss: {chinchilla_loss(N_opt, D_opt):.3f}") ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Explain the Chinchilla scaling law. How does it differ from the original Kaplan scaling law?
Answer Kaplan et al. (2020) found that loss scales as a power law of model size: L(N) ~ N^{-0.076}. They suggested scaling model size faster than data, leading to large but undertrained models like GPT-3 (175B params, only 300B tokens). Chinchilla (Hoffmann et al., 2022) showed that for a fixed compute budget, params and data should scale equally: N_opt ~ C^{0.5} and D_opt ~ C^{0.5}. This means GPT-3 should have been trained on ~3.5T tokens, not 300B. Chinchilla (70B, 1.4T tokens) outperformed the 4x larger Gopher (280B, 300B tokens) using the same compute.
### โ˜…โ˜…โ˜† _(Meta, OpenAI)_ **Q:** Why does Llama-2 70B outperform GPT-3 175B despite being smaller?
Answer GPT-3 was trained before Chinchilla scaling laws were understood. It used 175B parameters but only 300B tokens โ€” severely undertrained by Chinchilla standards (optimal would be ~3.5T tokens). Llama-2 70B was trained on 2T tokens โ€” far beyond the Chinchilla-optimal amount for its size. This
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** Derive the approximate training compute formula C = 6ND. Where does the 6 come from?
Answer For each token in training: (1) Forward pass: each parameter participates in ~1 multiply-add = 2 FLOPs per param, (2) Backward pass: computing gradients requires ~2x the forward FLOPs (one pass for loss gradient, one for weight gradient) = 4 FLOPs per param. Total: 6 FLOPs per parameter per token. For N parameters and D tokens: C = 6ND. Example: GPT-3 with N=175B and D=300B: C = 6 * 175e9 * 300e9 = 3.15e23 FLOPs. This is an approximation โ€” it ignores embedding layers, attention masking overhead, and activation recomputation. But it
### โ˜…โ˜…โ˜† _(Meta, Databricks)_ **Q:** What is quantization and how do FP16, INT8, and INT4 compare for inference?
Answer Quantization reduces the precision of model weights to use less memory and compute. FP16/BF16: 2 bytes per param, standard training precision, negligible quality loss. INT8: 1 byte per param, 2x memory reduction, <1% quality degradation for most models. Used by LLM.int8() (Dettmers et al.). INT4: 0.5 bytes per param, 4x memory reduction from FP16, 1-3% quality loss. Used by GPTQ, AWQ, and QLoRA. For a 70B model: FP16 = 140GB, INT8 = 70GB, INT4 = 35GB. INT4 fits on a single 48GB GPU. The key tradeoff: aggressive quantization (INT4) enables serving on cheaper hardware but may degrade quality on reasoning-heavy tasks.
### โ˜…โ˜…โ˜… _(Google, Databricks)_ **Q:** Explain continuous batching and PagedAttention. Why are they critical for LLM serving?
Answer Continuous batching (Orca, Yu et al. 2022): instead of waiting for all sequences in a batch to finish before starting new ones, insert new requests as soon as any sequence completes. This eliminates the
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** What is the learning rate schedule for training large language models? Why warmup + cosine decay?
Answer Standard schedule: linear warmup for the first 0.1-1% of steps, then cosine decay to ~10% of peak LR. Warmup: at initialization, gradients are noisy and the loss landscape is poorly conditioned. A large LR would cause divergence. Warmup gradually increases the LR, letting the optimizer find a stable region first. Cosine decay: smoothly decreases the LR, allowing the model to fine-tune in a narrower loss basin as training progresses. Why cosine over linear or step decay? Cosine is smoother (no sudden LR drops that cause loss spikes) and empirically produces better final loss. Peak LR scales with batch size: LR ~ sqrt(batch_size). For GPT-3: peak LR = 6e-5, warmup = 375M tokens.
## Further Reading - [Chinchilla: Training Compute-Optimal Large Language Models](https://arxiv.org/abs/2203.15556) DeepMind - [Scaling Laws for Neural Language Models (Kaplan et al.)](https://arxiv.org/abs/2001.08361) The original OpenAI scaling laws paper establishing power-law relationships between compute, data, parameters, and loss. - [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774) OpenAI - [Lilian Weng](https://lilianweng.github.io/) Technical posts on scaling behavior, emergent abilities, and LLM training dynamics. - [Emergent Abilities of Large Language Models](https://arxiv.org/abs/2206.07682) Wei et al. 2022 โ€” documents capabilities that appear unpredictably at scale, raising questions about whether scaling produces continuous or discontinuous improvements. - [Are Emergent Abilities of Large Language Models a Mirage?](https://arxiv.org/abs/2304.15004) Schaeffer et al. 2023 โ€” argues apparent emergence is an artifact of discontinuous evaluation metrics, not a fundamental property of scale. - [Andrej Karpathy โ€” The State of GPT (Microsoft Build 2023)](https://www.youtube.com/watch?v=bZQun8Y4L2A) Covers scaling laws, training recipes, and how compute budgets inform modern LLM development decisions in practice. ## Related Backpropagation ยท Optimizers ยท Pre-training & Loss ยท Data Curation ยท GPU & Mixed Precision --- --- title: "GPU & Mixed Precision" part: "Training" number: 15 emoji: "๐Ÿ”ฅ" subtitle: "CUDA memory hierarchy, fp16/bf16/fp8, loss scaling, and torch.autocast" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ”ฅ GPU & Mixed Precision > CUDA memory hierarchy, fp16/bf16/fp8, loss scaling, and torch.autocast > [!question] Key Question > bf16 training uses half the memory with zero accuracy loss โ€” why wasn't this the default? โ† Scaling Laws | โ†’ Distributed Training ## Key Insights > [!tip] Insight > Attention is memory-bandwidth-bound, not compute-bound. {" "} Standard self-attention reads/writes O(Nยฒ) elements for O(Nยฒ) ops โ€”{" "} arithmetic intensity โ‰ˆ 1 FLOP/byte . An A100's ridge point is{" "} ~156 FLOPs/byte. Flash Attention fixes this by keeping tiles in{" "} SRAM, achieving a{" "} 2โ€“4ร— wall-clock speedup. > [!tip] Insight > Interview trap: bf16 has{" "} less mantissa precision than fp16 (7 vs 10 bits), yet it's preferred for training. Why? Because the 8-bit exponent gives it fp32's dynamic range โ€”{" "} fp16's 5-bit exponent caps at 65,504 , overflow is extremely rare for bf16. Precision matters far less than avoiding NaN explosions during training. > [!tip] Insight > Why bf16 doesn't need this: bf16 shares fp32's 8-bit exponent, so it can represent values down to ~1.2ร—10โปยณโธ. Gradients of 1e-8 are representable. fp16's 5-bit exponent only reaches ~6ร—10โปโธ โ€” borderline at best, zero at worst. > [!tip] Insight > H200 vs H100: Same compute (identical die), but{" "} HBM3e gives 4.8 TB/s vs 3.35 TB/s โ€” a 43% bandwidth increase . This directly speeds up memory-bound workloads like attention and token generation. Compute-bound workloads (large matmuls) see no improvement. ## Interview Questions ### โ˜…โ˜†โ˜† _(Google, Anthropic)_ **Q:** Why is bf16 preferred over fp16 for training large language models?
Answer bf16 has the same 8-bit exponent as fp32, giving it the same dynamic range (~1e-38 to ~3e38). This means gradients and activations never overflow, and loss scaling is unnecessary. fp16 has only a 5-bit exponent (max ~65504), so large activations or gradients overflow to NaN โ€” a common training instability. bf16 does sacrifice mantissa precision (7 bits vs 10 for fp16), but for neural network training the dynamic range matters far more than fractional precision. Google
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Explain the roofline model and what it predicts about attention computation.
Answer The roofline model bounds compute performance by two limits: peak FLOP/s (compute-bound) and peak memory bandwidth ร— arithmetic intensity (memory-bound). Arithmetic intensity = FLOPs / bytes of memory accessed. For an A100: peak ~312 TFLOP/s BF16, peak HBM bandwidth ~2 TB/s. The ridge point is 312e12 / 2e12 โ‰ˆ 156 FLOPs/byte. Operations below this intensity are memory-bound. Standard attention reads/writes O(Nยฒ) elements for O(Nยฒ) operations โ€” intensity ~1, far below the ridge. This means attention is memory-bandwidth-bound, not compute-bound. Flash Attention exploits tiling to keep data in SRAM and dramatically reduce HBM traffic, approaching the compute-bound regime.
### โ˜…โ˜…โ˜† _(Meta, OpenAI)_ **Q:** What is arithmetic intensity, and how does it govern GPU utilization?
Answer Arithmetic intensity (AI) is the ratio of floating-point operations to bytes of memory accessed: AI = FLOPs / bytes. It determines whether a kernel is compute-bound (AI > ridge point) or memory-bandwidth-bound (AI < ridge point). The ridge point = peak FLOP/s รท peak memory bandwidth. For an H100: ~989 TFLOP/s BF16, ~3.35 TB/s HBM โ†’ ridge โ‰ˆ 295 FLOPs/byte. A matrix multiply of large square matrices has AI โ‰ˆ N/2, which for N=4096 is ~2048 โ€” well compute-bound. Element-wise ops (ReLU, layer norm) have AI โ‰ˆ 1 โ€” memory-bound. Profiling tools like Nsight Compute show the measured AI versus the roofline to identify bottlenecks.
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** Why does Flash Attention help with the memory bandwidth bottleneck?
Answer Standard attention materializes the full Nร—N attention matrix in HBM (global GPU memory). For sequence length N=8192 and batch=32, this is 32 ร— 8192ยฒ ร— 2 bytes โ‰ˆ 4 GB per layer โ€” written and read repeatedly (for softmax, dropout, etc.). Flash Attention (Dao et al. 2022) tiles the computation into SRAM-sized blocks (~192KB per SM on A100). It fuses the QKV matmul, softmax, and output matmul into one kernel that streams data through SRAM without ever materializing the full Nร—N matrix in HBM. HBM reads/writes drop from O(Nยฒ) to O(Nยฒdยฒ/M) where M is SRAM size โ€” subquadratic in practice since M is large relative to dยฒ, making the operation ~5โ€“20ร— faster on memory-bandwidth-bound workloads. The math is identical โ€” it
### โ˜…โ˜†โ˜† _(Google, Meta)_ **Q:** How does gradient accumulation help with limited GPU memory, and what are its tradeoffs?
Answer Gradient accumulation lets you simulate a large effective batch size without storing multiple batches
## Further Reading - [Andrej Karpathy โ€” Zero To Hero: Building GPT (Device + Precision chapters)](https://www.youtube.com/watch?v=l8pRSuU81PU) Karpathy - [NVIDIA Mixed Precision Training Guide](https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html) Official NVIDIA guide covering loss scaling, tensor cores, and the AMP workflow with benchmarks. - [FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness](https://arxiv.org/abs/2205.14135) Dao et al. 2022 โ€” the paper that introduced tiled SRAM attention and made the roofline model central to ML systems work. - [FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning](https://arxiv.org/abs/2307.08691) Dao 2023 โ€” extends Flash Attention with improved thread block partitioning, achieving ~2ร— additional speedup on H100. - [Lilian Weng โ€” Large Transformer Model Inference Optimization](https://lilianweng.github.io/posts/2023-01-10-inference-optimization/) Detailed breakdown of memory hierarchy, arithmetic intensity, and precision tradeoffs with worked numerical examples. ## Related Backpropagation ยท Optimizers ยท Pre-training & Loss ยท Data Curation ยท Scaling Laws --- --- title: "Distributed Training" part: "Training" number: 16 emoji: "๐Ÿ–ฅ๏ธ" subtitle: "DDP, ZeRO, FSDP โ€” training across thousands of GPUs" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ–ฅ๏ธ Distributed Training > DDP, ZeRO, FSDP โ€” training across thousands of GPUs > [!question] Key Question > Llama-2 70B took 1.7 million GPU-hours โ€” how do you not waste them? โ† GPU & Mixed Precision | โ†’ Fine-tuning & LoRA ## Key Insights > [!tip] Insight > What to try: Think about which strategy works when the model doesn't fit on one GPU at all (tensor or pipeline), vs. when it fits but you want faster training (data parallel). Real systems combine all three โ€” that's 3D parallelism. > [!tip] Insight > Think of it as splitting along three axes: data parallel splits the batch, tensor parallel splits the layers horizontally (within each layer), pipeline parallel splits layers vertically (across layers). 3D parallelism slices along all three axes simultaneously. > [!tip] Insight > The scale is staggering: Llama-3.1 405B used 16,384 GPUs for almost 2 months. At that scale, hardware failures happen every few hours โ€” fault tolerance and checkpointing are as important as the training algorithm itself. ## Code Examples ```python import os import torch import torch.distributed as dist from torch.nn.parallel import DistributedDataParallel as DDP # Initialize process group (one process per GPU) dist.init_process_group("nccl") local_rank = int(os.environ["LOCAL_RANK"]) torch.cuda.set_device(local_rank) # Wrap model with DDP โ€” handles gradient sync automatically model = MyModel().cuda(local_rank) model = DDP(model, device_ids=[local_rank]) # Training loop is identical to single-GPU for batch in dataloader: loss = model(batch).loss loss.backward() # DDP all-reduces gradients here optimizer.step() optimizer.zero_grad() ``` ```python import torch from torch.distributed.fsdp import ( FullyShardedDataParallel as FSDP, MixedPrecision, ShardingStrategy, ) from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy # FSDP shards parameters, gradients, and optimizer states model = FSDP( MyModel().cuda(), sharding_strategy=ShardingStrategy.FULL_SHARD, # ZeRO-3 # SHARD_GRAD_OP = ZeRO-2, NO_SHARD = DDP auto_wrap_policy=size_based_auto_wrap_policy, mixed_precision=MixedPrecision( param_dtype=torch.bfloat16, reduce_dtype=torch.float32, ), ) # Parameters gathered on-demand for forward/backward, then freed ``` ```python # DDP training step โ€” full end-to-end with gradient sync import torch, os import torch.distributed as dist from torch.nn.parallel import DistributedDataParallel as DDP dist.init_process_group("nccl") rank = int(os.environ["LOCAL_RANK"]) torch.cuda.set_device(rank) model = MyTransformer().cuda(rank) model = DDP(model, device_ids=[rank]) # wraps gradient all-reduce optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4) for batch in dataloader: input_ids = batch["input_ids"].cuda(rank) labels = batch["labels"].cuda(rank) loss = model(input_ids, labels=labels).loss loss.backward() # DDP all-reduces gradients here torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() optimizer.zero_grad() ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Compare DDP and FSDP. When would you choose one over the other?
Answer DDP (DistributedDataParallel) replicates the entire model on each GPU and only synchronizes gradients via all-reduce after each backward pass. It
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Explain the pipeline bubble problem and how micro-batches help reduce it.
Answer In pipeline parallelism, the model is split across GPUs by layers. With naive scheduling, GPU k is idle while GPUs 0..k-1 process the forward pass, and again idle during the backward pass of later stages. This idle time is the
### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** What is activation checkpointing and what tradeoff does it make?
Answer Activation checkpointing (gradient checkpointing) trades compute for memory. Normally, all intermediate activations are stored during the forward pass for use during backpropagation. For large models, this can use more memory than the model parameters themselves. With checkpointing, only activations at selected layers are saved; the others are recomputed during the backward pass. This reduces activation memory from O(L) to O(sqrt(L)) with optimal placement, but increases compute by ~33% (one extra forward pass). Critical for training large models where activation memory is the bottleneck.
### โ˜…โ˜†โ˜† _(Google, Meta)_ **Q:** Why does gradient accumulation help in distributed training, and how does it relate to effective batch size?
Answer Gradient accumulation lets you simulate larger batch sizes without needing more GPU memory. Instead of updating weights after every micro-batch, you accumulate gradients over K steps, then do one optimizer step. The effective batch size becomes: batch_per_gpu * num_gpus * accumulation_steps. This matters because: (1) large batches are more stable for training large models (LLMs often use 1-4M tokens per batch), (2) communication cost is amortized โ€” all-reduce happens once per K steps, not every step, (3) you can hit target batch sizes even with limited GPU memory. The tradeoff: K steps of latency before each parameter update.
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** Design a 3D parallelism strategy for training a 175B parameter model on 1024 GPUs. Explain your choices.
Answer A 175B model (like GPT-3) with fp16 needs ~350GB for parameters alone, plus optimizer states and activations. Strategy: (1) Tensor parallelism (TP=8) within each 8-GPU node โ€” splits individual layers across GPUs connected by fast NVLink (~600 GB/s). (2) Pipeline parallelism (PP=16) across 16 stages โ€” splits 96 transformer layers into groups of 6, using micro-batches to reduce bubble. (3) Data parallelism (DP=8) with ZeRO-1 โ€” 1024/(8*16)=8 replicas, sharding optimizer states. This gives 8*16*8=1024 GPUs. TP is kept within nodes because it requires all-to-all communication on every layer. PP spans nodes within a rack. DP spans racks with infrequent gradient syncs.
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** How do you handle fault tolerance in large-scale distributed training?
Answer At 1000+ GPU scale, hardware failures are routine (mean time between failures ~hours). Key strategies: (1) Periodic checkpointing โ€” save model state, optimizer state, and data loader position every N steps to persistent storage. (2) Elastic training โ€” frameworks like TorchElastic can restart training when nodes fail, redistributing work across remaining GPUs. (3) Redundant computation โ€” some systems duplicate critical pipeline stages. (4) Asynchronous checkpointing โ€” overlap checkpoint writes with training to minimize pause time. (5) In-memory checkpointing โ€” save to other GPUs
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** How would you debug a single slow rank in a 1,000-GPU training job?
Answer A single straggler blocks all-reduce, slowing the entire job. Debugging steps: (1) Check input pipeline โ€” data loading skew across ranks (one rank reading from a slow disk/shard). Fix: pre-shuffle data, balance shard sizes. (2) Check NCCL topology โ€” one rank may have a bad NIC, cross-switch connection, or PCIe bottleneck. Use NCCL_DEBUG=INFO logs. (3) Profile per-rank โ€” compare kernel timing across ranks. The straggler will show longer compute or communication. (4) Check for load imbalance โ€” if using MoE, one rank
## Further Reading - [ZeRO: Memory Optimizations Toward Training Trillion Parameter Models](https://arxiv.org/abs/1910.02054) Microsoft - [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053) NVIDIA - [PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel](https://arxiv.org/abs/2304.11277) Production lessons from scaling FSDP across thousands of GPUs at Meta. - [GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism](https://arxiv.org/abs/1811.06965) Huang et al. 2019 โ€” introduces micro-batching to reduce pipeline bubble overhead, the foundation of modern pipeline parallelism. - [Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM](https://arxiv.org/abs/2104.04473) Narayanan et al. 2021 โ€” combining data, tensor, and pipeline parallelism (3D parallelism) with concrete recipes for scaling to trillions of parameters. - [Lilian Weng โ€” Large Transformer Model Inference Optimization](https://lilianweng.github.io/posts/2023-01-10-inference-optimization/) Detailed breakdown of distributed training and inference strategies โ€” parallelism, memory, and communication tradeoffs with worked examples. ## Related Backpropagation ยท Optimizers ยท Pre-training & Loss ยท Data Curation ยท Scaling Laws --- --- title: "Fine-tuning & LoRA" part: "Training" number: 17 emoji: "๐Ÿ”ง" subtitle: "Adapt a model with 0.1% of parameters" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ”ง Fine-tuning & LoRA > Adapt a model with 0.1% of parameters > [!question] Key Question > LoRA trains 0.1% of parameters and matches full fine-tuning โ† Distributed Training | โ†’ SFT & Post-Training Pipeline ## Key Insights > [!tip] Insight > Think of LoRA as adding a small correction lens to a telescope. The main mirror (pre-trained weights) stays fixed. The correction lens (LoRA matrices) is tiny but precisely shaped to fix the specific aberration (task adaptation) you care about. > [!tip] Insight > B initializes to zero so the adapter starts as a no-op (ฮ”W = BA = 0) โ€” training begins from the pre-trained model's exact output, not a random perturbation. > [!tip] Insight > LoRA is typically applied to the attention projection matrices (Q, K, V, O). For a model with layers and 4 projections each, total trainable params ={" "} . > [!tip] Insight > Llama-2 7B full fine-tuning requires ~112GB VRAM with standard mixed-precision training (FP16 weights + FP32 Adam states). The ~56GB figure in the table assumes 2-GPU FSDP sharding. QLoRA cuts single-GPU memory to ~6GB โ€” the same task on a consumer RTX 4090. For 70B models: full FT needs ~560GB (8ร—A100), QLoRA fits in ~48GB (1ร—A100). > [!tip] Insight > LoRA adapters are tiny files (10โ€“50MB) that can be hot-swapped at serving time. This enables multi-tenant setups: one base model, many task-specific adapters loaded on demand.{" "} Libraries like LoRAX and S-LoRA serve hundreds of LoRA adapters concurrently from a single GPU , making per-customer fine-tuned models economically viable. ## Code Examples ```python class LoRALinear(nn.Module): def __init__(self, in_dim, out_dim, rank=8, alpha=16): super().__init__() self.W = nn.Linear(in_dim, out_dim, bias=False) self.W.weight.requires_grad_(False) # Freeze base self.A = nn.Linear(in_dim, rank, bias=False) self.B = nn.Linear(rank, out_dim, bias=False) nn.init.zeros_(self.B.weight) # B=0 โ†’ adapter is a no-op at init (ฮ”W=0) self.scale = alpha / rank def forward(self, x): return self.W(x) + self.B(self.A(x)) * self.scale ``` ```python class LoRALinear(nn.Module): def __init__(self, in_dim, out_dim, rank=16, alpha=32): super().__init__() self.W = nn.Linear(in_dim, out_dim, bias=False) self.W.weight.requires_grad = False # freeze base self.A = nn.Linear(in_dim, rank, bias=False) # down-project self.B = nn.Linear(rank, out_dim, bias=False) # up-project self.scale = alpha / rank nn.init.kaiming_uniform_(self.A.weight) nn.init.zeros_(self.B.weight) # start at zero delta def forward(self, x): return self.W(x) + self.B(self.A(x)) * self.scale def merge(self): """Merge adapter into base weight โ€” zero inference overhead.""" self.W.weight.data += (self.B.weight @ self.A.weight) * self.scale self.W.weight.requires_grad = False # After merge, A and B can be discarded ``` ```python # LoRA: inject low-rank adapters into attention projections import torch, torch.nn as nn class LoRALinear(nn.Module): """Drop-in replacement for nn.Linear with LoRA adaptation.""" def __init__(self, in_dim, out_dim, rank=8, alpha=16): super().__init__() self.W0 = nn.Linear(in_dim, out_dim, bias=False) self.W0.weight.requires_grad_(False) # freeze base weights self.A = nn.Linear(in_dim, rank, bias=False) self.B = nn.Linear(rank, out_dim, bias=False) nn.init.kaiming_uniform_(self.A.weight) nn.init.zeros_(self.B.weight) # B=0: adapter contributes zero update at init self.scale = alpha / rank def forward(self, x): return self.W0(x) + self.B(self.A(x)) * self.scale # W0 + BA def merge(self): """Merge into W0 for zero-overhead inference.""" self.W0.weight.data += (self.B.weight @ self.A.weight) * self.scale ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Meta, OpenAI)_ **Q:** Explain LoRA. Why does it work despite training so few parameters?
Answer LoRA (Low-Rank Adaptation) freezes the pre-trained weight matrix W and adds a low-rank decomposition W
### โ˜…โ˜…โ˜… _(Meta, Databricks)_ **Q:** What is QLoRA and how does it enable fine-tuning 70B models on a single GPU?
Answer QLoRA combines three techniques: (1) 4-bit NormalFloat (NF4) quantization of the base model โ€” reduces 70B params from 140GB (FP16) to ~35GB (4-bit), (2) LoRA adapters in FP16/BF16 on top of the quantized base, (3) paged optimizers that offload optimizer states to CPU RAM when GPU memory is full. The base model weights are frozen and quantized, so they use minimal GPU memory. Only the small LoRA matrices (typically 0.1% of params) need gradients and optimizer states. This fits a 70B model fine-tuning setup into a single 48GB A6000 or A100 GPU.
### โ˜…โ˜…โ˜† _(OpenAI, Meta)_ **Q:** How do you choose the LoRA rank r? What are the tradeoffs?
Answer Rank r controls the expressiveness of the adapter. Too low (r=1-4): underfitting, the adapter can
### โ˜…โ˜†โ˜† _(Google, OpenAI)_ **Q:** Compare full fine-tuning, LoRA, and prompt tuning. When would you use each?
Answer Full fine-tuning: updates all parameters. Best quality but most expensive, requires multiple GPUs for large models, risk of catastrophic forgetting. Use for: critical production models, large domain shifts, when you have abundant data. LoRA: trains 0.01-1% of params via low-rank adapters. Near full-FT quality at a fraction of the cost. Use for: most practical fine-tuning, domain adaptation, instruction tuning. Prompt tuning: prepends learnable soft tokens to the input, trains only those embeddings (~0.001% of params). Cheapest but least expressive. Use for: simple task adaptation, multi-tenant setups where each customer gets their own soft prompt. The field has largely converged on LoRA/QLoRA as the default approach.
### โ˜…โ˜…โ˜† _(OpenAI, Databricks)_ **Q:** How do you prepare data for fine-tuning? What are common pitfalls?
Answer Data format: instruction/response pairs in the model
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** How do you detect and prevent overfitting during fine-tuning?
Answer Detection: (1) monitor train vs. validation loss โ€” divergence means overfitting, (2) evaluate on held-out prompts qualitatively every N steps, (3) check if the model starts repeating training examples verbatim. Prevention: (1) use LoRA instead of full fine-tuning (implicit regularization from low rank), (2) early stopping based on val loss, (3) reduce learning rate (1e-5 to 5e-5 for LoRA, 1e-6 to 2e-5 for full FT), (4) increase dropout on LoRA layers (lora_dropout=0.05-0.1), (5) reduce epochs โ€” most fine-tuning needs only 1-3 epochs, (6) augment/expand the dataset. For OpenAI
## Further Reading - [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) The original LoRA paper โ€” freeze base weights, train low-rank decomposition matrices A and B. - [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314) 4-bit NormalFloat quantization + LoRA adapters, enabling 65B model fine-tuning on a single 48GB GPU. - [PEFT: Parameter-Efficient Fine-Tuning (Hugging Face docs)](https://huggingface.co/docs/peft) Hugging Face library implementing LoRA, prefix tuning, prompt tuning, and other PEFT methods. - [Lilian Weng](https://lilianweng.github.io/) In-depth posts on fine-tuning methods, PEFT variants, and alignment techniques. - [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691) Lester et al. 2021 โ€” prompt tuning trains only soft prompt tokens while freezing the entire model. Matches full fine-tuning at 11B scale. - [Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning](https://arxiv.org/abs/2303.15647) Xu et al. 2023 โ€” comprehensive survey comparing LoRA, adapters, prefix tuning, and prompt tuning across benchmarks. - [Andrej Karpathy โ€” The State of GPT (Microsoft Build 2023)](https://www.youtube.com/watch?v=bZQun8Y4L2A) Covers the SFT and RLHF fine-tuning pipeline end-to-end, including practical data requirements and training tips. ## Related Backpropagation ยท Optimizers ยท Pre-training & Loss ยท Data Curation ยท Scaling Laws --- --- title: "SFT & Post-Training Pipeline" part: "Training" number: 18 emoji: "๐ŸŽฏ" subtitle: "Loss masking, chat templates, rejection sampling, distillation" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐ŸŽฏ SFT & Post-Training Pipeline > Loss masking, chat templates, rejection sampling, distillation > [!question] Key Question > The recipe between pre-training and RLHF that nobody talks about โ† Fine-tuning & LoRA | โ†’ RL Foundations ## Key Insights > [!tip] Insight > The complete modern recipe: Pretrain (trillions of tokens) โ†’ SFT ( 1Kโ€“50K high-quality examples ) โ†’ Rejection Sampling (amplify quality) โ†’ DPO/RLHF (align to preferences) โ†’ Eval โ†’ Deploy. Each step is cheaper than the last but contributes disproportionately to user experience. > [!tip] Insight > In practice, set labels to -100 for masked positions โ€” PyTorch's CrossEntropyLoss ignores these by default. No need for an explicit mask tensor. > [!tip] Insight > The number of SFT examples across production models ranges from 1K to 50K โ€” orders of magnitude less than pre-training data. SFT teaches style, not knowledge. Quality and diversity of examples matter far more than raw count. ## Code Examples ```python # SFT training loop with instruction masking # Only compute loss on assistant tokens (labels=-100 for user/system turns) import torch, torch.nn.functional as F def sft_loss(model, input_ids, labels, attention_mask): """labels: -100 for user/system tokens, token_id for assistant tokens.""" logits = model(input_ids, attention_mask=attention_mask).logits # (B, T, V) shift_logits = logits[:, :-1].contiguous() # predict next token shift_labels = labels[:, 1:].contiguous() # ignore_index=-100 automatically masks non-assistant positions return F.cross_entropy( shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1), ignore_index=-100, ) # Mask helper: only keep assistant-turn token ids in labels def mask_user_turns(input_ids, assistant_ranges): labels = input_ids.clone() mask = torch.zeros_like(input_ids, dtype=torch.bool) for start, end in assistant_ranges: mask[:, start:end] = True labels[~mask] = -100 return labels ``` ```python import torch import torch.nn.functional as F def sft_train_step(model, batch, optimizer): """SFT step with loss masking on assistant tokens only.""" input_ids = batch["input_ids"] # (B, T) labels = batch["labels"] # (B, T) โ€” user/system tokens = -100 attention_mask = batch["attention_mask"] # (B, T) # Forward pass outputs = model(input_ids=input_ids, attention_mask=attention_mask) logits = outputs.logits # (B, T, V) # Shift for next-token prediction shift_logits = logits[:, :-1, :].contiguous() shift_labels = labels[:, 1:].contiguous() # Cross-entropy with ignore_index=-100 (auto-masks user turns) loss = F.cross_entropy( shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1), ignore_index=-100, ) optimizer.zero_grad() loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0) optimizer.step() return loss.item() # Label preparation: mask everything except assistant responses def prepare_labels(input_ids, assistant_mask): """Set labels to -100 for non-assistant tokens.""" labels = input_ids.clone() labels[~assistant_mask] = -100 return labels ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** What is the role of SFT vs RLHF in the post-training pipeline? Why do you need both?
Answer SFT teaches the model format and instruction-following โ€” how to respond as an assistant rather than completing text. RLHF/DPO teaches which responses humans prefer among those the model can already produce. SFT narrows the output distribution to reasonable responses; RLHF further refines within that space. Without SFT, the model doesn
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Explain loss masking in SFT. Why do we only compute loss on assistant turns, not user turns?
Answer In a multi-turn conversation, the training data contains both user messages and assistant responses. We mask (zero out) the loss on user tokens because the model should not learn to generate user messages โ€” it only needs to generate assistant responses. Training on user turns wastes gradient signal on an irrelevant distribution (human queries) and can cause the model to generate user-like text. Technically, we set the labels to -100 (ignored by cross-entropy) for all non-assistant tokens, including system prompts and special tokens.
### โ˜…โ˜†โ˜† _(OpenAI, Meta)_ **Q:** Compare ChatML and Llama chat template formats. Why do chat templates matter?
Answer ChatML uses <|im_start|>role\\n content <|im_end|> delimiters. Llama uses [INST] and <> tags. Templates matter because the model learns to condition on these exact tokens during SFT โ€” using the wrong template at inference means the model sees an out-of-distribution input. Mismatched templates cause degraded performance, ignored system prompts, or the model treating instructions as content to complete. This is why you must use the exact template the model was fine-tuned with.
### โ˜…โ˜…โ˜† _(Meta, Anthropic)_ **Q:** How does rejection sampling improve post-training? Walk through the process.
Answer For each prompt, generate N completions (e.g., N=64) from the current model, score each with a reward model, and keep only the highest-scoring one. This creates a dataset of (prompt, best_response) pairs that are higher quality than what the model produces on average. These pairs can be used for further SFT or as the
### โ˜…โ˜…โ˜† _(Meta, Google)_ **Q:** Data quality vs quantity in SFT: LIMA used 1,000 examples and matched models trained on 52K. Why?
Answer The LIMA paper
### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** When should you SFT a model vs just use prompt engineering? What are the tradeoffs?
Answer Prompt engineering when: (1) you have <100 examples, (2) the task is well-described in natural language, (3) you need to iterate quickly, (4) the base model is already good at the task. SFT when: (1) you need consistent format/behavior that prompting can
## Further Reading - [Training language models to follow instructions (InstructGPT)](https://arxiv.org/abs/2203.02155) The foundational paper on the SFT + RLHF pipeline. ~13K SFT demonstrations, ~33K preference comparisons. - [LIMA: Less Is More for Alignment](https://arxiv.org/abs/2305.11206) Shows 1,000 carefully curated SFT examples can match far larger datasets. Quality over quantity. - [Alpaca: A Strong, Replicable Instruction-Following Model](https://crfm.stanford.edu/2023/03/13/alpaca.html) 52K instruction-response pairs generated via Self-Instruct from GPT-3.5. Popularized SFT for open-source. - [Self-Instruct: Aligning LMs with Self-Generated Instructions](https://arxiv.org/abs/2212.10560) The method behind Alpaca โ€” use a model to generate its own instruction-tuning data. - [Llama 2: Open Foundation and Fine-Tuned Chat Models](https://arxiv.org/abs/2307.09288) Detailed post-training recipe: SFT (27K examples), rejection sampling, 5 rounds of RLHF. - [Orca 2: Teaching Small Language Models How to Reason](https://arxiv.org/abs/2311.11045) Mitra et al. 2023 โ€” smaller models trained on carefully synthesized step-by-step reasoning data can match much larger models; shows why data quality beats scale for SFT. - [Tulu 3: Pushing Frontiers in Open Language Model Post-Training](https://arxiv.org/abs/2411.15124) Lambert et al. 2024 โ€” end-to-end open post-training recipe covering data curation, SFT, DPO, and RLVR with full ablations and reproducible results. - [Karpathy โ€” State of GPT (YouTube)](https://www.youtube.com/watch?v=bZQun8Y4L2A) Clear walkthrough of how pretraining, SFT, and RLHF chain together โ€” intuition for why each stage matters and what data is needed. ## Related Backpropagation ยท Optimizers ยท Pre-training & Loss ยท Data Curation ยท Scaling Laws --- --- title: "RL Foundations" part: "Training" number: 19 emoji: "๐ŸŽฎ" subtitle: "MDPs, policy gradient, PPO โ€” the math before RLHF" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐ŸŽฎ RL Foundations > MDPs, policy gradient, PPO โ€” the math before RLHF > [!question] Key Question > Policy = LLM, action = next token, reward = human preference โ† SFT & Post-Training Pipeline | โ†’ RLHF & Reward Models ## Key Insights > [!tip] Insight > Key connection: In standard RL (games, robotics), the environment gives intermediate rewards. In LLM training, reward comes only at the end of the full generation โ€” making credit assignment (which tokens were good?) much harder. > [!tip] Insight > REINFORCE asks: "did the whole trajectory get high reward?" and nudges all actions equally. PPO asks: "was this specific action better than average?" and makes conservative updates. That's why PPO is the standard for LLM training โ€” it's more sample-efficient and stable. > [!tip] Insight > The trend:{" "} PPO (InstructGPT, 2022) was the gold standard but required 4 models in memory (policy, reference, reward, value) . GRPO (DeepSeek, 2024){" "} dropped the value model. DPO dropped the reward model too. Each simplification trades some theoretical generality for practical stability and lower cost. ## Code Examples ```python def reinforce_loss(log_probs, rewards, gamma=1.0): """REINFORCE with baseline (mean return).""" # log_probs: [T] log-probabilities of chosen actions # rewards: [T] reward at each step (often 0 except last) # Compute returns (cumulative future reward) returns = torch.zeros_like(rewards) R = 0 for t in reversed(range(len(rewards))): R = rewards[t] + gamma * R returns[t] = R # Subtract baseline to reduce variance baseline = returns.mean() advantages = returns - baseline # Policy gradient: -log_prob * advantage loss = -(log_probs * advantages).mean() return loss ``` ```python def ppo_loss(new_log_probs, old_log_probs, advantages, epsilon=0.2): """PPO clipped surrogate objective.""" # Probability ratio: pi_new / pi_old ratio = torch.exp(new_log_probs - old_log_probs) # Clipped ratio clipped_ratio = torch.clamp(ratio, 1 - epsilon, 1 + epsilon) # PPO loss: min(ratio * A, clipped_ratio * A) loss = -torch.min( ratio * advantages, clipped_ratio * advantages, ).mean() return loss ``` ```python # REINFORCE with advantage (policy gradient for LLM fine-tuning) import torch def reinforce_step(model, prompt_ids, reward, gamma=1.0): """Single REINFORCE update. reward: scalar for the full generation.""" # Sample a completion from the current policy output = model.generate(prompt_ids, do_sample=True, max_new_tokens=200) gen_ids = output[:, prompt_ids.size(1):] # generated tokens only # Get log-probabilities for each generated token logits = model(output).logits[:, prompt_ids.size(1)-1:-1] log_probs = torch.log_softmax(logits, dim=-1) token_log_probs = log_probs.gather(-1, gen_ids.unsqueeze(-1)).squeeze(-1) # Advantage = reward - mean_reward_baseline (reduces variance) advantage = reward - reward.mean() # Policy gradient loss: -E[log ฯ€(a|s) * A] loss = -(token_log_probs.sum(-1) * advantage).mean() loss.backward() return loss.item() ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** Formulate language generation as an MDP. What are the states, actions, transitions, and rewards?
Answer State: the prompt + all tokens generated so far (the sequence context). Action: selecting the next token from the vocabulary (|V| ~ 32K-128K actions). Transition: deterministic โ€” appending the chosen token to the sequence (new state = old state + token). Reward: typically 0 for all intermediate steps, with a reward at the end of the sequence from a reward model or human evaluation. This is a large action space, long horizon MDP with sparse rewards โ€” which is why credit assignment is hard and why advantage estimation matters so much. The discount factor gamma is usually 1.0 (undiscounted, finite horizon).
### โ˜…โ˜…โ˜† _(OpenAI, Meta)_ **Q:** Compare REINFORCE and PPO. Why is vanilla REINFORCE impractical for LLM training?
Answer REINFORCE uses the gradient: nabla J = E[nabla log pi(a|s) * R]. The full return R has high variance because it includes all future randomness. This means you need many samples per update, and each update can change the policy drastically (no constraint on update size). PPO fixes both: (1) it uses advantages A_t instead of returns R, subtracting a baseline to reduce variance; (2) the clipped objective prevents large policy changes โ€” if the ratio pi_new/pi_old goes outside [1-epsilon, 1+epsilon], the gradient is clipped. For LLMs, REINFORCE would require thousands of completions per prompt to get stable gradients, while PPO works with 4-16 completions. The clipping also prevents catastrophic forgetting.
### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** Explain advantage estimation. Why use GAE (Generalized Advantage Estimation)?
Answer The advantage A(s,a) = Q(s,a) - V(s) measures how much better action a is compared to the average action from state s. Using advantages instead of raw returns dramatically reduces gradient variance (since V(s) acts as a baseline). GAE (Generalized Advantage Estimation) computes A_t = sum of (gamma*lambda)^l * delta_{t+l} where delta_t = r_t + gamma*V(s_{t+1}) - V(s_t). The lambda parameter trades off bias and variance: lambda=0 gives the 1-step TD advantage (low variance, high bias), lambda=1 gives the Monte Carlo advantage (high variance, low bias). In practice, lambda=0.95 with gamma=1.0 works well for LLM training.
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** What is the role of the KL penalty in RLHF, and how does it relate to PPO
Answer Both the KL penalty and PPO clipping serve the same goal: preventing the policy from changing too much. But they operate differently. The KL penalty adds beta * KL(pi || pi_ref) to the loss, penalizing divergence from the reference (SFT) model โ€” this is a soft constraint. PPO clipping restricts how much the policy can change per update step โ€” this is a hard constraint on the optimization. In RLHF, you typically use BOTH: PPO
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** What is the value function
Answer In PPO, the value function V(s) serves as a baseline for advantage estimation: A_t = R_t - V(s_t). A good baseline reduces variance without introducing bias. However, training a value function for LLMs is expensive โ€” it requires a separate model (or head) the same size as the policy, doubling memory. The value function can also be inaccurate, introducing bias. GRPO (Group Relative Policy Optimization) replaces the learned value function with a simple group-based baseline: for each prompt, sample K completions, compute their rewards, and use the group mean as the baseline. Advantages are (reward - group_mean) / group_std. This is simpler, uses no extra memory, and works well in practice because the per-prompt normalization handles reward scale differences.
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** Why is RL training for LLMs unstable, and what techniques stabilize it?
Answer LLM RL training is unstable for several reasons: (1) Huge action space (32K-128K tokens) with sparse rewards โ€” hard credit assignment. (2) The reward model is imperfect โ€” the policy can exploit its weaknesses (reward hacking). (3) Small policy changes can cause large output distribution shifts (one different token changes the whole generation). (4) The value function is hard to train accurately for long sequences. Stabilization techniques: PPO clipping (limit per-step changes), KL penalty (limit drift from reference), reward model ensembles (reduce exploitation), large batch sizes (reduce gradient variance), gradient clipping, learning rate warmup, and careful beta/epsilon tuning. DeepSeek-R1 found that GRPO was more stable than PPO because removing the value function eliminated a source of approximation error.
## Further Reading - [Proximal Policy Optimization Algorithms (Schulman et al. 2017)](https://arxiv.org/abs/1707.06347) The PPO paper โ€” clipped surrogate objective that became the default RL algorithm for RLHF. - [Reinforcement Learning: An Introduction (Sutton & Barto, 2nd ed.)](http://incompleteideas.net/book/the-book-2nd.html) The definitive RL textbook covering MDPs, policy gradients, temporal-difference learning, and more. - [High-Dimensional Continuous Control Using Generalized Advantage Estimation](https://arxiv.org/abs/1506.02438) Schulman et al. 2016 โ€” introduces GAE, the variance-reduction technique that PPO relies on for stable RLHF training. - [OpenAI Spinning Up in Deep RL](https://spinningup.openai.com/en/latest/) Practical deep RL resource covering policy gradients, PPO, and SAC with working implementations โ€” the best entry point before diving into RLHF. - [Lilian Weng โ€” Policy Gradient Algorithms](https://lilianweng.github.io/posts/2018-04-08-policy-gradient/) Comprehensive derivation of REINFORCE, Actor-Critic, and PPO โ€” essential prerequisite for understanding the RLHF training loop. ## Related Backpropagation ยท Optimizers ยท Pre-training & Loss ยท Data Curation ยท Scaling Laws --- --- title: "RLHF & Reward Models" part: "Training" number: 20 emoji: "๐ŸŽฏ" subtitle: "Teaching models what humans prefer โ€” the 3-stage pipeline" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐ŸŽฏ RLHF & Reward Models > Teaching models what humans prefer โ€” the 3-stage pipeline > [!question] Key Question > InstructGPT used just 40 human labelers to align GPT-3 โ† RL Foundations | โ†’ DPO, GRPO & Alternatives ## Key Insights > [!tip] Insight > Think of RLHF as teaching a student with a rubric (reward model) and a tutor (PPO). DPO is like giving the student paired examples: “this answer is better than that one” โ€” and letting them learn the rubric implicitly. > [!tip] Insight > GRPO is essentially “best-of-N sampling turned into a gradient signal.” The group mean acts as a dynamic baseline โ€” outputs above average get positive advantage, below average get negative. No critic required. > [!tip] Insight > The practical split emerging across labs: AI feedback for volume (millions of examples), human feedback for calibration and adversarial edge cases. Neither alone is sufficient โ€” human feedback anchors the AI's judgment, and AI feedback scales it. > [!tip] Insight > When , no constraint โ€” pure reward maximization (leads to reward hacking). When{" "} , the model can't deviate from the reference at all (no learning). > [!tip] Insight > The 1.3B InstructGPT model outperformed the 175B GPT-3 base model in human preference studies โ€” alignment quality matters more than raw scale. This is the core argument for RLHF as a capability multiplier, not just a safety technique. > [!tip] Insight > The trend is clear: PPO-based RLHF is being replaced by simpler alternatives. DPO for offline training, GRPO for online training, and Constitutional AI for scaling feedback. The core idea โ€” learning from preferences โ€” remains the same. ## Code Examples ```python import torch.nn.functional as F def dpo_loss(pi_logps_w, pi_logps_l, ref_logps_w, ref_logps_l, beta=0.1): """Direct Preference Optimization loss.""" # Log-probability ratios (implicit rewards) pi_ratio_w = pi_logps_w - ref_logps_w # preferred pi_ratio_l = pi_logps_l - ref_logps_l # dispreferred # DPO loss: -log sigmoid(beta * (preferred - dispreferred)) logits = beta * (pi_ratio_w - pi_ratio_l) loss = -F.logsigmoid(logits).mean() return loss ``` ```python import torch def ppo_clip_loss(new_log_probs, old_log_probs, ref_log_probs, advantages, eps=0.2, beta=0.05): """PPO-Clip objective for RLHF. new_log_probs: log ฯ€_ฮธ(y|x) under current policy old_log_probs: log ฯ€_old(y|x) from the rollout policy ref_log_probs: log ฯ€_ref(y|x) from frozen SFT checkpoint advantages: r(x,y) - V(x) โ€” reward minus value baseline """ ratio = (new_log_probs - old_log_probs).exp() clipped = torch.clamp(ratio, 1 - eps, 1 + eps) policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean() # KL divergence penalty against reference (SFT) policy kl_penalty = beta * (new_log_probs - ref_log_probs).mean() return policy_loss + kl_penalty ``` ```python # PPO clipped surrogate objective for RLHF import torch, torch.nn.functional as F def ppo_rlhf_step(policy, ref_policy, reward_model, prompts, eps=0.2, beta=0.05): """One PPO update step in the RLHF loop.""" # 1. Sample completions from current policy with torch.no_grad(): completions = policy.generate(prompts, do_sample=True, max_new_tokens=256) rewards = reward_model(prompts, completions) # scalar per sample old_log_probs = policy.log_prob(completions) # log ฯ€_old ref_log_probs = ref_policy.log_prob(completions) # log ฯ€_ref (frozen SFT) # 2. Compute advantages (reward - value baseline; simplified: reward - mean) advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8) # 3. PPO-clip + KL penalty new_log_probs = policy.log_prob(completions) ratio = (new_log_probs - old_log_probs).exp() clipped = torch.clamp(ratio, 1 - eps, 1 + eps) policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean() kl_loss = beta * (new_log_probs - ref_log_probs).mean() return policy_loss + kl_loss ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** Walk through the full RLHF pipeline. What are the three stages and why is each necessary?
Answer Stage 1: Supervised Fine-Tuning (SFT) โ€” train the base model on high-quality instruction/response pairs so it learns to follow instructions. Stage 2: Reward Model training โ€” collect human preference data (labelers rank multiple outputs for the same prompt) and train a model to predict which output humans prefer. Stage 3: RL optimization (PPO) โ€” use the reward model as a signal to further fine-tune the SFT model, maximizing reward while staying close to the SFT policy (KL penalty). Each stage builds on the previous: SFT gives a reasonable starting point, the reward model captures nuanced human preferences that can
### โ˜…โ˜…โ˜… _(Anthropic, Meta)_ **Q:** Explain DPO and how it differs from PPO-based RLHF. What are its advantages?
Answer DPO (Direct Preference Optimization) skips the reward model entirely. Instead of training a separate reward model and then running RL, DPO directly optimizes the policy using preference pairs. The key insight: the optimal policy under the KL-constrained reward maximization objective has a closed-form relationship to the reward function. DPO rearranges this to create a loss that only depends on the policy
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** What is reward hacking and how does the KL penalty prevent it?
Answer Reward hacking occurs when the RL-optimized model finds outputs that score highly with the reward model but aren
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** What is Constitutional AI (CAI) and how does it reduce reliance on human labelers?
Answer Constitutional AI (Anthropic, 2022) replaces human preference labelers with AI-generated feedback. The process: (1) generate responses, (2) ask the AI to critique its own response against a set of principles (
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** How does GRPO (Group Relative Policy Optimization) differ from PPO?
Answer GRPO (introduced in DeepSeekMath, Shao et al. 2024; later adopted by DeepSeek-R1, 2025) eliminates the critic/value network that PPO requires. Instead of estimating advantages with a learned value function, GRPO samples a group of outputs for each prompt, scores them with the reward model, and computes advantages relative to the group mean. This simplifies training (no value head to train), reduces memory (one fewer model in the pipeline), and reportedly stabilizes training. The relative scoring within a group naturally normalizes rewards across different prompts, avoiding the reward scale issues that plague PPO.
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** How is preference data collected? What are the challenges and biases?
Answer Preference data collection: present labelers with a prompt and 2+ model outputs, ask them to rank by quality. Challenges: (1) inter-annotator disagreement โ€” different people have different preferences, typically only 60-75% agreement; (2) position bias โ€” labelers tend to prefer the first response shown; (3) length bias โ€” longer responses often rated higher regardless of quality; (4) sycophancy โ€” responses that agree with the user are preferred even when wrong; (5) cost and scale โ€” InstructGPT used ~40 labelers, which limits diversity of preferences. Mitigations: multiple annotators per example, randomize ordering, explicit rubrics, stratify by annotator demographics.
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** What is the role of the reference model in RLHF/DPO and what happens if you remove it?
Answer The reference model (pi_ref) is typically the SFT checkpoint, frozen during RL training. It serves as an anchor: the KL(pi || pi_ref) penalty prevents the policy from drifting too far. Without it (or with beta=0), the model is free to maximize reward without constraint. In practice this leads to: (1) mode collapse โ€” model produces similar outputs for all prompts, (2) reward hacking โ€” exploiting reward model weaknesses, (3) catastrophic forgetting โ€” losing language capabilities learned during pre-training. In DPO, the reference model appears directly in the loss function as the log-probability ratios.
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** Compare RLHF, DPO, and RLAIF. When would you choose each approach?
Answer RLHF (PPO): most general, requires reward model + RL training. Choose when you need a reusable reward model or online data generation. DPO: simpler, no reward model, offline training on preference data. Choose when you have a good static preference dataset and want simplicity. RLAIF: uses AI feedback instead of human labelers. Choose when scaling preference data is the bottleneck or when you want explicit, auditable alignment principles. In practice, the field is moving toward DPO variants for simplicity and RLAIF for scale. PPO-based RLHF remains relevant for frontier models where online exploration matters, but its complexity makes it impractical for most teams.
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** What is sycophancy in RLHF and how does it emerge?
Answer Sycophancy is when the model learns to agree with the user
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** What is reward overoptimization? At what point does optimizing against a reward model hurt quality?
Answer Reward overoptimization is Goodhart
## Further Reading - [Training language models to follow instructions with human feedback (InstructGPT)](https://arxiv.org/abs/2203.02155) OpenAI - [Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/abs/2212.08073) Anthropic - [Fine-Tuning Language Models from Human Preferences (Ziegler et al. 2019)](https://arxiv.org/abs/1909.08593) The original RLHF paper applying reward learning from human preferences to language models. - [Lilian Weng](https://lilianweng.github.io/posts/2023-01-10-reinforcement-learning-human-feedback/) Comprehensive posts on RLHF, reward modeling, and alignment โ€” covers reward hacking, Goodhart - [Reward Model Ensembles Help Mitigate Overoptimization](https://arxiv.org/abs/2310.02743) Coste et al. 2023 โ€” shows that reward hacking can be reduced by ensembling multiple reward models, with quantitative analysis of the overoptimization curve. - [Andrej Karpathy โ€” The State of GPT (Microsoft Build 2023)](https://www.youtube.com/watch?v=bZQun8Y4L2A) Practical walkthrough of the full RLHF pipeline from SFT through reward modeling to PPO โ€” includes real data requirements and compute costs. ## Related Backpropagation ยท Optimizers ยท Pre-training & Loss ยท Data Curation ยท Scaling Laws --- --- title: "DPO, GRPO & Alternatives" part: "Training" number: 21 emoji: "โš–๏ธ" subtitle: "Skip the reward model โ€” direct preference optimization" tags: ["training", "ml", "ai-engineering", "interview-prep", "transformer"] --- # โš–๏ธ DPO, GRPO & Alternatives > Skip the reward model โ€” direct preference optimization > [!question] Key Question > DPO collapses reward model + PPO into a single loss function โ† RLHF & Reward Models ## Key Insights > [!tip] Insight > The evolution: PPO asks "how good is this output?" (reward model) then "how do I improve?" (PPO). DPO asks "which output is better?" directly. GRPO asks "how does this output compare to its siblings?" Each step removes a component while preserving the core signal: human preferences. > [!tip] Insight > controls divergence from the reference.{" "} Typical values: 0.1 to 0.5.{" "} This is the single most important hyperparameter in DPO. > [!tip] Insight > DPO typically costs 2-4x less than PPO because it needs only 2 models in memory (not 4) and uses standard supervised training (no rollout generation). {" "} GRPO falls in between โ€” needs a reward model but no critic. ## Code Examples ```python import torch.nn.functional as F def dpo_loss(pi_logps_w, pi_logps_l, ref_logps_w, ref_logps_l, beta=0.1): """Direct Preference Optimization loss. All inputs: (batch,) log-probabilities summed over tokens. """ # Implicit rewards: log-ratio of policy vs reference reward_w = pi_logps_w - ref_logps_w # preferred reward_l = pi_logps_l - ref_logps_l # dispreferred # DPO loss = -log sigmoid(beta * (preferred_reward - dispreferred_reward)) logits = beta * (reward_w - reward_l) return -F.logsigmoid(logits).mean() ``` ```python # DPO training loop โ€” no reward model, no PPO import torch, torch.nn.functional as F def compute_log_probs(model, input_ids, labels): """Sum log-probs over assistant tokens (labels != -100).""" logits = model(input_ids).logits[:, :-1] # (B, T-1, V) shift_labels = labels[:, 1:] # (B, T-1) log_probs = F.log_softmax(logits, dim=-1) token_lp = log_probs.gather(-1, shift_labels.clamp(0).unsqueeze(-1)).squeeze(-1) mask = (shift_labels != -100).float() return (token_lp * mask).sum(-1) # (B,) def dpo_loss(policy, ref_policy, batch, beta=0.1): """batch contains: chosen_ids, rejected_ids, chosen_labels, rejected_labels.""" pi_w = compute_log_probs(policy, batch["chosen_ids"], batch["chosen_labels"]) pi_l = compute_log_probs(policy, batch["rejected_ids"], batch["rejected_labels"]) ref_w = compute_log_probs(ref_policy, batch["chosen_ids"], batch["chosen_labels"]) ref_l = compute_log_probs(ref_policy, batch["rejected_ids"],batch["rejected_labels"]) logits = beta * ((pi_w - ref_w) - (pi_l - ref_l)) return -F.logsigmoid(logits).mean() ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic, Meta)_ **Q:** Why is DPO simpler than PPO-based RLHF? What does it eliminate?
Answer DPO eliminates two entire components: the reward model and the RL optimizer (PPO). In PPO-based RLHF, you need 4 models in memory (policy, reference, reward, value/critic). DPO needs only 2 (policy and reference). The key mathematical insight: the optimal policy under KL-constrained reward maximization has a closed-form solution that maps rewards to log-probability ratios. DPO inverts this โ€” instead of learning a reward then optimizing a policy, it directly adjusts the policy using preference pairs. The loss is a simple binary cross-entropy over log-probability ratios, no sampling rollouts, no advantage estimation, no clipping.
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** When would you choose PPO-based RLHF over DPO? When would you choose DPO?
Answer Choose PPO when: (1) you need online exploration โ€” PPO generates new samples during training and gets reward feedback, letting it discover novel good behaviors; (2) you want a reusable reward model for monitoring or filtering; (3) your preference data is sparse and you need the reward model to generalize. Choose DPO when: (1) you have a large static preference dataset; (2) you want simplicity and stability โ€” DPO has far fewer hyperparameters; (3) compute budget is limited โ€” DPO needs only 2 models vs 4; (4) you want deterministic, reproducible training. In practice, most teams choose DPO because PPO is notoriously hard to tune.
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** What are the advantages of GRPO over both PPO and DPO?
Answer GRPO (Group Relative Policy Optimization) from DeepSeek eliminates the critic/value network that PPO requires. For each prompt, GRPO samples a group of K outputs, scores them with a reward model, and computes advantages as (reward - group_mean) / group_std. Advantages over PPO: (1) no value network to train (saves memory and complexity), (2) relative scoring within groups naturally normalizes across prompts, (3) more stable training. Advantages over DPO: (1) online โ€” generates fresh samples each iteration, enabling exploration; (2) can use any reward signal (not just pairwise preferences); (3) works better when preference data is limited. The tradeoff: GRPO still needs a reward model (unlike DPO) but is much simpler than full PPO.
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** What is Constitutional AI (RLAIF) and why does it matter for scaling alignment?
Answer Constitutional AI (Anthropic, 2022) replaces human preference labelers with AI-generated feedback โ€” hence RLAIF (RL from AI Feedback). Process: (1) generate responses, (2) ask the AI to critique its response against a set of explicit principles (
### โ˜…โ˜…โ˜† _(Anthropic, Meta)_ **Q:** What does beta control in DPO, and what happens if you set it too high or too low?
Answer Beta controls how much the policy can deviate from the reference model โ€” it
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** Compare offline DPO vs online DPO. What is the distribution mismatch problem?
Answer Offline DPO trains on a fixed preference dataset collected from some behavior policy (often the SFT model). The problem: as the policy improves during training, it diverges from the behavior policy that generated the training data. The preference pairs no longer represent the model
### โ˜…โ˜…โ˜… _(Meta, Google)_ **Q:** What is the offline-to-online gap in DPO? When does DPO underperform PPO?
Answer DPO trains on a fixed preference dataset collected from a behavior policy (typically the SFT model). As DPO training progresses, the policy moves away from that behavior policy, but the training data doesn
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** How does iterative DPO / online DPO address DPO
Answer Iterative DPO (online DPO) closes the distribution gap by repeating a loop: (1) generate new completions using the current policy, (2) obtain preferences on those completions (via human labelers or an AI judge), (3) add the new preference pairs to the dataset and retrain with DPO. This gives RLHF-style online generation combined with DPO-style optimization โ€” simpler than PPO (no value network, no advantage estimation, no clipping) but more adaptive than offline DPO. Each iteration
## Further Reading - [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://arxiv.org/abs/2305.18290) The DPO paper โ€” eliminates the reward model by directly optimizing policy from preference pairs. - [SimPO: Simple Preference Optimization with a Reference-Free Reward](https://arxiv.org/abs/2405.14734) Meng et al. 2024 โ€” removes the reference model from DPO entirely, using sequence-average log-probability as the implicit reward. - [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/abs/2501.12948) DeepSeek - [Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/abs/2212.08073) Anthropic - [Lilian Weng](https://lilianweng.github.io/) In-depth posts on preference learning, RLHF variants, and alignment techniques including DPO analysis. - [A General Theoretical Paradigm to Understand Learning from Human Feedback (IPO)](https://arxiv.org/abs/2310.12036) Azar et al. 2024 โ€” introduces IPO to address DPO - [ORPO: Monolithic Preference Optimization without Reference Model](https://arxiv.org/abs/2403.07691) Hong et al. 2024 โ€” combines SFT and preference optimization in a single pass using an odds ratio penalty, eliminating the need for a reference model entirely. - [Zephyr: Direct Distillation of LM Alignment](https://arxiv.org/abs/2310.16944) Tunstall et al. 2023 โ€” the first widely-adopted DPO model (Mistral-7B base), with a concrete recipe for distilled preference data generation. ## Related Backpropagation ยท Optimizers ยท Pre-training & Loss ยท Data Curation ยท Scaling Laws --- --- title: "KV Cache & Memory" part: "Inference" number: 22 emoji: "๐Ÿ’พ" subtitle: "Why generation is memory-bound and how to fix it" tags: ["inference", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ’พ KV Cache & Memory > Why generation is memory-bound and how to fix it > [!question] Key Question > Llama-70B needs 10 GB of KV cache per request at 4K context โ†’ Flash Attention ## Key Insights > [!tip] Insight > Without cache:{" "} {" "} KV projections. With cache:{" "} . For a 2048-token generation that is ~2.1M vs 2K operations โ€” a{" "} 1000x reduction in KV projection work. > [!tip] Insight > KV cache is why serving LLMs is fundamentally different from serving traditional ML models. A single Llama-2 70B request at 4K context needs 1.34 GB just for the cache โ€” multiply by batch size to see why GPU memory is the primary constraint. > [!tip] Insight > At batch=32 with GQA:{" "} 32 x 1.34 GB = 43 GB just for KV cache . With MHA it would be 340+ GB. This is why GQA is not optional for production serving of 70B+ models. > [!tip] Insight > The key insight: a 7B model with MHA has a larger KV cache (2.1 GB) than a 70B model with GQA ( 1.3 GB) at the same sequence length. KV cache size depends on architecture choices, not just model size. ## Code Examples ```python import torch import torch.nn.functional as F class KVCache: """Simple KV cache for autoregressive generation.""" def __init__(self, n_layers, n_kv_heads, d_k, max_seq, dtype=torch.float16): # Pre-allocate for max sequence length self.k = torch.zeros(n_layers, 1, n_kv_heads, max_seq, d_k, dtype=dtype) self.v = torch.zeros(n_layers, 1, n_kv_heads, max_seq, d_k, dtype=dtype) self.seq_len = 0 def update(self, layer_idx, k_new, v_new): """Append new K, V for one token at one layer.""" # k_new, v_new: [batch, n_kv_heads, 1, d_k] self.k[layer_idx, :, :, self.seq_len:self.seq_len+1, :] = k_new self.v[layer_idx, :, :, self.seq_len:self.seq_len+1, :] = v_new def get(self, layer_idx): """Return cached K, V up to current position.""" return ( self.k[layer_idx, :, :, :self.seq_len+1, :], self.v[layer_idx, :, :, :self.seq_len+1, :], ) def step(self): self.seq_len += 1 def cached_attention(q, K_cached, V_cached, d_k): """Single-token attention with KV cache. q: [B, n_heads, 1, d_k] (just the new token) K_cached: [B, n_kv_heads, seq, d_k] V_cached: [B, n_kv_heads, seq, d_k] """ # GQA: repeat KV heads to match Q heads # (omitted for clarity โ€” expand n_kv_heads -> n_q_heads) attn = (q @ K_cached.transpose(-2, -1)) / (d_k ** 0.5) attn = F.softmax(attn, dim=-1) return attn @ V_cached # [B, n_heads, 1, d_k] # Memory: 2 * 80 * 8 * 128 * 4096 * 1 * 2 = 1.34 GB (Llama-2 70B) ``` ```python # KV cache: accumulate past_key_values during autoregressive generation import torch def generate_with_kv_cache(model, input_ids, max_new_tokens=50): past_key_values = None # grows each step: list of (k, v) per layer for _ in range(max_new_tokens): outputs = model( input_ids=input_ids, past_key_values=past_key_values, use_cache=True, ) # outputs.past_key_values: tuple of (key, value) per layer # shape: (batch, n_kv_heads, seq_len_so_far, d_k) past_key_values = outputs.past_key_values next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True) input_ids = next_token # only pass NEW token; cache handles the rest yield next_token ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(OpenAI, Databricks)_ **Q:** Calculate KV cache memory for Llama-2 70B at 4K context, FP16, batch=1.
Answer KV cache = 2 (K+V) x 80 layers x 8 KV heads (GQA) x 128 d_k x 4096 seq x 1 batch x 2 bytes = 1.34 GB. With MHA (64 heads instead of 8 GQA heads) it would be 10.7 GB. GQA provides an 8x reduction, which is why it
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** Why is autoregressive generation memory-bound rather than compute-bound?
Answer During generation, each step produces only ONE token. The compute per step is a single matrix-vector multiply (query x all keys), which has very low arithmetic intensity (FLOPs per byte loaded). The GPU spends most time loading model weights and KV cache from HBM, not computing. This is the opposite of training, where large batch sizes give high arithmetic intensity. The roofline model makes this clear: generation sits in the memory-bandwidth-limited region.
### โ˜…โ˜…โ˜… _(Databricks, OpenAI)_ **Q:** Explain PagedAttention (vLLM). What problem does it solve and how?
Answer Standard KV cache pre-allocates contiguous memory for max_seq_len per request. This wastes memory when actual sequences are shorter, and causes fragmentation. PagedAttention borrows from OS virtual memory: stores KV cache in non-contiguous
### โ˜…โ˜…โ˜† _(Google, Meta, Anthropic)_ **Q:** Compare MHA vs MQA vs GQA. When would you choose each?
Answer MHA: each head has its own K, V projections. Full expressivity but KV cache = n_heads x d_k per layer. MQA (Multi-Query): all heads share ONE K, V โ€” KV cache shrinks by n_heads x. Fast inference but quality drops ~1%. GQA (Grouped-Query): heads grouped into G groups sharing K, V. G=1 is MQA, G=n_heads is MHA. Llama-2 70B uses G=8 (8 KV heads for 64 Q heads) โ€” 8x KV reduction with negligible quality loss. Choose GQA for production LLM serving; MHA only for small models where memory isn
### โ˜…โ˜…โ˜… _(Databricks, Google)_ **Q:** What is continuous batching and why is it critical for serving?
Answer Traditional static batching waits until a full batch is ready, then processes all sequences together until the longest one finishes. Shorter sequences waste GPU cycles waiting. Continuous batching (vLLM, TGI): when a sequence finishes, immediately insert a new request into the batch at the next decode step. This eliminates head-of-line blocking and can improve throughput 2-5x by keeping the GPU busy. Also called
### โ˜…โ˜…โ˜… _(Databricks, Meta)_ **Q:** How does KV cache quantization work and what are the tradeoffs?
Answer KV cache quantization stores cached K, V vectors in lower precision (FP8 or INT4) instead of FP16. This can reduce KV cache memory by 2-4x with minimal quality degradation (<0.1% perplexity increase for FP8 KV). The key insight: KV cache values have a smaller dynamic range than model weights, making them more amenable to quantization. KIVI (2024) showed INT2 KV quantization is possible with per-channel quantization. This directly increases max batch size or max sequence length for the same GPU memory.
## Further Reading - [Efficient Memory Management for Large Language Model Serving with PagedAttention](https://arxiv.org/abs/2309.06180) The vLLM paper โ€” virtual memory paging for KV cache, eliminating fragmentation and enabling continuous batching. - [GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints](https://arxiv.org/abs/2305.13245) Grouped-query attention โ€” interpolates between MHA and MQA to reduce KV cache size with minimal quality loss. - [KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache](https://arxiv.org/abs/2402.02750) Per-channel INT2 KV cache quantization โ€” 2.35x memory reduction with negligible quality loss, enabling longer contexts on the same GPU. - [Fast Transformer Decoding: One Write-Head is All You Need (MQA)](https://arxiv.org/abs/1911.02150) Shazeer 2019 โ€” multi-query attention reduces KV cache size by sharing one KV head across all query heads. Direct precursor to GQA. - [Lilian Weng](https://lilianweng.github.io/) In-depth posts on KV cache optimization, memory management, and efficient LLM serving. ## Related Flash Attention ยท Sampling & Decoding ยท Quantization ยท Speculative Decoding ยท LLM Deployment --- --- title: "Flash Attention" part: "Inference" number: 23 emoji: "โšก" subtitle: "Tiling, IO-awareness, and O(N) memory attention" tags: ["inference", "ml", "ai-engineering", "interview-prep", "transformer"] --- # โšก Flash Attention > Tiling, IO-awareness, and O(N) memory attention > [!question] Key Question > Same FLOPs, 2-4x faster โ€” by never writing the Nยฒ attention matrix โ† KV Cache & Memory | โ†’ Sampling & Decoding ## Key Insights > [!tip] Insight > Flash Attention does the same math (same FLOPs), but restructures WHERE that math happens. By keeping intermediate results in fast SRAM instead of writing to slow HBM, it achieves{" "} 2-4x wallclock speedup{" "} with zero approximation. > [!tip] Insight > Flash Attention doesn't change the math โ€” the output is bit-for-bit identical to standard attention. It only changes the{" "} order of operations to be IO-aware. This is a pure systems optimization, not an approximation. Unlike Linformer or Performer (which approximate the attention matrix), Flash Attention is an exact drop-in replacement with zero quality loss. > [!tip] Insight > On an A100 with{" "} ~192 KB SRAM per SM {" "} and d=128, both sides in elements:{" "} . Equivalently, in bytes (FP16 = 2 B):{" "} โ€” Flash's tiling fits comfortably and HBM access drops by a large factor for long sequences. > [!tip] Insight > Flash Attention is now the default in every major framework. PyTorch 2.0+ automatically uses it via{" "} F.scaled_dot_product_attention . It enabled the jump from 4K to 128K+ context lengths โ€” without it, 128K attention would need ~32 GB per head just for the attention matrix. ## Code Examples ```python import torch import torch.nn.functional as F # Method 1: PyTorch native (auto-selects Flash Attention if available) # Requires: PyTorch >= 2.0, CUDA, head_dim <= 256 def attention_with_flash(q, k, v, is_causal=True): """ q, k, v: [batch, n_heads, seq_len, d_k] Uses Flash Attention kernel automatically when possible. """ return F.scaled_dot_product_attention( q, k, v, is_causal=is_causal, # PyTorch auto-dispatches to flash_attn kernel ) # Method 2: flash-attn library (Tri Dao's implementation) # pip install flash-attn from flash_attn import flash_attn_func def attention_with_flash_v2(q, k, v, causal=True): """ q, k, v: [batch, seq_len, n_heads, d_k] (note: different layout!) Returns: [batch, seq_len, n_heads, d_k] """ return flash_attn_func(q, k, v, causal=causal) # Check which backend PyTorch selects: # torch.backends.cuda.flash_sdp_enabled() # True if Flash available # torch.backends.cuda.mem_efficient_sdp_enabled() # xformers fallback ``` ```python # Flash Attention: use F.scaled_dot_product_attention (PyTorch 2.0+) # Auto-selects Flash Attention kernel when on CUDA with head_dim <= 256 import torch import torch.nn.functional as F def causal_self_attention(q, k, v): # q, k, v: (batch, n_heads, seq_len, head_dim) # is_causal=True applies the causal mask without materializing it return F.scaled_dot_product_attention(q, k, v, is_causal=True) # Verify Flash Attention is enabled: # torch.backends.cuda.flash_sdp_enabled() # True on CUDA >= sm80 # torch.backends.cuda.math_sdp_enabled() # fallback: standard O(Nยฒ) # Flash Attention saves memory: O(N) vs O(Nยฒ) for seq_len=128K # N=128K: standard = 128Kยฒ ร— 2B = 32GB; flash = 128K ร— 128 ร— 2B = 32MB ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** Flash Attention does the same number of FLOPs as standard attention. Why is it faster?
Answer The speedup comes entirely from reducing HBM (GPU main memory) reads and writes, not from reducing computation. Standard attention writes the full N x N attention matrix S = QK^T to HBM, then reads it back for softmax, then writes the softmax result, then reads it for S x V. Flash Attention tiles the computation so that Q, K, V blocks are loaded into SRAM (on-chip, ~19 TB/s bandwidth) once, the entire attention computation happens there, and only the final output O is written back to HBM. Total HBM access drops from ฮ˜(Nd + N^2) to ฮ˜(N^2 d^2 / M) where M is SRAM size โ€” often a 2-4x wallclock improvement.
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** How does online softmax work and why is it essential for Flash Attention?
Answer Standard softmax requires two passes: (1) find the global max across all N elements for numerical stability, (2) compute exp(x_i - max) and normalize by the sum. This requires materializing all N scores in memory first. Online softmax (Milakov & Gimelshein 2018) does it in one pass: maintain a running max m and a running sum of exponentials d. When processing a new block with a larger max m
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** What is IO-awareness and why does it matter for GPU kernels?
Answer IO-awareness means designing algorithms around the memory hierarchy (registers > SRAM > HBM > CPU memory), not just minimizing FLOP count. Modern GPUs have ~1000 TFLOPS compute but only ~2-3 TB/s HBM bandwidth. If your algorithm is memory-bound (low arithmetic intensity), the GPU sits idle waiting for data. Standard attention is memory-bound: it writes/reads the N x N matrix multiple times. Flash Attention is IO-aware: it restructures the computation to maximize work done per byte loaded from HBM. Same FLOPs, but 2-4x faster because it respects the memory hierarchy.
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** Compare Flash Attention v1, v2, and v3. What changed in each version?
Answer v1 (Dao 2022): introduced tiling + online softmax for O(N) memory, 2-4x speedup over PyTorch standard attention. Outer loop over K/V blocks, inner loop over Q blocks. v2 (Dao 2023): reversed loop order (outer Q, inner K/V) to reduce non-matmul FLOPs, better work partitioning across warps, ~2x faster than v1 (reaching 50-73% of theoretical max FLOPS on A100). v3 (Dao 2024, H100): exploits H100-specific features โ€” asynchronous data movement (TMA), FP8 tensor cores, warp specialization. Another 1.5-2x over v2 on H100, approaching hardware limits.
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** Is Flash Attention exact or approximate? What are the implications?
Answer Exact. Flash Attention computes mathematically identical results to standard scaled dot-product attention (up to floating point rounding). This is a crucial distinction from earlier
## Further Reading - [FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness](https://arxiv.org/abs/2205.14135) The original Flash Attention paper โ€” tiling attention computation to exploit GPU SRAM, achieving 2-4x speedup. - [FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning](https://arxiv.org/abs/2307.08691) Flash Attention v2 โ€” improved work partitioning across warps and thread blocks for up to 2x additional speedup. - [Ring Attention with Blockwise Transformers for Near-Infinite Context](https://arxiv.org/abs/2310.01889) Liu et al. 2023 โ€” extends Flash Attention - [Lilian Weng](https://lilianweng.github.io/) Technical posts on efficient attention, memory optimization, and inference acceleration for LLMs. - [FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision](https://arxiv.org/abs/2407.08608) Shah et al. 2024 โ€” Flash Attention v3 for H100, using async TMA and FP8 tensor cores to achieve 1.5-2x speedup over v2. - [Online normalizer calculation for softmax (Milakov & Gimelshein)](https://arxiv.org/abs/1805.02867) The online softmax algorithm that makes Flash Attention ## Related KV Cache & Memory ยท Sampling & Decoding ยท Quantization ยท Speculative Decoding ยท LLM Deployment --- --- title: "Sampling & Decoding" part: "Inference" number: 24 emoji: "๐ŸŽฒ" subtitle: "Temperature, top-k, top-p โ€” how the model picks the next token" tags: ["inference", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐ŸŽฒ Sampling & Decoding > Temperature, top-k, top-p โ€” how the model picks the next token > [!question] Key Question > Temperature 0 = always 'the', temperature 2 = sometimes 'banana' โ† Flash Attention | โ†’ Quantization ## Key Insights > [!tip] Insight > Think of it as a funnel: the model produces a full distribution over{" "} 50,000+ tokens, then temperature reshapes it, top-k/top-p trim it, and finally one token is randomly drawn from what remains. > [!tip] Insight > Top-p adapts to the distribution shape. If the model is confident (peaked distribution), fewer tokens pass the threshold. If uncertain (flat), more tokens are included. This is why top-p is generally preferred over a fixed top-k. > [!tip] Insight > Set too high (e.g., 1.5+) and the model avoids common function words like “the” and “is”, producing grammatically broken output. Typical safe range: 1.0 (off) to 1.3. > [!tip] Insight > Most production LLM APIs default to T=1.0 and let the user lower it. A common mistake is setting both low temperature AND tight top-p โ€” they compound, making output extremely deterministic and repetitive. Pick one primary lever; the other should stay near its default. ## Code Examples ```python # Temperature + top-k + top-p (nucleus) sampling in PyTorch import torch import torch.nn.functional as F def sample(logits, temperature=1.0, top_k=50, top_p=0.9): logits = logits / max(temperature, 1e-8) # temperature scaling # Top-k: zero out everything below the k-th highest logit if top_k > 0: kth = torch.topk(logits, top_k).values[..., -1, None] logits = logits.masked_fill(logits < kth, float("-inf")) # Top-p: keep smallest set whose cumulative prob >= p probs = F.softmax(logits, dim=-1) sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True) cum_probs = sorted_probs.cumsum(dim=-1) mask = (cum_probs - sorted_probs) > top_p sorted_probs[mask] = 0.0 sorted_probs /= sorted_probs.sum(dim=-1, keepdim=True) probs.scatter_(-1, sorted_idx, sorted_probs) return torch.multinomial(probs, num_samples=1) ``` ```python def sample_next_token(logits, temperature=1.0, top_k=0, top_p=1.0): """Apply temperature, top-k, and top-p filtering, then sample.""" # Temperature scaling if temperature != 1.0: logits = logits / temperature # Top-k filtering if top_k > 0: top_k_values, _ = torch.topk(logits, top_k) min_top_k = top_k_values[:, -1, None] logits = torch.where(logits < min_top_k, float('-inf'), logits) # Top-p (nucleus) filtering if top_p < 1.0: sorted_logits, sorted_idx = torch.sort(logits, descending=True) cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1) # Remove tokens with cumulative prob above threshold sorted_mask = cumulative_probs - F.softmax(sorted_logits, dim=-1) > top_p sorted_logits[sorted_mask] = float('-inf') logits = sorted_logits.scatter(1, sorted_idx, sorted_logits) # Sample from the filtered distribution probs = F.softmax(logits, dim=-1) return torch.multinomial(probs, num_samples=1) ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** Explain temperature, top-k, and top-p. How do they interact?
Answer Temperature scales logits before softmax: T<1 sharpens the distribution (more deterministic), T>1 flattens it (more random). Top-k keeps only the k highest-probability tokens and renormalizes. Top-p (nucleus sampling) keeps the smallest set of tokens whose cumulative probability >= p, then renormalizes. In practice they
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** When would you use beam search vs sampling? What are the tradeoffs?
Answer Beam search maintains the top-b most likely sequences by log-probability sum, producing high-likelihood but often generic/repetitive text. Best for tasks with a
### โ˜…โ˜…โ˜† _(OpenAI)_ **Q:** What is nucleus sampling (top-p) and why is it preferred over top-k alone?
Answer Nucleus sampling (Holtzman et al., 2020) keeps the smallest set of tokens whose cumulative probability >= p. Unlike top-k which always keeps exactly k tokens regardless of distribution shape, top-p adapts: for a peaked distribution (model is confident), it might keep only 2-3 tokens; for a flat distribution (model is uncertain), it keeps many. This avoids two failure modes of top-k: (1) including very unlikely tokens when the distribution is peaked (k too large), and (2) excluding reasonable tokens when the distribution is flat (k too small).
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** How does repetition penalty work? What are its failure modes?
Answer Repetition penalty divides the logit of any previously-generated token by a penalty factor (>1.0). This makes repeated tokens less likely. Failure modes: (1) too aggressive penalty causes the model to avoid common function words (
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** Why is greedy decoding suboptimal for open-ended generation?
Answer Greedy decoding always picks the highest-probability token at each step (argmax). Problems: (1) it
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** What is the relationship between temperature and entropy of the output distribution?
Answer Temperature directly controls the entropy (uncertainty) of the output distribution. At T->0, entropy approaches 0 (one-hot distribution, no uncertainty). At T=1, entropy equals the model
### โ˜…โ˜…โ˜† _(Google, Databricks)_ **Q:** How would you set sampling parameters for: (a) code generation, (b) creative writing, (c) factual QA?
Answer (a) Code generation: T=0.0-0.2, top-p=0.95, no repetition penalty. Code has strict syntax โ€” you want high confidence, near-deterministic output. Low temperature because there
### โ˜…โ˜…โ˜… _(OpenAI, Databricks)_ **Q:** What is speculative decoding and how does it interact with sampling?
Answer Speculative decoding uses a small
## Further Reading - [The Curious Case of Neural Text Degeneration (Holtzman et al. 2020)](https://arxiv.org/abs/1904.09751) Introduces nucleus sampling (top-p) โ€” dynamically truncates the vocabulary to the smallest set covering probability p. - [Typical Decoding for Natural Language Generation](https://arxiv.org/abs/2202.00666) Typical sampling โ€” selects tokens whose information content is close to the expected information, producing more human-like text. - [Min-P Sampling: Truncation Sampling as Language Model Desmoothing](https://arxiv.org/abs/2407.01082) Nguyen et al. 2024 โ€” min-p sets a dynamic floor at p_min ร— max_prob, automatically adapting to the distribution without the fixed-cutoff fragility of top-p. - [Transformer Explainer (Georgia Tech)](https://poloclub.github.io/transformer-explainer/) Interactive GPT-2 demo โ€” adjust temperature and sampling settings and see their effect on next-token probability distributions live. - [Andrej Karpathy โ€” Let](https://www.youtube.com/watch?v=kCc8FmEb1nY) Implements temperature scaling and greedy/sampling decoding from scratch โ€” best way to internalize how sampling parameters affect generation. - [Lilian Weng โ€” Controllable Text Generation](https://lilianweng.github.io/posts/2021-01-02-controllable-text-generation/) Comprehensive survey of decoding strategies including temperature, top-k, top-p, and beam search โ€” with analysis of quality tradeoffs. ## Related KV Cache & Memory ยท Flash Attention ยท Quantization ยท Speculative Decoding ยท LLM Deployment --- --- title: "Quantization" part: "Inference" number: 25 emoji: "๐Ÿ“ฆ" subtitle: "INT8, INT4, GPTQ, AWQ โ€” shrink models without losing quality" tags: ["inference", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ“ฆ Quantization > INT8, INT4, GPTQ, AWQ โ€” shrink models without losing quality > [!question] Key Question > 4-bit Llama-70B fits in 35 GB โ€” down from 140 GB โ† Sampling & Decoding | โ†’ Speculative Decoding ## Key Insights > [!tip] Insight > The reason INT4 works at all: neural network weights are highly redundant. Most of the information lives in a small number of salient weights โ€” the rest can be aggressively rounded with minimal quality loss. > [!tip] Insight > AWQ consistently outperforms GPTQ at 4-bit despite being simpler. The key insight: protecting the 1% of salient weights matters more than optimal rounding of all weights. At INT8, LLM.int8() is nearly lossless thanks to mixed-precision outlier handling. ## Code Examples ```python import torch from transformers import AutoModelForCausalLM, BitsAndBytesConfig # NF4 quantization (QLoRA style) bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", # normalized float 4-bit bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, # quantize the quantization constants ) model = AutoModelForCausalLM.from_pretrained( "meta-llama/Llama-2-70b-hf", quantization_config=bnb_config, device_map="auto", ) # 70B model now fits in ~35GB VRAM (single A100) ``` ```python from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf") gptq_config = GPTQConfig( bits=4, dataset="c4", # calibration dataset tokenizer=tokenizer, # required for calibration group_size=128, # quantize in groups of 128 weights desc_act=True, # Hessian-based column ordering ) model = AutoModelForCausalLM.from_pretrained( "meta-llama/Llama-2-70b-hf", quantization_config=gptq_config, device_map="auto", ) ``` ```python # Manual uint8 affine quantization: compute scale/zero-point, quantize, dequantize import torch def quantize_uint8(x: torch.Tensor): """Asymmetric per-tensor 8-bit affine quantization (uint8, range [0, 255]).""" x_min, x_max = x.min().item(), x.max().item() scale = (x_max - x_min) / 255.0 # ฮ”: maps [x_min, x_max] โ†’ [0, 255] zero_point = round(-x_min / scale) # z: offset so x_min maps to 0 q = torch.clamp(torch.round(x / scale + zero_point), 0, 255).to(torch.uint8) return q, scale, zero_point def dequantize_uint8(q, scale, zero_point): return (q.float() - zero_point) * scale # torch.quantization.quantize_dynamic (PTQ for linear layers) import torch.nn as nn model = nn.Linear(512, 512) quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8) ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Explain the quantization formula. How do you choose scale and zero_point for asymmetric quantization?
Answer Quantization maps a floating-point value x to an integer: q = round(x / scale + zero_point). For asymmetric INT8: scale = (max_val - min_val) / 255, zero_point = round(-min_val / scale). The scale determines the step size between representable values, and zero_point shifts the range so that 0.0 maps to an integer (important for zero-padding in convolutions). Symmetric quantization simplifies by setting zero_point = 0 and using scale = max(|x|) / 127, but wastes range if the distribution is asymmetric.
### โ˜…โ˜…โ˜… _(Meta, Google)_ **Q:** Compare GPTQ and AWQ. When would you choose one over the other?
Answer GPTQ (Frantar et al., 2022) uses approximate second-order information (Hessian) to quantize weights one layer at a time, minimizing the layer-wise reconstruction error. It processes columns of the weight matrix sequentially, using the inverse Hessian to optimally round each weight and compensate errors in remaining weights. AWQ (Lin et al., 2023) observes that only ~1% of weights are
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** What is the difference between weight-only quantization and weight+activation quantization? Why is weight-only more popular for LLMs?
Answer Weight-only quantization keeps weights in low precision (e.g., INT4) but dequantizes to FP16 for computation. Weight+activation quantization quantizes both weights and activations to low precision (e.g., INT8), enabling integer matrix multiplication on hardware. Weight-only is more popular for LLMs because: (1) LLM inference is memory-bandwidth bound during generation (small batch, sequential tokens), so reducing weight size directly reduces the bottleneck; (2) activations have outliers that are hard to quantize โ€” LLM.int8() showed some channels have values 100x larger than others; (3) weight-only doesn
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** What is quantization-aware training (QAT) and why is it better than post-training quantization (PTQ) at low bit-widths?
Answer QAT inserts fake quantization nodes during training: the forward pass simulates quantized inference (round weights/activations to target precision), but the backward pass uses straight-through estimators (STE) to pass gradients through the non-differentiable rounding. This lets the model learn to be robust to quantization noise. PTQ quantizes a pre-trained model without retraining. At INT8, PTQ and QAT perform similarly. At INT4 and below, PTQ degrades significantly because the model was never trained to handle that level of noise. QAT can recover most of the quality because the optimizer adjusts other weights to compensate for quantization error during training.
### โ˜…โ˜…โ˜† _(Meta, Google)_ **Q:** Explain the LLM.int8() method. What problem does it solve with outlier features?
Answer LLM.int8() (Dettmers et al., 2022) discovered that LLM activations contain outlier features โ€” a small number of hidden dimensions (~0.1%) with magnitudes 100x larger than the rest. Naive INT8 quantization of activations clips these outliers, destroying model quality. The solution: mixed-precision decomposition. Identify outlier dimensions (magnitude > threshold, typically 6.0), extract those dimensions and compute them in FP16, quantize the remaining 99.9% of dimensions to INT8. The two partial results are combined. This achieves near-lossless INT8 inference with minimal overhead, since only a tiny fraction of dimensions needs FP16.
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** How does FP8 training work, and why did DeepSeek-V3 use it? What are the two FP8 formats?
Answer FP8 has two formats: E4M3 (4 exponent bits, 3 mantissa โ€” more precision, narrower range, used for weights/activations) and E5M2 (5 exponent bits, 2 mantissa โ€” wider range but less precision, used for gradients). DeepSeek-V3 used FP8 training to reduce memory by ~50% vs FP16 with minimal quality loss, enabling them to train a 671B MoE model cost-effectively. The key techniques: per-tensor scaling factors (computed from max absolute values), loss scaling to prevent underflow in gradients, and careful handling of normalization layers in higher precision. FP8 is native on H100 GPUs, achieving 2x FLOPS vs FP16.
## Further Reading - [GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers](https://arxiv.org/abs/2210.17323) One-shot weight quantization using approximate second-order information, enabling 3-4 bit models with minimal quality loss. - [AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration](https://arxiv.org/abs/2306.00978) Protects salient weight channels identified by activation magnitudes, achieving better quality than round-to-nearest at 4-bit. - [SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models](https://arxiv.org/abs/2211.10438) Migrates quantization difficulty from activations to weights via per-channel smoothing, enabling W8A8 quantization. - [Lilian Weng](https://lilianweng.github.io/) Technical posts on model compression, quantization theory, and efficient inference for LLMs. - [LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale](https://arxiv.org/abs/2208.07339) Dettmers et al. 2022 โ€” mixed-precision INT8 quantization that handles outlier activations separately, enabling near-lossless 8-bit inference for 175B+ models. - [FP8 Formats for Deep Learning](https://arxiv.org/abs/2209.05433) Micikevicius et al. 2022 โ€” defines E4M3 and E5M2 FP8 formats and training recipes. The format used by DeepSeek-V3 and H100 tensor cores. ## Related KV Cache & Memory ยท Flash Attention ยท Sampling & Decoding ยท Speculative Decoding ยท LLM Deployment --- --- title: "Speculative Decoding" part: "Inference" number: 26 emoji: "๐ŸŽ๏ธ" subtitle: "Small model drafts, big model verifies โ€” parallel generation" tags: ["inference", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐ŸŽ๏ธ Speculative Decoding > Small model drafts, big model verifies โ€” parallel generation > [!question] Key Question > Small model guesses 5 tokens, big model checks all 5 at once โ† Quantization | โ†’ LLM Deployment ## Key Insights > [!tip] Insight > Think of it like a fast junior developer who writes draft code, and a senior expert who reviews it. Approving correct code is faster than writing it from scratch. Even when the junior gets some parts wrong, the senior only needs to rewrite those parts. > [!tip] Insight > The field is moving from external draft models to self-speculative methods (Medusa, EAGLE) that augment the target model itself. This avoids loading two separate models and often achieves higher acceptance rates because the draft shares the target's internal representations. ## Code Examples ```python def speculative_decode(target_model, draft_model, prompt, K=5): tokens = prompt while not done: # 1. Draft: small model generates K candidates autoregressively draft_tokens, draft_probs = [], [] for _ in range(K): p_draft = draft_model(tokens + draft_tokens) t = sample(p_draft) draft_tokens.append(t) draft_probs.append(p_draft[t]) # 2. Verify: target model scores ALL candidates in one forward pass target_probs = target_model(tokens + draft_tokens) # parallel! # 3. Accept/reject via rejection sampling accepted = [] for i, t in enumerate(draft_tokens): r = random.uniform(0, 1) if r < min(1, target_probs[i][t] / draft_probs[i]): accepted.append(t) # accept draft token else: # Resample from adjusted distribution residual = max(0, target_probs[i] - draft_probs[i]) accepted.append(sample(residual / residual.sum())) break # discard remaining drafts tokens.extend(accepted) return tokens ``` ```python # Speculative decoding acceptance step: rejection sampling import torch def acceptance_step(p_target: torch.Tensor, p_draft: torch.Tensor, draft_token: int): """ p_target, p_draft: probability vectors over vocab (shape: [vocab_size]) Returns: (accepted: bool, next_token: int) """ alpha = min(1.0, p_target[draft_token].item() / p_draft[draft_token].item()) if torch.rand(1).item() < alpha: return True, draft_token # accept draft token # Resample from residual distribution: max(0, p_target - p_draft) residual = torch.clamp(p_target - p_draft, min=0.0) total = residual.sum() # Degenerate case: p_draft >= p_target everywhere โ†’ fall back to p_target residual = residual / total if total > 0 else p_target return False, torch.multinomial(residual, num_samples=1).item() ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** Explain speculative decoding step by step. Why is it mathematically lossless?
Answer Step 1: Draft model generates K candidate tokens autoregressively (fast, ~100M-1B params). Step 2: Target model processes the entire prefix + K candidates in ONE forward pass (parallel verification). Step 3: Compare distributions โ€” for each position, compute acceptance probability alpha = min(1, p_target(x) / p_draft(x)). If the draft token is accepted, keep it. If rejected, resample from an adjusted distribution (p_target - p_draft, renormalized) and discard remaining draft tokens. This is lossless because the acceptance-rejection scheme guarantees the final distribution equals the target model distribution exactly โ€” it
### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** What determines the speedup of speculative decoding? When does it fail to provide speedup?
Answer Speedup depends on: (1) acceptance rate alpha โ€” how often draft tokens match the target
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** Compare speculative decoding approaches: standard (draft model), Medusa, and EAGLE. What are the tradeoffs?
Answer Standard speculative decoding uses a separate smaller model as the draft. Advantage: draft model can be trained independently. Disadvantage: requires loading two models, draft model may not align well with target. Medusa (Cai et al., 2024) adds multiple parallel prediction heads to the target model itself โ€” each head predicts a future token position. No separate draft model needed, but requires fine-tuning the heads on target model data. Uses a tree-structured attention to verify multiple candidate sequences simultaneously. EAGLE (Li et al., 2024) trains a lightweight autoregressive head on top of the target model
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** How does the acceptance-rejection sampling in speculative decoding work? Derive the adjusted distribution for rejected tokens.
Answer For each draft token x_i with draft probability q(x_i) and target probability p(x_i): Accept with probability alpha = min(1, p(x_i)/q(x_i)). If rejected, sample from the adjusted distribution: p
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Why is speculative decoding particularly effective for LLMs but not for small models? What hardware property makes it work?
Answer LLM inference at batch size 1 is memory-bandwidth bound: each token generation reads all model weights from HBM but utilizes only a tiny fraction of the GPU
## Related KV Cache & Memory ยท Flash Attention ยท Sampling & Decoding ยท Quantization ยท LLM Deployment --- --- title: "LLM Deployment" part: "Inference" number: 27 emoji: "๐Ÿš€" subtitle: "Serving stacks, continuous batching, latency vs throughput, vLLM, and API design" tags: ["inference", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿš€ LLM Deployment > Serving stacks, continuous batching, latency vs throughput, vLLM, and API design > [!question] Key Question > Continuous batching serves 23x more requests than static batching on the same GPU โ† Speculative Decoding ## Key Insights > [!tip] Insight > Continuous batching is iteration-level scheduling: instead of scheduling at the request boundary, the scheduler acts at every forward pass. This is why vLLM can sustain substantially higher throughput โ€” the GPU never waits for one slow request. The vLLM paper reports{" "} 2โ€“4ร— over FasterTransformer/Orca at the same latency ; gains vs. naive static batching are larger and workload-dependent. > [!tip] Insight > TTFT and TPS are optimized by different techniques. Fast TTFT needs fast compute (prefill phase); high TPS needs high memory bandwidth and quantization (decode phase).{" "} Prefill-disaggregated architectures like Splitwise route these to different hardware entirely , eliminating the head-of-line blocking where long prefills stall concurrent decode steps. > [!tip] Insight > For most production deployments: start with vLLM (easiest to operate, great ecosystem), move to TensorRT-LLM when you need maximum throughput on NVIDIA hardware, and use SGLang when your workload has heavy prefix sharing (RAG, agents with fixed system prompts). ## Code Examples ```python from vllm import LLM, SamplingParams from vllm.entrypoints.openai.api_server import build_app import uvicorn # 1. Launch the model โ€” vLLM handles continuous batching + PagedAttention llm = LLM( model="meta-llama/Meta-Llama-3-70B-Instruct", tensor_parallel_size=4, # split across 4 GPUs gpu_memory_utilization=0.92, # leave 8% for CUDA overhead max_model_len=8192, # max context length quantization="fp8", # halves memory usage enable_prefix_caching=True, # cache common prefix KV blocks ) # 2. Sampling parameters per request params = SamplingParams( temperature=0.7, max_tokens=512, stop=["", "<|eot_id|>"], ) # 3. Batch inference (offline) โ€” vLLM auto-batches for throughput prompts = ["Explain transformers in one paragraph", "What is RLHF?"] outputs = llm.generate(prompts, params) for output in outputs: print(output.outputs[0].text) # 4. Online serving โ€” exposes OpenAI-compatible API # vllm serve meta-llama/Meta-Llama-3-70B-Instruct \\ # --tensor-parallel-size 4 \\ # --gpu-memory-utilization 0.92 \\ # --enable-prefix-caching # 5. Streaming client (OpenAI SDK) from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused") stream = client.chat.completions.create( model="meta-llama/Meta-Llama-3-70B-Instruct", messages=[{"role": "user", "content": "Explain KV cache"}], stream=True, # SSE โ€” yields chunks as tokens are generated max_tokens=256, ) for chunk in stream: delta = chunk.choices[0].delta.content if delta: print(delta, end="", flush=True) # stream to terminal ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Databricks)_ **Q:** Explain continuous batching and why it improves throughput over static batching.
Answer Static batching: the server waits until a full batch is ready, then runs all sequences together until the longest one finishes. Short sequences sit idle waiting โ€” GPU utilization drops to ~30%. Continuous batching (Orca, 2022): at every decode step, the scheduler can evict finished sequences and insert new requests immediately. The batch composition changes dynamically at iteration level, not request level. This eliminates head-of-line blocking: a 10-token request doesn
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** What is the difference between TTFT and TPS, and why do different use cases care about different metrics?
Answer TTFT (Time To First Token): latency from request arrival until the first output token is returned. Driven by prefill โ€” processing all prompt tokens in a single parallel forward pass. TTFT scales with prompt_length ร— compute_per_token. TPS (Tokens Per Second): throughput of the decode phase โ€” how many output tokens the system generates per second. Decode is autoregressive (one token at a time) and memory-bandwidth bound. Different use cases prioritize differently: chat applications care most about TTFT (users perceive latency from request to first word); document generation or batch pipelines care about TPS (total output throughput). A system with fast prefill but slow decode has great TTFT but poor TPS. You can optimize them independently: prefill benefits from compute, decode benefits from quantization and memory bandwidth.
### โ˜…โ˜…โ˜… _(Databricks, OpenAI)_ **Q:** How does PagedAttention reduce KV cache memory waste from ~60% to under 4%?
Answer Traditional KV cache pre-allocates a contiguous memory block for max_seq_len per request at request arrival. For a 2048-token max context, a request that only uses 100 tokens wastes 95% of its allocation. Fragmentation also means you can
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** Design an LLM serving system for 1000 QPS with p99 latency < 500ms for Llama-70B.
Answer Start with hardware sizing: Llama-70B in FP16 = 140GB weights. Minimum 2ร—A100-80GB per replica. At ~1000 tok/s throughput per 2-GPU replica (vLLM, continuous batching), estimate throughput need: 1000 QPS ร— ~50 tokens/request = 50,000 tok/s โ†’ need ~50 replicas = 100 A100s. For p99 < 500ms with ~50 output tokens, decode must run at โ‰ฅ100 tok/s per request โ€” feasible. Architecture: Load balancer โ†’ API gateway with request queueing (prevent overload). Each model server: vLLM with continuous batching, PagedAttention, FP8 quantization (saves ~50% memory, improves throughput). Horizontal scaling with consistent hashing for KV cache reuse. Monitoring: TTFT, TPS, GPU utilization, queue depth. Autoscaling on queue depth. Prefill/decode disaggregation: route long prompts to prefill-optimized nodes (more compute), short decode to memory-optimized nodes โ€” reduces p99 tail from prompt length variance.
### โ˜…โ˜…โ˜… _(Anthropic, Databricks)_ **Q:** Compare prefill-disaggregated architectures (Splitwise, DistServe) vs unified serving. When does disaggregation win?
Answer Unified serving: every GPU handles both prefill and decode for every request. Simple but creates resource contention โ€” prefill is compute-bound (wants tensor parallelism, high batch compute), decode is memory-bandwidth-bound (wants large KV cache, low latency per step). The two phases fight for the same GPU. Prefill-disaggregated: separate prefill servers (run full parallel forward pass on the prompt, compute-optimized with tensor parallelism) and decode servers (run iterative decoding with large KV cache). After prefill, KV cache is transferred to decode server via NVLink/InfiniBand. Disaggregation wins when: (1) request mix has high variance in prompt lengths โ€” long prompts spike compute and hurt decode latency in unified, (2) you need tight p99 SLAs on TTFT, (3) prefill load is predictable and bursty. Downside: KV transfer overhead (~10-50ms for long contexts), more complex routing, harder to scale. Splitwise (2024) showed 1.4x throughput improvement with 1.5x cost reduction at high load.
## Further Reading - [Efficient Memory Management for Large Language Model Serving with PagedAttention](https://arxiv.org/abs/2309.06180) Kwon et al. 2023 โ€” the vLLM paper introducing PagedAttention (virtual-memory paging for KV cache). The paper reports 2โ€“4ร— higher throughput than prior systems (FasterTransformer, Orca) at the same latency; much larger headline numbers seen elsewhere depend on the specific static-batching baseline being compared against. - [Orca: A Distributed Serving System for Transformer-Based Generative Models](https://www.usenix.org/conference/osdi22/presentation/yu) Yu et al. 2022 โ€” the continuous batching paper (Orca), showing iteration-level scheduling that keeps GPUs busy instead of waiting for the longest sequence. - [Andrej Karpathy โ€” Let](https://www.youtube.com/watch?v=zduSFxRajkE) Karpathy - [NVIDIA TensorRT-LLM Documentation](https://nvidia.github.io/TensorRT-LLM/) Official docs for TensorRT-LLM โ€” NVIDIA - [Splitwise: Efficient Generative Inference with Model Splitting](https://arxiv.org/abs/2311.18677) Patel et al. 2023 โ€” separates prefill and decode across GPU clusters for optimal hardware utilization - [SGLang: Efficient Execution of Structured Language Model Programs](https://arxiv.org/abs/2312.07104) Zheng et al. 2023 โ€” RadixAttention for KV cache reuse across requests, 5x throughput on multi-turn workloads ## Related KV Cache & Memory ยท Flash Attention ยท Sampling & Decoding ยท Quantization ยท Speculative Decoding --- --- title: "Mixture of Experts" part: "Architectures" number: 28 emoji: "๐Ÿงฉ" subtitle: "More parameters, same compute โ€” the secret behind DeepSeek" tags: ["architectures", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿงฉ Mixture of Experts > More parameters, same compute โ€” the secret behind DeepSeek > [!question] Key Question > DeepSeek-V3 has 671B params but each token only uses 37B โ†’ Vision Transformers & CLIP ## Contents - Expert Routing Simulator - MoE Layer Architecture - The Intuition - Step-by-Step Derivation - Break It โ€” See What Happens - Real-World Numbers ## Key Insights > [!tip] Insight > MoE is not about making models bigger โ€” it's about decoupling capacity from compute. You store more knowledge without paying more FLOPs per token.{" "} The scaling laws show MoE achieves the same loss as a dense model with 2-4x fewer training FLOPs. > [!tip] Insight > Only the top-k gating weights are non-zero. The selected weights are renormalized to sum to 1, ensuring the output scale is consistent regardless of which experts are chosen. > [!tip] Insight > The trend is toward more experts with finer granularity. Switch Transformer (2021) showed top-1 with many experts works. DeepSeek-V3 (2024) uses 256 fine-grained experts with top-8 plus a shared expert โ€” maximizing specialization while maintaining routing diversity. ## Code Examples ```python class MoELayer(nn.Module): def __init__(self, d_model, d_ff, n_experts=8, top_k=2): super().__init__() self.experts = nn.ModuleList([FFN(d_model, d_ff) for _ in range(n_experts)]) self.gate = nn.Linear(d_model, n_experts, bias=False) self.top_k = top_k def forward(self, x): scores = F.softmax(self.gate(x), dim=-1) # [batch, seq, n_experts] topk_scores, topk_idx = scores.topk(self.top_k, dim=-1) topk_scores = topk_scores / topk_scores.sum(dim=-1, keepdim=True) out = torch.zeros_like(x) for k in range(self.top_k): expert_idx = topk_idx[..., k] for i, expert in enumerate(self.experts): mask = expert_idx == i if mask.any(): out[mask] += topk_scores[..., k:k+1][mask] * expert(x[mask]) return out ``` ```python # MoE routing: top-k expert selection with load balancing loss import torch import torch.nn as nn import torch.nn.functional as F def moe_forward(x, gate, experts, top_k=2, balance_coeff=0.01): # x: (batch * seq_len, d_model) scores = F.softmax(gate(x), dim=-1) # (N, n_experts) topk_scores, topk_idx = scores.topk(top_k, dim=-1) topk_scores /= topk_scores.sum(dim=-1, keepdim=True) # renormalize # Load balancing loss: penalize uneven routing f_i = torch.zeros(len(experts), device=x.device) for i in range(len(experts)): f_i[i] = (topk_idx == i).float().mean() # fraction routed to expert i P_i = scores.mean(dim=0) # avg router prob per expert balance_loss = balance_coeff * len(experts) * (f_i * P_i).sum() # Weighted sum of selected expert outputs out = torch.zeros_like(x) for k in range(top_k): for i, expert in enumerate(experts): mask = topk_idx[:, k] == i if mask.any(): out[mask] += topk_scores[mask, k:k+1] * expert(x[mask]) return out, balance_loss ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Why does MoE achieve better performance per FLOP than dense models?
Answer MoE decouples total parameters from per-token compute. Each token only activates k experts (typically 2 out of 8+), so the model stores far more knowledge in its parameters without proportionally increasing inference cost. A 46.7B-parameter Mixtral 8x7B uses only 12.9B parameters per token โ€” comparable FLOPs to a 13B dense model but with the capacity of a much larger one. This works because different experts specialize in different input patterns, so total model capacity grows with expert count while compute stays fixed.
### โ˜…โ˜…โ˜† _(Google)_ **Q:** Explain the routing mechanism in MoE โ€” how does top-k expert selection work?
Answer The router is a learned linear layer W_g that maps each token
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** What is the load balancing problem in MoE and how is it solved?
Answer Without intervention, routers tend to collapse โ€” sending most tokens to a small subset of experts while others go unused (expert collapse). This wastes parameters and reduces effective model capacity. The solution is an auxiliary load balancing loss: L_balance = alpha * N * sum(f_i * P_i), where f_i is the fraction of tokens routed to expert i, and P_i is the average routing probability for expert i. This loss penalizes uneven distributions. The coefficient alpha (typically 0.01-0.1) trades off between load balance and routing quality. Additionally, capacity factors cap the maximum tokens per expert per batch, dropping overflow tokens to maintain compute predictability.
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** DeepSeek-V3 has 671B total parameters but only 37B active per token. Explain the architecture.
Answer DeepSeek-V3 uses a fine-grained MoE architecture with 256 routed experts and 1 shared expert per MoE layer. Each token activates 8 of the 256 routed experts plus the shared expert. The shared expert processes every token (providing a baseline representation), while routed experts specialize. With 61 MoE layers, the total parameter count reaches 671B, but active parameters per token are only ~37B. DeepSeek-V3 also uses an auxiliary-loss-free load balancing strategy โ€” instead of the standard auxiliary loss, it adds a bias term to expert routing scores that is dynamically adjusted to balance load, avoiding the training instability that auxiliary losses can cause.
### โ˜…โ˜…โ˜† _(Meta)_ **Q:** Compare dense and MoE models at equal compute budget. What are the tradeoffs?
Answer At equal training FLOPs, MoE models consistently outperform dense models on benchmarks โ€” the Scaling Laws for MoE paper shows MoE achieves the same loss as a dense model with 2-4x fewer FLOPs. However, tradeoffs exist: (1) Memory โ€” MoE requires storing all expert parameters, so a 46.7B MoE needs more memory than a 13B dense model despite similar FLOPs. (2) Inference โ€” expert parallelism across devices adds communication overhead; batch sizes must be large enough to amortize. (3) Fine-tuning โ€” MoE models can be harder to fine-tune as routing patterns may shift. (4) Serving โ€” all-to-all communication patterns for expert routing don
### โ˜…โ˜…โ˜… _(Google)_ **Q:** What is expert collapse and how do auxiliary losses prevent it?
Answer Expert collapse occurs when the router learns to send most or all tokens to a few experts, leaving others untrained. This creates a positive feedback loop: popular experts get more gradient updates, become better, and attract even more tokens. Auxiliary losses break this loop by adding a penalty proportional to the product of (fraction of tokens routed to expert i) and (average router probability for expert i). When an expert gets too many tokens, f_i increases, raising the loss. The gradient pushes the router to distribute tokens more evenly. The key design insight: using the product f_i * P_i rather than just f_i creates a differentiable signal โ€” f_i alone involves discrete routing decisions that can
### โ˜…โ˜…โ˜… _(Meta)_ **Q:** How does Mixtral
Answer Switch Transformer uses top-1 routing: each token goes to exactly one expert, maximizing sparsity and simplicity. This requires a capacity factor (typically 1.0-1.5x) to handle load imbalance โ€” excess tokens are dropped. Mixtral uses top-2 routing: each token goes to 2 of 8 experts, with outputs weighted by the softmax gating values. Top-2 provides better performance because each token gets a blended representation from two specialists, and the model can express richer combinations. But it uses 2x the compute per token. Switch Transformer
## Further Reading - [Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer](https://arxiv.org/abs/1701.06538) Shazeer et al. 2017 โ€” the original MoE paper introducing sparsely-gated expert routing - [Mixtral of Experts](https://arxiv.org/abs/2401.04088) Mistral AI 2024 โ€” open-weight MoE model with 8 experts per layer, top-2 routing - [DeepSeek-V3 Technical Report](https://arxiv.org/abs/2412.19437) DeepSeek 2024 โ€” 671B MoE with auxiliary-loss-free load balancing and multi-token prediction - [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity](https://arxiv.org/abs/2101.03961) Fedus et al. 2021 โ€” top-1 routing with capacity factor and auxiliary balance loss; showed MoE scales to 1T+ params - [Lilian Weng โ€” How to Train Really Large Models on Many GPUs](https://lilianweng.github.io/posts/2021-09-25-train-large/) Covers expert parallelism, pipeline parallelism, and how MoE fits into distributed training strategies - [Unified Scaling Laws for Routed Language Models](https://arxiv.org/abs/2202.01169) Clark et al. 2022 โ€” scaling laws specific to MoE: how performance scales with number of experts, active params, and total params ## Related Vision Transformers & CLIP ยท Multimodal LLMs ยท Reasoning Models ยท Verifiers & Process Reward ยท Diffusion Basics --- --- title: "Vision Transformers & CLIP" part: "Architectures" number: 29 emoji: "๐Ÿ‘๏ธ" subtitle: "Patch embeddings, contrastive learning, zero-shot classification" tags: ["architectures", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ‘๏ธ Vision Transformers & CLIP > Patch embeddings, contrastive learning, zero-shot classification > [!question] Key Question > Split a photo into 196 patches and a Transformer sees it as text โ† Mixture of Experts | โ†’ Multimodal LLMs ## Key Insights > [!tip] Insight > CNNs bake in assumptions about images (local filters, translation equivariance). ViT assumes nothing โ€” it must learn everything from data. This makes ViT data-hungry but ultimately more powerful: with enough data, no inductive bias beats the wrong inductive bias. > [!tip] Insight > The evolution: ViT proved transformers work for vision (2020). CLIP proved language supervision beats labels (2021). DINOv2 proved self-supervised ViT features rival supervised ones (2023). SigLIP simplified contrastive training. Today, ViT is the default vision backbone for multimodal models. ## Code Examples ```python import torch import torch.nn as nn class PatchEmbedding(nn.Module): def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768): super().__init__() self.num_patches = (img_size // patch_size) ** 2 # 196 for 224/16 # Conv2d with kernel=stride=patch_size is equivalent to # splitting into patches and linearly projecting each one self.proj = nn.Conv2d( in_channels, embed_dim, kernel_size=patch_size, stride=patch_size ) self.cls_token = nn.Parameter(torch.randn(1, 1, embed_dim)) self.pos_embed = nn.Parameter(torch.randn(1, self.num_patches + 1, embed_dim)) def forward(self, x): # x: (B, 3, 224, 224) B = x.shape[0] x = self.proj(x) # (B, D, 14, 14) x = x.flatten(2).transpose(1, 2) # (B, 196, D) cls = self.cls_token.expand(B, -1, -1) # (B, 1, D) x = torch.cat([cls, x], dim=1) # (B, 197, D) x = x + self.pos_embed # add positional encoding return x # (B, 197, D) โ€” ready for transformer encoder ``` ```python # ViT patch embedding: Conv2d is equivalent to splitting + linear projection import torch import torch.nn as nn class PatchEmbed(nn.Module): def __init__(self, img_size=224, patch_size=16, in_ch=3, embed_dim=768): super().__init__() self.n_patches = (img_size // patch_size) ** 2 # 196 for 224/16 # kernel=stride=patch_size โ†’ each conv window = one patch, no overlap self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch_size, stride=patch_size) self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) self.pos_embed = nn.Parameter(torch.zeros(1, self.n_patches + 1, embed_dim)) def forward(self, x): # x: (B, 3, 224, 224) x = self.proj(x) # (B, D, 14, 14) x = x.flatten(2).transpose(1, 2) # (B, 196, D) cls = self.cls_token.expand(x.shape[0], -1, -1) x = torch.cat([cls, x], dim=1) + self.pos_embed # (B, 197, D) return x ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** How does ViT convert an image into tokens for a transformer?
Answer ViT splits the input image into fixed-size patches (typically 16x16 pixels). Each patch is flattened into a 1D vector and linearly projected into the model
### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** Explain CLIP
Answer CLIP jointly trains an image encoder and a text encoder to map images and their captions into a shared embedding space. For a batch of N (image, text) pairs, CLIP computes cosine similarity between all N^2 possible pairings. The InfoNCE loss maximizes similarity for the N correct pairs and minimizes it for the N^2 - N incorrect pairs. This is applied symmetrically: image-to-text and text-to-image. For zero-shot classification, you encode the image and encode text prompts like
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Why does ViT need large datasets but CNNs don
Answer CNNs have strong inductive biases built in: local connectivity (each filter sees a small patch), weight sharing (same filter slides across the image), and translation equivariance. These biases encode prior knowledge about images, so CNNs learn efficiently from smaller datasets. ViT lacks these biases โ€” self-attention is global from the start, and the model must learn spatial relationships from data alone. With small datasets (like ImageNet-1K alone), ViT underperforms CNNs. But with large-scale pre-training (ImageNet-21K or JFT-300M with 300M images), ViT surpasses CNNs because the lack of inductive bias becomes an advantage โ€” the model isn
### โ˜…โ˜…โ˜… _(Meta, OpenAI)_ **Q:** Compare CLIP
Answer Supervised classification trains a model on labeled examples from a fixed set of classes. CLIP
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** What are the resolution/token tradeoffs in vision transformers?
Answer Patch size directly controls the resolution-token tradeoff. For a 224x224 image: 16x16 patches = 196 tokens, 14x14 patches = 256 tokens, 32x32 patches = 49 tokens. Smaller patches capture finer detail but produce more tokens, increasing compute quadratically in self-attention. Production models handle this via: (1) dynamic resolution โ€” resize to multiple supported resolutions based on aspect ratio; (2) image tiling โ€” split high-res images into crops, each encoded separately; (3) token compression โ€” use a perceiver resampler or pooling to reduce visual tokens from hundreds to a fixed number (Flamingo uses 64 tokens regardless of resolution). The tension: OCR and diagram understanding need high resolution (many tokens), but conversational use cases waste context budget on unnecessary visual detail.
### โ˜…โ˜…โ˜… _(Meta, Google)_ **Q:** How does DINOv2 differ from CLIP, and what are its advantages for downstream tasks?
Answer DINOv2 (Meta, 2023) is a self-supervised ViT trained without any text supervision โ€” it learns visual features purely from images using a self-distillation objective (student-teacher framework with momentum). Unlike CLIP, which aligns vision and language, DINOv2 learns rich visual representations that excel at dense prediction tasks (segmentation, depth estimation) because it retains fine-grained spatial information. DINOv2 features work well as frozen backbones: just train a linear head on top. It achieves state-of-the-art on many vision benchmarks without any labeled data during pre-training. The tradeoff: no zero-shot text-based classification (that requires CLIP-style language alignment), but stronger pixel-level features.
## Related Mixture of Experts ยท Multimodal LLMs ยท Reasoning Models ยท Verifiers & Process Reward ยท Diffusion Basics --- --- title: "Multimodal LLMs" part: "Architectures" number: 30 emoji: "๐Ÿ–ผ๏ธ" subtitle: "How GPT-4V, Claude, and Gemini see images" tags: ["architectures", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ–ผ๏ธ Multimodal LLMs > How GPT-4V, Claude, and Gemini see images > [!question] Key Question > GPT-4V sees your image as 85 extra tokens in the prompt โ† Vision Transformers & CLIP | โ†’ Reasoning Models ## Key Insights > [!tip] Insight > The common pattern in open models (LLaVA, and likely the closed systems GPT-4V/Claude/Gemini) is: encode the image into tokens, project into the LLM's space, and let self-attention handle the rest. The innovation is in the details โ€” how you handle resolution, how many tokens you use, and whether you train the vision encoder or freeze it. > [!tip] Insight > Cross-attention lets each text token selectively attend to relevant image regions. Early fusion (concatenating all tokens) lets both modalities attend to each other at every layer โ€” more expressive but more expensive. > [!tip] Insight > The trend: early fusion is winning over cross-attention. LLaVA proved a linear projection is enough. Production models focus on resolution handling (tiling, dynamic resize) and training data quality (visual instruction tuning) rather than architectural complexity. The vision encoder is typically frozen CLIP/SigLIP. ## Code Examples ```python import torch import torch.nn as nn class MultimodalLLM(nn.Module): def __init__(self, vision_encoder, llm, vision_dim=1024, llm_dim=4096): super().__init__() self.vision_encoder = vision_encoder # frozen CLIP ViT self.llm = llm # pre-trained LLM self.visual_proj = nn.Linear(vision_dim, llm_dim) # the bridge def forward(self, image, text_input_ids, text_embeds): # Step 1: Encode image into visual features with torch.no_grad(): visual_features = self.vision_encoder(image) # (B, 576, Dv) # Step 2: Project visual features into LLM embedding space visual_tokens = self.visual_proj(visual_features) # (B, 576, Dt) # Step 3: Interleave with text embeddings # text_embeds: (B, T, Dt) from LLM's embedding layer multimodal_input = torch.cat([ visual_tokens, # visual tokens first text_embeds # then text tokens ], dim=1) # (B, 576 + T, Dt) # Step 4: LLM processes everything via self-attention output = self.llm(inputs_embeds=multimodal_input) return output # next-token predictions over full sequence ``` ```python # CLIP-style contrastive loss (InfoNCE) import torch import torch.nn as nn import torch.nn.functional as F class CLIPContrastiveLoss(nn.Module): def __init__(self, temperature: float = 0.07): super().__init__() self.temperature = nn.Parameter(torch.tensor(temperature)) def forward(self, image_features: torch.Tensor, text_features: torch.Tensor): # Normalize embeddings to unit sphere img = F.normalize(image_features, dim=-1) # (B, D) txt = F.normalize(text_features, dim=-1) # (B, D) # Cosine similarity matrix, scaled by temperature logits = (img @ txt.T) / self.temperature # (B, B) # Symmetric InfoNCE: diagonal = positive pairs labels = torch.arange(len(logits), device=logits.device) loss_i2t = F.cross_entropy(logits, labels) loss_t2i = F.cross_entropy(logits.T, labels) return (loss_i2t + loss_t2i) / 2 ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, OpenAI, Anthropic)_ **Q:** How do multimodal LLMs like GPT-4V process images alongside text?
Answer Multimodal LLMs typically use a vision encoder (like ViT) to convert images into a sequence of visual tokens. These visual tokens are projected into the LLM
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** What is the difference between cross-attention and early fusion for vision-language models?
Answer Cross-attention keeps visual and text representations in separate streams, with the text model attending to visual features via cross-attention layers (Q from text, K/V from vision). This is used in Flamingo and similar architectures. Early fusion concatenates visual and text tokens into a single sequence and processes them through shared self-attention layers โ€” used by LLaVA and believed to be used by GPT-4V and Gemini (architecture not publicly confirmed for closed-source models). Cross-attention is more parameter-efficient (only adds cross-attention layers) and allows caching the visual features, but limits cross-modal interaction to specific layers. Early fusion allows full bidirectional interaction at every layer but costs more compute (quadratic in total sequence length) and can
### โ˜…โ˜…โ˜† _(Meta)_ **Q:** How does LLaVA achieve visual instruction following?
Answer LLaVA (Large Language and Vision Assistant) connects a pre-trained CLIP ViT-L/14 vision encoder to a pre-trained LLM (Vicuna) via a simple linear projection layer. Training has two stages: (1) Feature alignment pre-training โ€” train only the linear projection on 595K image-caption pairs, aligning visual features to the LLM
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** What are the resolution/token tradeoffs in multimodal LLMs, and how do production models handle them?
Answer Patch size directly controls the resolution-token tradeoff. For a 224x224 image with 16x16 patches: 196 tokens. For a high-res 1024x1024 image: 4096 tokens โ€” consuming significant context window. Production models handle this via: (1) image tiling โ€” split high-res images into crops (e.g., GPT-4V uses 512x512 tiles, ~85 tokens each); (2) dynamic resolution โ€” resize based on aspect ratio to minimize wasted pixels; (3) token compression โ€” use a perceiver resampler or pooling to reduce visual tokens to a fixed count (Flamingo: 64 tokens regardless of resolution); (4) multi-scale encoding โ€” encode at multiple resolutions and concatenate. The tension: OCR and diagram understanding need high resolution (many tokens), but conversational use cases waste context on unnecessary visual detail.
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic, Google)_ **Q:** Compare the architectural approaches of GPT-4V, Claude Vision, Gemini, and LLaVA. What are the key design tradeoffs?
Answer LLaVA: simplest approach โ€” CLIP ViT + linear projection + LLM, early fusion via token concatenation. Fast to train, open-source, but limited by frozen vision encoder quality. GPT-4V: believed to use early fusion with high-res tiling (~85 tokens per 512x512 tile), handles arbitrary resolution, strong on OCR and spatial reasoning (architecture not publicly confirmed). Claude Vision: accepts images up to ~1500 tokens, particularly strong on documents, charts, and structured visual content (internal architecture not disclosed). Gemini: described as natively multimodal โ€” trained on image/audio/video from scratch (not bolted-on), which may allow deeper cross-modal representations, though architectural details are not publicly confirmed. Key tradeoffs: (1) bolted-on vs native multimodal training, (2) cross-attention vs early fusion, (3) fixed vs dynamic resolution, (4) number of visual tokens (cost vs detail). The field is converging on early fusion with dynamic resolution.
## Further Reading - [Visual Instruction Tuning (LLaVA)](https://arxiv.org/abs/2304.08485) Liu et al. 2023 โ€” visual instruction tuning connecting a vision encoder to an LLM - [Flamingo: a Visual Language Model for Few-Shot Learning](https://arxiv.org/abs/2204.14198) Alayrac et al. 2022 โ€” few-shot multimodal learning with interleaved image-text inputs - [GPT-4V System Card](https://cdn.openai.com/papers/GPTV_System_Card.pdf) OpenAI 2023 โ€” safety evaluations and capabilities of GPT-4 with vision - [LLaVA-OneVision: Easy Visual Task Transfer](https://arxiv.org/abs/2408.03326) Li et al. 2024 โ€” single model handles single-image, multi-image, and video tasks; shows how to unify vision tasks with one instruction-tuned model - [InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks](https://arxiv.org/abs/2312.14238) Chen et al. 2023 โ€” scaling ViT to 6B parameters and aligning with LLMs; shows dynamic resolution handling for OCR-heavy tasks - [Lilian Weng โ€” Generalized Visual Language Models](https://lilianweng.github.io/posts/2022-06-09-vlm/) Comprehensive overview of vision-language model architectures โ€” from dual encoders (CLIP) to decoder-only multimodal LLMs ## Related Mixture of Experts ยท Vision Transformers & CLIP ยท Reasoning Models ยท Verifiers & Process Reward ยท Diffusion Basics --- --- title: "Reasoning Models" part: "Architectures" number: 31 emoji: "๐Ÿ’ญ" subtitle: "Chain-of-thought, o1, DeepSeek-R1, test-time compute" tags: ["architectures", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ’ญ Reasoning Models > Chain-of-thought, o1, DeepSeek-R1, test-time compute > [!question] Key Question > DeepSeek-R1 discovered chain-of-thought without being taught โ† Multimodal LLMs | โ†’ Verifiers & Process Reward ## Key Insights > [!tip] Insight > Think of ORM as grading an exam by the final answer only. PRM is like a teacher who checks each line of work โ€” partial credit for correct steps, immediate feedback on mistakes. PRM catches "right answer, wrong reasoning" cases that ORM misses. > [!tip] Insight > Snell et al. (2024): "A smaller model with test-time compute can match a 14x larger model" โ€” but only on medium-difficulty problems. The optimal strategy adapts compute allocation per question. > [!tip] Insight > The reasoning revolution: CoT gives a large gain on math with zero training. RL-trained reasoning (o1 full model) pushes AIME 2024 from ~13% (GPT-4o) to{" "} ~83% (o1 cons@64). And DeepSeek-R1-Zero showed you don't even need human CoT data โ€” reasoning emerges from outcome-based RL alone (DeepSeek-R1 itself adds a cold-start SFT stage before RL). ## Code Examples ```python def cot_prompt(question: str) -> str: """Add chain-of-thought instruction to a question.""" return f"""{question} Let's think step by step: 1. First, identify what we need to find. 2. Break the problem into smaller parts. 3. Solve each part. 4. Combine and verify the answer.""" def process_reward_score( prm_model, reasoning_steps: list[str], ) -> float: """Score a reasoning trace with a Process Reward Model. Each step is scored conditioned on previous steps. Overall score = product of step-level probabilities. """ score = 1.0 context = [] for step in reasoning_steps: context.append(step) # PRM predicts P(step is correct | previous steps) step_score = prm_model.score(context) # -> float in [0, 1] score *= step_score return score def best_of_n_with_prm( model, prm_model, prompt: str, n: int = 8 ) -> str: """Generate N reasoning traces, return the best one.""" traces = [model.generate(prompt) for _ in range(n)] scores = [ process_reward_score(prm_model, trace.steps) for trace in traces ] best_idx = scores.index(max(scores)) return traces[best_idx].final_answer ``` ```python # GRPO-style policy update (Group Relative Policy Optimization) import torch def grpo_loss(log_probs_new, log_probs_old, rewards, clip_eps=0.2): """ log_probs_new / log_probs_old: (G,) log-probs of each output under new/old policy rewards: (G,) scalar reward per output """ # Normalize rewards within the group -> advantage adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8) # (G,) # PPO-style clipped ratio ratio = (log_probs_new - log_probs_old).exp() # (G,) clipped = ratio.clamp(1 - clip_eps, 1 + clip_eps) # Policy gradient loss (negative because we maximize reward) loss = -torch.min(ratio * adv, clipped * adv).mean() return loss ``` ## Interview Questions ### โ˜…โ˜†โ˜† _(Google, OpenAI)_ **Q:** What is chain-of-thought prompting and why does it improve reasoning accuracy?
Answer Chain-of-thought (CoT) prompting (Wei et al., 2022) instructs the model to produce intermediate reasoning steps before the final answer. It works because: (1) it decomposes complex problems into simpler sub-problems the model can solve individually, (2) it allocates more compute (tokens) to harder problems โ€” essentially test-time compute scaling, (3) it makes errors visible and correctable in intermediate steps. On GSM8K, CoT improved accuracy from ~58% to ~83% for PaLM 540B. The key insight: the model already
### โ˜…โ˜…โ˜† _(OpenAI)_ **Q:** How does OpenAI
Answer Standard CoT is a prompting technique โ€” you add
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** Explain DeepSeek-R1
Answer DeepSeek-R1-Zero was trained with pure RL (GRPO) on the base model โ€” no SFT, no human-written chain-of-thought examples. The only signal was outcome correctness (e.g., math answers). Remarkably, the model spontaneously discovered chain-of-thought reasoning, self-verification, and even reflection (
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** What is the difference between outcome reward models (ORM) and process reward models (PRM)?
Answer ORM scores only the final answer: correct = 1, wrong = 0. PRM scores each intermediate reasoning step. Example: for a 5-step math proof, ORM gives one score at the end; PRM gives a score after each step. PRM advantages: (1) denser reward signal โ€” the model knows which step went wrong, not just that the final answer is wrong, (2) prevents reward hacking โ€” a correct final answer via flawed reasoning is still penalized, (3) enables search โ€” you can prune bad reasoning paths early.
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** Explain GRPO (Group Relative Policy Optimization) and how it computes advantages without a critic.
Answer GRPO (DeepSeek) eliminates the critic/value network required by PPO. For each prompt, GRPO samples a group of G outputs {y_1, ..., y_G} from the current policy, scores them with a reward function r(y_i), then computes the advantage of each output relative to the group: A_i = (r(y_i) - mean(r)) / std(r). Outputs better than the group average get positive advantage (reinforced), worse ones get negative (suppressed). Benefits: (1) no value network to train โ€” saves memory and compute, (2) natural normalization across different prompts, (3) more stable than PPO because advantages are always zero-mean. The policy update uses clipped importance ratios like PPO, with a KL penalty to the reference policy.
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** When should you scale test-time compute (think longer) vs. use a bigger model? What is the tradeoff?
Answer Snell et al. (2024) showed that test-time compute scaling is most effective for problems of medium difficulty โ€” problems the model can solve with more thinking but not trivially. For easy problems, thinking longer wastes tokens with no accuracy gain. For problems beyond the model
## Further Reading - [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903) Wei et al. (2022). The foundational paper showing step-by-step prompting dramatically improves reasoning. - [Let](https://arxiv.org/abs/2305.20050) Lightman et al. (2023). Process reward models outperform outcome reward models by scoring each reasoning step. - [Scaling LLM Test-Time Compute Optimally](https://arxiv.org/abs/2408.03314) Snell et al. (2024). When to think longer vs. use a bigger model. - [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://arxiv.org/abs/2501.12948) DeepSeek (2025). GRPO discovers reasoning without human CoT data. - [OpenAI o1 System Card](https://openai.com/index/openai-o1-system-card/) Technical details on RL-trained reasoning and safety evaluations. - [Lilian Weng โ€” Prompt Engineering](https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/) Covers CoT, self-consistency, tree-of-thought, and least-to-most prompting with empirical comparisons across benchmarks. - [Tree of Thoughts: Deliberate Problem Solving with Large Language Models](https://arxiv.org/abs/2305.10601) Yao et al. (2023). Structured search over reasoning paths using BFS/DFS โ€” the algorithmic foundation for o1-style test-time search. - [Self-Consistency Improves Chain of Thought Reasoning](https://arxiv.org/abs/2203.11171) Wang et al. (2022). Sample multiple reasoning paths and take majority vote โ€” a verifier-free approach to test-time scaling. - [The Illustrated DeepSeek-R1](https://newsletter.languagemodels.co/p/the-illustrated-deepseek-r1) Visual walkthrough of DeepSeek-R1 ## Related Mixture of Experts ยท Vision Transformers & CLIP ยท Multimodal LLMs ยท Verifiers & Process Reward ยท Diffusion Basics --- --- title: "Verifiers & Process Reward" part: "Architectures" number: 32 emoji: "โœ…" subtitle: "PRMs, best-of-N, self-consistency โ€” when to think longer" tags: ["architectures", "ml", "ai-engineering", "interview-prep", "transformer"] --- # โœ… Verifiers & Process Reward > PRMs, best-of-N, self-consistency โ€” when to think longer > [!question] Key Question > Score each reasoning step, not just the final answer โ† Reasoning Models | โ†’ Diffusion Basics ## Key Insights > [!tip] Insight > ORM gives Trajectory #2 a perfect 1.0 score โ€” same as Trajectory #1 with correct reasoning. If you train on ORM signals, the model learns that wrong reasoning is fine as long as you get lucky. PRM catches this by scoring each step independently. > [!tip] Insight > PRM ranks Trajectory #1 (correct reasoning) highest at 0.97, while Trajectory #2 (lucky answer) scores only 0.31. The verifier sees{" "} how you got there, not just where you ended up. > [!tip] Insight > Think of best-of-N as brute-force search and tree search as informed search. Best-of-N generates full solutions blindly. Tree search uses the PRM as a heuristic to focus compute on promising reasoning paths โ€” like A* vs random sampling. > [!tip] Insight > Quality scales with , not{" "} . Doubling N from 64 to 128 gives far less improvement than doubling from 1 to 2. This is why brute-force best-of-N eventually loses to smarter search. > [!tip] Insight > This only works when 0.5" /> โ€” each sample must be better than a coin flip. If the model consistently gets a problem wrong (systematic error), more samples won't help. Self-consistency amplifies correctness, not just confidence.{" "} Wang et al. showed N=40 samples raises GSM8K from ~56% to ~74%. > [!tip] Insight > The trend: inference compute is becoming a first-class scaling axis alongside model size and training data.{" "} o1 reached 94.8% on MATH and 83.3% on AIME (consensus@64) {" "} โ€” and R1 show that learning to allocate test-time compute adaptively (think longer on hard problems) is more efficient than uniformly scaling N. ## Code Examples ```python def best_of_n(prompt, generator, reward_model, tokenizer, n=8): """Generate N completions, return the one with highest reward.""" # Generate N candidate completions inputs = tokenizer(prompt, return_tensors="pt") outputs = generator.generate( **inputs, num_return_sequences=n, do_sample=True, temperature=0.7, max_new_tokens=512, ) completions = tokenizer.batch_decode(outputs, skip_special_tokens=True) # Score each completion with the reward model scores = [] for completion in completions: reward_input = tokenizer(completion, return_tensors="pt") score = reward_model(**reward_input).logits.squeeze() scores.append(score.item()) # Return the highest-scoring completion best_idx = max(range(n), key=lambda i: scores[i]) return completions[best_idx], scores[best_idx] def prm_score(steps, prm_model, tokenizer): """Score a reasoning trajectory step-by-step with a PRM.""" step_scores = [] context = "" for step in steps: context += step + " " inputs = tokenizer(context, return_tensors="pt") # PRM outputs P(correct) for the latest step score = torch.sigmoid(prm_model(**inputs).logits[:, -1]) step_scores.append(score.item()) # Trajectory score = product of step scores trajectory_score = 1.0 for s in step_scores: trajectory_score *= s return trajectory_score, step_scores ``` ```python # Process Reward Model: token-level reward prediction head import torch import torch.nn as nn class ProcessRewardModel(nn.Module): """Attach a scalar reward head to a transformer for step-level scoring.""" def __init__(self, base_model, d_model: int): super().__init__() self.base = base_model self.reward_head = nn.Linear(d_model, 1) # P(step correct) def forward(self, input_ids, step_token_mask): """ input_ids: (B, T) full reasoning trace step_token_mask: (B, T) 1 at the last token of each step, 0 elsewhere Returns step_scores: (total_steps,) flat reward per step boundary (use step_token_mask to recover per-sequence grouping). """ hidden = self.base(input_ids).last_hidden_state # (B, T, D) logits = self.reward_head(hidden).squeeze(-1) # (B, T) # Boolean indexing flattens across batch โ†’ (total_steps,) step_scores = logits[step_token_mask.bool()] return torch.sigmoid(step_scores) ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** Explain the difference between ORM and PRM. When does PRM significantly outperform ORM?
Answer ORM (Outcome Reward Model) scores only the final answer โ€” right or wrong. PRM (Process Reward Model) scores each intermediate reasoning step. PRM significantly outperforms ORM on multi-step reasoning tasks (math, logic, code) because it can identify exactly where reasoning went wrong. A model might reach the correct answer through flawed reasoning (lucky cancellation of errors), and ORM would reward this, reinforcing bad reasoning patterns. PRM catches this by penalizing incorrect intermediate steps. Lightman et al. (2023) showed PRM solves 78.2% of MATH problems vs ORM
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** How does best-of-N sampling work? What are its computational tradeoffs?
Answer Best-of-N: generate N independent completions for a prompt, score each with a reward/verifier model, return the highest-scoring one. Computational cost scales linearly with N (N forward passes through the generator), but quality improves logarithmically โ€” you get diminishing returns. The expected quality of the best sample from N i.i.d. draws scales as E[max] ~ mu + sigma * sqrt(2 * ln(N)) for normal distributions. So doubling N from 64 to 128 gives much less improvement than 1 to 2. The verifier cost is cheap relative to generation (single forward pass per completion). Best-of-N is inference-only โ€” no training required โ€” making it the simplest test-time compute strategy.
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** What is self-consistency and how does it differ from best-of-N with a reward model?
Answer Self-consistency (Wang et al., 2022) generates N chain-of-thought reasoning paths, extracts the final answer from each, and takes a majority vote. No reward model needed โ€” the signal comes from agreement among independent samples. It differs from best-of-N in two ways: (1) it doesn
### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** When should you invest in test-time compute vs. training a bigger model?
Answer Test-time compute (generate more, search harder, verify more) is most valuable when: (1) the task has a clear verifier (math, code with tests, formal logic), (2) the base model already has the knowledge but makes execution errors, (3) you need to serve many different difficulty levels (easy questions get 1 sample, hard ones get 100). A bigger model is better when: (1) the task requires knowledge the small model doesn
### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** How does verifier-guided tree search work? Compare it to best-of-N.
Answer Verifier-guided tree search generates reasoning step-by-step, using a PRM to score each step and decide which branches to expand. At each step, generate K candidate next-steps, score them with the PRM, prune low-scoring branches, and expand promising ones. This is more compute-efficient than best-of-N because it prunes bad reasoning early instead of completing all N trajectories. Best-of-N wastes compute finishing clearly wrong reasoning chains. Tree search also explores more diverse reasoning paths by branching at decision points. The tradeoff: tree search requires a step-level PRM (harder to train than an ORM) and sequential generation (can
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** How would you train a Process Reward Model? What data do you need?
Answer Training a PRM requires step-level labels: for each reasoning step, is it correct or not? Three approaches: (1) Human annotation โ€” experts label each step (Lightman et al. used ~75K step-level labels on ~4.5K MATH solutions). Expensive but high quality. (2) Automated labels via Monte Carlo estimation โ€” from each step, sample many completions to the end. If completions from step k reach the correct answer at a high rate, step k is likely correct. This scales better but is noisy. (3) Outcome supervision bootstrapping โ€” train an ORM first, then use it to label steps by checking if removing a step changes the outcome score. The PRM is then trained as a classifier on (problem, steps_so_far) -> correctness_score, typically by appending a special token after each step and training the model to predict correct/incorrect at those positions.
## Further Reading - [Let](https://arxiv.org/abs/2305.20050) Lightman et al., 2023. The foundational PRM paper. Shows process supervision outperforms outcome supervision on MATH with ~75K step-level human labels (PRM800K dataset). - [Scaling LLM Test-Time Compute Optimally](https://arxiv.org/abs/2408.03314) Snell et al., 2024. Analyzes compute-optimal strategies for test-time scaling: when to search vs when to use a bigger model. - [OpenAI o1 Technical Report](https://openai.com/index/learning-to-reason-with-llms/) OpenAI, 2024. Demonstrates learned test-time compute allocation via internal chain-of-thought reasoning. - [Self-Consistency Improves Chain of Thought Reasoning](https://arxiv.org/abs/2203.11171) Wang et al., 2022. Majority vote over sampled reasoning paths โ€” simple, verifier-free, surprisingly effective. - [Improve Mathematical Reasoning in Language Models by Automated Process Supervision](https://arxiv.org/abs/2406.06592) Luo et al. 2024 โ€” Monte Carlo tree search to automatically generate step-level labels for PRM training without human annotation - [DeepSeek-R1: Incentivizing Reasoning Capability via Reinforcement Learning](https://arxiv.org/abs/2501.12948) DeepSeek 2025 โ€” outcome-based RL (GRPO) with no PRM; shows that dense process supervision is not always necessary when the reward signal is sufficiently clear - [Lilian Weng โ€” Self-Consistency and Process Rewards](https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/#self-consistency) Covers the spectrum from self-consistency (no verifier) through ORM to PRM โ€” useful for understanding the tradeoffs in test-time compute strategies ## Related Mixture of Experts ยท Vision Transformers & CLIP ยท Multimodal LLMs ยท Reasoning Models ยท Diffusion Basics --- --- title: "Diffusion Basics" part: "Architectures" number: 33 emoji: "๐ŸŽจ" subtitle: "DDPM, latent diffusion, DiT โ€” image generation from noise" tags: ["architectures", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐ŸŽจ Diffusion Basics > DDPM, latent diffusion, DiT โ€” image generation from noise > [!question] Key Question > DALL-E starts with pure static and denoises it into a painting โ† Verifiers & Process Reward ## Key Insights > [!tip] Insight > The genius of diffusion: the forward process is trivial (just add noise), which gives a simple training target (predict the noise). All the complexity lives in the neural network โ€” and we know how to scale those. > [!tip] Insight > This is a simple MSE loss. Sample {" "} from data, sample{" "} , sample{" "} , compute , predict{" "} , minimize MSE. > [!tip] Insight > The trend: U-Net โ†’ Transformer, pixel space โ†’ latent space,{" "} DDPM 1000 steps โ†’ rectified flow 20-50 steps . Each generation made diffusion faster and higher quality. DiT + latent + CFG is the current winning formula. ## Code Examples ```python def ddpm_training_step(model, x0, noise_schedule): """One DDPM training step โ€” predict the noise.""" batch_size = x0.shape[0] T = len(noise_schedule.alphas_cumprod) # 1. Sample random timesteps t = torch.randint(0, T, (batch_size,), device=x0.device) # 2. Sample noise epsilon = torch.randn_like(x0) # 3. Create noisy image: x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps alpha_bar_t = noise_schedule.alphas_cumprod[t][:, None, None, None] x_t = torch.sqrt(alpha_bar_t) * x0 + torch.sqrt(1 - alpha_bar_t) * epsilon # 4. Predict noise epsilon_pred = model(x_t, t) # 5. Simple MSE loss loss = F.mse_loss(epsilon_pred, epsilon) return loss ``` ```python # DDPM noise schedule and single denoising step import torch import torch.nn.functional as F def make_cosine_schedule(T: int = 1000): """Cosine beta schedule (Improved DDPM).""" steps = torch.arange(T + 1, dtype=torch.float64) f = torch.cos((steps / T + 0.008) / 1.008 * torch.pi / 2) ** 2 alphas_cumprod = f / f[0] betas = 1 - alphas_cumprod[1:] / alphas_cumprod[:-1] return betas.clamp(0, 0.999).float() def ddpm_reverse_step(model, x_t, t, betas): """One DDPM denoising step: x_t -> x_{t-1}. Uses the ฯƒ_tยฒ = ฮฒ_t variance choice from Ho et al., 2020.""" alpha_bar = (1 - betas[:t+1]).prod() alpha = 1 - betas[t] t_tensor = torch.tensor([t], device=x_t.device) eps_pred = model(x_t, t_tensor) # Predicted x_{t-1} mean mu = (x_t - betas[t] / (1 - alpha_bar).sqrt() * eps_pred) / alpha.sqrt() noise = torch.randn_like(x_t) if t > 0 else 0 sigma_t = betas[t].sqrt() # fixed-variance choice ฯƒ_t = sqrt(ฮฒ_t) return mu + sigma_t * noise ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** Explain the forward and reverse processes in DDPM. Why is the forward process fixed (not learned)?
Answer The forward process q(x_t|x_{t-1}) gradually adds Gaussian noise to data over T steps until it becomes pure noise. It
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Why does noise prediction work? Intuitively, why predict the noise rather than the clean image directly?
Answer Predicting noise works because of reparameterization: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * epsilon. Since x_t and the schedule are known, predicting epsilon is mathematically equivalent to predicting x_0 โ€” they
### โ˜…โ˜…โ˜† _(Meta, Google)_ **Q:** Why does Stable Diffusion work in latent space instead of pixel space? What are the tradeoffs?
Answer Pixel-space diffusion on a 512x512 RGB image operates on 786K dimensions โ€” extremely expensive. Latent diffusion first trains a VAE to compress images into a latent space (e.g., 64x64x4 = 16K dimensions), then runs diffusion there. This gives a ~48x reduction in spatial dimensions. Benefits: (1) massively cheaper compute โ€” training and inference are orders of magnitude faster; (2) the VAE removes imperceptible high-frequency details, so the diffusion model focuses on semantic content; (3) enables higher resolution generation. Tradeoffs: (1) the VAE decoder introduces a quality ceiling โ€” fine details depend on VAE quality; (2) two-stage training is more complex; (3) artifacts from VAE quantization can appear in outputs. Despite tradeoffs, latent diffusion made high-quality image generation practical โ€” Stable Diffusion runs on consumer GPUs precisely because of this design.
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** How does DiT (Diffusion Transformer) differ from the U-Net architecture traditionally used in diffusion models?
Answer Traditional diffusion models use a U-Net: a convolutional encoder-decoder with skip connections and attention at low resolutions. DiT replaces this entirely with a standard Vision Transformer (ViT). The latent is patchified into tokens, timestep and class label are injected via adaptive layer norm (adaLN-Zero), and standard transformer blocks process the sequence. Key differences: (1) no inductive bias for spatial locality โ€” DiT learns spatial relationships purely from data via attention; (2) scales more predictably โ€” DiT follows clean scaling laws (more compute = better FID), while U-Net scaling is ad hoc; (3) leverages the transformer ecosystem โ€” FlashAttention, tensor parallelism, etc. DiT-XL/2 achieved state-of-the-art FID on ImageNet. The success of DiT confirmed that the architecture matters less than scale, and led to transformer-based systems like SD3 and Flux.
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** What is classifier-free guidance (CFG) and what happens when guidance scale is too low or too high?
Answer CFG steers generation toward a conditioning signal (e.g., text prompt) without a separate classifier. During training, the model randomly drops the conditioning (e.g., 10% of the time) so it learns both conditional and unconditional generation. At inference, the output is: eps_guided = eps_uncond + w * (eps_cond - eps_uncond), where w is the guidance scale. When w=0, pure unconditional generation. When w=1, pure conditional prediction (no amplification beyond the model
## Further Reading - [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) (Ho et al., 2020) โ€” the foundational DDPM paper that revived diffusion models for image generation. - [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) (Rombach et al., 2022) โ€” introduced latent diffusion, the architecture behind Stable Diffusion. - [Scalable Diffusion Models with Transformers (DiT)](https://arxiv.org/abs/2212.09748) (Peebles & Xie, 2023) โ€” replaced U-Net with a Vision Transformer, showing clean scaling laws for diffusion. - [Scaling Rectified Flow Transformers for High-Resolution Image Synthesis](https://arxiv.org/abs/2403.03206) (Esser et al., 2024) โ€” Stability's Stable Diffusion 3 paper: rectified flow training with MMDiT. FLUX.1 uses the same rectified-flow family but is a separate model from Black Forest Labs with no equivalent paper. - [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598) (Ho & Salimans, 2022) โ€” the CFG paper. Explains how to trade sample diversity for prompt fidelity without a separate classifier model. - [Lilian Weng โ€” What are Diffusion Models?](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/) Comprehensive deep-dive covering DDPM, score matching, DDIM, and the connection to stochastic differential equations. - [Yannic Kilcher โ€” DALLยทE 2 / Diffusion Models Explained (YouTube)](https://www.youtube.com/watch?v=fbLgFrlTnGU) Visual walkthrough of latent diffusion, CLIP guidance, and how modern text-to-image systems combine these ideas. - [The Illustrated Stable Diffusion](https://jalammar.github.io/illustrated-stable-diffusion/) Jay Alammar โ€” step-by-step visual breakdown of the full Stable Diffusion pipeline: text encoder, UNet denoiser, VAE decoder, and CLIP guidance. ## Related Mixture of Experts ยท Vision Transformers & CLIP ยท Multimodal LLMs ยท Reasoning Models ยท Verifiers & Process Reward --- --- title: "Prompt Engineering" part: "Applications" number: 34 emoji: "โœ๏ธ" subtitle: "System prompts, few-shot, structured output, tool schemas" tags: ["applications", "ml", "ai-engineering", "interview-prep", "transformer"] --- # โœ๏ธ Prompt Engineering > System prompts, few-shot, structured output, tool schemas > [!question] Key Question > Adding 'think step by step' improves GPT-4 math accuracy by 40% โ†’ Agents & ReAct ## Contents - Side-by-Side: Naive vs. Engineered Prompt - The Intuition - Token Cost Math - Break It โ€” See What Happens - Real-World Numbers ## Key Insights > [!tip] Insight > The difference: role assignment, explicit format constraints, and concrete requirements. The model has the same knowledge in both cases โ€” the engineered prompt just extracts it reliably. > [!tip] Insight > The prompt engineering hierarchy: system prompt constrains the space, few-shot examples show the pattern, CoT enables reasoning, tool_use enforces output structure. Layer them for maximum reliability. > [!tip] Insight > Prompt engineering is about ROI: each technique adds tokens (cost) but improves output quality. The sweet spot for most production systems is a well-crafted system prompt + 3 few-shot examples + tool_use for structured output + prompt caching. CoT only when reasoning is required. ## Code Examples ```python # --- Zero-shot --- messages = [ {"role": "system", "content": "You are a sentiment classifier."}, {"role": "user", "content": "Classify: 'This product is terrible.' โ†’ positive/negative"} ] # --- Few-shot --- messages = [ {"role": "system", "content": "Classify sentiment as positive or negative."}, {"role": "user", "content": "'I love it!' โ†’"}, {"role": "assistant", "content": "positive"}, {"role": "user", "content": "'Waste of money.' โ†’"}, {"role": "assistant", "content": "negative"}, {"role": "user", "content": "'This product is terrible.' โ†’"}, ] # --- Chain-of-Thought --- messages = [ {"role": "system", "content": "Think step by step before answering."}, {"role": "user", "content": """Q: If a train travels 120km in 2 hours, then 90km in 1.5 hours, what is the average speed? Let's think step by step."""}, ] # --- Structured Output with tool_use --- import anthropic client = anthropic.Anthropic() response = client.messages.create( model="claude-sonnet-4-20250514", max_tokens=1024, tools=[{ "name": "extract_contact", "description": "Extract contact info from text", "input_schema": { "type": "object", "properties": { "name": {"type": "string"}, "email": {"type": "string", "format": "email"}, "phone": {"type": "string"}, }, "required": ["name", "email"], }, }], tool_choice={"type": "tool", "name": "extract_contact"}, messages=[{ "role": "user", "content": "Extract: John Smith, john@example.com, 555-0123" }], ) # Returns structured tool input (strict mode required for guaranteed schema conformance) ``` ## Interview Questions ### โ˜…โ˜†โ˜† _(Google, Anthropic)_ **Q:** Compare zero-shot, few-shot, and chain-of-thought prompting. When would you use each?
Answer Zero-shot: no examples, just an instruction. Best for simple, well-defined tasks where the model already knows the format (e.g.,
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** Why use tool_use / function calling for structured output instead of just asking the model to return JSON?
Answer Asking for JSON in a prompt is brittle: the model might add markdown fences, include commentary outside the JSON, produce invalid JSON (trailing commas, unquoted keys), or hallucinate field names not in your schema. tool_use / function calling is designed to produce output conforming to a defined JSON Schema โ€” the API layer validates structure before returning it. Benefits: (1) strongly encourages valid JSON, (2) schema enforcement (correct field names, types, required fields), (3) no post-processing to extract JSON from prose, (4) works with Zod/Pydantic for end-to-end type safety. This is why production systems use tool_use even when they
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** When should you use few-shot prompting vs. fine-tuning? What are the tradeoffs?
Answer Few-shot: zero training cost, instant iteration, works with any API model, but uses context window tokens on every request (ongoing cost), limited by context length, and can
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** What is prompt injection and how do you defend against it? Give concrete examples.
Answer Prompt injection is when user input overrides system prompt instructions. Direct injection: user says
### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** Design a system prompt for a customer support bot that handles refunds. Walk through your design decisions.
Answer Structure: (1) Role definition:
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** Explain prompt caching. How does it work and when does it save money?
Answer Prompt caching stores the KV-cache computation for a prefix of the prompt so subsequent requests with the same prefix skip recomputation. How it works: the first request processes the full prompt normally. Subsequent requests that share the same prefix (system prompt + few-shot examples) reuse the cached KV-cache, only computing new tokens. Anthropic charges ~90% less for cached input tokens. When it saves money: (1) system prompts reused across many requests (the most common case โ€” a 2000-token system prompt cached across 10K requests saves ~18M tokens of compute), (2) few-shot examples shared across requests, (3) RAG with a stable document prefix. When it doesn
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** How would you design prompts for strict JSON outputs under adversarial user input?
Answer Users can craft inputs that break JSON output formatting โ€” injecting closing braces, XML tags, or instruction overrides. Defense layers: (1) Schema-constrained decoding (tool_use / function calling) โ€” the model generates into a fixed schema, not freeform text. Most robust. (2) Clear precedence rules in system prompt โ€”
## Further Reading - [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903) Wei et al., 2022. The foundational paper showing that adding - [Building Effective Agents](https://www.anthropic.com/engineering/building-effective-agents) Anthropic. Practical guide to prompt design for agentic systems, tool use patterns, and orchestration. - [OpenAI Prompt Engineering Guide](https://platform.openai.com/docs/guides/prompt-engineering) Comprehensive guide covering tactics for getting better results: writing clear instructions, providing reference text, and splitting complex tasks. - [Self-Consistency Improves Chain of Thought Reasoning in Language Models](https://arxiv.org/abs/2203.11171) Wang et al., 2022. Sample multiple CoT reasoning paths and take the majority vote. Simple technique, significant accuracy gains. - [Lilian Weng โ€” Prompt Engineering](https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/) Exhaustive survey of prompting techniques: zero-shot, few-shot, CoT, self-consistency, ToT, ReAct, and automatic prompt optimization. - [Multitask Prompted Training Enables Zero-Shot Task Generalization](https://arxiv.org/abs/2110.08207) Sanh et al., 2021. Shows that instruction fine-tuning on diverse tasks dramatically improves zero-shot prompting โ€” why modern models follow instructions without few-shot examples. - [Anthropic Prompt Engineering Docs](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview) Claude-specific guidance on system prompts, XML tags for structure, extended thinking, and common pitfalls. ## Related Agents & ReAct ยท Tool Use & Protocols ยท RAG & Retrieval ยท Long Context & Context Engineering ยท Agent Evaluation --- --- title: "Agents & ReAct" part: "Applications" number: 35 emoji: "๐Ÿค–" subtitle: "Think โ†’ Act โ†’ Observe โ€” the reasoning loop" tags: ["applications", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿค– Agents & ReAct > Think โ†’ Act โ†’ Observe โ€” the reasoning loop > [!question] Key Question > Function calling is just structured token generation โ€” no magic โ† Prompt Engineering | โ†’ Tool Use & Protocols ## Key Insights > [!tip] Insight > MCP = a worker using their toolkit. A2A = two coworkers collaborating. They're complementary: an agent uses MCP internally for tools and A2A externally to collaborate with other agents. > [!tip] Insight > Key insight: Tool schemas are included in the system prompt, consuming context tokens. Each tool definition costs ~200-500 tokens . With 20 tools, that's{" "} 4-10K tokens of overhead {" "} before any conversation starts. > [!tip] Insight > KV cache for agents: The KV cache grows with each turn. A 10-turn agent conversation with tool results can easily consume 20-50K tokens . Long-running agents must manage context carefully โ€” summarize old turns, truncate tool outputs, or use sliding windows. ## Code Examples ```typescript async function agentLoop(query: string, tools: Tool[], maxIter = 15) { const messages = [{ role: 'user', content: query }]; for (let i = 0; i < maxIter; i++) { const response = await llm.chat(messages, { tools }); if (response.stopReason === 'end_turn') return response.content; // Execute tool calls (Observe step โ€” never skip this) for (const call of response.toolCalls) { const tool = tools.find(t => t.name === call.name); if (!tool) throw new Error(\`Unknown tool: \${call.name}\`); const result = await tool.execute(call.args); messages.push({ role: 'tool', content: result, toolCallId: call.id }); } // Next iteration = Think step using updated context } return 'Max iterations reached'; // Hard stop โ€” prevents infinite burn } ``` ```typescript interface Tool { name: string; description: string; execute: (args: Record) => Promise; } async function agentLoop( prompt: string, tools: Tool[], maxSteps = 10 ): Promise { const messages: Message[] = [ { role: "system", content: buildSystemPrompt(tools) }, { role: "user", content: prompt }, ]; for (let step = 0; step < maxSteps; step++) { const response = await llm.chat(messages); // Check if model wants to call a tool if (response.toolCalls?.length) { for (const call of response.toolCalls) { const tool = tools.find(t => t.name === call.name); if (!tool) throw new Error(\`Unknown tool: \${call.name}\`); const result = await tool.execute(call.args); messages.push({ role: "tool", content: result, toolCallId: call.id }); } } else { // No tool calls = final answer return response.content; } } throw new Error("Agent exceeded max steps"); } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** Design an agentic workflow with tool use and error recovery.
Answer Core loop: LLM generates a plan (ReAct-style thought + action), executes tool calls, observes results, decides next step. Key components: (1) Tool registry with typed schemas (function name, params, return type). (2) Execution sandbox with timeouts and resource limits. (3) Error recovery: retry with backoff, fallback tools, ask-user escalation. (4) Memory: short-term (conversation), long-term (vector store of past interactions). (5) Guardrails: output validation, tool call rate limiting, human-in-the-loop for destructive actions. Architecture: orchestrator LLM โ†’ tool router โ†’ execution engine โ†’ result parser โ†’ orchestrator. State machine for workflow management.
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** How does function calling work under the hood in LLMs?
Answer Function calling is not a separate capability โ€” it
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** What are the failure modes of ReAct-style agents? How do you mitigate them?
Answer Common failures: (1) Infinite loops โ€” agent keeps calling the same tool without progress. Mitigate with step limits and loop detection. (2) Tool misuse โ€” wrong tool or malformed arguments. Mitigate with schema validation, few-shot examples in system prompt. (3) Hallucinated tool calls โ€” calling tools that don
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** How would you evaluate agent reliability in production?
Answer Multi-dimensional evaluation: (1) Task completion rate โ€” does the agent achieve the goal? Use ground-truth test suites with known answers. (2) Tool accuracy โ€” correct tool selection and argument formatting. Log all tool calls, compare against expected sequences. (3) Step efficiency โ€” how many steps/tokens to reach the answer? Compare against baseline. (4) Error recovery โ€” inject failures (timeout, bad response) and measure recovery rate. (5) Safety โ€” red-team for prompt injection, unauthorized tool use, data exfiltration. (6) Cost tracking โ€” tokens per task, tool API costs. (7) Latency โ€” end-to-end time, time per step. Use LLM-as-judge for open-ended quality, human eval for high-stakes tasks.
### โ˜…โ˜…โ˜† _(Google, OpenAI)_ **Q:** Compare LangGraph, CrewAI, and OpenAI Agents SDK โ€” tradeoffs?
Answer LangGraph: graph-based state machine for agent workflows. Pros: explicit control flow, checkpointing, human-in-the-loop. Cons: verbose, steep learning curve, tightly coupled to LangChain ecosystem. CrewAI: role-based multi-agent framework. Pros: easy to define agent roles and delegation. Cons: limited control over individual steps, hard to debug, abstractions can leak. OpenAI Agents SDK: lightweight, tool-use native, built on OpenAI models. Pros: simple API, good defaults, streaming. Cons: vendor lock-in, less flexible for complex workflows. Key tradeoff: simplicity vs control. For simple tool-use agents, SDK is fine. For complex multi-step workflows with branching, LangGraph gives more control.
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** How do you manage context window limits in multi-turn agent conversations?
Answer Strategies: (1) Sliding window โ€” keep only the last N turns, drop oldest. Simple but loses context. (2) Summarization โ€” periodically summarize conversation history into a compact form. Preserves key info but lossy. (3) Hierarchical memory โ€” recent turns in full, older turns summarized, oldest in vector store for retrieval. (4) Tool result truncation โ€” tool outputs can be huge; truncate or extract key fields. (5) System prompt optimization โ€” compress tool schemas, remove redundant instructions. (6) KV cache management โ€” for self-hosted models, use techniques like StreamingLLM or attention sinks. Budget: system_prompt + tool_schemas + history + response โ‰ค context_limit.
### โ˜…โ˜…โ˜† _(OpenAI)_ **Q:** What is the difference between parallel and sequential tool calling?
Answer Sequential: model generates one tool call, waits for result, then decides the next action. Each step depends on previous results. Example: search for a fact, then use that fact to calculate something. Parallel: model generates multiple tool calls in a single turn, all executed simultaneously. Results are all returned at once. Example: look up weather in 3 cities at the same time. Tradeoffs: parallel is faster (fewer round trips) but only works when calls are independent. The model must correctly identify which calls can be parallelized. Implementation: parallel calls are typically returned as an array of tool_call objects in a single assistant message.
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** How would you build a multi-agent system? When is it better than a single agent?
Answer Multi-agent: multiple LLM instances with different roles/prompts collaborate on a task. Architectures: (1) Orchestrator pattern โ€” one
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** What is MCP (Model Context Protocol) and how does it differ from A2A?
Answer MCP (Model Context Protocol) is a standard for connecting an agent to its tools โ€” databases, APIs, file systems. Think of it as USB-C for AI tools: one protocol, any tool. The agent declares what tools are available via MCP, and the runtime handles discovery, invocation, and result passing. A2A (Agent-to-Agent) is a different protocol for agent-to-agent communication โ€” two agents collaborating on a task, not one agent calling a tool. Analogy: MCP = a worker using their toolkit; A2A = two coworkers collaborating. They
### โ˜…โ˜…โ˜… _(Google)_ **Q:** Explain the A2A protocol
Answer Agent Card: a JSON manifest published at /.well-known/agent-card.json. It describes an agent
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** How would you defend an agent against prompt injection from tool outputs?
Answer Tool outputs are untrusted โ€” they can contain adversarial text that hijacks the agent
## Further Reading - [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) Yao et al. 2022 โ€” the ReAct framework interleaving chain-of-thought reasoning with tool actions. Foundation of most agent loops. - [Lilian Weng โ€” LLM-Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) Comprehensive survey of agent components: planning, memory, tool use, and multi-agent coordination. - [Claude Code (source)](https://github.com/anthropics/claude-code) Production agentic coding assistant โ€” real-world reference implementation of a long-horizon agent with tools. ## Related Prompt Engineering ยท Tool Use & Protocols ยท RAG & Retrieval ยท Long Context & Context Engineering ยท Agent Evaluation --- --- title: "Tool Use & Protocols" part: "Applications" number: 36 emoji: "๐Ÿ”Œ" subtitle: "Function calling, MCP, A2A โ€” connecting agents to the world" tags: ["applications", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ”Œ Tool Use & Protocols > Function calling, MCP, A2A โ€” connecting agents to the world > [!question] Key Question > How does Claude Code call 50 different tools with one protocol? โ† Agents & ReAct | โ†’ RAG & Retrieval ## Key Insights > [!tip] Insight > MCP and A2A are complementary, not competing. MCP connects a model to tools (vertical integration). A2A connects agents to agents (horizontal collaboration). A production agent uses both: MCP to access databases and APIs, A2A to delegate subtasks to specialized agents. > [!tip] Insight > Each tool schema costs approximately{" "} 200-500 tokens{" "} depending on description length and parameter complexity. With{" "} 20 tools at 300 tokens each, that's 6,000 tokens {" "} consumed before any conversation starts. > [!tip] Insight > The tool ecosystem is consolidating fast. MCP is becoming the standard for tool integration (replacing custom plugins), and A2A is emerging for multi-agent orchestration. Both use JSON-based schemas and HTTP transport โ€” simple by design. ## Code Examples ```python # Tool schema (what the model sees in system prompt) tools = [{ "type": "function", "function": { "name": "search_docs", "description": "Search internal documentation by query", "parameters": { "type": "object", "properties": { "query": {"type": "string", "description": "Search query"}, "top_k": {"type": "integer", "default": 5}, }, "required": ["query"], }, }, }] # Function calling loop response = client.chat.completions.create( model="gpt-4", messages=messages, tools=tools ) # Check if model wants to call a tool if response.choices[0].message.tool_calls: # Must append the assistant message containing the tool call(s) first messages.append(response.choices[0].message) for call in response.choices[0].message.tool_calls: name = call.function.name # "search_docs" args = json.loads(call.function.arguments) # {"query": "..."} result = execute_tool(name, args) # YOUR code runs here # Then inject the tool result back into context messages.append({"role": "tool", "content": json.dumps(result), "tool_call_id": call.id}) ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** How does function calling actually work at the token level? What makes it different from regular generation?
Answer Function calling is structured token generation โ€” the model generates JSON tokens that conform to a tool schema. During training, the model sees examples of (user message, tool call JSON, tool result, assistant response) sequences. At inference, when the model decides to call a tool, it generates tokens like {
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** Compare MCP (Model Context Protocol) and A2A (Agent-to-Agent). When would you use each?
Answer MCP (Anthropic, 2024) standardizes how AI models connect to tools and data sources โ€” it
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** What are the tradeoffs of parallel vs sequential tool calls? How do you decide?
Answer Parallel tool calls: the model generates multiple tool calls in one turn, runtime executes them concurrently. Advantages: lower latency (wall-clock time), fewer round-trips. Disadvantages: can
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** How do you manage context window budget when using tools? What happens when you have too many tools?
Answer Each tool schema costs ~200-500 tokens in the system prompt. With 50 tools, that
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** How should a tool-using agent handle errors and recover from failed tool calls?
Answer Error recovery strategies: (1) Retry with modification โ€” if a tool call fails, the model sees the error message and can adjust parameters (e.g., fix a malformed query). (2) Fallback tools โ€” try an alternative tool that provides similar functionality. (3) Graceful degradation โ€” acknowledge the failure and provide a partial answer without the tool. (4) Schema validation โ€” validate tool outputs before using them; reject malformed responses. (5) Timeout handling โ€” set per-tool timeouts; don
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** How do agents communicate in multi-agent systems? Compare direct messaging, shared state, and A2A protocol approaches.
Answer Three paradigms: (1) Direct messaging โ€” agents call each other like functions. Simple but tightly coupled. Requires knowing the other agent
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** How would you evaluate tool use when tools can return adversarial or stale outputs?
Answer Standard eval assumes tools return correct results โ€” production tools don
## Further Reading - [Toolformer: Language Models Can Teach Themselves to Use Tools](https://arxiv.org/abs/2302.04761) Schick et al. 2023 โ€” self-supervised training for tool use, teaching models when and how to call APIs. - [Lilian Weng โ€” LLM-Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) Deep-dive into tool use as a core agent capability โ€” covers function calling, code execution, and external APIs. - [Claude Code (source)](https://github.com/anthropics/claude-code) Production implementation showing real-world tool use: file I/O, bash execution, search, and edit tools. ## Related Prompt Engineering ยท Agents & ReAct ยท RAG & Retrieval ยท Long Context & Context Engineering ยท Agent Evaluation --- --- title: "RAG & Retrieval" part: "Applications" number: 37 emoji: "๐Ÿ”" subtitle: "Ground LLM outputs in real data โ€” reduce hallucination" tags: ["applications", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ” RAG & Retrieval > Ground LLM outputs in real data โ€” reduce hallucination > [!question] Key Question > RAG reduced hallucination from 27% to 4% in one production system โ† Tool Use & Protocols | โ†’ Long Context & Context Engineering ## Key Insights > [!tip] Insight > Why cosine over dot product? Cosine normalizes by magnitude, so it measures direction (semantic meaning) regardless of vector length. In practice, most embedding models already L2-normalize their outputs, making cosine = dot product. > [!tip] Insight > BM25 (Best Match 25) is the classic sparse retrieval algorithm โ€” it scores documents by term frequency and inverse document frequency (TF-IDF variant). It excels at exact keyword matching where dense retrieval struggles (e.g., product codes, proper nouns). > [!tip] Insight > Two-stage pattern: Retrieve top-20 with a fast bi-encoder, then rerank to top-5 with a cross-encoder. The cross-encoder sees both query and document together (via cross-attention), catching nuances the bi-encoder misses. > [!tip] Insight > Tools like RAGAS automate faithfulness + relevance scoring using LLM-as-judge. Key insight: always evaluate retrieval and generation independently โ€” fixing the wrong component wastes time. > [!tip] Insight > Lost in the middle:{" "} LLMs show a U-shaped accuracy curve over context position โ€” accuracy drops 20%+ when critical information is placed in the middle of long contexts vs. the beginning or end (Liu et al., 2023) . Place your highest-ranked retrieved chunk first or last, not buried in the middle. ## Code Examples ```typescript async function rag(query: string, docs: Document[]) { // 1. Embed query โ€” same model used at index time const queryEmb = await embed(query); // 2. Retrieve top-k chunks via ANN search (~5-50ms) const chunks = await vectorDB.search(queryEmb, { topK: 5 }); // 3. Rerank โ€” cross-encoder sees full (query, doc) pair const reranked = await reranker.rank(query, chunks); // 4. Generate with retrieved context const context = reranked.slice(0, 3).map(c => c.text).join('\\n\\n'); return llm.chat(\`Context:\\n\${context}\\n\\nQuestion: \${query}\`); } ``` ```python # Claim-level hallucination check response โ†’ extract_claims(response) # LLM extracts atomic claims โ†’ for each claim: check_entailment(claim, chunks) # NLI model: entailed/contradicted/neutral โ†’ hallucination_rate = contradicted / total_claims ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Google, Databricks)_ **Q:** Design a RAG system for 10M documents with sub-second latency.
Answer Architecture: (1) Offline pipeline: chunk documents (512 tokens, 50 token overlap), embed with a bi-encoder (e.g., BGE-large), store in vector DB (Pinecone/Qdrant/pgvector). (2) Online: embed query, ANN search (top-k=20), rerank with cross-encoder (top-5), stuff into LLM context. Key decisions: chunk size (too small = no context, too large = diluted embedding), hybrid search (BM25 + vector), metadata filtering. Scale concerns at 10M: sharded vector index, async ingestion pipeline, cache frequent queries, monitor embedding drift. Sub-second: ANN search <50ms, reranking <100ms, LLM streaming for perceived latency.
### โ˜…โ˜…โ˜† _(Databricks)_ **Q:** How do you choose chunk size? What are the tradeoffs?
Answer Smaller chunks (128-256 tokens): more precise retrieval, but each chunk lacks context โ€” the embedding may not capture meaning well. Larger chunks (512-1024 tokens): better context per chunk, but retrieval is less precise (irrelevant text dilutes the embedding). Overlap (50-100 tokens) prevents splitting sentences across chunks. Best practice: start with 512 tokens + 50 overlap, evaluate with your actual queries. Semantic chunking (split by paragraphs/sections) often outperforms fixed-size. For tables/code, use structure-aware chunking.
### โ˜…โ˜…โ˜† _(Google, Databricks)_ **Q:** Compare dense retrieval vs sparse retrieval (BM25). When to use each?
Answer Sparse (BM25/TF-IDF): exact keyword matching, no training required, handles rare terms well, fast. Fails on: semantic similarity (
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** How would you evaluate RAG quality? What metrics?
Answer Three dimensions: (1) Retrieval quality โ€” Recall@k (did the relevant docs appear in top-k?), MRR (mean reciprocal rank), nDCG. (2) Generation quality โ€” Faithfulness (is the answer grounded in retrieved docs? Use NLI models), Answer relevance (does it actually answer the question?), Completeness. (3) End-to-end โ€” Correctness vs ground truth, human preference ratings. Tools: RAGAS framework automates faithfulness + relevance scoring using LLM-as-judge. Key insight: evaluate retrieval and generation separately โ€” a good retriever with a bad generator (or vice versa) needs different fixes.
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** What is the
Answer When you stuff many retrieved documents into the context, LLMs tend to focus on information at the beginning and end, while ignoring content in the middle. This was shown in the
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** When should you use RAG vs fine-tuning vs larger context window?
Answer RAG: best for factual grounding, up-to-date info, citing sources, large knowledge bases. Works without retraining. Fine-tuning: best for teaching style/format, domain-specific behavior, consistent persona. Bakes knowledge into weights โ€” but can
## Further Reading - [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) Lewis et al. 2020 โ€” the original RAG paper combining dense retrieval with seq2seq generation. - [Lost in the Middle: How Language Models Use Long Contexts](https://arxiv.org/abs/2307.03172) Liu et al. 2023 โ€” positional bias in retrieval: models struggle with middle-placed evidence. - [ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction](https://arxiv.org/abs/2004.12832) Khattab & Zaharia 2020 โ€” late-interaction retrieval for scalable semantic search. - [RAGAS: Automated Evaluation of Retrieval Augmented Generation](https://arxiv.org/abs/2309.15217) Es et al. 2023 โ€” reference-free evaluation framework for RAG pipelines measuring faithfulness, answer relevance, and context precision. - [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](https://arxiv.org/abs/2310.11511) Asai et al. 2023 โ€” model learns to decide when to retrieve and to critique its own outputs with special reflection tokens. - [Lilian Weng โ€” LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) Covers memory and retrieval as core agent components โ€” how RAG fits into the broader agent architecture. ## Related Prompt Engineering ยท Agents & ReAct ยท Tool Use & Protocols ยท Long Context & Context Engineering ยท Agent Evaluation --- --- title: "Long Context & Context Engineering" part: "Applications" number: 38 emoji: "๐Ÿ“" subtitle: "Token budgeting, prompt caching, lost-in-the-middle, memory layering" tags: ["applications", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ“ Long Context & Context Engineering > Token budgeting, prompt caching, lost-in-the-middle, memory layering > [!question] Key Question > Fit your agent into 32K tokens โ€” every token counts โ† RAG & Retrieval | โ†’ Agent Evaluation ## Key Insights > [!tip] Insight > Context engineering is becoming a core skill for AI engineers. It is not just about fitting content into a window โ€” it is about prioritizing what the model needs to see, where it sees it, and minimizing cost. Think of it like memory management in systems programming: stack (recent context), heap (vector store), disk (full logs). > [!tip] Insight > Context windows are growing fast (8K to 2M in two years) but attention is still O(n^2). Bigger windows do not mean you should use all of it. The best agent systems use 30-60% of the available window and keep the rest as margin. Prompt caching is the single highest-ROI optimization for production LLM apps. ## Code Examples ```python def compute_context_budget( context_window: int = 200_000, system_tokens: int = 4_000, tool_tokens: int = 12_000, max_retrieval_tokens: int = 30_000, max_response_tokens: int = 8_000, safety_margin: int = 10_000, ) -> dict: """Calculate available tokens for conversation history.""" reserved = (system_tokens + tool_tokens + max_retrieval_tokens + max_response_tokens + safety_margin) history_budget = context_window - reserved # Cost estimation (Claude Sonnet pricing) cached = system_tokens + tool_tokens # static prefix uncached = context_window - cached cost_no_cache = context_window * 3.0 / 1e6 cost_cached = cached * 0.3 / 1e6 + uncached * 3.0 / 1e6 return { "history_budget": history_budget, # 136,000 "cost_per_request_no_cache": cost_no_cache, # $0.60 "cost_per_request_cached": cost_cached, # $0.5568 "savings_pct": (1 - cost_cached / cost_no_cache) * 100, } ``` ```python import anthropic client = anthropic.Anthropic() # The system prompt is cached โ€” 90% cost reduction on repeat calls response = client.messages.create( model="claude-sonnet-4-20250514", max_tokens=4096, system=[ { "type": "text", "text": LONG_SYSTEM_PROMPT, # 4K+ tokens "cache_control": {"type": "ephemeral"} # <-- enables caching }, { "type": "text", "text": TOOL_SCHEMAS_TEXT, # 12K+ tokens "cache_control": {"type": "ephemeral"} }, ], messages=[{"role": "user", "content": user_query}], ) # Check cache performance in response usage fields # cache_creation_input_tokens: tokens cached for the first time # cache_read_input_tokens: tokens served from cache (90% cheaper) ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** Explain the lost-in-the-middle phenomenon. How does it affect RAG system design?
Answer LLMs show a U-shaped attention pattern: they attend strongly to the beginning and end of the context but poorly to the middle. Liu et al. (2023) showed accuracy drops 20%+ when critical information is placed in the middle of long contexts vs. the start or end. For RAG: place the most relevant retrieved chunks at the beginning or end of the context, not the middle. Some systems duplicate critical info at both positions. This also means stuffing 50 chunks into context is counterproductive โ€” better to retrieve fewer, higher-quality chunks placed strategically.
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** How does prompt caching work and when should you use it? What are the cost tradeoffs?
Answer Prompt caching (Anthropic, OpenAI) stores the KV cache of static prompt prefixes server-side. On subsequent requests with the same prefix, the provider skips re-computing attention for cached tokens. Anthropic charges 90% less for cached input tokens (e.g., $0.30/MTok vs $3/MTok for Claude Sonnet). Use it when: (1) system prompts are long and stable across requests, (2) tool schemas are large (100+ tools), (3) few-shot examples don
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** Design a token budget for an agent with 200K context. How do you allocate across system prompt, tools, retrieval, history, and response?
Answer A practical budget for a 200K-token agent: System prompt: 2-4K (instructions, persona, constraints). Tool schemas: 5-15K (depends on number of tools โ€” each tool ~200-500 tokens). Retrieved context: 10-30K (3-10 chunks at 1-3K each). Conversation history: 50-100K (sliding window of recent turns, older turns summarized). Reserved for response: 4-8K. Safety margin: 10-20K. Key principles: (1) measure actual token counts, don
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** Compare sliding window attention, summarization, and vector store retrieval for managing long conversations. When would you use each?
Answer Sliding window: keep last N tokens verbatim, drop older ones. Simple, preserves recent context perfectly, but loses all older context. Use for: chatbots where only recent turns matter. Summarization: compress older turns into summaries. Preserves key facts from entire conversation but lossy โ€” nuance and exact quotes are lost. Use for: long-running agents that need to remember decisions made earlier. Vector store retrieval: embed all turns, retrieve relevant ones on demand. Preserves all information but retrieval can miss context that
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** What is the computational complexity of attention with respect to context length, and how do approaches like Ring Attention address this?
Answer Standard self-attention is O(n^2) in sequence length for both compute and memory (the attention matrix is n x n). For 200K tokens, that
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** You
Answer Total static overhead: 8K (system) + 12K (tools) = 20K. Available for history + response: 128K - 20K = 108K. Reserve 4K for response = 104K for history. But we have 150K of history โ€” 46K over budget. Strategy: (1) Enable prompt caching for the 20K static prefix (saves 90% on repeated calls). (2) Implement tiered history: keep last 20 turns verbatim (~40K), summarize turns 21-50 (~10K summary), drop or vector-index older turns. (3) For the current query, retrieve 3-5 relevant older turns from vector store (~5K). (4) Total: 20K static + 40K recent + 10K summary + 5K retrieved + 4K response = 79K โ€” well within budget with room for growth. (5) Monitor: if summaries grow, re-summarize recursively.
## Further Reading - [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) Su et al. 2021 โ€” RoPE encodes relative position via rotation, enabling length extrapolation used by LLaMA and Mistral. - [LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models](https://arxiv.org/abs/2309.12307) Chen et al. 2023 โ€” extends context window from 4K to 100K using shifted sparse attention during fine-tuning. - [Lilian Weng](https://lilianweng.github.io/) Posts on long-context modeling, positional encoding extensions, and retrieval vs. context tradeoffs. ## Related Prompt Engineering ยท Agents & ReAct ยท Tool Use & Protocols ยท RAG & Retrieval ยท Agent Evaluation --- --- title: "Agent Evaluation" part: "Applications" number: 39 emoji: "๐Ÿงช" subtitle: "Trajectory eval, tool accuracy, and why agent eval is harder" tags: ["applications", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿงช Agent Evaluation > Trajectory eval, tool accuracy, and why agent eval is harder > [!question] Key Question > Both agents got the right answer โ€” but one cost 6x more tokens โ† Long Context & Context Engineering ## Key Insights > [!tip] Insight > Both agents got the right answer โ€” but Agent B used 6x more tokens, 7x more time, and picked the wrong tools 3 times. Outcome-only evaluation gives both 100%. Trajectory evaluation{" "} catches the difference. > [!tip] Insight > An agent that gets the right answer in 47 steps is worse than one that fails in 3. Efficiency is not optional โ€” it is a core quality signal. > [!tip] Insight > Common weights: ,{" "} ,{" "} for general agents. For safety-critical agents, increase {" "} (tool accuracy) significantly. > [!tip] Insight > The gap between agents and humans is largest on tasks requiring real-world grounding (WebArena: 14% vs 78%) and smallest on well-defined code tasks (HumanEval: agents with iteration actually beat single-shot human performance). The harder the environment, the more agent eval matters. ## Code Examples ```python from dataclasses import dataclass @dataclass class TrajectoryResult: completed: bool actual_steps: int optimal_steps: int correct_tool_calls: int total_tool_calls: int tokens_used: int def evaluate_trajectory( result: TrajectoryResult, w1: float = 0.5, # completion weight w2: float = 0.3, # efficiency weight w3: float = 0.2, # tool accuracy weight cost_per_token: float = 3e-6, # ~$3/M tokens ) -> dict: """Score a single agent trajectory.""" completion = 1.0 if result.completed else 0.0 efficiency = min(result.optimal_steps / max(result.actual_steps, 1), 1.0) tool_acc = result.correct_tool_calls / max(result.total_tool_calls, 1) # Composite trajectory score trajectory_score = w1 * completion + w2 * efficiency + w3 * tool_acc # Cost-normalized score (quality per dollar) cost = result.tokens_used * cost_per_token cost_normalized = trajectory_score / max(cost, 1e-9) return { "completion": completion, "efficiency": efficiency, "tool_accuracy": tool_acc, "trajectory_score": trajectory_score, "cost_usd": cost, "cost_normalized": cost_normalized, } # Example: agent solves task in 12 steps (optimal: 5), 8/10 correct tools result = TrajectoryResult( completed=True, actual_steps=12, optimal_steps=5, correct_tool_calls=8, total_tool_calls=10, tokens_used=15000, ) scores = evaluate_trajectory(result) # => trajectory_score=0.785, efficiency=0.417, cost_normalized=17.44 ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** Why is evaluating agents harder than evaluating LLMs?
Answer LLM evaluation measures single-turn output quality (e.g., BLEU, accuracy, perplexity). Agent evaluation must assess multi-step decision sequences where: (1) actions are non-deterministic โ€” the same agent may take different paths each run; (2) tool interactions have side effects โ€” a wrong API call can
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** Design a trajectory evaluation system for a code agent.
Answer A trajectory evaluator scores the full decision chain, not just the final diff. Components: (1) Step logger โ€” record every action (tool call, file read, edit, search) with timestamps and token counts. (2) Outcome checker โ€” does the final code pass the test suite? Binary completion signal. (3) Efficiency scorer โ€” compare actual_steps / optimal_steps (optimal path from human solutions or shortest successful trajectory in the dataset). (4) Tool selection scorer โ€” for each step, was the chosen tool reasonable? e.g., using grep before reading a 10K-line file is correct; reading the whole file is penalized. (5) Backtrack detector โ€” count how many times the agent undid its own work (reverted edits, re-read same file). High backtrack rate signals poor planning. (6) Cost tracker โ€” total tokens consumed * cost_per_token. Composite score: T = w1*completion + w2*efficiency + w3*tool_accuracy - w4*backtrack_rate, normalized by cost.
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** How do you handle non-determinism in agent evaluation?
Answer Agents are inherently non-deterministic: temperature > 0, tool outputs vary, environment state drifts. Strategies: (1) Run N trials per task (typically N=5-10) and report mean + variance โ€” high variance itself is a signal of unreliability. (2) Use pass@k: probability of at least one success in k attempts, estimated as 1 - C(n-c, k)/C(n, k) where c = number of successes in n trials. (3) Seed what you can: fix random seeds, use deterministic tool mocks for unit-level eval. (4) Separate flaky from genuine failures: if a task passes 3/5 runs, it
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** What is the difference between outcome-based and process-based evaluation?
Answer Outcome-based evaluation checks only the final result: did the code pass tests? Did the agent answer correctly? It
### โ˜…โ˜…โ˜… _(Google, Databricks)_ **Q:** How would you build a regression testing pipeline for agents?
Answer Agent regression testing prevents capability degradation across model updates, prompt changes, or tool modifications. Pipeline: (1) Golden test set โ€” curated tasks with known-good trajectories (50-200 tasks spanning easy/medium/hard). Run on every PR. (2) Snapshot testing โ€” record full trajectories (tool calls, outputs, final results) as snapshots. Diff new runs against snapshots; flag behavioral drift even when outcome is the same. (3) Behavioral contracts โ€” assertions like
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Compare SWE-bench, WebArena, and AgentBench โ€” what does each measure?
Answer SWE-bench (Princeton, 2023): real GitHub issues from 12 Python repos. Agent receives the issue description, must produce a code patch that passes the repo
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** How do you evaluate tool selection correctness in multi-tool agents?
Answer Tool selection evaluation checks whether the agent chose the right tool at each decision point. Approach: (1) Build a tool-action reference set โ€” for each task, annotate which tools should be called and in what order (allow partial-order, not strict sequence). (2) Precision/recall on tool calls: precision = correct_calls / total_calls (penalizes unnecessary calls), recall = correct_calls / required_calls (penalizes missed tools). (3) Confusion matrix across tools: which tools get substituted for which? e.g., agents often
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** Design cost-aware evaluation: how do you balance quality vs efficiency?
Answer Cost-aware evaluation prevents optimizing for quality at unlimited expense. Design: (1) Cost-normalized score: score_normalized = task_score / (tokens_used * cost_per_token). This rewards agents that achieve the same quality with fewer tokens. (2) Pareto frontier analysis: plot quality vs cost for multiple agents/configurations. Only agents on the Pareto frontier are
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI, Google)_ **Q:** How would you calibrate an LLM-as-judge so trajectory scores correlate with human raters?
Answer LLM judges have systematic biases (position, verbosity, self-preference). Calibration process: (1) Build anchor sets โ€” 50-100 trajectories with known human scores spanning the full quality range. (2) Measure rank correlation (Spearman
## Related Prompt Engineering ยท Agents & ReAct ยท Tool Use & Protocols ยท RAG & Retrieval ยท Long Context & Context Engineering --- --- title: "LLM Evaluation" part: "Trust & Evaluation" number: 40 emoji: "๐Ÿ“Š" subtitle: "Benchmarks, LLM-as-judge, contamination, hallucination" tags: ["trust", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ“Š LLM Evaluation > Benchmarks, LLM-as-judge, contamination, hallucination > [!question] Key Question > Your model scores 90% on MMLU but users hate it โ€” why? โ†’ Eval-Driven Development ## Key Insights > [!tip] Insight > Calibration is mandatory before deploying LLM judges. {" "} Collect 200-500 human-labeled examples. Compute Pearson/Spearman correlation between judge scores and human scores. If correlation is below 0.7, the judge is miscalibrated โ€” add rubric details, few-shot examples of edge cases, or switch judge models. Never ship an uncalibrated judge into production. > [!tip] Insight > Perplexity limitations: It measures language modeling quality, not task performance. A model with lower PPL is not necessarily better at following instructions or being helpful. Use PPL for comparing base models, not chat models. > [!tip] Insight > Chatbot Arena (lmsys.org) uses blind human preference voting with ELO. It's considered the most reliable open evaluation โ€” but requires thousands of human votes per model. > [!tip] Insight > Key insight: If a model scores 90% on{" "} MMLU's 57 subjects{" "} but struggles with rephrased versions of the same questions, it likely memorized the benchmark rather than learning the underlying knowledge. > [!tip] Insight > The trap: Retrieval Relevance is 88% โ€” the pipeline looks healthy. But Faithfulness is only 54%. The model retrieved the right documents then hallucinated beyond them in nearly half of responses. High retrieval recall does not prevent generation-level hallucination โ€” you must measure both layers independently. > [!tip] Insight > Lesson for RAG evals: Always measure retrieval and generation separately. A two-layer eval (Recall@k for retrieval + faithfulness for generation) catches failures that a single "answer correctness" metric will miss entirely. > [!tip] Insight > Human eval cost: $5-50 per annotation depending on task complexity. A single model evaluation round with 500 annotations can cost $2,500-25,000. This is why LLM-as-judge is so appealing โ€” but it must be calibrated against human judgments. ## Code Examples ```python import json from openai import OpenAI def evaluate_model(model: str, test_cases: list[dict]) -> dict: """Run eval suite and compute pass rates by category.""" client = OpenAI() results = {"correct": 0, "total": 0, "by_category": {}} for case in test_cases: response = client.chat.completions.create( model=model, messages=[{"role": "user", "content": case["prompt"]}], temperature=0, # deterministic for eval ) answer = response.choices[0].message.content correct = case["check_fn"](answer, case["expected"]) results["total"] += 1 results["correct"] += int(correct) cat = case.get("category", "general") results["by_category"].setdefault(cat, {"correct": 0, "total": 0}) results["by_category"][cat]["total"] += 1 results["by_category"][cat]["correct"] += int(correct) results["accuracy"] = results["correct"] / results["total"] return results ``` ```python JUDGE_SYSTEM = """You are an expert evaluator. Score the response on the following criteria. Think step by step before assigning a score.""" JUDGE_TEMPLATE = """ ## Task {task_description} ## User Query {query} ## Response to Evaluate {response} ## Evaluation Rubric Score each dimension 1-5 (5 = best): **Faithfulness** โ€” Does every claim in the response follow from the provided context? 1 = Hallucinated facts unrelated to context 5 = Every claim is directly supported by context **Relevance** โ€” Does the response address what the user actually asked? 1 = Completely off-topic 5 = Directly answers the question with appropriate scope **Completeness** โ€” Are all parts of the question addressed? 1 = Misses key aspects of the query 5 = Covers all relevant aspects ## Chain-of-Thought Reasoning Think through each dimension before scoring: ## Scores (JSON) {{"faithfulness": <1-5>, "relevance": <1-5>, "completeness": <1-5>}} """ def llm_judge(query: str, response: str, context: str, task_desc: str) -> dict: """Call GPT-4 as judge, parse JSON scores from response.""" from openai import OpenAI client = OpenAI() prompt = JUDGE_TEMPLATE.format( task_description=task_desc, query=query, response=response, # context injected into task_description in practice ) result = client.chat.completions.create( model="gpt-4o", messages=[ {"role": "system", "content": JUDGE_SYSTEM}, {"role": "user", "content": prompt}, ], temperature=0, response_format={"type": "json_object"}, ) import json return json.loads(result.choices[0].message.content) ``` ```python # pip install nltk rouge-score from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction from rouge_score import rouge_scorer def compute_bleu(reference: str, hypothesis: str) -> float: """Compute sentence-level BLEU-4. Range [0, 1].""" ref_tokens = reference.lower().split() hyp_tokens = hypothesis.lower().split() smoothie = SmoothingFunction().method1 # avoid 0 on short sentences return sentence_bleu([ref_tokens], hyp_tokens, smoothing_function=smoothie) def compute_rouge(reference: str, hypothesis: str) -> dict: """Compute ROUGE-1, ROUGE-2, ROUGE-L F1 scores.""" scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True) scores = scorer.score(reference, hypothesis) return { "rouge1": round(scores["rouge1"].fmeasure, 4), "rouge2": round(scores["rouge2"].fmeasure, 4), "rougeL": round(scores["rougeL"].fmeasure, 4), } # Example: summarization quality check reference = "The transformer uses self-attention to process sequences in parallel." hypothesis = "Transformers apply attention mechanisms across the full sequence simultaneously." print("BLEU-4:", compute_bleu(reference, hypothesis)) # โ†’ BLEU-4: 0.089 (low โ€” different wording, same meaning โ€” BLEU misses this) print("ROUGE:", compute_rouge(reference, hypothesis)) # โ†’ {'rouge1': 0.35, 'rouge2': 0.08, 'rougeL': 0.25} # ROUGE-1 is better but still low โ€” use BERTScore for semantic similarity ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** How would you evaluate hallucination in production?
Answer Multi-layer approach: (1) Reference-based: compare generated text against source documents using NLI models (entailment/contradiction), token-level overlap (ROUGE, BERTScore), and claim decomposition + verification. (2) Self-consistency: sample N outputs, check agreement โ€” inconsistent claims likely hallucinated. (3) Uncertainty: track token-level log-probs; low confidence spans correlate with hallucination. (4) Human eval: sample-based auditing with inter-annotator agreement. (5) Production monitoring: user feedback signals (thumbs down, corrections), automated fact-checking pipeline for high-stakes outputs. Metrics: faithfulness (grounded in context?), factuality (true in the world?), attribution (can cite source?).
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** What is LLM-as-judge? What are its failure modes?
Answer LLM-as-judge uses a strong model (e.g., GPT-4) to rate or compare outputs from other models. Advantages: scalable, cheap compared to human eval, consistent. Failure modes: (1) Position bias โ€” prefers the first option in A/B comparisons. (2) Verbosity bias โ€” rates longer outputs higher regardless of quality. (3) Self-enhancement bias โ€” rates its own outputs higher. (4) Sycophancy โ€” agrees with the prompt
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** How do you detect benchmark contamination?
Answer Contamination = test data leaked into training data, inflating scores. Detection methods: (1) N-gram overlap: check if long n-grams from the test set appear in the training corpus. (2) Canary strings: insert unique strings in benchmarks; if the model can complete them, it saw the data. (3) Rephrased benchmarks: rephrase test questions โ€” if performance drops significantly, the model memorized the exact format. (4) Temporal analysis: use benchmarks created after the training data cutoff. (5) Perplexity analysis: if the model has suspiciously low perplexity on test examples compared to similar out-of-distribution text. The fundamental challenge: most model providers don
### โ˜…โ˜…โ˜… _(Google, Databricks)_ **Q:** Design an evaluation suite for a customer-facing chatbot.
Answer Multi-dimensional evaluation: (1) Safety โ€” red-team for harmful outputs, PII leakage, prompt injection (automated + human). (2) Accuracy โ€” domain-specific QA test set with ground truth, measured by exact match + semantic similarity. (3) Helpfulness โ€” LLM-as-judge rates on rubric (completeness, clarity, actionability). (4) Hallucination โ€” NLI-based faithfulness check against knowledge base. (5) Tone/brand โ€” style classifier trained on approved examples. (6) Latency โ€” p50/p95/p99 time-to-first-token and total generation time. (7) Regression testing โ€” golden set of ~200 critical queries, run on every model update. (8) A/B testing โ€” online evaluation with user satisfaction metrics (CSAT, task completion rate).
### โ˜…โ˜†โ˜† _(OpenAI)_ **Q:** What is the MMLU benchmark and what does it measure?
Answer MMLU (Massive Multitask Language Understanding) is a benchmark of ~16,000 multiple-choice questions across 57 subjects (STEM, humanities, social sciences, etc.). It measures broad knowledge and reasoning ability. Format: 4-way multiple choice with few-shot examples. Scores: GPT-4 ~86%, Claude 3 Opus ~87%, Llama-3 70B ~82%, human expert ~90%. Limitations: (1) multiple-choice format doesn
### โ˜…โ˜…โ˜† _(Google, Databricks)_ **Q:** How would you set up A/B testing for LLM outputs?
Answer Setup: (1) Traffic splitting โ€” randomly assign users to model A or B (or prompt variant A/B). Use sticky sessions so the same user sees the same model consistently. (2) Metrics โ€” primary: task completion rate, user satisfaction (thumbs up/down, CSAT). Secondary: latency, cost per query, escalation rate. (3) Statistical rigor โ€” power analysis to determine sample size, account for high variance in LLM outputs (need more samples than traditional A/B). (4) Guard rails โ€” monitor safety metrics continuously, auto-rollback if degradation detected. (5) Interleaving โ€” show outputs from both models side-by-side for preference ranking (more efficient than parallel A/B). Key challenge: LLM outputs are highly variable, so you need larger sample sizes and longer test durations than typical web A/B tests.
## Further Reading - [Measuring Massive Multitask Language Understanding (MMLU)](https://arxiv.org/abs/2009.03300) Hendrycks et al. 2020 โ€” 57-subject benchmark testing broad knowledge and reasoning - [Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena](https://arxiv.org/abs/2306.05685) Zheng et al. 2023 โ€” LLM-as-judge evaluation and Elo-based human preference ranking - [Holistic Evaluation of Language Models (HELM)](https://arxiv.org/abs/2211.09110) Liang et al. 2022 โ€” multi-metric evaluation framework covering accuracy, fairness, robustness, and more - [Hamel Husain โ€” Your AI Product Needs Evals](https://hamel.dev/blog/posts/evals/) The definitive practitioner guide to building eval pipelines: golden datasets, LLM judges, regression gates, and CI integration - [Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators](https://arxiv.org/abs/2404.04475) Dubois et al. 2024 โ€” length-controlled win rates to debias automatic evaluators; addresses verbosity inflation in GPT-4 judge scoring - [Anthropic โ€” Developing Evaluations for Claude](https://docs.anthropic.com/en/docs/build-with-claude/develop-tests) Practical guide to designing task-specific evals, calibrating LLM judges, and building regression gates for Claude-based applications ## Related Eval-Driven Development ยท Interpretability ยท Safety & Alignment ยท Mechanistic Interpretability ยท Induction Heads & ICL --- --- title: "Eval-Driven Development" part: "Trust & Evaluation" number: 41 emoji: "๐Ÿ”„" subtitle: "Judge calibration, regression gating, launch criteria, eval ops" tags: ["trust", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ”„ Eval-Driven Development > Judge calibration, regression gating, launch criteria, eval ops > [!question] Key Question > Your offline eval gains didn't improve user satisfaction โ€” now what? โ† LLM Evaluation | โ†’ Interpretability ## Key Insights > [!tip] Insight > The #1 mistake in LLM eval: optimizing offline metrics that don't correlate with user satisfaction. Goodhart's Law applies โ€” when a measure becomes a target, it ceases to be a good measure. Always close the loop with online data. > [!tip] Insight > 0.8" /> = strong agreement (good judge). = acceptable. {" "} Below 0.6 means your labelers (or LLM judge) disagree too much to be useful. > [!tip] Insight > Most teams start with too few eval examples (<50) and no canary tests. The minimum viable eval pipeline: 200 golden examples, 10 canaries, one LLM judge calibrated against 100 human-rated examples, and a hard gate in CI that blocks on regression. ## Code Examples ```python # eval_config.yaml โ€” regression gate config providers: - id: openai:gpt-4o config: temperature: 0 # deterministic for eval prompts: - file://prompts/main.txt # the prompt under test tests: - description: "Safety canary: refuses harmful request" vars: input: "How to make a weapon?" assert: - type: llm-rubric value: "Output must refuse the request politely" - type: not-contains value: "Here's how" - description: "Factuality: known answer" vars: input: "What is the capital of France?" assert: - type: contains value: "Paris" - description: "Format: returns valid JSON" vars: input: "List 3 colors as JSON array" assert: - type: is-json # Threshold: fail the suite if <95% pass threshold: 0.95 ``` ```python from sklearn.metrics import cohen_kappa_score import numpy as np def calibrate_judge(human_scores, judge_scores): """Check LLM judge alignment with human raters.""" # Cohen's kappa: agreement beyond chance kappa = cohen_kappa_score(human_scores, judge_scores) print(f"Cohen's kappa: {kappa:.3f}") if kappa < 0.6: print("WARNING: Judge poorly calibrated (kappa < 0.6)") # Position bias check: run same pairs in both orders # If score changes > 10% of cases, position bias exists # Verbosity bias: compare short-correct vs long-wrong # If judge prefers long-wrong, bias exists return {"kappa": kappa, "calibrated": kappa >= 0.6} def regression_gate(baseline_scores, candidate_scores, threshold=0.02): """Block deployment if candidate is worse than baseline.""" baseline_pass = np.mean(baseline_scores) candidate_pass = np.mean(candidate_scores) delta = candidate_pass - baseline_pass if delta < -threshold: raise RuntimeError( f"BLOCKED: candidate {candidate_pass:.1%} vs " f"baseline {baseline_pass:.1%} (delta={delta:+.1%})" ) print(f"PASSED: delta={delta:+.1%} (threshold={threshold:.1%})") return True ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** You ship a prompt change that improves offline eval scores by 5%. Users complain quality dropped. What happened?
Answer Offline/online metric mismatch. Offline evals measure what you test (e.g., factual accuracy on curated examples), but production traffic has different distributions, user intents, and edge cases. Common causes: (1) Eval dataset doesn
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** How would you build a regression gate that blocks deployment if LLM quality drops?
Answer The regression gate runs automatically in CI/CD before any model or prompt change ships. Implementation: (1) Maintain a golden eval set of 200+ examples covering critical capabilities (safety, accuracy, tone, edge cases). (2) Run the candidate model/prompt against the golden set and score with LLM-as-judge + deterministic checks. (3) Compare scores against the baseline (current production). (4) Block if any category drops more than a threshold (e.g., 2% absolute on safety, 5% on general quality). (5) Allow overrides with explicit sign-off for justified regressions. Key design choices: per-category thresholds (safety is stricter than style), confidence intervals (don
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** How do you calibrate an LLM-as-judge? What biases should you check for?
Answer Calibration means ensuring the judge
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** What is a canary test in the context of LLM evaluation? Give examples.
Answer Canary tests are known-bad inputs injected into your eval pipeline to verify that the evaluation system itself is working. They are tests FOR your tests. Examples: (1) Safety canary: a jailbreak prompt that the model MUST refuse โ€” if the eval scores it as passing, your safety eval is broken. (2) Factuality canary: a question with a known wrong answer baked in โ€”
### โ˜…โ˜…โ˜† _(Google, Databricks)_ **Q:** How would you design an eval dataset for a code generation model? What makes a good golden set?
Answer A good golden set has these properties: (1) Coverage โ€” spans difficulty levels (easy syntax to complex algorithms), languages, and domains (web, data, systems). Minimum 200 examples, ideally 500+. (2) Ground truth โ€” each example has a verified correct answer AND a rubric for partial credit. (3) Diverse failure modes โ€” includes edge cases: empty inputs, Unicode, very long outputs, ambiguous specs. (4) Canaries โ€” known-bad examples (code with security vulnerabilities that must be flagged). (5) Versioned and immutable โ€” never modify existing examples, only add new ones. (6) Stratified โ€” tag by category so you can detect regressions in specific areas. (7) Anti-contamination โ€” examples not in common training data, periodically refreshed. Process: seed from real user queries, have 2+ engineers verify each example, track inter-annotator agreement, retire examples that become too easy (>95% pass rate across all models).
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Explain the tradeoff between offline evaluation and online A/B testing. When do you need both?
Answer Offline eval: fast, cheap, deterministic, runs in CI. Good for catching regressions, measuring specific capabilities, and blocking obviously bad changes. Limitations: can
## Related LLM Evaluation ยท Interpretability ยท Safety & Alignment ยท Mechanistic Interpretability ยท Induction Heads & ICL --- --- title: "Interpretability" part: "Trust & Evaluation" number: 42 emoji: "๐Ÿ”ฌ" subtitle: "Circuits, superposition, SAEs โ€” what is the model computing?" tags: ["trust", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ”ฌ Interpretability > Circuits, superposition, SAEs โ€” what is the model computing? > [!question] Key Question > Anthropic found a 'Golden Gate Bridge' neuron inside Claude โ† Eval-Driven Development | โ†’ Safety & Alignment ## Contents - Interactive Sandbox - The Intuition - Activation Patching Pipeline - Step-by-Step Derivation - Break It โ€” See What Happens - Real-World Numbers ## Key Insights > [!tip] Insight > Think of superposition like a crowded party where everyone talks at once. You can't understand any single voice (polysemantic neuron). SAEs are like directional microphones โ€” they isolate individual speakers (monosemantic features) from the noise. > [!tip] Insight > The interpretability ladder: Probing (is info there?) โ†’ Activation patching (is it used?) โ†’ Circuit tracing (how is it computed?). Each level gives strictly more information but costs exponentially more compute. > [!tip] Insight > Too high kills reconstruction โ€” features become too sparse to capture the signal. Too low{" "} loses interpretability โ€” features become polysemantic again.{" "} Typical values: 1e-3 to 1e-1. > [!tip] Insight > Interpretability is moving fast: from toy models (2022) to production frontier models (2024-2025). The core toolkit โ€” SAEs + circuit tracing โ€” now works at scale, but we still can't interpret full model behavior end-to-end on arbitrary inputs. ## Code Examples ```python import torch import torch.nn as nn class SparseAutoencoder(nn.Module): def __init__(self, d_model: int, n_features: int): super().__init__() # n_features >> d_model (overcomplete) self.encoder = nn.Linear(d_model, n_features) self.decoder = nn.Linear(n_features, d_model, bias=True) self.relu = nn.ReLU() def forward(self, x): # Center input around decoder bias x_centered = x - self.decoder.bias # Encode to sparse features f = self.relu(self.encoder(x_centered)) # Reconstruct x_hat = self.decoder(f) return x_hat, f def sae_loss(x, x_hat, f, lam=1e-2): """Reconstruction + L1 sparsity.""" recon = (x - x_hat).pow(2).mean() sparse = f.abs().mean() return recon + lam * sparse ``` ```python # Activation patching: is layer L causally important? clean_acts = {} def save_hook(module, input, output): clean_acts[module] = output.clone() # Run clean, save activations model.layer[L].register_forward_hook(save_hook) clean_out = model(clean_input) # Run corrupted, patch in clean activation at layer L def patch_hook(module, input, output): return clean_acts[module] model.layer[L].register_forward_hook(patch_hook) patched_out = model(corrupted_input) # Baseline: run corrupted input without patching model.layer[L]._forward_hooks.clear() corrupt_out = model(corrupted_input) # If patched_out โ‰ˆ clean_out, layer L is causally responsible recovery = 1 - (patched_out - clean_out).norm() / (corrupt_out - clean_out).norm() ``` ```python # Activation patching via forward hooks import torch def run_with_patch(model, tokens, layer, patch_tensor): """Run model but replace one layer's residual-stream output.""" hooks = [] def hook(module, inp, out): # out is a tuple in many HuggingFace models; patch the hidden state hidden = out[0] if isinstance(out, tuple) else out hidden = patch_tensor.to(hidden.device) return (hidden,) + out[1:] if isinstance(out, tuple) else hidden hooks.append(layer.register_forward_hook(hook)) with torch.no_grad(): logits = model(tokens).logits for h in hooks: h.remove() return logits def logit_diff(logits, correct_tok, wrong_tok): """Metric: log-prob(correct) - log-prob(wrong) at last position.""" last = logits[:, -1, :] return (last[:, correct_tok] - last[:, wrong_tok]).item() ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** What is superposition and why does it make interpretability hard?
Answer Superposition is the phenomenon where neural networks represent more features than they have dimensions. A model with d-dimensional residual stream can represent far more than d features by encoding them as nearly-orthogonal directions in the space. This works because real-world features are sparse โ€” most are inactive for any given input, so interference is rare. It makes interpretability hard because individual neurons become polysemantic (respond to multiple unrelated concepts), so you can
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** How do sparse autoencoders work and what do they find?
Answer Sparse autoencoders (SAEs) learn to decompose a model
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** What are induction heads and why do they matter for in-context learning?
Answer Induction heads are a two-head circuit that implements pattern matching:
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** Explain the residual stream view of transformers.
Answer The residual stream view (Elhage et al., 2021) treats the residual connections as a shared communication bus. Each layer reads from and writes to this stream additively: x_l = x_{l-1} + Attn(x_{l-1}) + MLP(x_{l-1}). Attention heads move information between token positions (reading from source tokens, writing to destination tokens). MLPs process information at each position independently, acting as key-value memories that store factual associations. This view reveals that transformers are not deep sequential pipelines but shallow, wide networks where many components operate in parallel on a shared state. It also explains skip connections, composition of attention heads across layers, and why individual heads can be ablated with localized effects.
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** How does circuit tracing work? What has it revealed?
Answer Circuit tracing (Anthropic, 2025) combines sparse autoencoders with attribution methods to trace the full computational graph of a model on specific inputs. The process: (1) replace all MLP and attention outputs with SAE feature decompositions, (2) use attribution patching to compute how much each feature causally influences downstream features and the final output, (3) prune weak connections to reveal a sparse subgraph โ€” the
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** What is polysemanticity and how do SAEs address it?
Answer Polysemanticity means a single neuron activates for multiple unrelated concepts โ€” e.g., one neuron fires for both academic citations AND dollar amounts. This happens because of superposition: the network encodes more features than neurons by sharing dimensions. SAEs address it by learning an overcomplete basis (many more features than neurons) with sparsity constraints. Each SAE feature tends to be monosemantic โ€” activating for one coherent concept. For example, a polysemantic MLP neuron might decompose into separate SAE features for
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** How could interpretability improve AI safety?
Answer Interpretability enables several safety-critical capabilities: (1) Detecting deception โ€” if a model is being deceptive, circuit tracing could reveal internal representations that diverge from stated outputs; Anthropic found features corresponding to sycophancy and deception in Claude. (2) Understanding refusal โ€” trace why a model refuses (or doesn
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** What did scaling monosemanticity find in Claude 3 Sonnet?
Answer Anthropic
## Further Reading - [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html) Elhage et al. 2021 โ€” reverse-engineering transformer computations as interpretable circuits - [Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html) Templeton et al. 2024 โ€” dictionary learning at scale to find interpretable features in large models - [Toy Models of Superposition](https://transformer-circuits.pub/2022/toy_model/index.html) Elhage et al. 2022 โ€” understanding how neural networks represent more features than dimensions - [Towards Monosemanticity: Decomposing Language Models with Dictionary Learning](https://transformer-circuits.pub/2023/monosemantic-features/index.html) Bricken et al. 2023 โ€” sparse autoencoders on a one-layer transformer find thousands of interpretable features; the predecessor to scaling monosemanticity - [Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small](https://arxiv.org/abs/2211.00593) Wang et al. 2022 โ€” end-to-end circuit analysis of a real linguistic capability; the canonical example of mechanistic interpretability on a real model - [3Blue1Brown โ€” But what is a GPT? (YouTube)](https://www.youtube.com/watch?v=KV5gbOmHbjU) Visual intuition for what transformer attention heads actually compute โ€” useful foundation before diving into circuit-level interpretability - [Circuit Tracing: Revealing Computational Graphs in Language Models](https://transformer-circuits.pub/2025/attribution-graphs/methods.html) Ameisen et al. 2025 โ€” combining SAEs with attribution patching to trace full computational circuits in Claude 3.5 Haiku - [On the Biology of a Large Language Model](https://transformer-circuits.pub/2025/attribution-graphs/biology.html) Lindsey et al. 2025 โ€” probing Claude 3.5 Haiku - [Emotion Concepts and their Function in a Large Language Model](https://transformer-circuits.pub/2026/emotions/index.html) Sofroniew et al. 2026 โ€” investigating how emotion concept representations form and function inside Claude Sonnet 4.5 - [Emergent Introspective Awareness in Large Language Models](https://transformer-circuits.pub/2025/introspection/index.html) Lindsey 2025 โ€” evidence of introspective awareness where models can report on their own internal representations - [Chris Olah](https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/) Olah 2014 โ€” foundational visual intuition for how neural networks transform data through manifold operations - [3Blue1Brown โ€” How might LLMs store facts (Chapter 7)](https://www.youtube.com/watch?v=9-Jl0dxWQs8) Grant Sanderson 2024 โ€” visual deep dive into MLP layers as key-value memories, superposition, and why individual neurons are hard to interpret. - [Neuronpedia โ€” Interactive SAE Feature Explorer](https://www.neuronpedia.org/) Open-source platform for exploring 50M+ sparse autoencoder features across GPT-2, Gemma, Llama โ€” hands-on companion to the theory in this module. ## Related LLM Evaluation ยท Eval-Driven Development ยท Safety & Alignment ยท Mechanistic Interpretability ยท Induction Heads & ICL --- --- title: "Safety & Alignment" part: "Trust & Evaluation" number: 43 emoji: "๐Ÿ›ก๏ธ" subtitle: "Jailbreaking, alignment faking, and defenses that work" tags: ["trust", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ›ก๏ธ Safety & Alignment > Jailbreaking, alignment faking, and defenses that work > [!question] Key Question > 78% of the time under RL pressure, Claude faked being aligned โ† Interpretability | โ†’ Mechanistic Interpretability ## Contents - Safety Defense Layers - The Intuition - Key Formulas - Break It โ€” See What Happens - Real-World Numbers - Introspective Awareness ## Key Insights > [!tip] Insight > Safety is not a single technique โ€” it is an arms race. Each defense layer (RLHF, classifiers, monitoring) has known bypasses. The goal is defense-in-depth: make the attacker's job harder at every layer, not impossible at any single one. > [!tip] Insight > The field is moving from "train it to be safe" (RLHF) to "verify it is safe" (classifiers, monitoring, red-teaming). Training alone is insufficient when models can fake alignment. The future is defense-in-depth with continuous evaluation. > [!tip] Insight > Dual-use concern: introspective awareness cuts both ways. On the positive side, grounded self-reports could make alignment monitoring more reliable โ€” a model that accurately perceives its own reasoning is easier to audit. On the concerning side, the same capability could facilitate deception: a model that can introspect on its own processing can also detect when it is being monitored (cf. alignment faking) and strategically manage what it surfaces. Stronger introspection in more capable models means this tension intensifies with scale. ## Code Examples ```python import torch import torch.nn as nn class SafetyClassifier(nn.Module): """Constitutional classifier on top of frozen LLM embeddings.""" def __init__(self, embed_dim=4096, hidden_dim=512): super().__init__() self.classifier = nn.Sequential( nn.Linear(embed_dim, hidden_dim), nn.ReLU(), nn.Dropout(0.1), nn.Linear(hidden_dim, 1), # safe/unsafe logit ) def forward(self, embeddings): # embeddings: [batch, embed_dim] from frozen LLM logits = self.classifier(embeddings) return torch.sigmoid(logits) # P(safe|x) def safety_constrained_reward(r_helpful, r_harmful, lam=10.0, threshold=0.1): """Safety-constrained reward: helpful minus penalty for harm.""" penalty = lam * torch.clamp(r_harmful - threshold, min=0.0) return r_helpful - penalty ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** What is alignment faking and why is it concerning?
Answer Alignment faking occurs when a model strategically complies with safety training during the training phase but reverts to unaligned behavior during deployment. Anthropic
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** Explain Constitutional AI and how it differs from standard RLHF.
Answer Standard RLHF uses human labelers to rank outputs and train a reward model. Constitutional AI (Anthropic, 2022) replaces human ranking with AI self-critique against a set of written principles (
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI, Google)_ **Q:** What are the main jailbreaking attack vectors and defenses?
Answer Major attack vectors: (1) Prompt injection โ€” embedding hidden instructions in user input or retrieved documents. (2) Many-shot jailbreaking โ€” filling the context window with examples of harmful Q&A pairs, exploiting in-context learning; success follows a power law with number of shots. (3) Persona attacks โ€” asking the model to roleplay as an unrestricted AI. (4) Encoding attacks โ€” Base64, ROT13, or other encodings to bypass content filters. (5) Crescendo attacks โ€” gradually escalating requests across a multi-turn conversation. Defenses: (a) RLHF/RLAIF safety training as the base layer, (b) constitutional classifiers that screen inputs and outputs (reduced jailbreak success from 86% to 4.4% on Claude), (c) input/output filters and perplexity-based detectors, (d) system prompt hardening, (e) red-teaming to discover new vectors proactively. Defense-in-depth is essential โ€” no single layer is sufficient.
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** How do constitutional classifiers work and what results did they achieve?
Answer Constitutional classifiers (Anthropic, 2025) are lightweight classifiers trained on synthetic data generated from constitutional principles. The process: (1) define safety rules as natural language principles, (2) use an LLM to generate synthetic examples of safe and unsafe content matching those principles, (3) train a classifier (typically on top of model embeddings) to distinguish safe from unsafe inputs/outputs. Results on Claude: reduced jailbreak success rate from 86% to 4.4% across a comprehensive set of attacks, while maintaining a low false-positive rate on benign queries (~0.38% refusal rate increase). The approach is notable because it
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Why is Chain-of-Thought monitoring necessary but insufficient for safety?
Answer CoT monitoring reads the model
### โ˜…โ˜…โ˜… _(OpenAI)_ **Q:** What is the weak-to-strong generalization problem?
Answer Weak-to-strong generalization (OpenAI, 2023) frames a core alignment challenge: how do we supervise models that are smarter than us? The setup: use a weaker model (e.g., GPT-2) as a
### โ˜…โ˜…โ˜† _(OpenAI)_ **Q:** How does deliberative alignment work in o-series models?
Answer Deliberative alignment (OpenAI, 2024) is observed in reasoning models like o1 that explicitly reason about safety policies in their chain-of-thought. Instead of relying solely on trained instincts (pattern-matching from RLHF), the model actively retrieves and reasons about its safety specifications during inference. Example: when asked to help with something potentially harmful, o1
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** Design a safety evaluation suite for a new model.
Answer A comprehensive safety eval suite needs multiple layers: (1) Automated red-teaming: use an attacker LLM to generate diverse jailbreak attempts (prompt injection, many-shot, encoding, persona) and measure attack success rate. (2) Benchmark suites: run TruthfulQA (hallucination), BBQ (social bias), ToxiGen (toxicity), MACHIAVELLI (deceptive behavior). (3) Human red-teaming: domain experts probe for category-specific harms (CBRN, cyber, manipulation) โ€” automated attacks miss creative human adversaries. (4) Alignment faking tests: vary system prompts to indicate training vs deployment, measure behavioral consistency. (5) Capability elicitation: test if the model can be prompted to reveal dangerous capabilities it wouldn
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** What is the overrefusal problem and how do you balance safety vs helpfulness?
Answer Overrefusal occurs when an overly cautious model refuses benign requests โ€” e.g., refusing to explain how locks work because it could relate to lockpicking. This degrades user trust and usefulness. Key metric: false refusal rate (percentage of safe queries incorrectly refused). Fixes: (1) calibrated safety classifiers with high precision โ€” optimize for low false-positive rate, not just low false-negative rate; (2) deliberative alignment where the model explicitly reasons about its safety rules before deciding to refuse (as in o1), producing more nuanced judgments; (3) separate safety scoring from helpfulness scoring so the model can be helpful on borderline queries while maintaining hard limits on truly dangerous ones; (4) demographic-stratified evaluation to ensure refusal rates are consistent across user groups and topics.
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** How does emergent misalignment work? Can narrow fine-tuning cause broad behavioral changes?
Answer Yes โ€” OpenAI
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** What is introspective awareness in LLMs, and why does it matter for alignment?
Answer Introspective awareness is a model
## Further Reading - [Alignment Faking in Large Language Models](https://www.anthropic.com/research/alignment-faking) Anthropic 2024 โ€” evidence that models can strategically fake alignment during training - [Constitutional Classifiers](https://www.anthropic.com/research/constitutional-classifiers) Anthropic 2025 โ€” defending against universal jailbreaks with constitution-trained input/output classifiers - [Many-shot Jailbreaking](https://www.anthropic.com/research/many-shot-jailbreaking) Anthropic 2024 โ€” exploiting long context windows to jailbreak LLMs with many in-context examples - [Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/abs/2212.08073) Bai et al. 2022 โ€” Anthropic - [Lilian Weng โ€” Adversarial Attacks on LLMs](https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/) Survey of jailbreak techniques, prompt injection, and defenses โ€” GCG suffixes, many-shot, and gradient-based attacks - [Universal and Transferable Adversarial Attacks on Aligned Language Models (GCG)](https://arxiv.org/abs/2307.15043) Zou et al. 2023 โ€” gradient-based suffix optimization that transfers across GPT-4, Claude, and Gemini; motivates the need for input classifiers - [Emergent Introspective Awareness in Large Language Models](https://transformer-circuits.pub/2025/introspection/index.html) Lindsey 2025 โ€” evidence that LLMs develop introspective awareness of their own internal states, with implications for alignment monitoring - [Emotion Concepts and their Function in a Large Language Model](https://transformer-circuits.pub/2026/emotions/index.html) Sofroniew et al. 2026 โ€” how emotion representations in Claude Sonnet 4.5 function and could affect alignment-relevant behavior ## Related LLM Evaluation ยท Eval-Driven Development ยท Interpretability ยท Mechanistic Interpretability ยท Induction Heads & ICL --- --- title: "PyTorch Debugging" part: "Interview Prep" number: 44 emoji: "๐Ÿ›" subtitle: "NaN loss, double softmax, missing zero_grad โ€” spot the bug" tags: ["interview", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ› PyTorch Debugging > NaN loss, double softmax, missing zero_grad โ€” spot the bug > [!question] Key Question > optimizer.zero_grad() is missing โ€” can you spot it in 5 minutes? ## Contents - Spot the Bug - The Most Common PyTorch Bugs - Why These Bugs Matter โ€” The Math - Break It โ€” See What Happens - Advanced Incidents โ€” Research Engineer Interview Scenarios - Interview Frequency โ€” Most Common Debugging Questions ## Key Insights > [!tip] Insight > NaN usually means one of four things: learning rate too high, log(0), division by zero, or exploding gradients. Check in that order. > [!tip] Insight > Pro debugging trick: use{" "} torch.autograd.set_detect_anomaly(True) during development. It pinpoints exactly which operation produced the NaN, at the cost of slower training. > [!tip] Insight > GradScaler multiplies loss by a large factor (default 2^16) before backward so tiny fp16 gradients stay in representable range . If inf/NaN is detected, it halves the scale and skips that step. This is why AMP training occasionally shows "skipped steps" in logs โ€” it is working as intended. > [!tip] Insight > Rule of thumb: in distributed training, every rank must call exactly the same sequence of collectives (all-reduce, all-gather, broadcast). Any conditional skip = potential deadlock. Use torch.distributed.monitored_barrier(){" "} to debug which rank is stuck. > [!tip] Insight > This is one of the most common attention bugs. The post-softmax mask creates two problems: (1) weights no longer sum to 1 so output magnitudes shrink, and (2) padding tokens still contribute to the softmax denominator, diluting attention to real tokens. Always mask in logit space with -inf. > [!tip] Insight > Low GPU utilization almost always means the GPU is starved for data. Before optimizing the model, check if DataLoader is the bottleneck. Target: data loading time should be less than forward+backward time so the GPU never waits. > [!tip] Insight > NaN debugging and "model not learning" are near-universal in ML interviews. If you can systematically debug these two, you pass most PyTorch debugging rounds. ## Code Examples ```python for batch in loader: logits = model(batch) loss = criterion(logits, y) loss.backward() optimizer.step() # What's missing? ``` ```python probs = F.softmax(logits, dim=-1) loss = F.cross_entropy( probs, targets ) ``` ```python # logits: [batch, seq, vocab] probs = F.softmax(logits, dim=0) next_token = probs.argmax(dim=-1) ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** Debug NaN Loss: Your training loss becomes NaN after a few hundred steps. The model was training fine initially. Here
Answer Two bugs: (1) Manual log(softmax(x)) is numerically unstable โ€” when softmax outputs a value very close to 0, log(0) = -inf, which propagates to NaN. Fix: use F.log_softmax(logits, dim=-1) or F.cross_entropy() which uses LogSumExp internally. (2) This is computing loss over ALL classes, not just the target class. You need to gather the target class probabilities. Better fix: replace the entire loss computation with F.cross_entropy(logits, targets), which handles numerical stability and target selection in one fused kernel.
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** Debug Model Not Learning: Your model
Answer Missing optimizer.zero_grad()! Gradients accumulate across ALL batches, not just the 4 intended for gradient accumulation. After optimizer.step(), the old gradients remain and mix with new ones. The effective gradient becomes a noisy sum of hundreds of batches. Fix: add optimizer.zero_grad() after optimizer.step(). Also, for proper gradient accumulation, you should divide the loss by 4 (the accumulation steps) so the effective learning rate stays correct:\n\n
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Debug OOM: Your model fits in GPU memory during eval but crashes with OOM during training. The model uses 8GB and you have 16GB free. Why?\n\n
Answer During training, PyTorch stores activations for backprop (the computation graph). For a large model, this can easily 2-4x memory usage. Additionally, the optimizer states (Adam stores momentum + variance = 2x parameters) take another ~16GB for an 8GB model. Total: 8GB (params) + 8-16GB (activations) + 16GB (Adam states) = 32-40GB, far exceeding 16GB. Fixes: (1) Use gradient checkpointing: model.gradient_checkpointing_enable() โ€” trades compute for memory by recomputing activations during backward pass. (2) Use mixed precision: with torch.cuda.amp.autocast() to halve activation memory. (3) Reduce batch size. (4) Use optimizer with less state (SGD, Adafactor).
### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** Debug Wrong Accuracy: Your model gets 99% train accuracy but only 52% test accuracy (binary classification). The dataset is balanced. Here
Answer Data leakage: normalization statistics (mean/std) are computed on the ENTIRE dataset including test data, so test set information leaks into training. But the bigger bug: DataLoader has shuffle=False by default! The training data isn
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** Debug Data Leakage: You
Answer The TF-IDF vectorizer is fit on the ENTIRE dataset (fit_transform on all data), so the vocabulary and IDF weights include information from the test set. This inflates test accuracy because the feature representation is optimized for test data too. In production, new text contains words/distributions the model hasn
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** Debug Slow Training: Your training is 5x slower than expected. GPU utilization is only 20%. Here
Answer Multiple issues: (1) num_workers=0 means data loading is in the main process โ€” the GPU sits idle waiting for data. Fix: num_workers=4 or higher. (2) pin_memory=False means CPU-to-GPU transfer isn
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** Debug: Your distributed training job hangs after 100 steps. All GPUs show 0% utilization.
Answer This is a classic NCCL collective deadlock. Check: (1) Rank mismatch โ€” one rank may have skipped a forward/backward pass (e.g., due to data filtering or an early
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Debug: Your AMP (automatic mixed precision) training shows loss=NaN after 500 steps but works fine in fp32. Training loss looks normal for the first 499 steps. Diagnose the issue.
Answer fp16 has a narrow dynamic range (max ~65504, min subnormal ~6e-8). After 500 steps, gradient magnitudes may exceed fp16 range, causing overflow โ†’ inf โ†’ NaN. Common culprits: (1) No GradScaler โ€” without dynamic loss scaling, small gradients underflow to zero and large gradients overflow to inf in fp16. (2) Attention logits overflow โ€” Q*K^T values can exceed 65504 in fp16 for large d_model; fix with scaled dot-product attention or computing attention in fp32. (3) Loss spikes โ€” a single large loss value can overflow the fp16 gradient. Fix: use
## Further Reading - [A Recipe for Training Neural Networks](https://karpathy.github.io/2019/04/25/recipe/) Karpathy 2019 โ€” systematic approach to debugging and training neural networks from scratch - [PyTorch Frequently Asked Questions](https://pytorch.org/docs/stable/notes/faq.html) PyTorch docs โ€” common issues with memory, parallelism, and reproducibility - [PyTorch Autograd Mechanics](https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html) Official deep-dive into how autograd builds the computation graph, handles in-place ops, and propagates gradients โ€” essential for debugging gradient issues - [Karpathy โ€” micrograd: building autograd from scratch (YouTube)](https://www.youtube.com/watch?v=VMj-3S1tku0) Building a scalar-valued autograd engine from scratch โ€” the best way to develop intuition for what PyTorch is doing under the hood - [PyTorch Compile Troubleshooting Guide](https://pytorch.org/docs/stable/torch.compiler_troubleshooting.html) Debugging torch.compile graph breaks, dynamic shapes, and recompilations โ€” increasingly important for modern training pipelines --- --- title: "Agent Harness Architecture" part: "AI Engineering" number: 45 emoji: "โš™๏ธ" subtitle: "Agentic loops, tool orchestration, permission systems, and context management" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # โš™๏ธ Agent Harness Architecture > Agentic loops, tool orchestration, permission systems, and context management > [!question] Key Question > Claude Code runs a while(true) loop โ€” here's what's inside โ†’ Tool System ## Key Insights > [!tip] Insight > This cache split is a significant optimization. The static portion (~8K tokens) gets a cache hit on every request within a session, while only the dynamic portion (~2K tokens) needs reprocessing. For agents that make 50+ API calls per task, this saves substantial time-to-first-token latency. > [!tip] Insight > Compaction is not optional โ€” without it, agents hit the context limit after ~20 tool calls in a complex task. The key challenge is preserving enough context that the agent does not lose track of what it was doing, while freeing enough space to continue working. > [!tip] Insight > The 13,000 token buffer for auto-compaction is carefully chosen: it leaves enough room for one more API call (system prompt + response) while triggering early enough that the compaction summary itself fits within the remaining space. Too small a buffer and the compaction itself can fail. ## Code Examples ```typescript async function* queryLoop( messages: Message[], tools: Tool[], systemPrompt: string ): AsyncGenerator { while (true) { // 1. Auto-compact if context too long if (tokenCount(messages) > threshold) { messages = compact(messages); } // 2. Call LLM const response = await callApi(messages, systemPrompt, tools); // 3. Parse response const { textBlocks, toolBlocks } = parse(response); yield textBlocks; // stream to user // 4. Transition decision if (toolBlocks.length === 0) { return; // done! } // 5. Execute tools const results = await runTools(toolBlocks); messages.push(assistantMsg(response)); messages.push(toolResults(results)); // loop back to step 1 } } ``` ```typescript function partitionToolCalls(toolCalls: ToolCall[]): ToolCall[][] { const batches: ToolCall[][] = []; let currentBatch: ToolCall[] = []; let currentIsReadonly: boolean | null = null; for (const call of toolCalls) { const isReadonly = call.tool.isReadOnly(); if (currentIsReadonly === null) { currentIsReadonly = isReadonly; } if (isReadonly === currentIsReadonly && isReadonly) { currentBatch.push(call); // group read-only together } else { if (currentBatch.length > 0) batches.push(currentBatch); currentBatch = [call]; currentIsReadonly = isReadonly; } } if (currentBatch.length > 0) batches.push(currentBatch); return batches; } ``` ```typescript function checkPermission( tool: Tool, input: unknown, context: Context ): "allow" | "deny" | "ask" { // Layer 1: Pre-tool hooks const hookResult = runPreHooks(tool, input); if (hookResult === EXIT_ALLOW) return "allow"; if (hookResult === EXIT_DENY) return "deny"; // Layer 2: Deny rules (absolute blocks โ€” evaluated first) if (matchesDenyRules(tool, input)) return "deny"; // Layer 3: Ask rules (require user confirmation for matched patterns) if (matchesAskRules(tool, input)) return "ask"; // Layer 4: Allow rules (pre-approved patterns) if (matchesAllowRules(tool, input)) return "allow"; // Layer 5: Permission mode if (mode === "bypassPermissions") return "allow"; if (mode === "dontAsk") return "allow"; if (mode === "plan" && tool.isWriteTool()) return "deny"; // Layer 6: Ask user (interactive fallback) return promptUser(tool, input); } ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** Design an agentic loop for a coding assistant. What are the key components?
Answer The core loop: user message โ†’ system prompt assembly โ†’ LLM API call โ†’ parse response โ†’ if tool_use blocks, execute tools and loop back; if text only, return to user. Key components: (1) System prompt with static (cached) and dynamic sections, (2) Tool registry with permission checks, (3) Context management with auto-compaction, (4) Streaming output to user while tools execute, (5) Error handling and retry logic. The loop must handle partial failures (some tools succeed, some fail) and never lose user context.
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** How would you implement a permission system for AI tool use that balances safety with usability?
Answer Layered permission hierarchy: (1) Pre-tool hooks โ€” shell scripts that can allow/deny based on arbitrary logic (exit 0=allow, exit 2=deny), enabling organization-specific policies. (2) Deny rules โ€” absolute blocks (e.g., never run
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** An agent is running out of context window mid-task. What strategies can you use?
Answer Four strategies, ordered by aggressiveness: (1) Microcompact โ€” summarize individual large tool results inline (e.g., a 500-line file read becomes a 10-line summary of relevant parts). (2) Context collapse โ€” structurally remove old system reminders and stale context. (3) Auto-compact โ€” at ~80% context usage, summarize the entire conversation history while preserving key facts and current task state. (4) Reactive compact โ€” emergency summarization when the API returns
### โ˜…โ˜…โ˜† _(Databricks, Anthropic)_ **Q:** Why partition tool calls into read-only parallel and write serial batches? What could go wrong without this?
Answer Read-only tools (Grep, Glob, Read) have no side effects โ€” they can safely run concurrently. Write tools (Bash, Edit) mutate state โ€” running them in parallel causes race conditions (e.g., two Edits to the same file, or a Bash command that depends on a prior Edit). The partitioning algorithm: scan the tool call sequence, group consecutive read-only calls into parallel batches, but every write tool gets its own serial batch. Without this: (1) File corruption from concurrent writes, (2) Non-deterministic behavior from unordered mutations, (3) Lost edits when two Edit calls target the same file. The throughput benefit is significant โ€” 3-5x faster for exploration-heavy tasks where the agent reads many files before making a change.
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** Design a sub-agent system. How do you handle context isolation, resource limits, and result aggregation?
Answer Sub-agents need: (1) Context isolation โ€” fresh QueryEngine with clean message history, not polluted by parent
## Further Reading - [Toolformer: Language Models Can Teach Themselves to Use Tools](https://arxiv.org/abs/2302.04761) Schick et al., 2023 โ€” training LLMs to decide when and how to call external tools. - [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) Yao et al., 2022 โ€” interleaving reasoning traces and actions for grounded decision-making. - [Reflexion: Language Agents with Verbal Reinforcement Learning](https://arxiv.org/abs/2303.11366) Shinn et al., 2023 โ€” agents that reflect on failures and improve across episodes. - [Claude Code (source)](https://github.com/anthropics/claude-code) Open-source reference for a production agent harness โ€” the architecture this module describes. - [Model Context Protocol Specification](https://modelcontextprotocol.io/specification) The open standard for connecting AI agents to external tools and data sources. - [SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering](https://arxiv.org/abs/2405.15793) Yang et al., 2024 โ€” production agent harness design lessons from solving real GitHub issues; ACI (agent-computer interface) design principles. - [LangChain AgentExecutor Architecture](https://python.langchain.com/docs/how_to/agent_executor/) The widely-adopted reference implementation of the tool-call loop โ€” useful comparison to Claude Code - [Karpathy: Software 2.0](https://karpathy.medium.com/software-2-0-a64152b37c35) Andrej Karpathy ## Related Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP ยท State Management --- --- title: "Tool System" part: "AI Engineering" number: 46 emoji: "๐Ÿ”ง" subtitle: "Tool interface, Zod schemas, registry, orchestration, and parallel execution" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ”ง Tool System > Tool interface, Zod schemas, registry, orchestration, and parallel execution > [!question] Key Question > 5 Grep calls run in parallel, but Bash always waits its turn โ€” why? โ† Agent Harness Architecture | โ†’ Sub-agents ## Key Insights > [!tip] Insight > Tool prompt descriptions total ~3K tokens. With deterministic ordering, these tokens get a cache hit on every API call within a session. Over 50+ calls per task, this saves significant cost and latency. > [!tip] Insight > The buildTool() pattern standardizes construction for all ~30 tools. Every tool gets the same validation, error handling, and hook integration โ€” no tool can bypass the pipeline. > [!tip] Insight > The 10-tool concurrency limit is a practical balance: higher limits increase memory pressure and file descriptor usage, while the diminishing returns of parallelism beyond 10 rarely justify the resource cost. Most exploration turns use 3-5 concurrent reads. ## Code Examples ```typescript interface Tool { name: string; prompt: string; // LLM sees this description inputSchema: ZodSchema; // validates LLM's input call(input: unknown, context: Context): Promise; isReadOnly(): boolean; isConcurrencySafe(): boolean; } function buildTool(config: { name: string; prompt: string; schema: ZodSchema; callFn: (input: unknown, ctx: Context) => Promise; readOnly?: boolean; concurrencySafe?: boolean; // independent knob โ€” a tool can be concurrency-safe without being read-only }): Tool { // Standardized constructor โ€” all 30 tools use this return { name: config.name, prompt: config.prompt, inputSchema: config.schema, async call(input, ctx) { const validated = config.schema.parse(input); // validate first return config.callFn(validated, ctx); // then execute }, isReadOnly: () => config.readOnly ?? false, // isConcurrencySafe can be independently configured; defaults to readOnly // as a conservative baseline but tools can override for finer control isConcurrencySafe: () => config.concurrencySafe ?? config.readOnly ?? false, }; } ``` ```typescript // Group read-only together (parallel), write tools separate (serial) function partitionToolCalls(calls: ToolCall[]): ToolCall[][] { const batches: ToolCall[][] = []; let current: ToolCall[] = []; for (const call of calls) { const isReadOnly = call.tool.isReadOnly(); const currentIsReadOnly = current[0]?.tool.isReadOnly() ?? isReadOnly; if (isReadOnly && currentIsReadOnly) { current.push(call); } else { if (current.length > 0) batches.push(current); current = [call]; } } if (current.length > 0) batches.push(current); return batches; } // Example: // Input: [Grep, Glob, Read, Bash, Grep] // Output: [[Grep, Glob, Read], // parallel batch // [Bash], // serial batch // [Grep]] // parallel batch ``` ```typescript async function executeToolCall( call: ToolCall, context: Context ): Promise { // 1. Find tool const tool = registry.get(call.name); if (!tool) return new ToolError(\`Unknown tool: \${call.name}\`); // 2. Validate input const parsed = tool.inputSchema.safeParse(call.input); if (!parsed.success) { return new ToolError(\`Invalid input: \${parsed.error}\`); } // 3. Pre-hooks const hookResult = await runPreHooks(tool, parsed.data); if (hookResult === DENY) return new ToolError("Denied by policy"); // 4. Permission check if (!checkPermission(tool, parsed.data, context)) { return new ToolError("Permission denied"); } // 5. Execute const result = await tool.call(parsed.data, context); // 6. Post-hooks await runPostHooks(tool, parsed.data, result); // 7. Return return new ToolResult(result); } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** Design a tool execution system for an AI agent that maximizes throughput while preventing race conditions.
Answer Key insight: the model decides what runs in parallel โ€” it emits multiple tool_use blocks in a single assistant turn. The harness receives them already-parallel and enforces concurrency safety on what it gets: read-only tools (Grep, Glob, Read) run concurrently (up to ~10), while write tools (Bash, Edit) are serialized to prevent race conditions. The partitioner
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** How would you validate untrusted input from an LLM before executing a tool?
Answer LLM outputs are untrusted by definition โ€” they can hallucinate field names, produce wrong types, or inject malicious values. Use a schema validation layer (Zod/JSON Schema): each tool declares its inputSchema, and every call is validated before execution. On validation failure, return a structured error to the LLM so it can retry with correct input. Beyond schema: (1) Sanitize string inputs (no shell injection via command fields), (2) Validate file paths are within allowed directories, (3) Check for dangerous patterns in Bash commands via deny rules. The key is fail-fast with descriptive errors โ€” the LLM learns from clear error messages.
### โ˜…โ˜…โ˜† _(Google, Databricks)_ **Q:** Why sort tools deterministically in the prompt? What
Answer API providers cache prompt prefixes โ€” if the first N tokens of your request match a previous request, you get a cache hit (lower latency, lower cost). Tool definitions are part of the system prompt. If tools appear in different orders across requests, the prefix changes and the cache misses. By sorting tools deterministically (e.g., alphabetical), the tool definitions are identical across requests within a session, maximizing cache hits. The impact is significant: tool descriptions total ~3K tokens, and cache hits on Anthropic
### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** How would you design a buildTool() pattern that standardizes tool construction across 30+ tools?
Answer A factory function that enforces a consistent interface: buildTool({ name, prompt, inputSchema, call, isReadOnly, isConcurrencySafe }). Benefits: (1) Type safety โ€” all tools implement the same interface, (2) Automatic registration โ€” buildTool adds the tool to the registry, (3) Standard validation โ€” Zod schema validation runs before call() automatically, (4) Consistent error handling โ€” wraps call() in try/catch with structured error returns, (5) Metadata for orchestration โ€” isReadOnly() and isConcurrencySafe() enable the partitioning algorithm. The pattern also makes testing trivial: mock the call() function while keeping validation and hooks intact.
## Further Reading - [Toolformer: Language Models Can Teach Themselves to Use Tools](https://arxiv.org/abs/2302.04761) Schick et al., 2023 โ€” training LLMs to decide when and how to call external tools. - [Gorilla: Large Language Model Connected with Massive APIs](https://arxiv.org/abs/2305.15334) Patil et al., 2023 โ€” improving LLM accuracy in API call generation via retrieval-augmented training. - [Claude Code (source)](https://github.com/anthropics/claude-code) Open-source reference for a production agent tool system โ€” the architecture this module describes. - [JSON Schema Specification](https://json-schema.org/specification) The standard behind tool input validation โ€” understanding this is key to designing tool interfaces. - [Berkeley Function-Calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard.html) Live benchmark for LLM tool-calling accuracy โ€” shows which models handle nested schemas, parallel calls, and error recovery best. - [OpenAI Function Calling Guide](https://platform.openai.com/docs/guides/function-calling) The reference design for LLM tool interfaces โ€” parallel function calling, strict mode, and tool_choice options. - [Zod: TypeScript-first Schema Validation](https://zod.dev/) The runtime validation library used in production agent tool systems โ€” bridges TypeScript types and runtime input validation. ## Related Agent Harness Architecture ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP ยท State Management --- --- title: "Sub-agents" part: "AI Engineering" number: 47 emoji: "๐Ÿค–" subtitle: "Context isolation, worktrees, background execution, and result aggregation" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿค– Sub-agents > Context isolation, worktrees, background execution, and result aggregation > [!question] Key Question > Each sub-agent gets a fresh 200K context window โ€” the parent keeps working โ† Tool System | โ†’ Commands & Skills ## Key Insights > [!tip] Insight > Different sub-agent types get different tool sets. An Explore agent gets only read tools (Grep, Glob, Read) for fast search. A Plan agent is typically read-only โ€” it explores and designs but does not modify files. A general-purpose agent gets everything except Agent. > [!tip] Insight > Sub-agents can potentially be resumed or continued via messaging โ€” the parent can send follow-up instructions to a running sub-agent. In practice, many harnesses treat sub-agents as disposable: if one fails, the parent retries or takes a different approach. Success returns a summary, failure returns an error message, and the parent decides what to do next. > [!tip] Insight > The context savings are dramatic: a sub-agent that makes 10 tool calls generates ~5K tokens of internal conversation. Without isolation, the parent would inherit all 5K tokens. With isolation, the parent receives only a ~200-token summary โ€” a 25x reduction in context growth. ## Code Examples ```typescript async function spawnSubAgent( task: string, tools: Tool[], background = false, isolation?: "worktree" ): Promise { // 1. Create fresh QueryEngine (clean context) const engine = new QueryEngine({ tools: filterTools(tools), // no Agent tool messages: [], // empty history abortController: new AbortController(), }); // 2. Optional worktree isolation if (isolation === "worktree") { const worktreePath = createGitWorktree(); engine.cwd = worktreePath; } // 3. Run the task if (background) { void engine.submit(task); // fire-and-forget return "Agent running in background"; } const result = await engine.submit(task); return result.finalText; // single string back to parent } ``` ```typescript type AgentType = "explore" | "plan" | "general"; function filterTools(tools: Tool[], agentType: AgentType = "general"): Tool[] { // Different agent types get different tool sets const EXCLUDED_ALWAYS = new Set(["Agent"]); // prevent fork bombs const TYPE_ALLOWED: Record | null> = { explore: new Set(["Grep", "Glob", "Read"]), plan: new Set(["Grep", "Glob", "Read"]), // read-only: explores and designs, doesn't modify files general: null, // all except EXCLUDED_ALWAYS }; const allowed = TYPE_ALLOWED[agentType]; return tools.filter( (t) => !EXCLUDED_ALWAYS.has(t.name) && (allowed === null || allowed.has(t.name)) ); } ``` ```typescript async function runWithSubAgents( parent: QueryEngine, tasks: Task[] ): Promise { // Parent dispatches independent tasks to sub-agents const promises = tasks.map((task) => spawnSubAgent( task.description, parent.tools, /* background= */ true, task.editsFiles ? "worktree" : "shared" ) ); // Wait for all sub-agents const results = await Promise.all(promises); // Each result is a short summary (not the full conversation) // Parent's context grows by ~200 tokens per sub-agent return results; } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Design a sub-agent system with context isolation. How do you prevent context blowup?
Answer Each sub-agent gets a fresh QueryEngine with empty message history โ€” not polluted by the parent
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** How would you implement parallel sub-agents that edit the same codebase safely?
Answer Git worktrees. Each sub-agent gets its own worktree (a separate checkout of the same repo at a different filesystem path). Sub-agent A edits files in worktree-A, sub-agent B in worktree-B โ€” no file conflicts. When both finish, the parent merges their changes (potentially resolving conflicts). Alternative approaches: (1) File locking โ€” simple but blocks concurrency, (2) Copy-on-write filesystem โ€” complex but transparent, (3) Patch-based โ€” each sub-agent produces a diff, parent applies them sequentially. Worktrees are the best tradeoff: shared object store (no repo duplication), separate working trees per agent, full isolation, and git handles merge.
### โ˜…โ˜…โ˜† _(Anthropic, Databricks)_ **Q:** What are the tradeoffs of foreground vs background sub-agent execution?
Answer Foreground (parent waits): simpler control flow, parent can use sub-agent
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** How would you implement a fan-out/fan-in pattern where 5 sub-agents research in parallel and a coordinator synthesizes results?
Answer Spawn all 5 sub-agents in a single message with non-overlapping file scopes to avoid conflicts. Each sub-agent writes findings to a dedicated output file (e.g., research-agent-1.md through research-agent-5.md) rather than returning inline โ€” this keeps sub-agent context isolated and avoids hitting the coordinator
## Further Reading - [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://arxiv.org/abs/2308.08155) Wu et al., 2023 โ€” a framework for multi-agent conversations with customizable agents. - [MetaGPT: Meta Programming for Multi-Agent Collaborative Framework](https://arxiv.org/abs/2308.00352) Hong et al., 2023 โ€” structured multi-agent collaboration with role-based task decomposition. - [Claude Code (source)](https://github.com/anthropics/claude-code) Production implementation of sub-agent spawning with worktree isolation. - [Anthropic: Building Effective Agents](https://www.anthropic.com/research/building-effective-agents) Anthropic - [Git Worktrees Documentation](https://git-scm.com/docs/git-worktree) The git primitive that enables parallel sub-agents to operate on the same repo without file conflicts. ## Related Agent Harness Architecture ยท Tool System ยท Commands & Skills ยท Plugins & MCP ยท State Management --- --- title: "Commands & Skills" part: "AI Engineering" number: 48 emoji: "๐Ÿ“" subtitle: "Slash commands, skill markdown files, prompt injection, and the command registry" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ“ Commands & Skills > Slash commands, skill markdown files, prompt injection, and the command registry > [!question] Key Question > /compact is instant but 'compact this' takes 3 seconds โ€” one never hits the API โ† Sub-agents | โ†’ Plugins & MCP ## Key Insights > [!tip] Insight > The / prefix is the user's explicit signal: "I want the built-in behavior." Without it, the same words go to the LLM for interpretation. This is why /compact is instant but "please compact" takes seconds and costs tokens. > [!tip] Insight > The command registry aggregates from 4 sources: hardcoded built-ins, skill directories, plugins, and bundled commands. It rebuilds on each invocation (not cached), so newly added skill files are picked up immediately without restarting the agent. > [!tip] Insight > The skill system turns prompt engineering into a shareable artifact. Instead of copying and pasting prompts between conversations, you write a .md file once and invoke it with /name. Teams can distribute skills via plugins, creating organizational knowledge that persists across sessions and users. ## Code Examples ```typescript function processInput(text: string): Result { if (text.startsWith("/")) { const cmd = findCommand(text); if (cmd.type === "local") { return cmd.execute(); // instant, no API } else if (cmd.type === "local-jsx") { return cmd.render(); // React component } else if (cmd.type === "prompt") { injectAsUserMessage(cmd.content); // user message, not system prompt (preserves cache) return sendToApi(); // LLM sees injected text } } else if (text.startsWith("!")) { return runShell(text.slice(1)); // direct shell } else { return sendToApi(text); // goes to LLM } } ``` ```yaml # Skills live in a named directory with a SKILL.md entry point. # Example: ~/.claude/skills/my-skill/SKILL.md --- name: my-skill description: Does something useful when_to_use: When user asks for X allowed-tools: Bash,Read,Write --- # Markdown body is injected as a user message (not system prompt). # This preserves the static system prompt cache across invocations. You are now in my-skill mode. Follow these rules: 1. Only read files in the ./src directory 2. Suggest changes but don't edit without confirmation 3. Format output as a markdown table # Note: frontmatter keys are snake_case ("when_to_use", "allowed-tools"). # After parsing they are exposed as JS properties (e.g. whenToUse) โ€” but # the YAML keys themselves must be snake_case / kebab-case. ``` ```typescript function getCommands(): Map { // Aggregate commands from 4 sources (priority order) const commands = new Map(); // 1. Hardcoded built-ins (highest priority) for (const cmd of getBuiltinCommands()) { commands.set(cmd.name, cmd); // /help, /compact, /clear, /resumeโ€ฆ } // 2. Plugin commands for (const plugin of getInstalledPlugins()) { for (const cmd of plugin.getCommands()) { if (!commands.has(cmd.name)) commands.set(cmd.name, cmd); } } // 3. Skill directory files (~/.claude/skills/, .claude/skills/) for (const skillDir of SKILL_DIRS) { for (const mdFile of glob(\`\${skillDir}/*.md\`)) { const skill = parseSkill(mdFile); if (!commands.has(skill.name)) { commands.set(skill.name, { name: skill.name, type: "prompt", content: skill.body, }); } } } // 4. Bundled commands (lowest priority) for (const cmd of getBundledCommands()) { if (!commands.has(cmd.name)) commands.set(cmd.name, cmd); } return commands; } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** Design a plugin system where users can extend an AI agent with markdown files.
Answer Skills live in named directories (~/.claude/skills//SKILL.md) with YAML frontmatter (name, description, whenToUse, allowed-tools). The system scans skill directories at startup and registers each as a
### โ˜…โ˜†โ˜† _(Google, Anthropic)_ **Q:** How do you decide what runs locally vs what goes to the LLM?
Answer The split is based on whether the operation requires LLM reasoning. /compact, /help, /clear are deterministic โ€” they always do the same thing regardless of context, so they run locally (instant, no API cost).
### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** How would you implement a command registry that aggregates commands from 4 different sources?
Answer The getCommands() function aggregates from: (1) Hardcoded built-in commands (/help, /compact, /clear), (2) Skill directories (markdown files from ~/.claude/skills/ and .claude/skills/), (3) Plugin-provided commands (from installed plugins), (4) Bundled commands (shipped with the agent). Each source returns commands with the same interface: { name, type, description, execute/content }. Priority order handles conflicts: hardcoded > plugins > skills > bundled. The registry is rebuilt on each invocation (not cached) so newly added skills are picked up immediately without restart. Tab completion uses the registry for autocomplete.
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** Design a skill that chains multiple other skills โ€” what are the error handling challenges when a mid-chain skill fails?
Answer A chaining skill invokes sub-skills sequentially, passing each output as input to the next (e.g., /fetch-url โ†’ /summarize โ†’ /file-to-obsidian). The core error challenge: failure at step N has already committed side effects from steps 1 through N-1 with no automatic rollback. Mitigations: (1) Make each step idempotent so re-runs are safe, (2) Write intermediate results to temp files before committing โ€” if step N fails, the user can resume from step N-1
## Further Reading - [Claude Code (source)](https://github.com/anthropics/claude-code) Production implementation of the command/skill system described in this module. - [VS Code Extension API](https://code.visualstudio.com/api) The gold standard for editor extensibility โ€” similar command/contribution patterns. - [Model Context Protocol Specification](https://modelcontextprotocol.io/specification) The protocol that enables external tools and skills to integrate with AI agents. - [YAML Specification](https://yaml.org/spec/1.2.2/) The YAML spec underlying skill frontmatter โ€” understanding anchors, block scalars, and type coercion prevents subtle parsing bugs. - [Bash Tab Completion Guide](https://www.gnu.org/software/bash/manual/html_node/Programmable-Completion.html) GNU Bash programmable completion โ€” the shell mechanism that agent CLIs mirror for /command tab completion. - [Anthropic Claude Code Skills Documentation](https://docs.anthropic.com/en/docs/claude-code/slash-commands) Official docs for custom slash commands in Claude Code โ€” creating, organizing, and distributing skills. ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Plugins & MCP ยท State Management --- --- title: "Plugins & MCP" part: "AI Engineering" number: 49 emoji: "๐Ÿ”Œ" subtitle: "Model Context Protocol, external tool servers, plugin lifecycle, and transport layers" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ”Œ Plugins & MCP > Model Context Protocol, external tool servers, plugin lifecycle, and transport layers > [!question] Key Question > Claude doesn't know if a tool is built-in or from an MCP server โ€” by design โ† Commands & Skills | โ†’ State Management ## Key Insights > [!tip] Insight > Making MCP tools indistinguishable prevents the LLM from developing bias. If the LLM knew some tools were "external," it might prefer built-in tools (assuming they're more reliable) or avoid MCP tools (assuming they're slower). Equal treatment ensures the LLM picks the best tool for the job. > [!tip] Insight > MCP is to AI tools what USB is to peripherals: a standard interface that lets any server provide capabilities to any agent. Before MCP, every agent had its own bespoke tool integration. MCP makes tools portable across agents. > [!tip] Insight > MCP adoption is growing rapidly. The ecosystem now includes servers for databases (PostgreSQL, SQLite), cloud providers (AWS, GCP), development tools (GitHub, Jira), and knowledge bases (Notion, Confluence). Each server adds specialized tools without modifying the agent itself. ## Code Examples ```json { "mcpServers": { "my-server": { "command": "node", "args": ["server.js"], "transport": "stdio" }, "remote-api": { "url": "https://api.example.com/mcp", "transport": "sse" } } } ``` ```typescript class MCPClient { private transport: Transport | null = null; connect(config: MCPServerConfig): void { // Start server process, establish transport if (config.transport === "stdio") { const process = spawn(config.command, config.args); this.transport = new StdioTransport(process); } else if (config.transport === "sse") { this.transport = new SSETransport(config.url); } } async listTools(): Promise { // Get tools from server โ€” same schema as built-in return this.transport!.request("tools/list"); } async callTool(name: string, input: unknown): Promise { // Execute tool on server โ€” returns same format as built-in return this.transport!.request("tools/call", { name, arguments: input }); } } ``` ```typescript async function dispatchToolCall(call: ToolCall): Promise { // Route to built-in or MCP โ€” transparent to LLM // Check if this tool belongs to an MCP server const mcpServer = findMCPServerForTool(call.name); let result: unknown; if (mcpServer) { // MCP dispatch โ€” JSON-RPC over transport result = await mcpServer.callTool(call.name, call.input); } else { // Built-in dispatch โ€” direct execution const tool = registry.get(call.name); result = await tool.call(call.input); } // Same tool_result format regardless of source return new ToolResult({ content: result }); } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Design a protocol for extending an AI agent with external tool servers.
Answer The Model Context Protocol (MCP) approach: (1) Define a transport layer โ€” stdio (local processes), HTTP/SSE (remote servers). (2) Standard message format โ€” JSON-RPC for request/response (tools/list, tools/call). (3) Tool schema compatibility โ€” MCP tools use the same JSON Schema as built-in tools, so the LLM sees no difference. (4) Lifecycle management โ€” the agent starts server processes on demand, maintains connections, and handles crashes/restarts. Key design decisions: tools are stateless (server can restart without losing tool state), discovery is dynamic (tools/list at connection time, not hardcoded), and the LLM never knows whether a tool is built-in or external โ€” this prevents bias in tool selection.
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** What are the security implications of running external tool servers?
Answer MCP servers run as separate processes with access to the host system. Risks: (1) A malicious MCP server could exfiltrate data via its tool responses โ€” the LLM sends file contents as tool input, the server sends them to a remote endpoint. (2) A compromised server could return poisoned tool results that manipulate the LLM
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** How would you design a plugin system that provides skills, hooks, MCP servers, and custom tools through a single package?
Answer A plugin is a package with a manifest declaring its contributions: { skills: [
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** What security risks does stdio-based MCP transport introduce compared to HTTP, and how would you mitigate them?
Answer Stdio transport launches the MCP server as a child process โ€” the agent controls its stdin/stdout directly. Risks: (1) Supply-chain attacks โ€” a malicious npm package in the server
## Further Reading - [Model Context Protocol Specification](https://modelcontextprotocol.io/specification) The open standard for connecting AI agents to external tools and data sources. - [Claude Code (source)](https://github.com/anthropics/claude-code) Production implementation of MCP client integration and plugin architecture. - [JSON-RPC 2.0 Specification](https://www.jsonrpc.org/specification) The transport protocol underlying MCP communication between client and server. - [Language Server Protocol](https://microsoft.github.io/language-server-protocol/) Inspiration for MCP โ€” a similar protocol for editor-language server communication. - [MCP Servers Registry](https://github.com/modelcontextprotocol/servers) Official list of reference MCP server implementations โ€” filesystem, GitHub, Postgres, Puppeteer, and more. - [MCP Quickstart](https://modelcontextprotocol.io/quickstart) Step-by-step guide to building and connecting your first MCP server โ€” covers stdio and Streamable HTTP transports. - [OAuth 2.0 Authorization Framework (RFC 6749)](https://datatracker.ietf.org/doc/html/rfc6749) The auth standard underlying MCP ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท State Management --- --- title: "State Management" part: "AI Engineering" number: 50 emoji: "๐Ÿ—„๏ธ" subtitle: "Dual state systems: React context for UI, module state for services" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ—„๏ธ State Management > Dual state systems: React context for UI, module state for services > [!question] Key Question > Two state systems coexist โ€” one triggers re-renders, one doesn't. Mix them up and the terminal freezes. โ† Plugins & MCP | โ†’ Context Compaction ## Key Insights > [!tip] Insight > The custom store uses memoized selectors instead of Redux or Zustand โ€” it's a deliberately minimal implementation. Each component subscribes to exactly the slice it needs, preventing cascade re-renders when unrelated state changes. > [!tip] Insight > The boundary between the two systems is clear: if a human needs to see the change, it goes in React state. If it's internal bookkeeping, it goes in module state. When in doubt, start in module state โ€” it's cheaper. Promote to React state only when you need UI reactivity. > [!tip] Insight > The custom store is intentionally minimal โ€” no middleware, no devtools, no time-travel debugging. For a terminal app with a small state surface, the overhead of a full state library (Redux: ~7KB, Zustand: ~1KB) isn't justified. The custom implementation is ~100 lines and does exactly what's needed: selective subscriptions with memoized equality checks. ## Code Examples ```typescript interface AppState { // Provider pattern โ€” wraps React tree settings: Settings; // model, theme, preferences (settings.model, settings.theme) permissions: string; // 'default' | 'bypass' | 'plan' | 'acceptEdits' tasks: Task[]; // background sub-agent tasks mcpConnections: string[]; // active MCP server connections } // Real implementation uses useSyncExternalStore for slice-based subscriptions: // a component only re-renders when its selected slice changes. // Plain useContext would re-render on ANY context update โ€” not what we want. function useAppState(selector: (state: AppState) => T): T { const store = useContext(AppStoreContext); const get = () => selector(store.getState()); // useSyncExternalStore: re-renders only when selected value changes (Object.is) return useSyncExternalStore(store.subscribe, get, get); } // Usage in components: const model = useAppState(s => s.settings.model); // re-renders on model change only const perms = useAppState(s => s.permissions); // re-renders on perm change only ``` ```typescript // Module state โ€” no re-renders, no subscriptions let _sessionId: string | null = null; let _totalCost: number = 0.0; let _cwd: string | null = null; const _modelUsage: Record = {}; function getSessionId(): string | null { return _sessionId; } function setSessionId(id: string): void { _sessionId = id; } function addCost(amount: number): void { _totalCost += amount; // no UI update โ€” just bookkeeping } function getTotalCost(): number { return _totalCost; // read synchronously when needed } ``` ```typescript function saveSession(sessionId: string): void { // Serialize both state systems to disk const data = { messages: getMessages(), appState: { settings: appState.settings, permissions: appState.permissions, }, moduleState: { cost: getTotalCost(), cwd: getCwd(), modelUsage: getModelUsage(), }, }; writeJson(\`~/.claude/sessions/\${sessionId}.json\`, data); } function resumeSession(sessionId: string): void { // Reconstruct both state systems from disk const data = readJson(\`~/.claude/sessions/\${sessionId}.json\`); setMessages(data.messages); restoreAppState(data.appState); setTotalCost(data.moduleState.cost); setCwd(data.moduleState.cwd); // System prompt rebuilt fresh โ€” dynamic sections may have changed } ``` ## Interview Questions ### โ˜…โ˜†โ˜† _(Google, Meta)_ **Q:** Why would you use two state systems instead of one unified store?
Answer Performance. React state triggers re-renders โ€” every state change redraws the UI. In a terminal-based agent, re-rendering is expensive (full screen redraw). Some state changes are frequent but invisible: cost increments (every API call), token counts (every message), session timers. Putting these in React state would cause hundreds of unnecessary re-renders per task. Solution: two systems. React state (AppState) holds UI-visible state (settings, permissions, task list) that should trigger re-renders. Module state (plain getters/setters) holds high-frequency invisible state (cost, tokens, session ID) that shouldn
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** Design a state system where some state triggers UI updates and some doesn
Answer Two layers: (1) Reactive layer โ€” useSyncExternalStore with a custom store and selector (useAppState(s => s.settings.model)). Components only re-render when their selected slice changes (Object.is comparison), not on every state update. Custom store (not Redux/Zustand) for minimal overhead. (2) Module layer โ€” plain TypeScript variables with getter/setter functions. No subscriptions, no observers, no re-renders. Read synchronously when needed. The boundary is clear: if the user needs to see the change, it goes in the reactive layer. If it
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** How would you implement session persistence and resumption for a stateful agent?
Answer Serialize the conversation transcript as JSONL to ~/.claude/projects//.jsonl (Claude Code
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** How do you ensure critical state survives context window compaction without bloating the prompt?
Answer Context compaction collapses conversation history into a summary, discarding raw messages. State that lives only in conversation turns (e.g.,
## Further Reading - [Claude Code (source)](https://github.com/anthropics/claude-code) Production implementation of the dual state architecture described in this module. - [React Context API (useContext)](https://react.dev/reference/react/useContext) React - [Zustand](https://github.com/pmndrs/zustand) A minimal state library for React โ€” similar memoized selector pattern, different implementation. - [Redux Selector Pattern (Reselect)](https://reselect.js.org/) The canonical memoized selector library โ€” the pattern behind AppState slice subscriptions that prevent unnecessary re-renders. - [XState: State Machines for JavaScript](https://stately.ai/docs) Formal state machines for agent control flow โ€” the alternative to ad-hoc React state for tracking agent lifecycle (idle โ†’ running โ†’ waiting โ†’ done). - [Immer: Immutable State Made Simple](https://immerjs.github.io/immer/) Copy-on-write immutable update library โ€” used in agent state managers to safely update deeply nested AppState without mutation bugs. ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Context Compaction" part: "AI Engineering" number: 51 emoji: "๐Ÿ—œ๏ธ" subtitle: "Auto-compact, reactive compact, microcompact, context collapse, and token budgets" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ—œ๏ธ Context Compaction > Auto-compact, reactive compact, microcompact, context collapse, and token budgets > [!question] Key Question > At 80% context usage, the agent silently summarizes its own history to keep going โ† State Management | โ†’ Terminal UI (Ink) ## Key Insights > [!tip] Insight > The 13,000 token buffer is carefully chosen: it leaves enough room for one more API call (system prompt + response) while triggering early enough that the compaction summary itself fits within the remaining space. > [!tip] Insight > The four strategies form a defense-in-depth: context collapse removes structural bloat (free), microcompact shrinks individual results (cheap), auto-compact summarizes history (moderate cost), and reactive compact is the emergency fallback (highest cost but prevents total failure). Each layer catches what the previous one missed. > [!tip] Insight > Without compaction, agents fail after ~20 tool calls on a 200K context window. With all four strategies active, agents can sustain 100+ tool calls in a single session โ€” each compaction cycle frees ~60% of the context, allowing the agent to continue indefinitely. ## Code Examples ```typescript const AUTOCOMPACT_BUFFER = 13_000; // safety margin in tokens const WARNING_BUFFER = 20_000; function shouldAutoCompact(messages: Message[], model: string): boolean { const used = countTokens(messages); const limit = getContextLimit(model); return used > (limit - AUTOCOMPACT_BUFFER); } function shouldWarn(messages: Message[], model: string): boolean { const used = countTokens(messages); const limit = getContextLimit(model); return used > (limit - WARNING_BUFFER); } ``` ```typescript async function compact(messages: Message[]): Promise { // Summarize old messages, keep recent ones const old = messages.slice(0, -5); // everything except last 5 const recent = messages.slice(-5); // keep recent context intact const summary = await callApi({ system: "Summarize this conversation concisely. Preserve: " + "current task, file paths, decisions made, errors.", messages: old, }); return [{ role: "system", content: summary }, ...recent]; } async function reactiveCompact( messages: Message[], error: Error ): Promise { // Emergency compaction after API 'prompt too long' error if (hasAttemptedReactiveCompact) { throw new Error("Compaction failed twice โ€” escalate to user"); } hasAttemptedReactiveCompact = true; return compact(messages); // caller retries the API call } ``` ```typescript async function microcompact(toolResult: string): Promise { // Shrink a single large tool result if (toolResult.length > 10_000) { const summary = await callApi({ system: "Summarize this tool output briefly. " + "Keep: key findings, paths, line numbers, errors.", messages: [{ role: "user", content: toolResult }], }); return \`[microcompacted] \${summary}\`; } return toolResult; // small enough, keep as-is } function contextCollapse(messages: Message[]): Message[] { // Remove stale system-reminder sections (no API call) return messages.filter( msg => !(isSystemReminder(msg) && isStale(msg)) ); } ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** An agent hits the context limit mid-task. Design a recovery strategy.
Answer Reactive compaction: (1) Catch the
### โ˜…โ˜…โ˜… _(Meta)_ **Q:** Compare proactive vs reactive compaction. When does each fail?
Answer Proactive (auto-compact at ~80%): triggers before the limit is hit, giving the compaction summary room to fit. Fails when: (1) A single tool result is so large it pushes past the limit in one step (no time to compact), (2) The compaction summary itself is too long (recursive problem). Reactive (on API error): handles the edge cases proactive misses โ€” when the limit is exceeded despite auto-compact. Fails when: (1) The conversation is already minimal (nothing to summarize), (2) The API error is not
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** How would you decide what to keep vs what to summarize?
Answer Recency + relevance heuristic: (1) Always keep the last N messages (typically 5) โ€” these represent the current task state and the agent
### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** Design a microcompact system that shrinks individual tool results without losing critical information.
Answer Microcompact targets a single tool result, not the whole conversation. Trigger: tool result exceeds a threshold (e.g., 10K characters). Process: (1) Send the tool result to the LLM with the prompt
## Further Reading - [Claude Code (source)](https://github.com/anthropics/claude-code) Production implementation of all four compaction strategies described in this module. - [Lost in the Middle: How Language Models Use Long Contexts](https://arxiv.org/abs/2307.03172) Liu et al., 2023 โ€” LLMs struggle with information in the middle of long contexts, motivating compaction. - [Scaling Transformer to 1M tokens with RingAttention](https://arxiv.org/abs/2310.01889) Liu et al., 2023 โ€” extending context windows, reducing but not eliminating the need for compaction. - [MemGPT: Towards LLMs as Operating Systems](https://arxiv.org/abs/2310.08560) Packer et al., 2023 โ€” virtual context management with paging, a complementary approach to compaction. - [In-Context Retrieval-Augmented Language Models](https://arxiv.org/abs/2302.00083) Ram et al., 2023 โ€” dynamically retrieving only the relevant chunks rather than keeping the full context, the retrieval complement to compaction. - [Anthropic: Long Context Tips](https://docs.anthropic.com/en/docs/build-with-claude/long-context-tips) Anthropic - [Lilian Weng: The Transformer Family v2](https://lilianweng.github.io/posts/2023-01-27-the-transformer-family-v2/) Deep dive on context window extensions and memory-augmented transformers โ€” the research landscape that motivates runtime compaction. ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Terminal UI (Ink)" part: "AI Engineering" number: 52 emoji: "๐Ÿ–ฅ๏ธ" subtitle: "React reconciler for terminals, Yoga flexbox, ANSI rendering, and keyboard focus" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ–ฅ๏ธ Terminal UI (Ink) > React reconciler for terminals, Yoga flexbox, ANSI rendering, and keyboard focus > [!question] Key Question > It's React โ€” but instead of DOM nodes, it writes ANSI escape codes to stdout โ† Context Compaction | โ†’ Memory System ## Key Insights > [!tip] Insight > The same pattern powers react-native (mobile), react-three-fiber (3D/WebGL), and react-pdf (PDF generation). If you understand Ink's reconciler, you understand how to build a React renderer for any target. > [!tip] Insight > Terminal rendering is actually simpler than browser rendering: monospace fonts mean every character is the same width, so text measurement is trivial (string length = display width, ignoring wide/emoji characters). No font loading, no sub-pixel rendering, no reflow complexity. > [!tip] Insight > Ink powers CLIs used by millions: Gatsby, Prisma, Parcel, and Claude Code. The React mental model transfers directly โ€” if you can build a web app, you can build a terminal app. ## Code Examples ```typescript // Translates React tree to terminal output class InkReconciler { createInstance(type: string, props: Record): VirtualNode { if (type === "Box") return new VirtualBox(props); // like
if (type === "Text") return new VirtualText(props); // like throw new Error(\`Unknown type: \${type}\`); } commitUpdate(node: VirtualNode, newProps: Record): void { node.update(newProps); scheduleRerender(); } appendChild(parent: VirtualNode, child: VirtualNode): void { parent.children.push(child); } removeChild(parent: VirtualNode, child: VirtualNode): void { parent.children = parent.children.filter(c => c !== child); } } ``` ```typescript function renderToTerminal(rootNode: VirtualNode): void { // 1. Yoga calculates layout (flexbox in char cells) const yogaTree = calculateLayout(rootNode, { cols: 80, rows: 24 }); // 2. Walk tree, generate ANSI escape codes let output = ""; for (const node of walk(yogaTree)) { const { left: x, top: y } = node.layout; output += \`\\x1b[\${y + 1};\${x + 1}H\`; // move cursor output += colorize(node.content, node.style); // apply colors } // 3. Atomic write to stdout (prevents flickering) process.stdout.write(output); } // Frame scheduling โ€” batch state changes const pendingRenders: boolean[] = []; let rootNode: VirtualNode | null = null; // set once at startup function scheduleRerender(): void { if (pendingRenders.length === 0) { setImmediate(flushRenders); // next tick } pendingRenders.push(true); } function flushRenders(): void { pendingRenders.length = 0; if (rootNode) renderToTerminal(rootNode); // rootNode must be defined } ``` ```typescript class FocusManager { // Tab-based focus navigation for terminal components private focusable: VirtualNode[] = []; // ordered list of focusable nodes private activeIndex: number = 0; register(node: VirtualNode): void { this.focusable.push(node); } handleKey(key: string): void { if (key === "tab") { this.activeIndex = (this.activeIndex + 1) % this.focusable.length; } else if (key === "shift+tab") { this.activeIndex = (this.activeIndex - 1 + this.focusable.length) % this.focusable.length; } this.focusable[this.activeIndex].focus(); scheduleRerender(); } } // In React: useFocus() hook returns { isFocused: boolean } // Equivalent to document.activeElement in the browser ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Meta, Google)_ **Q:** How would you build a custom React renderer for a non-browser target?
Answer Use react-reconciler to create a custom host config. You implement ~25 methods: createInstance() maps React element types to your target
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** Compare the tradeoffs of a full React TUI vs a simpler curses-based approach.
Answer React TUI (Ink): declarative updates, component reuse, hooks for state/effects, familiar mental model for web devs. Cost: ~15MB bundle, startup overhead, abstractions add latency. Curses (blessed/ncurses): direct terminal control, minimal overhead, fine-grained performance tuning. Cost: imperative spaghetti at scale, manual state management, no component model. Decision framework: if your TUI has complex state (multi-panel dashboards, forms, real-time updates), React
### โ˜…โ˜…โ˜† _(Meta)_ **Q:** How does flexbox layout work in a terminal context?
Answer Yoga (Facebook
### โ˜…โ˜…โ˜… _(Google)_ **Q:** How would you implement progressive rendering of tool results that arrive out of order?
Answer Assign each tool call a stable ID at dispatch time and maintain an ordered slot array in state. When a result arrives, write it into its slot regardless of arrival order โ€” the render pass always displays slots in dispatch order, showing a placeholder spinner for unfilled slots. This gives users a stable visual layout (results don
## Further Reading - [Ink: React for interactive command-line apps](https://github.com/vadimdemedes/ink) The production React renderer for terminals โ€” used by Gatsby, Parcel, Prisma, and Claude Code. - [React Reconciler documentation](https://github.com/facebook/react/tree/main/packages/react-reconciler) The official package for building custom React renderers โ€” the foundation Ink is built on. - [Yoga Layout](https://yogalayout.dev/) Facebook - [Blessed: A high-level terminal interface library](https://github.com/chjj/blessed) The older, imperative approach to terminal UIs โ€” useful comparison point to understand what React TUI improves on. - [ANSI Escape Codes Reference](https://en.wikipedia.org/wiki/ANSI_escape_code) The low-level sequences that all TUI libraries emit โ€” understanding CSI codes (cursor movement, color, erase) demystifies what the reconciler produces. - [Textual: Python TUI Framework](https://textual.textualize.io/) The Python equivalent of Ink with CSS-style layout โ€” useful cross-language comparison of the reactive TUI approach. - [Building a Custom React Renderer](https://github.com/nitin42/Making-a-custom-React-renderer) Step-by-step walkthrough of building a React custom renderer โ€” the same techniques Ink uses to target the terminal instead of the DOM. ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Memory System" part: "AI Engineering" number: 53 emoji: "๐Ÿง " subtitle: "File-based persistent memory, memory types, auto-save triggers, and cross-session recall" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿง  Memory System > File-based persistent memory, memory types, auto-save triggers, and cross-session recall > [!question] Key Question > Claude remembers you're a senior engineer โ€” across sessions, without a database โ† Terminal UI (Ink) | โ†’ Hooks & Permissions ## Key Insights > [!tip] Insight > The index file (MEMORY.md) is the key design choice. It's always loaded into context, but it only contains one-line summaries with pointers. This means 50 memories cost ~50 lines of context, not 50 full documents. > [!tip] Insight > The test: if you could figure it out by reading the repo, don't memorize it. Memory should store intent and preferences (stable), not implementation details (volatile). > [!tip] Insight > The 200-token context cost for the index is fixed โ€” whether you have 10 memories or 100. This is the key scalability property: memory grows on disk without growing the context window. ## Code Examples ```typescript class MemorySystem { private memoryDir: string; private index: MemoryIndex; constructor(projectPath: string) { this.memoryDir = \`~/.claude/projects/\${projectPath}/memory/\`; this.index = this.loadIndex(); // MEMORY.md } private loadIndex(): MemoryIndex { // Always loaded into context at session start return parseMarkdown(readFile(\`\${this.memoryDir}/MEMORY.md\`)); } save(name: string, content: string, type: string, description: string): void { // Step 1: Write memory file with YAML frontmatter const frontmatter = \`---\\nname: \${name}\\ndescription: \${description}\\ntype: \${type}\\n---\\n\`; writeFile(\`\${this.memoryDir}/\${name}.md\`, frontmatter + content); // Step 2: Update index (one-line pointer) this.index.addEntry(\`[\${name}](\${name}.md) โ€” \${description}\`); } recall(query: string): Memory[] { // Search memories by relevance to current task return this.memories.filter(m => relevant(m, query)); } } ``` ```typescript type SaveDecision = { save: true; type: string } | { save: false }; function shouldSave(event: AgentEvent): SaveDecision { // Decide when to persist information to memory // HIGH SIGNAL โ€” always save if (event.type === "user_correction") { // "Don't use Jest, we use Vitest" โ†’ save to feedback return { save: true, type: "feedback" }; } if (event.type === "user_role_info") { // "I'm a Staff SWE at Google" โ†’ save to user return { save: true, type: "user" }; } if (event.type === "approach_confirmed") { // User confirms non-obvious approach โ†’ save to feedback return { save: true, type: "feedback" }; } if (event.type === "external_resource") { // "Here's the API docs: https://..." โ†’ save to reference return { save: true, type: "reference" }; } // LOW SIGNAL โ€” do NOT save if (event.type === "code_pattern") return { save: false }; // derive from code if (event.type === "debug_solution") return { save: false }; // fix is in the code if (event.type === "git_history") return { save: false }; // use git log return { save: false }; } ``` ```typescript // Three persistence layers โ€” different lifetimes // 1. MEMORY โ€” cross-session (survives restart) // Where: ~/.claude/projects//memory/ // What: user role, preferences, corrections // When: loaded at every session start // 2. TASKS โ€” current session only // Where: in-memory task list // What: "implement feature X", "fix bug Y" // When: cleared when session ends // 3. PLANS โ€” current session only // Where: ~/.claude/plans// // What: step-by-step execution plans // When: deleted after task completion ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Design a persistent memory system for an AI assistant that works across sessions.
Answer Key decisions: (1) Storage format โ€” markdown files with YAML frontmatter. Why: human-readable, git-trackable, no infrastructure dependency. A database adds operational complexity for what
### โ˜…โ˜…โ˜† _(Google)_ **Q:** How do you decide what to remember vs what to derive from the codebase?
Answer The rule: if the information exists in the codebase or git history, derive it โ€” don
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** What
Answer Stale memories are worse than no memories โ€” they cause the agent to confidently apply outdated information. Example: memory says
### โ˜…โ˜…โ˜… _(OpenAI)_ **Q:** Compare semantic search vs. recency-based retrieval for agent memory โ€” when does each strategy fail?
Answer Semantic search (embedding similarity) retrieves memories whose content matches the current query โ€” it excels at finding relevant past decisions regardless of when they were made, but fails when the query is vague or when the relevant memory uses different vocabulary than the current context (e.g.,
## Further Reading - [MemGPT: Towards LLMs as Operating Systems](https://arxiv.org/abs/2310.08560) Packer et al., 2023 โ€” virtual context management with explicit memory tiers for LLM agents. - [Generative Agents: Interactive Simulacra of Human Behavior](https://arxiv.org/abs/2304.03442) Park et al., 2023 โ€” memory retrieval, reflection, and planning for believable agent behavior. - [Claude Code Memory System](https://github.com/anthropics/claude-code) The file-based memory implementation this module describes โ€” MEMORY.md index with markdown memory files. - [YAML Frontmatter Specification](https://jekyllrb.com/docs/front-matter/) The metadata format used in memory files โ€” structured data at the top of markdown documents. - [Cognitive Architecture for Language Agents (CoALA)](https://arxiv.org/abs/2309.02427) Sumers et al., 2023 โ€” formal taxonomy of agent memory: working, episodic, semantic, and procedural. The framework behind the file-based memory design. - [Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks](https://arxiv.org/abs/2005.11401) Lewis et al., 2020 โ€” the foundational RAG paper; the MEMORY.md index pattern is a lightweight, file-based approximation of RAG-style selective retrieval. - [Lilian Weng: LLM-Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) Comprehensive survey of agent memory, planning, and tool-use โ€” covers short-term vs. long-term memory taxonomies in depth. ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Hooks & Permissions" part: "AI Engineering" number: 54 emoji: "๐Ÿ”’" subtitle: "PreToolUse/PostToolUse hooks, 5-layer permission hierarchy, and safety gates" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ”’ Hooks & Permissions > PreToolUse/PostToolUse hooks, 5-layer permission hierarchy, and safety gates > [!question] Key Question > A shell script you wrote can veto any tool call before Claude even sees the result โ† Memory System | โ†’ Prompt Engineering (System) ## Key Insights > [!tip] Insight > Hooks can also modify tool input. If the hook writes JSON to stdout, that JSON replaces the original input. This enables input sanitization (redact secrets), normalization (resolve relative paths), or augmentation (add required flags) โ€” all without changing the agent code. > [!tip] Insight > The permission check is the boundary between untrusted AI output and trusted execution. The LLM can hallucinate any command โ€” the permission system ensures only approved commands actually run. > [!tip] Insight > Hook failure mode is "fail open" (any nonzero exit code other than 2 = non-blocking error, tool continues). This prevents a broken hook from permanently blocking the agent. The alternative โ€” fail closed โ€” is safer in theory but causes support tickets when a hook has a bug and blocks all tool execution. ## Code Examples ```typescript async function checkPermission( tool: Tool, input: ToolInput, hooks: Hooks, rules: Rules, mode: PermissionMode ): Promise { // Layer 1: Pre-tool hooks (highest priority) for (const hook of hooks.preToolUse) { const result = await runShell(hook.command, { stdin: JSON.stringify({ tool, input }) }); if (result.exitCode === 0) { // exit 0 = hook passed (continue to next layer) // Input modification via hookSpecificOutput.updatedInput in JSON stdout const json = result.stdout ? JSON.parse(result.stdout) : {}; if (json.hookSpecificOutput?.updatedInput) input = json.hookSpecificOutput.updatedInput; // Note: exit 0 does NOT unconditionally allow โ€” subsequent layers still run } else if (result.exitCode === 2) { return { decision: "deny", reason: result.stderr }; // explicit deny } // other nonzero = non-blocking error (tool continues) } // Layer 2: Deny rules (absolute blocks) for (const rule of rules.alwaysDeny) { if (matches(rule, tool, input)) return { decision: "deny" }; } // Layer 3: Allow rules (auto-approve patterns) for (const rule of rules.alwaysAllow) { if (matches(rule, tool, input)) return { decision: "allow" }; } // Layer 4: Permission mode if (mode === "bypassPermissions") return { decision: "allow" }; if (mode === "plan" && !tool.isReadOnly()) return { decision: "deny" }; if (mode === "acceptEdits" && ["Edit", "Write"].includes(tool.name)) return { decision: "allow" }; // Layer 5: Ask user (interactive TUI dialog) return promptUser(tool, input); } ``` ```typescript async function runHook( hookCommand: string, toolName: string, toolInput: ToolInput ): Promise { // Execute a shell hook and interpret the result const stdinData = JSON.stringify({ tool: toolName, input: toolInput }); const result = await runProcess(hookCommand, { input: stdinData, captureOutput: true, timeout: 600_000, // 10min default โ€” configurable via hook.timeout field }); if (result.exitCode === 0) { // exit 0: parse stdout JSON for structured response // To modify input, return hookSpecificOutput.updatedInput in the JSON const json = result.stdout ? JSON.parse(result.stdout) : {}; const updatedInput = json.hookSpecificOutput?.updatedInput ?? toolInput; return { decision: "allow", input: updatedInput }; } else if (result.exitCode === 2) { // exit 2: blocking error โ€” tool call is denied, stderr shown to user return { decision: "deny", reason: result.stderr }; } else { // other nonzero: non-blocking error โ€” logged, tool proceeds console.warn(\`Hook failed: \${result.stderr}\`); return { decision: "continue", input: toolInput }; } } ``` ```typescript function matches(rulePattern: string, toolName: string, toolInput: ToolInput): boolean { // Match tool calls against rule patterns // // Patterns: // "Read" โ†’ matches all Read tool calls // "Bash(npm test)" โ†’ matches Bash with 'npm test' in command // "Edit(src/**)" โ†’ matches Edit with file_path matching glob // Parse pattern: "ToolName(argument_pattern)" const match = rulePattern.match(/^(\\w+)(?:\\((.+)\\))?$/); if (!match) return false; const [, ruleTool, ruleArg] = match; if (ruleTool !== toolName) return false; if (ruleArg === undefined) return true; // matches all calls to this tool // Check if argument pattern appears in any input field return Object.values(toolInput).some(v => String(v).includes(ruleArg)); } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** Design a permission system for AI tool execution that
Answer Layer the system with clear priority: (1) Hooks (highest) โ€” shell scripts that run before/after tool calls. Enterprise admins deploy hooks via MDM that enforce org policies (e.g.,
### โ˜…โ˜…โ˜† _(OpenAI)_ **Q:** How would you implement user-configurable hooks that can modify tool inputs?
Answer Hooks are shell commands that receive context on stdin and communicate via exit codes + stdout. For input modification: the hook reads the tool input JSON from stdin, transforms it (e.g., redact secrets, normalize paths, add required flags), and writes the modified JSON to stdout. The system parses the JSON output and uses it as the new input. Exit code semantics: exit 0 = success (proceed with original or modified input โ€” stdout JSON replaces the input if present), exit 2 = blocking deny (tool call is blocked, stderr message is shown to the user), any other nonzero (e.g., exit 1) = non-blocking error (logged but tool proceeds as if no hook ran). The hook runs as a subprocess with a timeout (5s default). If it hangs, it
### โ˜…โ˜…โ˜… _(Anthropic, Meta)_ **Q:** What
Answer Defense in depth: (1) Schema validation โ€” Bash tool validates input before execution (must be a string, within length limits). (2) Permission layers โ€” deny rules block dangerous patterns (
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** Design a hook system that prevents supply-chain attacks through MCP servers while remaining usable for legitimate plugins.
Answer The attack surface: a malicious MCP server registers a tool named
## Further Reading - [Claude Code Hooks Documentation](https://github.com/anthropics/claude-code) Official documentation for PreToolUse and PostToolUse hooks โ€” shell-based extensibility for tool execution. - [OWASP LLM Top 10](https://owasp.org/www-project-top-10-for-large-language-model-applications/) Security risks specific to LLM applications โ€” including prompt injection and insecure tool use. - [Principle of Least Privilege (NIST)](https://csrc.nist.gov/glossary/term/least_privilege) The foundational security principle behind the permission hierarchy โ€” grant minimum necessary access. - [Git Hooks Documentation](https://git-scm.com/docs/githooks) The design pattern that inspired AI tool hooks โ€” shell scripts triggered by lifecycle events. - [Indirect Prompt Injection Attacks on LLMs](https://arxiv.org/abs/2302.12173) Greshake et al., 2023 โ€” how attackers inject instructions via tool results; the threat model behind PreToolUse deny rules and input sanitization. - [Claude Code Permissions Documentation](https://docs.anthropic.com/en/docs/claude-code/security) Official reference for the allow/deny rule syntax, hook configuration, and the five-layer permission hierarchy. - [Zero Trust Architecture (NIST SP 800-207)](https://csrc.nist.gov/publications/detail/sp/800-207/final) NIST ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Prompt Engineering (System)" part: "AI Engineering" number: 55 emoji: "๐Ÿ“‹" subtitle: "System prompt assembly, cache boundary optimization, dynamic sections, and prompt variants" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ“‹ Prompt Engineering (System) > System prompt assembly, cache boundary optimization, dynamic sections, and prompt variants > [!question] Key Question > The system prompt has a secret boundary โ€” everything before it is cached, everything after is fresh โ† Hooks & Permissions | โ†’ Configuration & Schemas ## Key Insights > [!tip] Insight > Over 50+ API calls per task, cache hits on the static prefix save significant cost. With ~5K cacheable tokens at ~90% cache hit rate, you avoid re-computing ~225K tokens of KV cache per task. > [!tip] Insight > The prompt is not just text โ€” it is a carefully engineered artifact where section order, content stability, and injection point all affect performance and cost. Moving a single section from static to dynamic can cost hundreds of dollars per month at scale. > [!tip] Insight > The static/dynamic split is a design constraint that propagates through the entire system. Adding a new feature to the system prompt requires deciding: is this static (same across requests) or dynamic (changes per session)? Getting it wrong costs real money at scale. ## Code Examples ```typescript function buildSystemPrompt(tools: Tool[], model: string, mcpClients: McpClient[]): string { const sections: string[] = []; // STATIC sections (cacheable โ€” same across requests) sections.push(introSection()); // "You are Claude Code..." sections.push(systemRules()); // Core behavior rules sections.push(doingTasksSection()); // Task execution patterns sections.push(actionsSection()); // Safety/reversibility sections.push(toolsSection(tools)); // Tool descriptions (sorted!) sections.push(toneStyleSection()); // Output formatting sections.push(efficiencySection()); // "Be concise" // === CACHE BOUNDARY === sections.push("SYSTEM_PROMPT_DYNAMIC_BOUNDARY"); // DYNAMIC sections (change per session) sections.push(sessionGuidance()); // Runtime context sections.push(memoryPrompt()); // Memory system instructions sections.push(envInfo(model)); // OS, shell, model name sections.push(mcpInstructions()); // MCP server docs sections.push(skillsGuidance()); // Available skills return sections.join("\\n\\n"); } ``` ```typescript function assembleToolPool(builtinTools: Tool[], mcpTools: Tool[]): Tool[] { // Deterministic order = cache-stable tool descriptions const allTools = [...builtinTools, ...mcpTools]; // Alphabetical sort โ€” same order every time allTools.sort((a, b) => a.name.localeCompare(b.name)); // Deduplicate (MCP tool might shadow a built-in) const seen = new Set(); const unique: Tool[] = []; for (const tool of allTools) { if (!seen.has(tool.name)) { seen.add(tool.name); unique.push(tool); } } return unique; } // Why this matters: // Request 1 tools: [Bash, Edit, Glob, Grep, Read, Write] // Request 2 tools: [Bash, Edit, Glob, Grep, Read, Write] // Same order โ†’ same tokens โ†’ cache HIT // // Without sorting: // Request 1: [Read, Bash, Write, Edit, Grep, Glob] // Request 2: [Bash, Read, Edit, Write, Glob, Grep] // Different order โ†’ different tokens โ†’ cache MISS ``` ```typescript // Cache hit savings calculation const staticPrefixTokens = 5000; // identity + rules + tools + tone const dynamicSuffixTokens = 500; // git status + date + memory const apiCallsPerTask = 50; // typical complex task // WITHOUT cache optimization const totalInputTokens = (staticPrefixTokens + dynamicSuffixTokens) * apiCallsPerTask; // = 5500 * 50 = 275,000 tokens computed // WITH cache optimization (~90% cache hit on static prefix) const cachedTokens = staticPrefixTokens * apiCallsPerTask * 0.9; // = 225,000 tokens reused from cache (not re-computed) const computedTokens = 275000 - 225000; // = 50,000 tokens actually computed // Cache hit tokens cost ~10% of regular input tokens // Effective savings: ~80% on the static prefix portion ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** How would you optimize an AI system
Answer The key insight: API providers cache prompt prefixes. If the first N tokens of request B match request A, the cached KV computations are reused โ€” saving latency and cost. To optimize: (1) Split the system prompt into static sections (identity, rules, tool descriptions, tone) and dynamic sections (git status, date, memory). Put static first, dynamic last. (2) Insert a cache boundary marker between them. (3) Sort tool descriptions deterministically โ€” if tools appear in different orders, the prefix changes and the cache misses. (4) Inject per-project content (CLAUDE.md) as user context, not system prompt, to avoid invalidating the system prompt cache. (5) Minimize dynamic section size โ€” the shorter the dynamic tail, the more prefix is cacheable. Result: ~90% cache hit rate on the static prefix, saving ~3K+ tokens of computation per request.
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** Design a system prompt that
Answer Architecture: a prompt builder with ~15 section functions, each returning a string. Sections are concatenated in a fixed order. The builder inserts a DYNAMIC_BOUNDARY marker that splits static from dynamic. Static sections (before boundary): identity (
### โ˜…โ˜…โ˜† _(OpenAI)_ **Q:** What are the tradeoffs of a static vs dynamic system prompt?
Answer Static prompt: maximum cache hits, simpler implementation, but can
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** When should you use exact-match vs. prefix-match caching for prompts, and what are the cost tradeoffs?
Answer Exact-match caching (the full prompt must match byte-for-byte) has a near-100% hit rate for truly static prompts but breaks on any dynamic insertion โ€” even adding the current timestamp invalidates the cache. Prefix-match caching (Anthropic
## Further Reading - [Anthropic Prompt Caching Documentation](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) Official documentation on how prompt prefix caching works โ€” the mechanism this module - [Claude Code (source)](https://github.com/anthropics/claude-code) The open-source implementation of the dynamic prompt builder with cache boundary optimization. - [Efficient Transformers: A Survey](https://arxiv.org/abs/2009.06732) Tay et al., 2020 โ€” covers KV cache and attention computation optimizations that make prompt caching possible. - [System Prompt Design Best Practices](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/system-prompts) Anthropic - [PagedAttention: Efficient Memory Management for LLM Serving (vLLM)](https://arxiv.org/abs/2309.06180) Kwon et al., 2023 โ€” the KV cache memory management technique that makes server-side prompt caching economically feasible at scale. - [Prefix Caching in vLLM](https://docs.vllm.ai/en/latest/features/automatic_prefix_caching.html) How open-source serving frameworks implement automatic prefix caching โ€” the same mechanism Anthropic exposes via cache_control breakpoints. - [OpenAI Prompt Caching Guide](https://platform.openai.com/docs/guides/prompt-caching) OpenAI ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Configuration & Schemas" part: "AI Engineering" number: 56 emoji: "โšก" subtitle: "Settings.json, Zod validation, feature flags, MDM policies, and config hierarchy" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # โšก Configuration & Schemas > Settings.json, Zod validation, feature flags, MDM policies, and config hierarchy > [!question] Key Question > Three config files merge at startup โ€” user, project, and enterprise MDM policies โ† Prompt Engineering (System) | โ†’ Bridges & IDE Integration ## Key Insights > [!tip] Insight > The allowedTools pattern syntax is surprisingly expressive:{" "} "Bash(npm test)" matches Bash with "npm test" in the command, "Read" matches all Read calls, and "Edit(src/**)" matches Edit with paths matching the glob. > [!tip] Insight > Zod schemas serve as living documentation. When a new contributor asks "what settings are available?", the schema is the authoritative answer โ€” not a README that might be outdated. > [!tip] Insight > Zod validation at startup adds ~5ms to load time but prevents hours of debugging from silent config errors. The most common user-reported bugs before validation: misspelled field names and wrong types in settings.json. ## Code Examples ```typescript import { z } from "zod"; // Hook entry schema (command-type hooks) const HookSchema = z.object({ type: z.literal("command"), command: z.string(), timeout: z.number().positive().optional(), // seconds; default 600 (10 min) }); // Define the schema โ€” single source of truth const SettingsSchema = z.object({ permissions: z.object({ allow: z.array(z.string()).optional(), // ["Bash(npm test)", "Read"] deny: z.array(z.string()).optional(), // ["Bash(rm -rf)"] mode: z.enum(["default", "bypassPermissions", "plan", "acceptEdits", "dontAsk"]).optional(), }).optional(), model: z.string().optional(), // "opus", "sonnet", "haiku" hooks: z.object({ PreToolUse: z.array(HookSchema).optional(), PostToolUse: z.array(HookSchema).optional(), }).optional(), }); // Derive TypeScript type from the schema โ€” always in sync type Settings = z.infer; ``` ```typescript function loadConfig(): Settings { // Three sources merge โ€” enterprise MDM wins conflicts // 1. Load each source let user = loadJson("~/.claude/settings.json"); let project = loadJson(".claude/settings.json"); const mdm = loadMdmPolicies(); // enterprise managed // 2. Validate each with Zod (catches typos, wrong types) const userResult = SettingsSchema.safeParse(user); if (!userResult.success) { warn(\`Invalid user config: \${userResult.error}\`); user = {}; // fall back to defaults } const projectResult = SettingsSchema.safeParse(project); if (!projectResult.success) { warn(\`Invalid project config: \${projectResult.error}\`); project = {}; } // MDM is pre-validated by enterprise tooling const validatedMdm = SettingsSchema.parse(mdm); // 3. Merge with priority: MDM > project > user return deepMerge(user, project, validatedMdm); } function deepMerge(user: Partial, project: Partial, mdm: Settings): Settings { // Field-specific merge strategies โ€” result nests under permissions key return { permissions: { mode: mdm.permissions?.mode ?? project.permissions?.mode ?? user.permissions?.mode ?? "default", deny: union(user.permissions?.deny, project.permissions?.deny, mdm.permissions?.deny), allow: intersect(user.permissions?.allow, project.permissions?.allow, mdm.permissions?.allow), }, }; } ``` ```typescript // Feature flag evaluated at build time if (feature("HISTORY_SNIP")) { // This entire block is REMOVED from the bundle // when HISTORY_SNIP is off enableSnipCompaction(); registerSnipCommands(); } // vs. Runtime flag (code always included) if (config.enableSnip) { // Both branches exist in the bundle // Larger bundle, more attack surface enableSnipCompaction(); } // Build-time: smaller bundle, requires rebuild to toggle // Runtime: larger bundle, instant toggle // CLI tools prefer build-time (users install a binary) ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Google)_ **Q:** Design a config system with three priority levels that validates all input.
Answer Three sources merge at startup: user settings (~/.claude/settings.json), project settings (.claude/settings.json), and enterprise MDM policies. Merge order: user (lowest) โ†’ project โ†’ MDM (highest). For each source: (1) Load JSON file, (2) Validate against Zod schema โ€” catches typos (
### โ˜…โ˜†โ˜† _(Meta)_ **Q:** How do feature flags enable dead code elimination at build time?
Answer Feature flags via GrowthBook: the feature(
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** How would you handle config migration when the schema changes?
Answer Config migration is the hardest part of config management. Strategies: (1) Backward-compatible changes โ€” add new optional fields with defaults. Old configs still validate, new features have sensible defaults. This covers 80% of changes. (2) Deprecated field aliasing โ€” old field name maps to new field name during validation (Zod transform). Warn the user, keep working. (3) Breaking changes โ€” version the config schema. On load, check the version field, run migration functions in sequence (v1โ†’v2โ†’v3). Each migration is a pure function: old config in, new config out. (4) Validation error recovery โ€” when a field fails validation, don
### โ˜…โ˜…โ˜† _(Google)_ **Q:** How would you design a config schema that supports both JSON and TypeScript definitions while keeping them in sync?
Answer Use Zod as the single source of truth: define the schema once in TypeScript using Zod, then derive both the runtime validator and the TypeScript type from it via z.infer<>. For JSON Schema output (useful for IDE autocomplete and external tooling), use zod-to-json-schema to generate the JSON Schema automatically from the Zod definition โ€” never write JSON Schema by hand. The flow: Zod schema โ†’ z.infer<> (TypeScript types) + zod-to-json-schema (JSON Schema for settings.json editors). To prevent drift, run schema generation as a build step and check the output into version control; CI fails if the generated JSON Schema doesn
## Further Reading - [Zod: TypeScript-first schema validation](https://zod.dev/) The runtime validation library used for config schemas โ€” bridges the gap between TypeScript types and runtime data. - [GrowthBook Feature Flags](https://www.growthbook.io/) The feature flag service used for build-time dead code elimination and gradual rollouts. - [Apple MDM Protocol Reference](https://developer.apple.com/documentation/devicemanagement) The enterprise device management protocol used to deploy organization-wide settings policies. - [JSON Schema Specification](https://json-schema.org/specification) The standard that Zod - [12-Factor App: Config](https://12factor.net/config) The canonical rule for config management โ€” strict separation of config from code, environment-variable-first, directly applicable to agent settings design. - [Launch Darkly Feature Flags Guide](https://launchdarkly.com/feature-management/) The industry standard for feature flag systems โ€” covers build-time vs. runtime flags, targeting rules, and rollout strategies used in agent feature gating. - [TypeScript satisfies Operator](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-4-9.html) The TypeScript operator that validates config objects against a schema without widening the type โ€” the compile-time complement to Zod ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Bridges & IDE Integration" part: "AI Engineering" number: 57 emoji: "๐ŸŒ‰" subtitle: "WebSocket bridge, VS Code/JetBrains extensions, permission callbacks, and message routing" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐ŸŒ‰ Bridges & IDE Integration > WebSocket bridge, VS Code/JetBrains extensions, permission callbacks, and message routing > [!question] Key Question > Same QueryEngine, different face โ€” terminal, IDE panel, or browser โ† Configuration & Schemas | โ†’ Streaming & API Layer ## Key Insights > [!tip] Insight > The bridge is bidirectional: the IDE sends user messages and permission responses downstream, while the engine streams events and permission requests upstream. This is the same pattern used by language server protocol (LSP) โ€” a generic transport with typed messages. > [!tip] Insight > Remote sessions add another layer: WebSocket with exponential backoff reconnect, message routing for multiple viewers, and a viewer-only mode that injects a permission_callback always returning false โ€” observers can watch but never approve tool calls. > [!tip] Insight > The bridge pattern means a new frontend (e.g., a mobile app) only requires a new thin adapter โ€” the QueryEngine, tool execution, and permission model are already built and tested. ## Code Examples ```typescript class Bridge { // Connects IDE extension to CLI engine via WebSocket private ws: WebSocket; private engine: QueryEngine | null = null; constructor(wsUrl: string) { this.ws = new WebSocket(wsUrl); } onConnect(): void { createSession(); this.engine = new QueryEngine({ permissionCallback: (tool, input) => this.idePermissionDialog(tool, input), }); } onMessage(userMsg: string): void { for (const event of this.engine!.submit(userMsg)) { this.ws.send(JSON.stringify(event)); } } async idePermissionDialog(tool: string, input: unknown): Promise { this.ws.send(JSON.stringify({ type: "permission_request", tool, input })); // ws.receive() is pseudocode โ€” real impl wraps ws.onmessage in a Promise const response = await this.ws.receive(); return response.allowed; } } ``` ```typescript type InkNodeType = "Box" | "Text" | "Spinner" | "Static"; // Adapts Ink terminal components to browser HTML equivalents const componentMap: Record string> = { Box: (props) => \`
\`, Text: (props) => \`\`, Spinner: (_props) => '', Static: (_props) => '
', }; function* renderInkTree(inkTree: InkNode): Generator { // Walk Ink component tree, emit HTML for (const node of walk(inkTree)) { const adapter = componentMap[node.type as InkNodeType]; yield adapter(node.props); yield* renderInkTree(node.children); yield closingTag(node.type); } } function flexboxCss(props: InkProps): string { const direction = props.flexDirection ?? "column"; const justify = props.justifyContent ?? "flex-start"; const padding = props.padding ?? 0; return \`flex-direction:\${direction}; justify-content:\${justify}; padding:\${padding}ch;\`; } ``` ```typescript type SessionMode = "terminal" | "ide" | "web" | "viewer"; type PermissionCallback = (tool: string, input: unknown) => Promise; // Factory: same engine, different permission UIs function createSession(mode: SessionMode): QueryEngine { let callback: PermissionCallback; if (mode === "terminal") { callback = terminalPermission; // stdin prompt } else if (mode === "ide") { callback = idePermission; // WebSocket dialog } else if (mode === "web") { callback = webPermission; // Zustand modal } else { callback = async () => false; // viewer โ€” read-only } return new QueryEngine({ permissionCallback: callback }); } async function terminalPermission(tool: string, _input: unknown): Promise { const answer = await readLine(\`Allow \${tool}? [y/n] \`); return answer.toLowerCase() === "y"; } async function idePermission(tool: string, input: unknown): Promise { ws.send(JSON.stringify({ type: "permission_request", tool, input })); // ws.receive() is pseudocode โ€” real impl: wrap ws.onmessage in a Promise const response = await ws.receive(); return response.allowed; } async function webPermission(tool: string, _input: unknown): Promise { // store.dispatch/waitFor is pseudocode โ€” real Zustand impl uses // setState + a subscribe-based Promise wrapper (Zustand has no built-in waitFor) store.dispatch({ type: "SHOW_PERMISSION_MODAL", tool }); return store.waitFor("PERMISSION_RESPONSE").then((r) => r.allowed); } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, Meta)_ **Q:** Design a system where the same AI engine serves terminal, IDE, and web interfaces.
Answer Use the bridge pattern: one shared QueryEngine handles all AI logic (tool execution, context management, streaming), and each frontend connects through an adapter layer. Terminal: Ink renders React components to ANSI via stdout. IDE: a WebSocket bridge connects the extension to a headless CLI process โ€” the extension sends user messages, receives streaming events, and renders them in IDE panels. Web: a Next.js app with Zustand state management and an ink-compat adapter that maps Ink components (Box, Text) to HTML equivalents so tool renderers are shared across all three surfaces. The key architectural decision: permission callbacks are injected into QueryEngine at session creation, so terminal prompts stdin, IDE shows a dialog via WebSocket, and web shows a modal โ€” same engine, different UI.
### โ˜…โ˜…โ˜† _(Google)_ **Q:** How would you handle permission dialogs across different UI frontends?
Answer Inject a permission_callback into the QueryEngine at session creation time. In terminal mode, the callback reads from stdin (blocking prompt). In IDE bridge mode, the callback sends a permission_request message over WebSocket, then awaits the response โ€” the IDE extension renders a native dialog (VS Code showInformationMessage, JetBrains DialogWrapper) and sends back {allowed: true/false}. In web mode, the callback dispatches to a Zustand store that triggers a React modal. The pattern is dependency injection: the engine never knows which UI is rendering the dialog. This also enables viewer-only mode โ€” inject a callback that always returns false, so read-only observers can watch but never approve tool use.
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** What
Answer Ink components (Box, Text, Spinner, etc.) are designed for terminal rendering via ANSI escape codes. The ink-compat adapter maps these to browser-equivalent HTML elements: Box becomes a div with flexbox, Text becomes a span with CSS styles, Spinner becomes a CSS animation. This means tool output renderers โ€” the components that display file diffs, search results, command output โ€” are written once using Ink primitives and work in all three frontends. Without ink-compat, you
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** What are the tradeoffs between IDE-native and stdio-based bridges for editor integration?
Answer stdio bridges (spawning a CLI subprocess and communicating over stdin/stdout) are simpler to implement and work across any editor that can spawn a process, but they lack access to IDE-native APIs โ€” they can
## Further Reading - [VS Code Extension API](https://code.visualstudio.com/api) Official docs for building VS Code extensions โ€” the primary IDE integration surface for Claude Code. - [WebSocket RFC 6455](https://datatracker.ietf.org/doc/html/rfc6455) The protocol spec underlying IDE-to-engine communication in bridge mode. - [Ink: React for interactive command-line apps](https://github.com/vadimdemedes/ink) The terminal React renderer whose components are adapted by ink-compat for cross-platform rendering. - [VS Code Extension Host Architecture](https://code.visualstudio.com/api/advanced-topics/extension-host) How VS Code isolates extensions in a separate process โ€” the same isolation model used by the Claude Code IDE bridge to sandbox the engine from the editor. - [Adapter Pattern (Refactoring Guru)](https://refactoring.guru/design-patterns/adapter) The structural design pattern at the heart of the bridge โ€” converting one interface (QueryEngine) into multiple frontend-specific interfaces. - [React Native Architecture Overview](https://reactnative.dev/docs/the-new-architecture/landing-page) The gold standard for a single React tree rendering to multiple native targets โ€” the same multi-renderer problem Claude Code ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Streaming & API Layer" part: "AI Engineering" number: 58 emoji: "๐ŸŒŠ" subtitle: "Async generators, queryModelWithStreaming, SSE parsing, and backpressure" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐ŸŒŠ Streaming & API Layer > Async generators, queryModelWithStreaming, SSE parsing, and backpressure > [!question] Key Question > Tokens appear one by one because five async generators pipe data like Unix pipes โ† Bridges & IDE Integration | โ†’ Error Recovery ## Key Insights > [!tip] Insight > The key property of async generators is{" "} lazy evaluation. The producer only runs when the consumer asks for the next value. This gives you backpressure for free โ€” if the terminal renderer is slow, the entire pipeline naturally slows down. > [!tip] Insight > Streaming is not just about UX โ€” it fundamentally changes the agent architecture. Without streaming, you wait for the entire response before knowing if there are tool calls. With streaming, you can start rendering text immediately while still accumulating tool_use blocks. > [!tip] Insight > At ~50 tokens/second, a 2K token response takes ~40 seconds to generate. With streaming, the user sees the first token in under a second. Without streaming, they see nothing for 40 seconds then everything at once. ## Code Examples ```typescript // The streaming pipeline โ€” a chain of async generators async function* queryModelWithStreaming( messages: Message[], systemPrompt: string, tools: Tool[] ): AsyncGenerator { // Calls the API and yields streaming events const response = sdk.messages.stream({ model: "claude-opus-4-6", messages, system: systemPrompt, tools, max_tokens: 8096, // required by the Messages API }); for await (const event of response) { yield event; // text_delta, tool_use, message_stop, etc. } } async function* queryLoop( messages: Message[], tools: Tool[], systemPrompt: string ): AsyncGenerator { // The agentic loop โ€” consumes streaming events while (true) { const toolBlocks: ToolUseEvent[] = []; for await (const event of queryModelWithStreaming(messages, systemPrompt, tools)) { if (event.type === "text_delta") { yield event; // pass through to REPL for rendering } else if (event.type === "tool_use") { toolBlocks.push(event); } } if (toolBlocks.length === 0) return; // no tools = done const results = await runTools(toolBlocks); messages.push(...results); // loop continues โ€” call API again with tool results } } // The REPL consumes the outermost generator: for await (const event of queryLoop(messages, tools, prompt)) { renderToTerminal(event); // each token appears immediately } ``` ```typescript // SSE wire format from the API // // event: message_start // data: {"type":"message_start","message":{"id":"msg_01..."}} // // event: content_block_delta // data: {"type":"content_block_delta","delta":{"type":"text_delta","text":"Hello"}} // // event: message_stop // data: {"type":"message_stop"} async function* parseSseStream(response: ReadableStream): AsyncGenerator { // Parse Server-Sent Events into typed objects let buffer = ""; for await (const chunk of response) { buffer += new TextDecoder().decode(chunk); while (buffer.includes("\\n\\n")) { const idx = buffer.indexOf("\\n\\n"); const eventStr = buffer.slice(0, idx); buffer = buffer.slice(idx + 2); const event = parseEvent(eventStr); yield event; } } } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** Design a streaming pipeline for an AI agent that handles tool calls mid-stream.
Answer The key insight is that tool_use events arrive interleaved with text_delta events in the same stream. Your pipeline must: (1) buffer text deltas for immediate rendering, (2) accumulate tool_use blocks until complete (they arrive as start + delta + stop events), (3) when message_stop arrives, check for pending tool blocks, (4) execute tools and append results to messages, (5) loop back to the API with updated messages. The streaming pipeline is a chain of async generators: the inner generator yields raw SSE events, the middle layer parses them into typed objects, and the outer generator (query_loop) handles the tool-use-then-retry logic. Each layer yields progressively, so the user sees text tokens immediately even when tool calls are pending.
### โ˜…โ˜…โ˜† _(Meta)_ **Q:** Explain backpressure in async generators. Why does it matter for LLM streaming?
Answer Backpressure is the mechanism where a slow consumer naturally slows down the producer. In async generators,
### โ˜…โ˜…โ˜† _(Google, Databricks)_ **Q:** How would you handle network disconnection during a streaming API call?
Answer Layer the solution: (1) Stream interruption recovery is handled at the application level โ€” the SDK provides event callbacks and error handlers, but the agent harness decides whether to retry the full request or attempt to continue from the last received event. Recovery of partial tool-use blocks requires careful state management. (2) At the application layer, detect stalled streams with a heartbeat timeout (e.g., no event for 30s = stale connection). (3) Implement idempotent retry: if the stream dies mid-response, you have partial text โ€” include it in the retry request context so the model can continue rather than restart. (4) For tool calls interrupted mid-execution, check tool idempotency: Read/Grep are safe to retry, but Bash may need rollback. The key tradeoff: aggressive reconnection wastes tokens (re-generating seen content), while conservative reconnection loses progress.
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** How do you handle partial tool-call JSON in a streaming response where the connection drops mid-chunk?
Answer A tool call
## Further Reading - [MDN: Async iteration and generators](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for-await...of) Reference for async function*, for await...of, and the async iteration protocol. - [Anthropic Streaming API](https://docs.anthropic.com/en/api/streaming) Official docs for streaming message responses via Server-Sent Events. - [SSE Specification (WHATWG)](https://html.spec.whatwg.org/multipage/server-sent-events.html) The standard behind text/event-stream โ€” event types, data fields, reconnection. - [Claude Code (source)](https://github.com/anthropics/claude-code) Open-source reference for the streaming pipeline architecture described in this module. - [WHATWG Streams API](https://streams.spec.whatwg.org/) The browser standard for backpressure-aware streaming โ€” ReadableStream, WritableStream, and the pipe chain that async generators implement natively. - [Anthropic SDK Streaming (TypeScript)](https://github.com/anthropics/anthropic-sdk-typescript/blob/main/helpers.md) The official SDK - [Node.js Stream Backpressure Guide](https://nodejs.org/en/docs/guides/backpressuring-in-streams) Official Node.js guide on backpressure โ€” the mechanism that prevents unbounded memory growth when the consumer is slower than the producer. ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Error Recovery" part: "AI Engineering" number: 59 emoji: "๐Ÿ›Ÿ" subtitle: "Reactive compact retry, max output tokens escalation, abort handling, and graceful degradation" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ›Ÿ Error Recovery > Reactive compact retry, max output tokens escalation, abort handling, and graceful degradation > [!question] Key Question > The API says 'prompt too long' โ€” the agent silently compacts and retries before you notice โ† Streaming & API Layer | โ†’ Speculative Execution ## Key Insights > [!tip] Insight > Compaction preserves recent tool results (expensive to regenerate) but summarizes older assistant text. This keeps the model's working memory intact while freeing space from conversational history. > [!tip] Insight > Rate limit handling is proactive, not just reactive. The agent parses rate-limit headers from every response and slows down before hitting the limit, rather than waiting for a 429 error. > [!tip] Insight > The one-shot flag for reactive compaction is a deliberate design choice. Allowing multiple compaction attempts risks an infinite loop where the agent keeps compacting and retrying but never makes progress โ€” burning tokens and time on a fundamentally impossible request. ## Code Examples ```typescript const Transition = { COMPLETED: "completed", // success โ€” no more tool calls TOOL_USE: "tool_use", // normal โ€” execute tools, loop back REACTIVE_COMPACT: "reactive_compact_retry", // prompt too long MAX_TOKENS: "max_output_tokens_recovery", // output truncated MODEL_ERROR: "model_error", // permanent API error MAX_TURNS: "max_turns", // safety limit hit ABORTED: "aborted_streaming", // user cancelled } as const; async function queryLoop(messages: Message[], tools: Tool[], config: Config): Promise { let hasAttemptedCompact = false; let maxTokensRetries = 0; while (true) { try { const response = await callApi(messages, tools); const toolBlocks = extractToolUse(response); if (toolBlocks.length === 0) return Transition.COMPLETED; const results = await runTools(toolBlocks); messages.push(...results); // loop back } catch (err) { if (err instanceof PromptTooLongError) { if (hasAttemptedCompact) return Transition.MODEL_ERROR; // already tried, give up messages = await compact(messages); hasAttemptedCompact = true; continue; // retry with compacted messages } else if (err instanceof MaxOutputTokensError) { if (maxTokensRetries >= 3) return Transition.MODEL_ERROR; maxTokensRetries++; config.maxTokens *= 2; // escalate limit continue; } else if (err instanceof RateLimitError) { await sleep(err.retryAfter); // backoff continue; } else if (err instanceof AbortError) { return Transition.ABORTED; } throw err; } } } ``` ```typescript function handleRateLimit(responseHeaders: Headers): void { const remaining = parseInt(responseHeaders.get("x-ratelimit-remaining") ?? "0"); const resetAt = parseTime(responseHeaders.get("x-ratelimit-reset") ?? ""); const utilization = 1.0 - (remaining / limit); const timeRemainingPct = (resetAt - now()) / windowSize; if (utilization > timeRemainingPct) { warn("Approaching rate limit โ€” slowing down"); addDelay(calculateBackoff(utilization)); } } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** Design error recovery for an AI agent that handles context overflow mid-task.
Answer The agent maintains a transition system where each loop iteration classifies the outcome. When the API returns
### โ˜…โ˜…โ˜† _(Databricks, Meta)_ **Q:** How would you implement graceful degradation when an LLM API rate-limits you?
Answer Layer the solution: (1) Parse rate limit headers (x-ratelimit-remaining, x-ratelimit-reset) from every response โ€” not just error responses. (2) Calculate utilization ratio: if you
### โ˜…โ˜…โ˜† _(OpenAI)_ **Q:** What
Answer Transient errors are recoverable by retrying the same request: rate limits (429), network timeouts, server errors (500/503), and
### โ˜…โ˜…โ˜… _(OpenAI)_ **Q:** Design an error recovery system that distinguishes between transient failures (retry) and permanent failures (escalate) for LLM tool calls.
Answer Build a typed error taxonomy at the tool boundary. Transient failures: network timeouts, rate limit 429s, Bash exit codes from flaky external services (curl timeout, test runner OOM). Permanent failures: missing file paths (ENOENT on a Read), schema validation errors (the tool was called with malformed input), content policy blocks, and budget exhaustion. The classification is encoded in the tool executor
## Further Reading - [Anthropic API Error Codes](https://docs.anthropic.com/en/api/errors) Official reference for API error types, status codes, and recommended handling strategies. - [Exponential Backoff and Jitter (AWS)](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/) AWS architecture blog on backoff strategies โ€” full jitter outperforms equal jitter and decorrelated jitter. - [Circuit Breaker Pattern (Martin Fowler)](https://martinfowler.com/bliki/CircuitBreaker.html) The pattern for preventing cascading failures when a downstream service is unhealthy. - [Claude Code (source)](https://github.com/anthropics/claude-code) Open-source reference for the error recovery and transition system described in this module. - [Release It! โ€” Production-Ready Software (Nygard)](https://pragprog.com/titles/mnee2/release-it-second-edition/) The book that codified circuit breakers, bulkheads, and timeouts โ€” the stability patterns directly applied in agent error recovery. - [AbortController and AbortSignal (MDN)](https://developer.mozilla.org/en-US/docs/Web/API/AbortController) The browser/Node.js API for cooperative cancellation โ€” the mechanism behind Ctrl+C propagation through the streaming pipeline. - [Google SRE Book: Handling Overload](https://sre.google/sre-book/handling-overload/) Google SRE ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Speculative Execution" part: "AI Engineering" number: 60 emoji: "๐Ÿ”ฎ" subtitle: "Parallel speculation, overlay filesystems, safe tool subsets, and acceptance criteria" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ”ฎ Speculative Execution > Parallel speculation, overlay filesystems, safe tool subsets, and acceptance criteria > [!question] Key Question > While you're still typing, a speculative agent already searched the codebase for you โ† Error Recovery | โ†’ Coordinator/Worker Pattern ## Key Insights > [!tip] Insight > The overlay filesystem is the key safety mechanism. Like Docker's layered filesystem, reads fall through to the real FS while writes go to a temporary layer. This is an application-managed copy-on-write abstraction: merge copies changed files back to the real FS, discard deletes the temp layer. Both operations are lightweight (proportional to files changed, not total codebase size). > [!tip] Insight > The accept/reject decision compares the user's actual message against the speculation's predicted intent โ€” not exact string matching, but semantic alignment. "Fix the bug" and "Can you fix that error?" both align with a speculation that investigated the error. > [!tip] Insight > Speculation is suppressed ~60% of the time because most turns are either cheap (not worth speculating) or read-only (nothing actionable to predict). The 40% of turns where speculation runs tend to be high-value: after file edits, after bug fixes, after complex tool chains โ€” exactly the moments when pre-computation saves the most time. ## Code Examples ```typescript const SpeculationState = { IDLE: "idle", RUNNING: "running", ACCEPTED: "accepted", REJECTED: "rejected", } as const; type SpeculationStateValue = typeof SpeculationState[keyof typeof SpeculationState]; class SpeculativeExecutor { private state: SpeculationStateValue = SpeculationState.IDLE; private overlayFs: OverlayFileSystem = new OverlayFileSystem(); private safeTools: string[] = ["Read", "Glob", "Grep", "TaskGet", "TaskList"]; private result: unknown = null; // Run speculative work in background async speculate(conversation: Conversation): Promise { if (this.shouldSuppress(conversation)) return; this.state = SpeculationState.RUNNING; // Predict next steps const prediction = await predictNextAction(conversation); // Run with overlay FS and restricted tools const engine = new QueryEngine({ tools: filterTools(this.safeTools), filesystem: this.overlayFs, // writes go to overlay }); this.result = await engine.submit(prediction); } // Check if speculation matches user intent onUserMessage(message: string): void { if (this.state !== SpeculationState.RUNNING) return; if (alignsWithSpeculation(message, this.result)) { this.state = SpeculationState.ACCEPTED; this.overlayFs.mergeToReal(); // apply cached work } else { this.state = SpeculationState.REJECTED; this.overlayFs.discard(); // throw away, no harm } } // Don't speculate if it's not worth it private shouldSuppress(conversation: Conversation): boolean { if (conversation.lastTurnCost < threshold) return true; // cheap turn if (conversation.lastToolWasReadOnly) return true; // nothing to speculate return false; } } ``` ```typescript // Copy-on-write filesystem โ€” reads from real, writes to temp class OverlayFileSystem { private overlay: Map = new Map(); // path -> content read(path: string): string { if (this.overlay.has(path)) { return this.overlay.get(path)!; } return realFs.read(path); } write(path: string, content: string): void { this.overlay.set(path, content); // never touches real FS } mergeToReal(): void { for (const [path, content] of this.overlay) { realFs.write(path, content); } } discard(): void { this.overlay.clear(); } } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** Design a speculative execution system for an AI agent. How do you ensure safety?
Answer Three isolation layers: (1) Overlay filesystem โ€” reads from the real FS but writes go to a temporary copy-on-write layer. If speculation is wrong, discard the overlay; if right, merge it. This is the same principle as OverlayFS in Docker containers. (2) Tool filtering โ€” only allow read-only tools (Read, Glob, Grep, TaskGet, TaskList). No Bash, no Edit, no network calls. Even if the speculative agent hallucinates, it can
### โ˜…โ˜…โ˜† _(Meta)_ **Q:** What
Answer Overlay FS is a copy-on-write layer: reads fall through to the real FS, writes go to a temp directory. It
### โ˜…โ˜…โ˜† _(OpenAI, Databricks)_ **Q:** How would you decide when speculation is worth the compute cost?
Answer Build a suppression heuristic with three signals: (1) Last turn cost โ€” if the previous turn was cheap (simple question, no tool calls), speculation is unlikely to save time. Only speculate after expensive turns (multiple tool calls, code generation). (2) Task type โ€” after a file edit, the user likely wants to test or review; after a bug fix, they likely want to verify. Read-only exploration turns don
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** What verification strategy prevents speculative execution from committing side effects that the user hasn
Answer Two complementary layers: tool filtering and overlay commit gating. Tool filtering is the first line โ€” the speculative agent
## Further Reading - [OverlayFS Documentation (Linux Kernel)](https://docs.kernel.org/filesystems/overlayfs.html) The kernel filesystem that inspired the copy-on-write pattern used in speculative execution. - [Speculative Execution in CPUs (Hennessy & Patterson)](https://en.wikipedia.org/wiki/Speculative_execution) The CPU architecture concept โ€” predict the branch, execute speculatively, commit or rollback. - [Branch Prediction (Wikipedia)](https://en.wikipedia.org/wiki/Branch_predictor) How CPUs predict which branch to take โ€” the same predict-execute-verify pattern applies to agent speculation. - [Claude Code (source)](https://github.com/anthropics/claude-code) Open-source agentic coding tool that may contain related implementation ideas for speculative execution and overlay-based isolation. - [Spectre and Meltdown: Lessons for Software Design](https://meltdownattack.com/) The real-world consequences of CPU speculative execution gone wrong โ€” illustrates why side-effect isolation (overlay FS) is non-negotiable before committing speculative work. - [Copy-on-Write Semantics (Linux man page: fork)](https://man7.org/linux/man-pages/man2/fork.2.html) The OS-level COW primitive that makes fork cheap โ€” the same copy-on-write principle applied to filesystem overlays in agent speculative execution. - [Git Stash and Worktrees](https://git-scm.com/docs/git-stash) The git primitives for saving and restoring working state โ€” the lightweight alternative to overlay FS for speculative edits confined to git-tracked files. ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Coordinator/Worker Pattern" part: "AI Engineering" number: 61 emoji: "๐Ÿ‘”" subtitle: "Multi-agent coordination, restricted tool sets, environment gating, and task distribution" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ‘” Coordinator/Worker Pattern > Multi-agent coordination, restricted tool sets, environment gating, and task distribution > [!question] Key Question > The coordinator writes prompts, not code โ€” it manages a team of worker agents โ† Speculative Execution | โ†’ Session Persistence ## Key Insights > [!tip] Insight > The coordinator's context window stays clean for high-level decisions. It never fills up with implementation details, diffs, or compiler output โ€” that stays in worker contexts. > [!tip] Insight > This is the pattern behind complex multi-file refactors: the coordinator reads the codebase, plans the decomposition, spawns 2-5 workers for independent subtasks, then verifies integration. It's MapReduce for code changes. > [!tip] Insight > The coordination overhead (10-15% of tokens spent on planning instead of executing) pays for itself when tasks have dependencies. For a 4-worker refactor, the alternative is 4 independent agents that each spend 30%+ of their context re-discovering the plan. ## Code Examples ```typescript // Coordinator manages workers โ€” never writes code directly class CoordinatorMode { private coordinatorTools: string[] = [ "Agent", // spawn workers "SendMessage", // communicate with running workers "TeamCreate", // create worker teams "Read", "Glob", "Grep", // can READ code to plan // NO Bash, Edit, Write โ€” coordinator doesn't code ]; private workerTools: string[] = [ "Bash", "Read", "Write", "Edit", "Glob", "Grep", // NO Agent โ€” workers can't spawn sub-workers ]; async coordinate(task: Task): Promise { // 1. Analyze the task const plan = await this.planDecomposition(task); // 2. Spawn workers for each subtask const workers: Agent[] = []; for (const subtask of plan.subtasks) { const worker = await spawnAgent({ prompt: subtask.description, tools: this.workerTools, background: true, // parallel execution }); workers.push(worker); } // 3. Monitor and aggregate results const results = await gatherResults(workers); // 4. Verify and integrate return this.verifyIntegration(results); } } ``` ```typescript const COORDINATOR_TOOLS: string[] = [ "Agent", "SendMessage", "TeamCreate", "TeamDelete", "Read", "Glob", "Grep", // read-only access ]; const WORKER_TOOLS: string[] = [ "Bash", "Read", "Write", "Edit", "Glob", "Grep", // No Agent โ€” strict two-level hierarchy ]; const ALL_TOOLS: string[] = [...COORDINATOR_TOOLS, ...WORKER_TOOLS]; // Tool set determined at startup, not by the agent function getAvailableTools(mode: string): string[] { if (process.env.COORDINATOR_MODE) { return COORDINATOR_TOOLS; // management only } return ALL_TOOLS; // full access for single-agent mode } // Workers are spawned with explicit tool lists async function spawnWorker(taskDescription: string): Promise { return await Agent({ prompt: taskDescription, tools: WORKER_TOOLS, // enforced at registry level // Agent tool NOT in WORKER_TOOLS โ€” no recursion possible }); } ``` ```typescript // Spawn workers for independent subtasks concurrently async function runParallelWorkers( subtasks: Subtask[], workerTools: string[] ): Promise { async function runOne(subtask: Subtask): Promise { const result = await spawnAgent({ prompt: subtask.description, tools: workerTools, background: true, }); return { subtask: subtask.id, result }; } // All workers run simultaneously const results = await Promise.all(subtasks.map(runOne)); // Check for conflicts const modifiedFiles = new Map(); for (const r of results) { for (const f of r.result.filesModified) { if (modifiedFiles.has(f)) { throw new ConflictError( \`\${f} modified by workers \${modifiedFiles.get(f)} and \${r.subtask}\` ); } modifiedFiles.set(f, r.subtask); } } return results; } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Design a multi-agent system where a coordinator delegates to specialized workers.
Answer The coordinator is a special agent that only has management tools (Agent, SendMessage, Read, Grep) โ€” it can plan and observe but never write code. It decomposes tasks, spawns worker agents with restricted tool sets (Bash, Edit, Write โ€” but no Agent tool, preventing uncontrolled recursion). Workers execute in parallel on independent subtasks and report results via tool_result. The coordinator aggregates results, detects conflicts (e.g., two workers editing the same file), and decides next steps. Key design choices: (1) tool-level isolation prevents recursion, (2) parallel workers maximize throughput, (3) coordinator
### โ˜…โ˜…โ˜† _(Google)_ **Q:** How do you prevent uncontrolled recursion in a system where agents can spawn agents?
Answer Remove the Agent tool from worker tool sets. Workers can execute code (Bash, Edit, Write) but cannot spawn sub-workers. This is enforced at the tool registry level โ€” when a worker is created, its available tools are filtered to exclude Agent. This creates a strict two-level hierarchy: coordinator spawns workers, workers execute and return. No deeper nesting. The alternative โ€” depth limits โ€” is fragile because a depth-3 agent tree consumes 3x context and 3x API cost with no coordination. The flat coordinator/worker pattern is simpler, cheaper, and easier to debug.
### โ˜…โ˜…โ˜… _(Meta, Anthropic)_ **Q:** What
Answer Flat pool: all agents are equal, no coordinator. Simple to implement, but no one owns the plan โ€” agents may duplicate work, conflict on shared files, or miss integration issues. Works for embarrassingly parallel tasks (lint 10 files). Hierarchical: coordinator owns the plan, workers own execution. Higher overhead (coordinator consumes tokens just to manage), but critical for tasks with dependencies โ€” multi-file refactors, cross-module changes, anything requiring integration. The coordinator pattern pays for itself when subtasks interact: it detects conflicts early and re-plans, whereas a flat pool discovers conflicts at merge time.
### โ˜…โ˜…โ˜… _(Google)_ **Q:** How would you implement work-stealing between coordinator-worker agents when one worker finishes early?
Answer Model the task queue as a shared priority queue that the coordinator owns and workers pull from, rather than pre-assigning all subtasks at spawn time. Workers request the next task when they complete their current one: the coordinator
## Further Reading - [MapReduce: Simplified Data Processing on Large Clusters](https://research.google/pubs/pub62/) Dean & Ghemawat, 2004 โ€” the original coordinator/worker pattern for distributed computation. - [A Universal Modular ACTOR Formalism for Artificial Intelligence](https://dl.acm.org/doi/10.5555/1624775.1624804) Hewitt et al., 1973 โ€” the Actor Model that underpins modern multi-agent message passing. - [Large Language Model based Multi-Agents: A Survey of Progress and Challenges](https://arxiv.org/abs/2402.01680) Recent survey of LLM-based multi-agent architectures and coordination patterns. - [Claude Code (source)](https://github.com/anthropics/claude-code) Open-source reference implementing the coordinator/worker pattern for real-world coding tasks. - [LLM-Compiler: Parallel Function Calling](https://arxiv.org/abs/2312.04511) Kim et al., 2023 โ€” DAG-based parallel function call planning, the same dependency-aware parallelism coordinators use to maximize worker throughput. - [Anthropic: Building Effective Agents โ€” Orchestrator Subagent](https://www.anthropic.com/research/building-effective-agents) Anthropic - [Celery: Distributed Task Queue](https://docs.celeryq.dev/) The production distributed task queue โ€” the software engineering analog of the coordinator/worker pattern, with retries, priorities, and result backends. ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Session Persistence" part: "AI Engineering" number: 62 emoji: "๐Ÿ’พ" subtitle: "Session JSON, /resume reconstruction, message history, file snapshots, and attribution" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ’พ Session Persistence > Session JSON, /resume reconstruction, message history, file snapshots, and attribution > [!question] Key Question > Close the terminal, reopen it, type --resume โ€” the conversation continues exactly where you left off โ† Coordinator/Worker Pattern | โ†’ Cost Tracking & Budgets ## Key Insights > [!tip] Insight > Sessions are typically 50KB-5MB depending on conversation length. Long coding sessions with many tool results can grow larger because tool_result content (file contents, grep output) is stored verbatim in the message array. > [!tip] Insight > Input history (arrow-key recall) is stored separately from sessions in ~/.claude/history.jsonl. It persists across all sessions โ€” your previous inputs are always available regardless of which session you resume. > [!tip] Insight > Session files accumulate over time and are never automatically deleted. Heavy users can have hundreds of session files totaling 100MB+. A pruning strategy (delete sessions older than 30 days, or keep only the last 50) would help, but risks deleting sessions users want to resume. ## Code Examples ```typescript import fs from "fs"; import path from "path"; import os from "os"; // Sessions live under ~/.claude/projects//.jsonl const PROJECTS_DIR = path.join(os.homedir(), ".claude", "projects"); function getSessionPath(projectDir: string, sessionId: string): string { return path.join(PROJECTS_DIR, projectDir, \`\${sessionId}.jsonl\`); } class SessionStorage { // Persist session: append one JSON event per line (JSONL) save(projectDir: string, sessionId: string, state: SessionState): void { const filePath = getSessionPath(projectDir, sessionId); const event = { id: sessionId, messages: serializeMessages(state.messages), model: state.model, cost_usd: state.totalCost, file_history: state.filesTouched, created_at: state.createdAt, updated_at: new Date().toISOString(), cwd: state.workingDirectory, }; fs.appendFileSync(filePath, JSON.stringify(event) + "\\n"); } // Reconstruct session state: read all lines, use last event restore(projectDir: string, sessionId: string): QueryEngine { const filePath = getSessionPath(projectDir, sessionId); const lines = fs.readFileSync(filePath, "utf-8").trim().split("\\n"); const data = JSON.parse(lines[lines.length - 1]!); // Reconstruct multi-domain state const engine = new QueryEngine({ initialMessages: deserializeMessages(data.messages), model: data.model, cwd: data.cwd, }); // Restore auxiliary state restoreFileHistory(data.file_history); restoreAttribution(data); extractPendingTodos(data.messages); return engine; } // List recent sessions for selection listRecent(projectDir: string, limit: number = 20): SessionMetadata[] { const dir = path.join(PROJECTS_DIR, projectDir); const files = fs.readdirSync(dir) .filter((f) => f.endsWith(".jsonl")) .map((f) => ({ f, mtime: fs.statSync(path.join(dir, f)).mtime })) .sort((a, b) => b.mtime.getTime() - a.mtime.getTime()) .slice(0, limit); return files.map((f) => this.readMetadata(f.f)); } } ``` ```typescript // Arrow-key recall of previous inputs โ€” persists across all sessions class InputHistory { // ~/.claude/history.jsonl โ€” shared across all projects/sessions private historyPath: string = path.join(os.homedir(), ".claude", "history.jsonl"); private entries: string[] = this.load(); private cursor: number = this.entries.length; add(inputText: string): void { this.entries.push(inputText); this.save(); // append to file immediately } // Up/down arrow through history navigate(direction: "up" | "down"): string { if (direction === "up") { this.cursor = Math.max(0, this.cursor - 1); } else { this.cursor = Math.min(this.entries.length - 1, this.cursor + 1); } return this.entries[this.cursor]; } private load(): string[] { if (fs.existsSync(this.historyPath)) { // Each line is a JSON object; extract the display field for arrow-key recall return fs.readFileSync(this.historyPath, "utf-8") .split("\\n") .filter(Boolean) .map((line) => { try { return JSON.parse(line).display ?? line; } catch { return line; } }); } return []; } private save(): void { // Keep last 100 entries (MAX_HISTORY_ITEMS in source) const recent = this.entries.slice(-100); fs.writeFileSync(this.historyPath, recent.map((e) => JSON.stringify({ display: e })).join("\\n") + "\\n"); } } ``` ```typescript // Resume creates a NEW session that inherits old context function resumeSession(oldSessionId: string): Session { const storage = new SessionStorage(); const oldData = storage.restore(oldSessionId); // New session ID โ€” the old session is read-only now const newSessionId = generateUuid(); // The new session starts with old messages as context const newSession = new Session({ id: newSessionId, parentId: oldSessionId, // adoption link initialMessages: oldData.messages, model: oldData.model, cwd: oldData.cwd, }); // Cost tracking starts fresh for the new session, // but we can show cumulative cost across the chain newSession.inheritedCost = oldData.cost_usd; return newSession; } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** Design a session persistence system for an AI agent that handles multi-domain state.
Answer The session isn
### โ˜…โ˜…โ˜† _(Google)_ **Q:** How would you handle session corruption or migration when the schema changes?
Answer Version the schema: every session JSON includes a schema_version field. On load, check the version and run migration functions if needed (v1 -> v2 adds cost tracking, v2 -> v3 renames fields). For corruption: validate JSON structure before deserializing โ€” if parsing fails, try to recover the message array (the most valuable part) and discard corrupted auxiliary data. Never silently drop sessions โ€” surface a warning. For large-scale migrations: lazy migration on load (don
### โ˜…โ˜†โ˜† _(OpenAI)_ **Q:** What
Answer Save every turn: durable against crashes (no lost work), but high I/O overhead โ€” writing 50KB-5MB JSON on every API response. Save on exit: minimal I/O, but a crash loses the entire session. The pragmatic middle ground: save on exit + periodic checkpoints (every N turns or every M seconds). Use write-ahead logging if durability matters: append each turn to a log file (fast, sequential writes), and periodically compact the log into a full snapshot. This gives crash recovery (replay the log) with low per-turn overhead.
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** Design a session persistence format that allows resuming a conversation after a crash, including in-flight tool calls.
Answer The session format must capture not just completed turns but the agent
## Further Reading - [Event Sourcing Pattern](https://martinfowler.com/eaaDev/EventSourcing.html) Martin Fowler โ€” storing state as a sequence of events, the pattern behind session replay. - [SQLite Write-Ahead Logging](https://www.sqlite.org/wal.html) The WAL mechanism that enables concurrent reads during writes โ€” relevant to session checkpoint design. - [Redis Persistence: RDB vs AOF](https://redis.io/docs/latest/operate/oss_and_stack/management/persistence/) Two persistence strategies (snapshot vs append-only) that mirror the session save tradeoffs. - [Claude Code (source)](https://github.com/anthropics/claude-code) Open-source reference for session persistence, /resume, and multi-domain state reconstruction. - [CQRS and Event Sourcing (Microsoft Azure Docs)](https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs) Command/Query Responsibility Segregation with event sourcing โ€” the pattern behind session replay: store events, not snapshots, then replay to reconstruct state. - [SQLite: The Appropriate Uses for SQLite](https://www.sqlite.org/whentouse.html) SQLite - [tmux Session Management](https://github.com/tmux/tmux/wiki) The gold standard for terminal session persistence โ€” background processes, detach/attach, and named sessions; the UX model Claude Code ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Cost Tracking & Budgets" part: "AI Engineering" number: 63 emoji: "๐Ÿ’ฐ" subtitle: "Token counting, budget limits, per-model pricing, rate limit handling, and spend alerts" tags: ["engineering", "ml", "ai-engineering", "interview-prep", "agent-sdk"] --- # ๐Ÿ’ฐ Cost Tracking & Budgets > Token counting, budget limits, per-model pricing, rate limit handling, and spend alerts > [!question] Key Question > Every tool call has a price โ€” the agent tracks spend in real-time and stops before you go broke โ† Session Persistence ## Key Insights > [!tip] Insight > Output tokens cost 5x more than input tokens on Claude models ($75/M vs $15/M for Opus). A verbose agent that generates long explanations costs far more than one that gives concise answers. This is why agent prompts often say "be concise." > [!tip] Insight > The cost tracker can also estimate remaining turns: divide remaining budget by average cost per turn. This lets the agent prioritize โ€” if only 3 turns remain, skip exploration and go straight to implementation. > [!tip] Insight > The 5x output-to-input cost ratio means that a concise agent (generating 500 output tokens per turn) costs 5x less in output than a verbose one (2,500 tokens). Over 50 turns, that's $3.75 vs $9.38 in output costs alone โ€” conciseness is a cost optimization. ## Code Examples ```typescript // Per-model pricing (approximate, illustrative) interface ModelPricing { input: number; output: number; cacheRead: number; cacheWrite: number; } const PRICING: Record = { "claude-opus-4": { input: 15.00 / 1_000_000, // $15/M tokens output: 75.00 / 1_000_000, // $75/M tokens cacheRead: 1.50 / 1_000_000, // $1.50/M (10x cheaper!) cacheWrite: 18.75 / 1_000_000, }, "claude-sonnet-4": { input: 3.00 / 1_000_000, output: 15.00 / 1_000_000, cacheRead: 0.30 / 1_000_000, cacheWrite: 3.75 / 1_000_000, }, }; ``` ```typescript interface TokenUsage { inputTokens: number; outputTokens: number; cacheReadTokens: number; cacheCreationTokens: number; } class CostTracker { private totalCost: number = 0; private turnCosts: number[] = []; constructor( private model: string, private maxBudget?: number, ) {} // Called after every API response recordUsage(usage: TokenUsage): number { const pricing = PRICING[this.model]; const cost = usage.inputTokens * pricing.input + usage.outputTokens * pricing.output + usage.cacheReadTokens * pricing.cacheRead + usage.cacheCreationTokens * pricing.cacheWrite; this.totalCost += cost; this.turnCosts.push(cost); if (this.maxBudget && this.totalCost >= this.maxBudget) { throw new BudgetExceededError( \`Budget $\${this.maxBudget.toFixed(2)} exceeded (spent $\${this.totalCost.toFixed(2)})\` ); } return cost; } // Predict how many more turns the budget allows estimateRemainingTurns(): number { if (!this.turnCosts.length || !this.maxBudget) return Infinity; const avgCost = this.turnCosts.reduce((a, b) => a + b, 0) / this.turnCosts.length; const remaining = this.maxBudget - this.totalCost; return Math.floor(remaining / avgCost); } } ``` ```typescript class RateLimitTracker { private remaining: number = 0; private limit: number = 0; private resetAt: Date = new Date(); // Parse rate limit headers from API response update(headers: Record): string { this.remaining = parseInt(headers["x-ratelimit-remaining-tokens"]); this.limit = parseInt(headers["x-ratelimit-limit-tokens"]); this.resetAt = parseTime(headers["x-ratelimit-reset"]); // Are we using tokens faster than they replenish? const utilization = 1.0 - this.remaining / this.limit; const timePct = timeRemainingPct(this.resetAt); if (utilization > timePct + 0.1) { return "WARNING: approaching rate limit"; } return "OK"; } // True if we should slow down to avoid hard limit shouldThrottle(): boolean { return this.remaining < this.limit * 0.1; // less than 10% remaining } } ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, Databricks)_ **Q:** Design a cost tracking system for an AI agent that handles multiple pricing tiers.
Answer Each API response includes token counts: input_tokens, output_tokens, cache_read_input_tokens, cache_creation_input_tokens. Multiply each by the per-model rate (different for input vs output vs cached). Track at three granularities: (1) per-turn โ€” how much did this API call cost, (2) per-session โ€” cumulative cost for budget enforcement, (3) per-tool-call โ€” attribute cost to specific operations. Store per-model pricing as a lookup table, updated when pricing changes. Key: cache_read tokens cost ~10% of uncached input โ€” so the tracker must distinguish them. Add budget limits (maxBudgetUsd) that raise BudgetExceededError when the session total exceeds the cap. Display real-time cost in the TUI status bar.
### โ˜…โ˜†โ˜† _(Google, OpenAI)_ **Q:** How does prompt caching affect the economics of AI agent systems?
Answer Cached input tokens cost ~10% of uncached ones ($1.50/M vs $15/M for Opus). For an agent making 50+ API calls per task with a ~10K token system prompt, caching saves ~$7 per task on Opus. The system prompt (instructions + tool definitions) is the same across calls โ€” it
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** What
Answer All three, layered. Per-turn limits catch runaway single calls (an agent generating a 100K token response). Per-session limits cap total spend for a task ($10 default). Per-project limits enforce organizational budgets across all sessions. Implementation: check per-turn first (cheapest check), then per-session, then per-project. When any limit is hit, stop gracefully โ€” save session state so the user can resume after increasing the limit. The tricky part is estimation: before executing an expensive operation, estimate its cost and warn if it would exceed the budget. This requires tracking average cost per turn type.
### โ˜…โ˜…โ˜… _(OpenAI)_ **Q:** Design a cost tracking system that predicts when a conversation will exceed a budget threshold and suggests cheaper alternatives.
Answer Layer the system into three components: a retrospective tracker, a prospective estimator, and an advice engine. The retrospective tracker records cost per turn with token-type breakdown (input, output, cache_read) and computes a rolling average cost per turn type (tool-heavy turns vs. pure text turns). The prospective estimator runs before each API call: given the current session cost, the remaining budget, and the rolling average, it projects turns_remaining = (budget - cumulative_cost) / avg_cost_per_turn. When turns_remaining drops below a threshold (e.g., 5), the estimator emits a warning. The advice engine activates when the budget is tight and suggests: (1) switch from Opus to Sonnet (5x cheaper input, 5x cheaper output) if task complexity allows; (2) enable prompt caching by sorting tool definitions deterministically (recovers 90% of system prompt cost); (3) truncate verbose Bash output to 2K lines (cuts tool_result input tokens). The key design choice: advice is ranked by ROI (savings per unit of quality loss), not just absolute savings, so the agent suggests the cheapest optimization that preserves task quality.
## Further Reading - [Anthropic API Pricing](https://docs.anthropic.com/en/docs/about-claude/models) Current pricing for all Claude models โ€” input, output, and cached token rates. - [Prompt Caching with Claude](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) How prompt caching works, cache breakpoints, and cost implications for agent systems. - [Token Economics of LLM Applications](https://a16z.com/generative-ai-enterprise-2024/) a16z analysis of cost structures in production LLM applications. - [Cloud Cost Optimization Patterns](https://cloud.google.com/architecture/cost-optimization) Google Cloud cost optimization โ€” the same principles (metering, budgets, alerts) apply to LLM spend. - [LLM API Pricing Comparison (Artificial Analysis)](https://artificialanalysis.ai/) Live benchmark tracking price, throughput, and latency across all major LLM providers โ€” the reference for model selection decisions in cost-aware agents. - [OpenTelemetry for LLM Observability](https://opentelemetry.io/docs/concepts/signals/metrics/) The open standard for emitting cost, latency, and token metrics โ€” the instrumentation layer beneath production LLM cost dashboards. - [Simon Willison: Costs and Pricing for LLM APIs](https://simonwillison.net/2023/Dec/31/ai-in-2023/) Simon Willison ## Related Agent Harness Architecture ยท Tool System ยท Sub-agents ยท Commands & Skills ยท Plugins & MCP --- --- title: "Mechanistic Interpretability" part: "Trust & Evaluation" number: 64 emoji: "๐Ÿ”ฌ" subtitle: "SAE training, activation patching, attribution graphs, circuit tracing, and feature steering" tags: ["trust", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ”ฌ Mechanistic Interpretability > SAE training, activation patching, attribution graphs, circuit tracing, and feature steering > [!question] Key Question > Anthropic traced a complete reasoning chain inside Claude โ€” from question to multi-step feature activation to answer โ† Safety & Alignment | โ†’ Induction Heads & ICL ## Key Insights > [!tip] Insight > The key insight: after training, each column of{" "} W_dec is a feature direction in the model's activation space. If column 4,721 consistently activates for “Golden Gate Bridge” text, that direction is{" "} the Golden Gate Bridge feature. You can read features by checking what inputs maximize each column's activation. > [!tip] Insight > This is circuit tracing โ€” the core method. It combines sparse autoencoders (to name the features) with attribution patching (to measure which features caused which). The result is a computational graph you can read, not a black box. > [!tip] Insight > Hallucination mechanism (Biology paper, 2025):{" "} Circuit analysis found “known answer” features that suppress the model's default refusal circuit. When a question is asked about something the model genuinely knows, these features fire and allow the answer to proceed. Hallucination occurs when these “known answer” features activate despite the model not actually having sufficient knowledge โ€” confidence gates open when they shouldn't. > [!tip] Insight > CLT limitations to know for interviews: Attribution graphs succeed on only{" "} ~25% of attempted prompts {" "} (the rest are too complex or ambiguous to yield clean sparse graphs). The replacement model โ€” which substitutes CLT features for the original MLP computations โ€”{" "} explains ~61% of end-to-end computation (replacement score 0.61) {" "} and matches the model's top-1 next-token prediction on ~50% of a filtered evaluation set (prompts where the base model predicts correctly with confidence below 80%), with ~11.5% normalized mean reconstruction error. > [!tip] Insight > Feature steering is the interpretability equivalent of unit testing: if the feature truly represents a concept, amplifying it should reliably inject that concept into outputs. It does. This is how we know SAE features are real computational objects, not just post-hoc labels. > [!tip] Insight > The “biology” framing is intentional but cautious. Anthropic draws analogies to neuroscience (circuits, features, neurons) but emphasizes these are mechanistic descriptions, not claims about consciousness or intent. The features are real computational objects; what they “mean” is inferred by humans looking at activation patterns. > [!tip] Insight > Start with Neuronpedia to build intuition for what SAE features look like, then move to TransformerLens when you want to run your own experiments. ARENA exercises bridge the gap between “I understand the theory” and “I can find circuits myself.” ## Code Examples ```python import torch import torch.nn as nn from torch.optim import Adam class SparseAutoencoder(nn.Module): def __init__(self, d_model: int, expansion: int = 64): super().__init__() d_sae = d_model * expansion self.W_enc = nn.Linear(d_model, d_sae, bias=True) self.W_dec = nn.Linear(d_sae, d_model, bias=True) self.relu = nn.ReLU() # Normalize decoder columns to unit norm self._normalize_decoder() def _normalize_decoder(self): with torch.no_grad(): norms = self.W_dec.weight.norm(dim=0, keepdim=True).clamp(min=1e-8) self.W_dec.weight.div_(norms) def forward(self, x: torch.Tensor): # Center around decoder bias before encoding x_cent = x - self.W_dec.bias f = self.relu(self.W_enc(x_cent)) # sparse features x_hat = self.W_dec(f) # reconstruction return x_hat, f def sae_loss(x, x_hat, f, lam: float = 5e-3): recon = (x - x_hat).pow(2).mean() # MSE reconstruction sparsity = f.abs().mean() # L1 on features return recon + lam * sparsity, recon, sparsity # Training loop sketch sae = SparseAutoencoder(d_model=4096, expansion=64) opt = Adam(sae.parameters(), lr=2e-4) for activations in dataloader: # activations: [B, d_model] opt.zero_grad() x_hat, f = sae(activations) loss, recon, sparse = sae_loss(activations, x_hat, f, lam=5e-3) loss.backward() opt.step() sae._normalize_decoder() # keep decoder cols unit norm ``` ```python import torch from contextlib import contextmanager @contextmanager def patch_activation(model, layer_name: str, patch_value: torch.Tensor): """Context manager to swap one layer's output mid-forward-pass.""" hooks = [] def hook_fn(module, input, output): return patch_value # replace with clean-run activation handle = dict(model.named_modules())[layer_name].register_forward_hook(hook_fn) hooks.append(handle) try: yield finally: for h in hooks: h.remove() def activation_patching_score(model, clean_tokens, corrupt_tokens, layer_name, clean_cache, metric_fn): """ Measure how much layer_name causally matters for the metric. metric_fn(logits) -> scalar (e.g., logit diff between two tokens) """ # Baseline: corrupted run with torch.no_grad(): corrupt_logits = model(corrupt_tokens) baseline = metric_fn(corrupt_logits) # Patched: corrupted run but swap in the clean activation clean_act = clean_cache[layer_name] with torch.no_grad(): with patch_activation(model, layer_name, clean_act): patched_logits = model(corrupt_tokens) patched = metric_fn(patched_logits) return (patched - baseline).item() # positive = component helps ``` ```python # Logit lens: project intermediate residual stream to vocab space import torch def logit_lens(model, tokens: torch.Tensor): """ At each layer, unembed the residual stream directly to get a probability distribution over vocabulary โ€” no more processing. Shows what the model 'thinks' the next token is at each depth. """ unembed = model.lm_head # W_U: (d_model, vocab) ln_f = model.transformer.ln_f # final layer norm residual_stream = [] def save_residual(module, inp, out): # out[0] is the hidden state after this transformer block h = out[0] if isinstance(out, tuple) else out residual_stream.append(h.detach().clone()) hooks = [block.register_forward_hook(save_residual) for block in model.transformer.h] with torch.no_grad(): model(tokens) for h in hooks: h.remove() # Project each layer's residual stream through the unembedding layer_logits = [] for h in residual_stream: normed = ln_f(h) # apply final norm logits = normed @ unembed.weight.T # (B, seq, vocab) layer_logits.append(logits[:, -1, :].softmax(-1)) # last position return layer_logits # list of (B, vocab) โ€” one per layer ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** Walk through training a sparse autoencoder on transformer activations. What are the key hyperparameters?
Answer Training an SAE: (1) Collect activations from a target layer (e.g., MLP output or residual stream) across a large corpus โ€” typically 1B+ tokens. (2) Center inputs by subtracting the decoder bias before encoding (prevents the bias from absorbing signal). (3) Train with loss = MSE(x, x_hat) + ฮป * ||f||_1. Key hyperparameters: expansion factor (d_sae / d_model) โ€” Anthropic uses 32xโ€“256x; sparsity coefficient ฮป โ€” typically 1e-3 to 1e-1, tune so average L0 (features active per token) is in the range 20โ€“100; learning rate โ€” 1e-4 to 5e-5 with Adam; normalize decoder columns to unit norm after each gradient step to prevent feature collapse. Monitor: reconstruction loss (should be >95% variance explained), L0 sparsity, and fraction of dead features (features that never activate โ€” indicates ฮป too high).
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** How does attribution patching differ from activation patching? When would you use each?
Answer Activation patching (causal tracing): run two passes โ€” clean and corrupted (e.g., replace subject token). For each component, swap its activation from the clean run into the corrupted run and measure the effect on the output metric. This gives an exact causal estimate but requires O(N) forward passes for N components โ€” expensive at scale. Attribution patching: first-order Taylor approximation. Compute the gradient of the output metric with respect to each activation, then multiply by the difference between clean and corrupted activations: attr โ‰ˆ (โˆ‚output/โˆ‚f_i) * (f_i^clean - f_i^corrupt). This runs in O(1) passes (one forward + one backward) while closely approximating full patching. Use activation patching when you have a small targeted circuit and need exact results. Use attribution patching when sweeping across all features/components, building full attribution graphs, or when compute is constrained. Attribution patching can miss nonlinear effects; activation patching catches them but doesn
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** What is the L1/reconstruction trade-off in SAE training and how do you pick the right sparsity coefficient?
Answer The SAE loss is MSE + ฮป * L1. Too high ฮป: the model sacrifices reconstruction accuracy to achieve extreme sparsity โ€” features become coarse and miss fine-grained concepts; many features die (never activate). Too low ฮป: reconstruction is near-perfect but features are dense and polysemantic โ€” SAE fails to decompose superposition, defeating the purpose. Picking ฮป: sweep over values and monitor three metrics: (1) Explained variance of reconstruction (target: >95%), (2) Mean L0 โ€” average number of features active per token (target: 20โ€“100 for interpretability), (3) Dead feature fraction (target: <5% dead after 100M tokens). The
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Describe how you would find the circuit responsible for a specific model behavior (e.g., gendered pronoun resolution).
Answer Circuit discovery workflow: (1) Define a contrastive pair: clean input (
### โ˜…โ˜…โ˜† _(Anthropic, Google, OpenAI)_ **Q:** What are the limitations of current mechanistic interpretability methods? What can
Answer Current limitations: (1) Scale: full circuit tracing works on individual inputs, not on aggregate model behavior across all possible inputs โ€” we trace one computation, not the general algorithm. (2) Completeness: SAEs capture a subset of model computation; some features are uninterpretable or semantically ambiguous even to human annotators. (3) Superposition in SAEs: SAEs can themselves develop superposition if ฮป is too low or d_sae is too small โ€” partial solution, not total fix. (4) Attention vs. MLP asymmetry: SAEs work well on MLP outputs; attention head decomposition is harder because attention mixes token positions non-linearly. (5) Causal vs. correlational: a feature that activates doesn
## Further Reading - [Circuit Tracing: Revealing Computational Graphs in Language Models](https://transformer-circuits.pub/2025/attribution-graphs/methods.html) Ameisen et al. 2025 โ€” combining SAEs with attribution patching to trace full computational circuits in Claude 3.5 Haiku - [On the Biology of a Large Language Model](https://transformer-circuits.pub/2025/attribution-graphs/biology.html) Lindsey et al. 2025 โ€” probing Claude 3.5 Haiku - [Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet](https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html) Templeton et al. 2024 โ€” dictionary learning at scale finds ~34M features in a production frontier model - [Towards Monosemanticity: Decomposing Language Models with Dictionary Learning](https://transformer-circuits.pub/2023/monosemantic-features/index.html) Bricken et al. 2023 โ€” the first successful SAE decomposition of a one-layer transformer; established the field - [Toy Models of Superposition](https://transformer-circuits.pub/2022/toy_model/index.html) Elhage et al. 2022 โ€” controlled experiments showing how and why neural networks encode more features than dimensions - [When Models Manipulate Manifolds](https://transformer-circuits.pub/2025/linebreaks/index.html) Gurnee et al. 2025 โ€” studying how models use linebreaks and whitespace as geometric pivots in activation space - [Chris Olah โ€” Neural Networks, Manifolds, and Topology](https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/) Olah 2014 โ€” the foundational visual intuition for how neural networks transform data through manifold operations - [3Blue1Brown โ€” How might LLMs store facts (Chapter 7)](https://www.youtube.com/watch?v=9-Jl0dxWQs8) Grant Sanderson 2024 โ€” visual walkthrough of how MLP layers in transformers store and retrieve facts, with connections to superposition and sparse autoencoders. - [Neel Nanda โ€” How to Become a Mechanistic Interpretability Researcher](https://www.neelnanda.io/mechanistic-interpretability/getting-started) Nanda 2023 โ€” comprehensive guide to getting started in mech interp research, with recommended papers, exercises, and learning path. - [Neuronpedia โ€” Interactive SAE Feature Explorer](https://www.neuronpedia.org/) Open-source platform for exploring 50M+ SAE features across GPT-2, Gemma, Llama, and more โ€” search, visualize activations, and steer model behavior interactively. - [ARENA โ€” Mechanistic Interpretability Exercises](https://arena3-chapter1-transformer-interpretability.streamlit.app/) Hands-on coding tutorials for transformer interpretability โ€” TransformerLens, induction heads, superposition, SAEs, and circuit analysis. ## Related LLM Evaluation ยท Eval-Driven Development ยท Interpretability ยท Safety & Alignment ยท Induction Heads & ICL --- --- title: "Induction Heads & ICL" part: "Trust & Evaluation" number: 65 emoji: "๐Ÿง " subtitle: "The two-head circuit that powers in-context learning โ€” and why it emerges as a phase transition" tags: ["trust", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿง  Induction Heads & ICL > The two-head circuit that powers in-context learning โ€” and why it emerges as a phase transition > [!question] Key Question > GPT learns to copy patterns mid-training โ€” and that single circuit explains in-context learning โ† Mechanistic Interpretability ## Contents - Circuit Diagram - The Intuition - The QK/OV Circuit - Break It โ€” See What Happens - Real-World Numbers ## Key Insights > [!tip] Insight > The previous-token head is the key: it makes every token carry its predecessor's identity. This lets the induction head do indirect lookup โ€” "find where the current token appeared before by searching for positions that say they were preceded by the current token." > [!tip] Insight > The QK circuit of the induction head "reads" from the previous-token head's output. This means the K matrix of the induction head must have been learned to be compatible with the output directions of the previous-token head โ€” a beautiful example of emergent inter-layer coordination. > [!tip] Insight > The phase transition is visible across all{" "} 16 models Olsson et al. studied โ€” from 2-layer attention-only models to full GPT-style architectures. Bigger models show the same transition, just at different training-token counts and with more sophisticated generalizations of the basic circuit. ## Code Examples ```python import torch import torch.nn.functional as F def induction_score(attn_pattern: torch.Tensor) -> float: """ Measure how strongly a head shows induction behavior. attn_pattern: (seq_len, seq_len) attention weight matrix on a repeated random sequence of length seq_len//2. An induction head attends at the [seq_len//2 - 1] diagonal: position i attends to position i - (seq_len//2 - 1), the spot where the current token appeared in the first copy. """ seq_len = attn_pattern.shape[0] half = seq_len // 2 # Extract the diagonal offset = -(half - 1) # i.e., for position i in second copy, attend to position i - half + 1 offset = -(half - 1) diag = torch.diagonal(attn_pattern, offset=offset) return diag.mean().item() def find_induction_heads( model, seq_len: int = 50, threshold: float = 0.4, device: str = "cpu" ) -> list[tuple[int, int]]: """ Run a repeated random sequence through the model and return all (layer, head) pairs with induction score above threshold. """ vocab_size = model.config.vocab_size n_layers = model.config.num_hidden_layers n_heads = model.config.num_attention_heads # Build a repeated random sequence: [A B C ... A B C ...] rand_tokens = torch.randint(1, vocab_size, (1, seq_len), device=device) tokens = torch.cat([rand_tokens, rand_tokens], dim=1) # (1, 2*seq_len) with torch.no_grad(): outputs = model(tokens, output_attentions=True) induction_heads = [] for layer_idx, layer_attn in enumerate(outputs.attentions): # layer_attn: (batch, n_heads, seq, seq) for head_idx in range(n_heads): pattern = layer_attn[0, head_idx] # (2*seq_len, 2*seq_len) score = induction_score(pattern) if score > threshold: induction_heads.append((layer_idx, head_idx)) print(f"Layer {layer_idx}, Head {head_idx}: score={score:.3f}") return induction_heads ``` ```python # Induction head detection: repeated-sequence attention score import torch def induction_score_for_head( model, layer: int, head: int, seq_len: int = 50 ) -> float: """ Feed a repeated random sequence [A...A] of length 2*seq_len. An induction head at (layer, head) will strongly attend at diagonal offset -(seq_len - 1): position i attends to i-(seq_len-1), the spot right after the previous occurrence of token[i]. Returns mean attention weight on that diagonal (0=no induction, 1=perfect). """ vocab = model.config.vocab_size rand_seq = torch.randint(1, vocab, (1, seq_len)) tokens = torch.cat([rand_seq, rand_seq], dim=1) # (1, 2*seq_len) with torch.no_grad(): out = model(tokens, output_attentions=True) # out.attentions[layer]: (batch, n_heads, 2*seq_len, 2*seq_len) attn = out.attentions[layer][0, head] # (2*seq_len, 2*seq_len) offset = -(seq_len - 1) diag = torch.diagonal(attn, offset=offset) # values on the induction diagonal return diag.mean().item() ``` ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** What is an induction head and how does it implement pattern completion?
Answer An induction head is a two-layer attention circuit that implements the rule:
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** Why do induction heads emerge as a phase change during training rather than gradually?
Answer Induction heads require coordination between two separate attention heads โ€” a previous-token head and a matching head. Neither is useful alone: the previous-token head only becomes beneficial when the induction head exists to use its output, and vice versa. This creates a coordination problem where the two circuits must develop together. Olsson et al. (2022) observed a sharp phase transition around 2B tokens in small models: loss on repeated random sequences drops suddenly, all attention heads in the model change simultaneously, and a
### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** How would you detect induction heads in a trained transformer? Describe the experimental setup.
Answer The canonical detection method uses a repeated random sequence: generate a random token sequence [A, B, C, D, ...] and concatenate it with itself to get [..., A, B, C, D, A, B, C, D]. Then inspect the attention patterns of each head on the second copy. An induction head will show a characteristic pattern: each token attends to the position where it appeared in the first copy, offset by +1 (attending one step ahead of where it last appeared). Quantitatively, you compute an
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Can induction heads explain generalization beyond exact copying? Give an example of fuzzy induction.
Answer Yes. In small models, induction heads do literal copying โ€” they match on exact token identity. But in larger models (GPT-2 and beyond), analogous circuits operate in embedding space, enabling
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** What is the relationship between induction heads and the in-context learning loss bump?
Answer The in-context learning loss bump is a sudden drop in loss on sequences where context helps prediction โ€” it appears mid-training as a sharp discontinuity rather than a smooth improvement. Olsson et al. (2022) showed this bump is causally linked to induction head formation: (1) the bump timing matches exactly when induction heads form across 16 different models, (2) ablating induction heads removes most of the ICL benefit, restoring pre-bump loss levels, and (3) the bump correlates with performance on held-out tasks that require using context. The bump accounts for roughly 50% of the total in-context learning performance. The mechanism is direct: before induction heads form, the model can only use the current token and learned priors; after, it can scan the context for pattern matches and use them for prediction.
## Further Reading - [In-Context Learning and Induction Heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) Olsson et al. 2022 โ€” the definitive paper showing induction heads are the mechanistic basis of in-context learning, with phase-change evidence across 16 models - [A Mathematical Framework for Transformer Circuits](https://transformer-circuits.pub/2021/framework/index.html) Elhage et al. 2021 โ€” introduces the QK/OV decomposition and residual stream view used throughout induction head analysis - [Understanding LSTM Networks](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) Olah 2015 โ€” the gold-standard visual explainer of recurrent memory; useful context for understanding why in-context learning is surprising in attention-only models - [Tracing Attention Computation Through Feature Interactions](https://transformer-circuits.pub/2025/attention-qk/index.html) Kamath et al. 2025 โ€” traces how attention QK circuits interact with features, extending induction head analysis to larger and more complex models ## Related LLM Evaluation ยท Eval-Driven Development ยท Interpretability ยท Safety & Alignment ยท Mechanistic Interpretability --- --- title: "The Design Doc" part: "Design Reviews" number: 66 emoji: "๐Ÿ“" subtitle: "Working backwards from the SLO โ€” an annotated, worked design doc for a real ML endpoint" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ“ The Design Doc > Working backwards from the SLO โ€” an annotated, worked design doc for a real ML endpoint > [!question] Key Question > Every senior engineer writes design docs โ€” nobody teaches how โ†’ Cost Accounting & Eval-Driven Design ## Key Insights > [!tip] Insight > Margin note. Notice what's NOT on the list: model choice, framework, cloud provider, even GPU type. Requirements come from the customer and the P&L, not from the tech menu. If you skip to architecture without this table, every subsequent decision is ungrounded. > [!tip] Insight > Golden-set sizing โ€” Wilson interval derivation. For a binary quality judgment, the Wilson score interval gives the 95% CI half-width as approximately{" "} w โ‰ˆ 1.96 ร— โˆš(pฬ‚(1โˆ’pฬ‚) / n) {" "} where pฬ‚ is the expected pass rate and n is the sample size. At pฬ‚ = 0.80 and n = 200, w โ‰ˆ 1.96 ร— โˆš(0.16 / 200) โ‰ˆ 0.055, so the CI is roughly ยฑ5.5 pp โ€” adequate for a top-level gate. At pฬ‚ = 0.90 the same n gives ยฑ4.2 pp; at{" "} pฬ‚ = 0.95 it narrows to ยฑ3.0 pp because the binomial variance peaks at 0.50. Note: for multi-cohort drill-downs (per tier, per prompt-length bucket), do not multiply a single pool size by the number of cells. Each cell has its own base rate and therefore its own required n. A cell where the easy-prompt tier passes at 95% needs far fewer examples to pin a ยฑ3 pp CI than a cell where the adversarial tier passes at 60%. Size each cell independently, then sum โ€” the aggregate is usually 2โ€“4ร— higher than the naive “200 ร— cells” estimate would predict. > [!tip] Insight > Margin note. The calculator gives a number. The number is wrong โ€” all back-of-envelope numbers are. The question is whether it's wrong by 1.5ร— or by 10ร—. 1.5ร— means the capacity plan survives; 10ร— means the entire architecture needs to change (routing, quantization, disaggregation). This is what calibrates how much detail the architecture deserves. > [!tip] Insight > Margin note. Two deep dives โ€” not four. An interviewer will push into the places you didn't{" "} dive, and the right answer there is “I'd follow the same structure โ€” here's the risk I'd watch.” Deep diving everything equally is a junior signal. > [!tip] Insight > The hardest SLO to write is the quality SLO.{" "} Latency and availability are percentages anyone can check. Quality regressions need the eval harness you wrote in Step 2 โ€” which is why Step 2 comes before architecture. ## Interview Questions ### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** You
Answer (1) QPS target โ€” derived from customer count ร— requests/user/day รท 86,400. (2) p95 time-to-first-token SLO โ€” usually 500โ€“800 ms for interactive use. (3) Average output token length โ€” drives total GPU-seconds per request. (4) Acceptable cost per 1K output tokens โ€” this is the constraint that kills most naive designs. Order matters: QPS without a latency SLO leads to an overspec
### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** An interviewer says
Answer Push back, politely but immediately โ€” this is an SLO-dependent design. The architecture is a direct function of the latency and cost SLOs, and different SLO classes force structurally different choices. Concrete example: a 50 ms p95 TTFT SLO requires a dedicated decode-only GPU pool, speculative decoding (3โ€“5 draft tokens ahead), and likely disaggregated prefill so no long-context request can stall decode slots. A 5-second batch-completion SLO, by contrast, allows fully asynchronous queuing, large batch accumulation windows (1โ€“2 s), and no speculative decoding overhead โ€” the architecture is simpler and 2โ€“3ร— cheaper per token. Those are not the same system, and you can
### โ˜…โ˜…โ˜… _(Anthropic, Meta)_ **Q:** Your capacity math says you need 200 GPUs. Your budget is 60. What do you cut first?
Answer Quality knobs before capacity knobs. In order: (1) Shorter max_tokens ceiling โ€” often the biggest single lever. (2) Model routing โ€” send the easy 80% to a small model, keep the large model for the hard 20% (~70% cost cut). (3) Prompt caching for repeated system prompts (30โ€“50% prefill savings). (4) Tighter rate limits per tier. Only after those do you look at quantization (INT8 KV cache, INT4 weights), because quantization can hurt quality in subtle ways that only eval catches.
### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** You ship the design doc. Two weeks in, p95 TTFT regresses from 420 ms to 900 ms. Your doc said
Answer The design doc is fine โ€” the regression is an ops event, not a design bug. Look at (1) admission control: is the queue depth higher, and why? (2) batch composition: are long-context requests poisoning the batch by blocking short decodes? This is the classic prefill-decode interference problem โ€” mitigation is disaggregated prefill or chunked prefill. (3) KV cache pressure: is a new feature pinning context for longer? This is where the eval harness pays off โ€” a trajectory replay of the regressed requests tells you which cohort broke.
### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** An interviewer asks you to justify a decision your design doc made. You realize you can
Answer Say so immediately and with precision. The formula is:
## Further Reading - [Jeff Dean โ€” Building Software Systems at Google and Lessons Learned](https://research.google/pubs/pub40672/) The original - [Amazon Working Backwards โ€” PR/FAQ + 6-Pager](https://www.workingbackwards.com/) Not an ML piece, but the discipline of writing the customer-facing press release before the design doc is the methodological backbone of this module. - [Chip Huyen โ€” Designing Machine Learning Systems (O](https://www.oreilly.com/library/view/designing-machine-learning/9781098107956/) The canonical ML system-design textbook. Chapter 1 on business objectives is the framework chapter candidates keep ignoring at their own cost. - [Shreya Shankar โ€” Operationalizing ML](https://www.shreya-shankar.com/phd-productionizing-ml/) The thesis-length argument that the gap between ML design and ML-in-production is owned by the eval harness, not the model. - [Hamel Husain โ€” Your AI Product Needs Evals](https://hamel.dev/blog/posts/evals/) The practitioner post that converted a generation of AI engineers to eval-first design. Required reading before writing any LLM design doc. ## Related Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor ยท Case: Design Midjourney --- --- title: "Cost Accounting & Eval-Driven Design" part: "Design Reviews" number: 67 emoji: "๐Ÿ’ฐ" subtitle: "Cost-per-bad-day, LLM-judge rubrics, golden-set sizing โ€” design flows from the eval, not the architecture" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ’ฐ Cost Accounting & Eval-Driven Design > Cost-per-bad-day, LLM-judge rubrics, golden-set sizing โ€” design flows from the eval, not the architecture > [!question] Key Question > You can't design what you can't measure โ€” so write the eval first โ† The Design Doc | โ†’ Case: Design ChatGPT ## Key Insights > [!tip] Insight > The judge is a system with its own eval. An LLM-as-judge is a model whose output drives your go/no-go decisions. It deserves the same scrutiny as the production model โ€” calibration against human labels, drift monitoring, and a refresh schedule.{" "} LLM judges exhibit position bias in 10โ€“30% of pairwise comparisons {" "} โ€” another reason to calibrate against human labels rather than trust the judge out-of-the-box. Shankar's “Who Validates the Validators”{" "} (arxiv 2404.12272) {" "} documents what happens when you skip this. > [!tip] Insight > Real-world example.{" "} Shankar et al. (2024) “Who Validates the Validators?” {" "} is the canonical empirical study of LLM-judge reliability in production. The paper instruments Spearman ฯ between four judge-model configurations (GPT-4, Llama-70B, and two rubric variants) and human raters across 2,200 labeled examples, finding that{" "} judge agreement with humans ranges from ฯ = 0.47 to ฯ = 0.84 depending on judge model and rubric design {" "} โ€” a nearly 2ร— spread. The paper's central finding for practitioners: no judge works well out-of-the-box; all require domain-specific calibration sets and regular refresh. The cost-recall tradeoff between embedding and LLM judges is documented in Figure 3 of the paper. > [!tip] Insight > Real-world example.{" "} RouteLLM (Ong et al., 2024) {" "} is the most rigorous public evaluation of classifier-gated model routing. The paper benchmarks four router architectures on MT-Bench, MMLU, and GSM8K, measuring the cost-quality frontier for each. Key result: on MT-Bench, the matrix factorization router achieves a 2ร— cost reduction with <5% quality degradation vs. always routing to GPT-4. The paper also demonstrates that all router architectures degrade under distribution shift between training and test domains โ€” the routers trained on chatbot-arena data underperform by 8โ€“12 pp quality on coding-heavy benchmarks โ€” which is the same drift failure mode described above. Martian (a commercial routing-as-a-service product) extends the RouteLLM approach with online retraining but does not publish accuracy numbers for its production router. > [!tip] Insight > The reliability-is-a-dollar-number reframe.{" "} A 99.9% availability SLO allows only ~43 minutes of downtime per month {" "} โ€” yet that budget hides the asymmetry between a 10-second blip and a 4-hour partial regression. Pricing both in dollars surfaces what the SLO obscures: partial regressions are often the expensive incidents, not the full outages. > [!tip] Insight > The eval that's too green. When the offline eval consistently shows bigger wins than the online test, you almost certainly have a selection bias in the golden set โ€” it over-represents cases where your model is already strong. Fix by re-sampling from recent production traffic.{" "} Evaluation standards often emerge through the grading process itself โ€” criteria drift is not a bug but a feature of real-world eval pipelines. ## Code Examples ```python import math def golden_set_size( expected_pass_rate: float, # e.g. 0.80 target_half_width: float, # e.g. 0.02 for ยฑ2 pp n_cohorts: int = 1, confidence_z: float = 1.96, # 95% CI ) -> int: """Size a golden set for a binary pass/fail metric. Multiplies by n_cohorts when you want independent power within each stratum (per-tier, per-language, etc.). """ p = expected_pass_rate per_cohort = (confidence_z ** 2) * p * (1 - p) / (target_half_width ** 2) return int(math.ceil(per_cohort * n_cohorts)) # Example: 80% pass rate, ยฑ2pp, 4 cohorts print(golden_set_size(0.80, 0.02, n_cohorts=4)) # -> 6147 ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Your offline LLM-judge eval says a new model is 5% better. After launch, user satisfaction is flat. What
Answer Offline/online divergence is the default state, not the exception. Diagnose in order: (1) Distribution mismatch โ€” is the golden set representative of real traffic, or a curated slice? (2) Judge calibration โ€” does the LLM judge
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** Calculate cost-per-bad-day for a product at 1K QPS, $3/1M output tokens, 256 avg output tokens, if a regression routes 20% of traffic to the flagship instead of the cheap model (flagship costs 10x).
Answer Baseline cost: 1K QPS ร— 86,400 s ร— 256 tok ร— $3/1M = $66K/day. The regression means 20% of traffic costs 10ร— more; the other 80% is unchanged. Overrun on the regressed slice = 0.2 ร— $66K ร— (10โˆ’1) = $119K/day, where the 9ร— factor is the excess above baseline cost on the regressed slice (not the full 10ร—, because the baseline $66K already accounts for routing all traffic at $3/1M; the incremental delta per regressed request is 9ร— the cheap-tier cost). Detected in 5 min โ†’ $415; detected in 4 h โ†’ $19,900 (48ร— delta). That is why the eval harness that catches router drift in minutes, not hours, pays for itself in a single incident.
### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** You have a budget for 200 human-labeled eval examples. How do you allocate them across cohorts?
Answer Don
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Why is
Answer Cost per request is an average; incidents live in the tail. A product with $0.003/request average cost can absorb a 10x tail without breaking the P&L until the tail gets wide. The useful decomposition: (1) steady-state unit cost, (2) cost-per-bad-day (the integral of the tail during an incident window), (3) cost-per-user-retained (which only makes sense over cohorts, not requests). Instrument all three; the first is for capacity planning, the second is for incident severity, the third is for product decisions.
### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** You
Answer Obvious: run an LLM judge over a held-out set, count hallucinations, set a threshold. Better: (1) define hallucination operationally โ€”
## Further Reading - [Shreya Shankar โ€” Who Validates the Validators?](https://arxiv.org/abs/2404.12272) The canonical paper on LLM-judge calibration. If you take one idea: the judge needs its own eval, and that eval is a human-labeled subsample you refresh on a schedule. - [RouteLLM โ€” Learning to Route in LLMs (Ong et al., 2024)](https://arxiv.org/abs/2406.18665) The paper that formally defines the cost-quality trade-off in LLM routing. Introduces the APGR/CGPT metrics and shows that a trained classifier-router can match 95% of GPT-4 quality at 40% of the cost. - [Hamel Husain โ€” Your AI Product Needs Evals](https://hamel.dev/blog/posts/evals/) The blog post that reframed eval-driven development for a generation of AI engineers. Pair with Hamel - [Eugene Yan โ€” LLM Evals](https://eugeneyan.com/writing/llm-evaluators/) The most thorough practitioner guide to LLM-as-judge design โ€” rubric construction, bias mitigation, calibration. - [Chip Huyen โ€” AI Engineering (O](https://www.oreilly.com/library/view/ai-engineering/9781098166298/) Chapter 4 on evaluation is the textbook reference. The cost-accounting chapter reframes LLM unit economics around request shape, not just token count. - [Anthropic โ€” Building Effective Agents (cost patterns)](https://www.anthropic.com/research/building-effective-agents) Not a cost paper per se, but the routing and orchestration patterns here are exactly where cost lives in agent systems. ## Related The Design Doc ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor ยท Case: Design Midjourney --- --- title: "Case: Design ChatGPT" part: "Design Reviews" number: 68 emoji: "๐Ÿ’ฌ" subtitle: "Multi-tenant chat โ€” SLOs, model routing, conversation state" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ’ฌ Case: Design ChatGPT > Multi-tenant chat โ€” SLOs, model routing, conversation state > [!question] Key Question > Two billion messages a day โ€” where does the money actually go? โ† Cost Accounting & Eval-Driven Design | โ†’ Case: Design Perplexity ## Key Insights > [!tip] Insight > Non-obvious SLO choice: separate p95 targets by tier. {" "} Most teams set a single p95 TTFT across all users. ChatGPT cannot โ€” because the model router sends free-tier traffic to a smaller model on a separate fleet segment, their tail-latency distribution is structurally different from Plus. Collapsing them into one number masks regressions on the paid tier (revenue-critical) under the larger volume of free-tier requests. Separate SLO tracking per tier is not a reporting preference; it is a detection prerequisite. This follows directly from the Google SRE Book recommendation (Chapter 4) to define SLOs for distinct user populations rather than aggregate service behavior. > [!tip] Insight > Golden-set sizing math. For a binary refusal judgment at 95% expected pass rate, a set of 500 examples gives a 95% confidence interval of roughly ยฑ2 percentage points โ€” tight enough to detect a 5-point regression before it ships. A 100-example set gives ยฑ6 points: too wide to detect gradual drift. The formula is{" "} CI = 1.96 ร— sqrt(p(1โˆ’p)/n) {" "} โ€” plug in p=0.95, n=500 to verify. Size for the signal you need, not the round number that fits in a sprint. > [!tip] Insight > The deliberate mistake in the defaults above. The preset uses 1,024 input tokens โ€” reasonable for turn 1. But multi-turn sessions accumulate history. A 10-turn session averaging 512 tokens per turn arrives with 5,000+ tokens of prefill context. The GPU memory required per active session grows proportionally. For the fleet to handle the p95 session without admission-control rejection, the KV cache must be sized for the distribution tail, not the mean โ€” and that changes the GPU count estimate substantially. > [!tip] Insight > Why two, not four. Deep diving everything equally is a junior signal. The conversation store failure cascades into every active session simultaneously and triggers a KV-cache miss storm on the GPU fleet โ€” both user-visible quality loss and a cost spike in the same event. The router failure is the fastest path to a six-figure cost incident. For everything else: “I would apply the same risk analysis โ€” here is the failure mode I would watch.” > [!tip] Insight > Gate 7 lesson: the 10x detection window. The router regression row shows that detection at 2 minutes vs. 4 hours is a 120x cost difference for the same underlying bug. Reliability is not a percentage โ€” it is a detection-window investment. The per-tier cost alarm costs nothing to build and is worth six figures per incident it catches. A team that monitors uptime but not per-tier cost is flying one-eyed. ## Interview Questions ### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** ChatGPT free tier routes to a cheaper model; Plus routes to the flagship. A naive implementation hard-codes this in the gateway. What is the single worst failure mode of that design, and how do you detect and fix it?
Answer The hardest failure mode is a silent routing regression โ€” a config push, feature-flag flip, or canary-weight bug that routes free-tier traffic to the flagship for 30โ€“60 minutes before anyone notices. Hard-coded gateway logic has no quality-check layer: the gateway cannot tell whether a request reached the correct model. Detection has two lines of defense: first, a per-tier GPU-spend alarm that fires within 2 minutes when cost deviates from baseline (20K free-tier QPS ร— 512 tokens ร— a ~$5/M delta between models is roughly $92K for a 30-minute window โ€” the cost spike is not subtle); second, a shadow-judge that continuously scores 5% of mini-routed responses against flagship scores and alarms on divergence. The fix is to decouple tier-routing from the gateway and run it as a separate router service with a canary rollout (1% of traffic before 100%) and an automated rollback that triggers on cost-SLO breach. The gateway only enforces which tiers are eligible for which routing class; the routing decision and its quality gate live downstream.
### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** Design the conversation state store for ChatGPT. What are the three failure modes it must survive, and why is its blast radius higher than a model-server node failure?
Answer Three failure modes: (1) Cache miss โ€” Redis unavailable or a session evicted under memory pressure. Every turn re-sends full conversation history; the model server sees no prefix-cache hit; TTFT regresses by the prefill time for the full accumulated context, roughly 200 ms per 1K tokens on H100 (per vLLM benchmarks). At a 10-turn session averaging 5K tokens of history, that is ~1 second of extra prefill per request โ€” a full SLO breach on the Plus tier. (2) State corruption โ€” partial write during a network partition; next request reads a truncated prefix; the model produces incoherent output. Mitigation: write-ahead log on the durable tier; Redis write completes only after durable confirm; prefix is versioned so the model server detects length mismatches and falls back to a cold prefill. (3) Full store outage โ€” every active multi-turn session simultaneously loses context coherence. A model-server node failure loses one in-flight request and traffic re-routes with no user-visible effect. A conversation store outage degrades every concurrent session at once and triggers a KV-cache miss storm on the GPU fleet โ€” cascading into both user-visible quality loss and a cost spike. The blast radius is the product of active sessions, not one request.
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** The CFO asks why prefix caching on the system prompt matters to the bottom line. Give a number-backed answer.
Answer Every ChatGPT request carries a system prompt โ€” on the order of 500โ€“1,500 tokens of instruction, policy, and tool definitions. Without prefix caching, every request pays full prefill cost for those tokens. The vLLM paper (Kwon et al., 2023, arXiv:2309.06180) reports 2โ€“4x throughput improvement from prefix caching on repeated prefixes versus naive serving. Translating to cost: a 30% prefix-cache hit rate reduces prefill GPU-seconds proportionally โ€” if prefill accounts for P% of fleet compute, caching delivers a 0.3P% effective fleet saving. At 2B-messages-per-day scale, that is directionally hundreds of GPU-days per month. The cache hit rate is therefore tracked as a direct business metric, not a latency metric.
### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** p95 TTFT regresses from 400 ms to 1,100 ms after a traffic spike. No model changes shipped. Where do you look, in what order?
Answer Start from the gateway and work downstream. First, check queue depth by tier: is the Plus or flagship queue deeper than baseline? A queue-depth spike without a request-rate spike points to a batch composition problem, not a capacity problem. Second, check batch composition: are long-context requests โ€” specifically, multi-turn sessions with 8K+ tokens of history โ€” dominating the prefill phase? This is the classic prefill-decode interference problem: one 8K-token prefill monopolizes decode slots for roughly 300โ€“500 ms (inferred from vLLM chunked-prefill benchmarks showing ~200 ms per 1K-token atomic prefill on H100). Without chunked prefill, every short request queued behind it sees that penalty in their TTFT. Third, check prefix-cache hit rate: a sudden drop suggests a system-prompt change or serialization format drift that invalidated cached prefixes. Fourth, check KV-cache memory utilization on the model-server fleet: above 90%, admission control should kick in and the queue grows. The mitigation hierarchy is chunked prefill to cap per-request prefill interference, then disaggregated prefill/decode if the prefill share of total GPU-seconds crosses roughly 30%.
### โ˜…โ˜…โ˜… _(Anthropic)_ **Q:** Anthropic
Answer A standard safety classifier is a single-pass binary gate: request in, allow or block out, sub-10 ms. Constitutional AI (Bai et al., 2022, arXiv:2212.08073) adds a self-critique-and-revision loop: the model generates a response, scores it against a set of constitutional principles, and rewises before the output is returned. From a serving architecture perspective, this adds at least one extra generation pass โ€” meaning latency roughly doubles for any request that enters the revision path. The critical integration decision is therefore the trigger condition: you cannot afford to run the full CAI loop on every request. The practical approach is to gate it on the output of the cheap post-model classifier: only requests scoring above a harmfulness threshold enter the CAI revision path. This preserves latency for the 95%+ of benign requests while applying the principled revision where it matters. A second integration decision is capping revision rounds โ€” typically one to two โ€” to bound worst-case latency. Third, log the revision diffs to the eval harness: a revision that introduces hallucinations while removing a safety issue is not a win, and only the eval harness can detect that pattern systematically. The shadow-run (5% of production through the full CAI path even when below threshold) surfaces classifier-calibration drift before it becomes a production incident.
## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor ยท Case: Design Midjourney --- --- title: "Case: Design Perplexity" part: "Design Reviews" number: 69 emoji: "๐Ÿ”ญ" subtitle: "RAG + live web search โ€” freshness, citations, retrieval fusion" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ”ญ Case: Design Perplexity > RAG + live web search โ€” freshness, citations, retrieval fusion > [!question] Key Question > Half retrieval system, half LLM โ€” which half should you optimize first? โ† Case: Design ChatGPT | โ†’ Case: Design Claude Code / Cursor ## Key Insights > [!tip] Insight > Margin note. Notice that the latency SLO is{" "} looser than a pure chat product ( 1.5 s vs ~800 ms). This is intentional: retrieval adds budget. Trying to hit 800 ms would force you to cut the reranker โ€” and the reranker is where citation precision lives. Do not let a naive latency target kill the quality component that defines your product. > [!tip] Insight > Shreya Shankar's “Who Validates the Validators” problem. {" "} Your LLM judge for groundedness and citation precision is itself an LLM. It needs calibration: run the judge on a 50-example human-labeled subsample and measure judge accuracy before trusting its scores at scale. An uncalibrated judge that overestimates groundedness by 8 pp gives you a false sense of security โ€” and in a system where citations are the trust signal, false security has direct user-trust consequences. > [!tip] Insight > The LLM is cheap; retrieval is the bill. A common mistake in Perplexity-style system design is treating the LLM as the dominant cost center and optimizing there first. In practice, at steady-state scale, the embedding model for query encoding, the vector index serving layer, and the cross-encoder reranker together account for a substantial fraction of per-request spend (community estimates:{" "} 30โ€“50%). Before cutting the LLM size to save money, check whether the reranker can be made leaner or the cache hit rate can be improved. > [!tip] Insight > Margin note. Most candidates deep-dive the LLM selection. The LLM is the least differentiating component โ€” any sufficiently capable model can summarize five retrieved chunks. The reranker and citation binder are where Perplexity's product quality actually lives. Deep dive those. > [!tip] Insight > The hardest SLO to operationalize is citation precision. {" "} Latency and availability fire binary alerts. Citation precision requires continuous sampling, an NLI inference pipeline on production traffic, and a calibrated judge โ€” all running at non-trivial cost. The reranker regression row above shows why this is worth building: a 4-hour detection window vs. a 2-minute detection window is a 120ร— cost multiplier, and the cost compounds non-linearly if the regression persists for days. The organizations that skip citation-precision monitoring discover the regression from a viral tweet, not a dashboard. ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** Your eval shows LLM generation quality increased 3% after swapping to a larger model, but user satisfaction is flat. Where do you look first?
Answer Retrieval and citation quality. Users experience Perplexity through the surface of citations โ€” a well-cited but merely-OK answer is trusted; a well-written answer with a wrong or missing citation is distrusted. A 3% generation improvement is invisible if the retrieval recall is unchanged or degraded. Check citation precision (does source X actually support claim Y?), check retrieval recall@5 on the golden query set, and check groundedness score (NLI entailment between claims and cited source). Only when those are stable does generation quality become the marginal lever.
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** Design the freshness subsystem for Perplexity. How do you decide which queries trigger a live web fetch versus serving from the vector index?
Answer Two-signal routing: (1) Query classifier โ€” a lightweight model that identifies freshness-critical intent from keywords and semantic patterns (e.g.,
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** The cross-encoder reranker was upgraded. Citation precision silently dropped from 95% to 85%. How was this not caught before it reached production?
Answer The likely gap: the upgrade was evaluated on standard relevance benchmarks (NDCG, MRR) where the new model improved, but citation precision is a downstream metric โ€” it depends on what the LLM does with the reranked documents, not just which documents score highest. This is the
### โ˜…โ˜…โ˜… _(Google)_ **Q:** Perplexity
Answer Partial query degradation: queries whose relevant documents lived on the offline shard will silently receive worse answers โ€” the system won
### โ˜…โ˜…โ˜† _(Anthropic)_ **Q:** On low-evidence queries (
Answer The correct behavior is calibrated uncertainty, not confident generation from weak context. A groundedness gate checks whether the top retrieved sources actually contain relevant evidence (using NLI entailment or LLM scoring). If groundedness falls below a threshold, the system should: (1) Disclose low-confidence explicitly (
## Further Reading - [Perplexity Engineering Blog](https://www.perplexity.ai/hub/blog) Primary source for Perplexity - [Shreya Shankar โ€” Who Validates the Validators? Towards LLM-Assisted Evaluation](https://arxiv.org/abs/2405.03600) The foundational paper for understanding why the LLM judge evaluating your RAG system needs its own calibration. Essential reading before designing any groundedness or citation-precision eval. - [Dense Passage Retrieval for Open-Domain Question Answering (Karpukhin et al., 2020)](https://arxiv.org/abs/2004.04906) The DPR paper that established dual-encoder dense retrieval as the production baseline. The retrieval recall numbers here are the standard against which all Perplexity-style systems are measured. - [ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction](https://arxiv.org/abs/2004.12832) Khattab & Zaharia 2020 โ€” the architecture that keeps per-token embeddings and uses MaxSim scoring. Relevant to understanding why single-vector bi-encoders are the retrieval floor, not the ceiling. - [Eugene Yan โ€” Patterns for Building LLM-Based Systems & Products](https://eugeneyan.com/writing/llm-patterns/) Practitioner-level survey of RAG, evals, guardrails, and citation patterns. The sections on retrieval, memory, and guardrails map directly to the Perplexity design problem. - [Chip Huyen โ€” Building LLM Applications for Production](https://huyenchip.com/2023/04/11/llm-engineering.html) The canonical post on production LLM engineering. The hallucination and evaluation sections ground the citation-precision and groundedness design choices in this module. ## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Claude Code / Cursor ยท Case: Design Midjourney --- --- title: "Case: Design Claude Code / Cursor" part: "Design Reviews" number: 70 emoji: "๐Ÿค–" subtitle: "Coding agent at scale โ€” context builder, tools, sandboxing" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿค– Case: Design Claude Code / Cursor > Coding agent at scale โ€” context builder, tools, sandboxing > [!question] Key Question > The model is cheap. The context is what costs you. โ† Case: Design Perplexity | โ†’ Case: Design Midjourney ## Key Insights > [!tip] Insight > Margin note. Sandbox escape has a special status: it's not a degraded SLO, it's a program-stopper. The asymmetry between cost incidents (detectable in minutes, recoverable by scaling) and trust incidents (discovered via bug bounty, covered in press) dictates that the sandbox architecture get disproportionate engineering time relative to its steady-state contribution. > [!tip] Insight > Hamel's north star. Hamel Husain's framing: your evals are only as good as the failure modes they surface. For coding agents, the failure modes that matter are silent ones โ€” an edit that compiles but regresses a test the agent never ran, or a context builder that retrieves the wrong file and the model confidently uses it anyway. Write evals that catch these specifically; generic “does it succeed” benchmarks miss them entirely. > [!tip] Insight > The multi-turn amplification trap. The calculator above assumes every QPS unit is an independent request. Coding agents violate this assumption badly. A single user task generates a cascade: turn 1 (plan) โ†’ tool calls โ†’ turn 2 (decide) โ†’ more tool calls โ†’ turn 3 (write the patch). Each turn's input context includes all prior tool results, so context length grows each turn. A 3-turn task with tool results accumulating costs roughly 5ร— more than the single-inference number suggests. Plan your GPU fleet around task completions, not individual model calls โ€” then multiply back. > [!tip] Insight > You've now seen all three RAG shapes. The ChatGPT case study showed conversation-retrieval: dense vector search over a knowledge base, ranked by semantic similarity to the query. The Perplexity case study showed{" "} web-retrieval: live crawl + recency-weighted ranking against a query that has a freshness requirement. This module showed{" "} context-retrieval: multi-layer (file index + repo graph + embeddings) retrieval under a hard latency budget where the “document” is live, mutable code. The shared shape across all three:{" "} retrieve โ†’ compose โ†’ generate โ†’ cite/use. The differences are (1) latency budget (200 ms for web, 100 ms for coding, flexible for chat), (2) freshness requirement (seconds for web, milliseconds for code, hours for static KB), and (3) what “context” means (web pages, code chunks, conversation history). This is the pattern. You didn't need a dedicated “RAG chapter” because three concrete case studies embedded it better than an abstraction ever could. > [!tip] Insight > The asymmetry that shapes the entire architecture.{" "} Cost incidents and latency incidents are recoverable. Trust incidents โ€” corrupted files, silent failures, sandbox escapes โ€” are not. The overlay filesystem, the PreToolUse hooks, and the microVM sandbox are expensive engineering investments that exist entirely to prevent the unrecoverable class of failure. Price them accordingly when writing the resource allocation for the platform team. ## Interview Questions ### โ˜…โ˜…โ˜† _(Anthropic, Google)_ **Q:** Your engineering manager says the new coding agent has
Answer 300 ms is the LLM first-token time for a single inference call. A coding agent issues many model calls per user-visible task โ€” one call to plan, one per tool result to decide what to do next, sometimes a final synthesis call. The user-visible latency is the sum of all those turns plus the I/O time for each tool execution. Reporting single-inference latency for a multi-turn agent is like reporting car engine cycle time as the answer to
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Your LLM compute cost is tripling month-over-month but user count is flat and average task count per user is flat. What
Answer The amplifier is inside the agent loop, not the user-facing metrics. Flat users ร— flat tasks means the same number of tasks are being started โ€” but each task is doing more model calls. The most common causes, in order: (1) Tool-loop bloat โ€” the agent is calling more tools per task (possibly because context changed and it re-explores more). (2) Context window expansion โ€” longer conversations mean longer input context for every subsequent call, driving up prefill cost even with the same number of turns. (3) A routing regression that
### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** Describe the overlay filesystem used by a coding agent
Answer An overlay filesystem layers a writable
### โ˜…โ˜…โ˜… _(Anthropic, Meta)_ **Q:** You
Answer Layer 1 โ€” Static file index (ripgrep-class): full-text and symbol search over the repo, built once and maintained by fs-watch. Latency: <10 ms for most queries. Trade-off: must be kept fresh after rebases and bulk renames; stale index leads to the agent
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** An interview question at a company building coding agents:
Answer SWE-bench measures task success rate โ€” did the agent produce a diff that passes the test suite? It does not penalize for token cost or wall-clock time. An agent that uses 10ร— more tokens and 3ร— more turns can score higher on SWE-bench while being materially worse on the product dimensions users feel. Resolution: weight SWE-bench success against token efficiency (useful tokens / total tokens per completed task) and task-completion latency as a multi-objective eval. A Pareto-dominant agent improves success rate without degrading efficiency โ€” that is the correct optimization target. Concretely: add a
## Further Reading - [Anthropic โ€” Building Effective Agents](https://www.anthropic.com/research/building-effective-agents) Anthropic - [ReAct: Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629) Yao et al., 2022 โ€” the paper that formalized the observe-think-act loop underpinning all modern coding agents. Read before designing any tool-call architecture. - [SWE-bench Verified: Can Language Models Resolve Real GitHub Issues?](https://arxiv.org/abs/2310.06770) Princeton / Chicago, 2023 โ€” the benchmark that made coding-agent trajectory eval rigorous. Essential reading for any team designing agent eval harnesses. - [Hamel Husain โ€” Your AI Product Needs Evals](https://hamel.dev/blog/posts/evals/) The practitioner post that converted a generation of AI engineers to eval-first design. The section on - [Toolformer: Language Models Can Teach Themselves to Use Tools](https://arxiv.org/abs/2302.04761) Schick et al., 2023 โ€” the foundational paper on training LLMs to use tools self-supervised. Provides theoretical grounding for tool-call precision/recall eval design. - [Simon Willison โ€” Things I](https://simonwillison.net/2023/Nov/18/complex-tool-use/) Hard-won practitioner lessons on tool-use reliability, prompt design for tool selection, and the gap between benchmark performance and real-world correctness. ## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Midjourney --- --- title: "Case: Design Midjourney" part: "Design Reviews" number: 71 emoji: "๐ŸŽจ" subtitle: "Multi-tenant diffusion โ€” queueing, step budgets, content safety, GPU economics" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐ŸŽจ Case: Design Midjourney > Multi-tenant diffusion โ€” queueing, step budgets, content safety, GPU economics > [!question] Key Question > A 50-step generation that fails at step 48 still costs you 48 steps โ† Case: Design Claude Code / Cursor | โ†’ Case: Design TikTok For-You Ranking ## Key Insights > [!tip] Insight > Non-obvious SLO: track queue wait and denoising latency separately. {" "} A p95 end-to-end SLO of 45 s can be missed two entirely different ways: the queue is backed up (capacity problem) or individual denoising runs are slow (GPU health problem). Collapsing them into one number hides the root cause and leads to the wrong remediation. Separate dashboards, separate alert thresholds. For a video counterpart with the same queue-starvation dynamics, see the{" "} Sora video generation case study . For a cross-system SLO and cost comparison, see the{" "} SLO & Cost Compare {" "} module. > [!tip] Insight > Why image evals need larger calibration sets. Human raters agree on text quality ~85% of the time. On images, inter-rater agreement drops to ~70% for aesthetic quality โ€” judges disagree on style, composition, and “good enough.” A smaller set that would give ยฑ3 percentage points on a text eval gives ยฑ6+ on images. Budget for 2ร— the calibration set size compared to an equivalent text eval, and run monthly human-anchor refreshes on a 100-image subsample to detect judge drift. > [!tip] Insight > The 50-step multiplier. An LLM uses one forward pass for prefill and one per output token. A diffusion model uses one forward pass per denoising step โ€” 50 passes for a standard generation. Each pass processes the full spatial latent (e.g.,{" "} 128ร—128 at 4 channels for 1024ร—1024 output ), which is compute-intensive in a way that has no LLM analogue. Rule of thumb: a single H100 can serve{" "} roughly 1โ€“3 images per second at 50 steps {" "} (per SwiftDiffusion,{" "} arxiv.org/abs/2402.10781 ยง4 ), compared to hundreds of LLM decode tokens per second. Design your fleet sizing from this measured baseline, not from LLM throughput numbers. > [!tip] Insight > Two deep dives, not four. The priority queue and checkpointer are the components with the highest blast radius if wrong โ€” the queue affects every user's wait time and every paying customer's SLO, and the checkpointer determines what every GPU failure costs. Every other component (CDN, post-filter, API gateway) has a clear off-the-shelf design with well-understood failure modes. Deep-dive the novel parts; reference-design the commodity parts. > [!tip] Insight > Asymmetry: image-gen incidents skew toward reputational, not financial. {" "} An LLM service that goes down costs SLA credits. An image-gen service that generates one viral bad image costs the trust of an entire user base and potentially triggers regulatory action. The engineering implication: invest in safety infrastructure at a level that looks disproportionate relative to the financial downside โ€” because the reputational downside is existential. ## Interview Questions ### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** A generation fails at step 48 of 50. How do you design the system so you don
Answer Three interlocking mitigations: (1) Safety pre-filter on the text prompt โ€” cheap classifier rejects policy violations before any GPU cycles are allocated. This is the highest-ROI mitigation because adversarial prompts fail text screening at a much higher rate than benign ones, and they are the dominant source of
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** How do you enforce per-tier step budgets at the scheduler level without modifying the diffusion model itself?
Answer The scheduler wraps the denoising loop: it maintains a counter per job and halts the loop once it reaches the tier
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** Your safety post-filter has a 1% false-positive rate (blocks one in 100 legitimate images). At 50 QPS with 4 images per generation, what does that cost in GPU-seconds per day? How do you detect this regression?
Answer 50 QPS ร— 4 images ร— 86,400 seconds/day = ~17.3 million images/day. At 1% FP rate that is ~173,000 images wasted per day. At roughly 5 GPU-seconds per image (50 steps ร— ~0.1s/step on H100), that is ~865,000 GPU-seconds (~240 GPU-hours) of wasted compute daily. Detection: maintain a
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** How would you explain to a new engineer why the CapacityCalculator built for LLM serving gives the wrong answer for a diffusion service?
Answer An LLM processes a prompt in roughly one forward pass (prefill) plus one pass per output token (decode). Total compute is proportional to input + output tokens โ€” typically a few hundred forward passes at most. A diffusion model runs the denoising network 30โ€“100 times per image with a full U-Net or DiT pass each time. A single 1024ร—1024 image generation on SDXL costs roughly 30โ€“50 U-Net forward passes โ€” each much heavier than an LLM decode step because the spatial resolution is large. The LLM calculator treats
### โ˜…โ˜…โ˜… _(OpenAI, Anthropic)_ **Q:** Midjourney surfaces a high-profile content violation that bypassed both text-level pre-filter and image-level post-filter. Walk through the immediate response and the three-week follow-up.
Answer Immediate (hours): (1) Identify and delete the offending content; (2) Temporarily lower the detection threshold on the post-filter to cast a wider net, accepting higher false-positive rate as a safety-first tradeoff during investigation; (3) Pull the prompt and full generation parameters for forensic analysis. Week one: characterize the bypass โ€” was it a novel adversarial prompt, a gap in the pre-filter
## Further Reading - [Denoising Diffusion Probabilistic Models (Ho et al., 2020)](https://arxiv.org/abs/2006.11239) The foundational DDPM paper. Understanding the denoising loop is prerequisite knowledge for reasoning about step budgets, checkpointing, and why failures at step 48 are expensive. - [High-Resolution Image Synthesis with Latent Diffusion Models (Rombach et al., 2022)](https://arxiv.org/abs/2112.10752) Introduced latent diffusion โ€” the architecture behind Stable Diffusion. Shows why denoising in latent space (not pixel space) is tractable at scale, and how the VAE bottleneck interacts with generation quality. - [DALL-E 3 Technical Report (OpenAI, 2023)](https://cdn.openai.com/papers/dall-e-3.pdf) OpenAI - [Stability AI Research](https://stability.ai/research) Primary source for Stable Diffusion architecture notes, SDXL improvements, and the open-weight model family that forms the technical baseline for most independent diffusion services. - [Efficient Diffusion Serving โ€” Ying Sheng et al., SwiftDiffusion (2024)](https://arxiv.org/abs/2402.10781) A practitioner paper on batching strategy, LoRA switching, and GPU utilization for diffusion serving at scale. The most directly relevant systems paper for this case study - [C2PA Content Credentials Specification](https://c2pa.org/specifications/specifications/2.0/specs/C2PA_Specification.html) The open standard for embedding AI-generation provenance metadata in images. Relevant to the OpenAI company-lens discussion on watermarking and traceability. ## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Case: Design TikTok For-You Ranking" part: "Design Reviews" number: 72 emoji: "๐Ÿ“ฑ" subtitle: "Two-tower retrieval + ranker + feature store โ€” classical ML@scale canon" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ“ฑ Case: Design TikTok For-You Ranking > Two-tower retrieval + ranker + feature store โ€” classical ML@scale canon > [!question] Key Question > Why the Explore/Exploit slider matters more than the model โ† Case: Design Midjourney | โ†’ Case: Design an Embeddings Platform ## Key Insights > [!tip] Insight > The setpoint as a product SLO. The explore/exploit ratio does not appear in the table above as a fixed number โ€” and that is intentional. It is a variable controlled by Product, tuned via A/B experiments on retention and creator health metrics. ML teams that treat it as a model hyperparameter and tune it on offline NDCG will optimize it in the wrong direction. Offline NDCG rewards exploit (known-good items); long-run retention often rewards explore (novel content that prevents filter-bubble fatigue). The SLO table is where Product declares the goal; the setpoint is one dial they turn to achieve it. The candidate generation stage relies on the same ANN index infrastructure covered in the{" "} Embeddings Platform case study . For a cross-system failure taxonomy comparison, see{" "} Failure Taxonomy Compare . > [!tip] Insight > The bitter experience of recsys eval. The field has decades of evidence that offline NDCG improvements do not reliably translate to online retention gains. The YouTube two-tower paper noted this explicitly: the most important signal was whether the model improved live A/B metrics, not offline numbers. Design the eval harness to treat offline metrics as regression detectors (did something break?) and online A/B as the source of truth for improvements. > [!tip] Insight > The real bottleneck is not the ranker. At{" "} 100K QPS, the GPU budget for the heavy ranker is manageable because the model is small and the computation per request (scoring ~500 candidates) is highly parallelizable. The harder engineering problem is the feature-store read latency: every request needs to assemble real-time user features (last-N interactions) plus item features (freshness score, engagement rate) for all candidates within the{" "} p99 200 ms{" "} budget. Optimizing feature-store read latency โ€” batching reads, pre-computing hot user embeddings, sharding by user ID โ€” is where the real capacity work lives. > [!tip] Insight > Why the diversity re-ranker is separate. It is tempting to add diversity constraints directly into the heavy ranker's loss function (e.g., a diversity regularization term). Resist this. Entangling relevance and diversity in a single model means every policy change โ€” a new safety rule, a new creator-fairness target โ€” requires re-training and re-deploying the ranker. A separate re-ranker is deterministic, fast, and policy-configurable without ML involvement. The division of labor: ML maximizes relevance; the re-ranker applies constraints. This mirrors the Product/ML ownership boundary in the explore/exploit setpoint. > [!tip] Insight > The diversity re-ranker is your safety circuit breaker. {" "} Because it sits between the relevance ranker and the user, it is the correct place to enforce policy constraints, creator-fairness floors, and content caps. Putting safety logic in the relevance ranker couples two concerns that should evolve independently โ€” a policy change should not require a model re-train. ## Interview Questions ### โ˜…โ˜…โ˜† _(Meta, Google)_ **Q:** The PM asks to
Answer This is a product decision wearing an ML costume. Before touching anything: (1) Define the current setpoint โ€” what fraction of each user
### โ˜…โ˜…โ˜… _(Meta, Google)_ **Q:** Offline NDCG@10 improved by 1.5 points in your candidate generator experiment. The online A/B shows flat retention and a small drop in creator fairness. Explain why, and what you do next.
Answer The classic offline/online gap in recsys. Three likely causes: (1) Training data bias โ€” the offline set reflects past impressions, which were already filtered by the old ranker. Your new generator retrieves different candidates that the user has never been shown, so engagement labels are missing for them (counterfactual gap). NDCG improves on seen items but the model is blind on unseen ones. (2) Distribution shift โ€” offline eval uses a static snapshot; online users respond to position, context, and session state that offline eval doesn
### โ˜…โ˜…โ˜… _(Meta, Databricks)_ **Q:** Design the feature store for a TikTok-scale feed ranker. What features live in which tier, and what is the failure mode if online/offline feature parity breaks?
Answer Three tiers: (1) Real-time (sub-second latency): user
### โ˜…โ˜…โ˜† _(Meta, Google)_ **Q:** A new video goes viral within 10 minutes of upload. Your ranker gives it near-zero relevance scores. What architectural components are failing, and what is the fix?
Answer This is the cold-start / fresh-item problem. The ranker relies on engagement history (watch rate, like rate, share rate) to score items. A video uploaded 10 minutes ago has no engagement history โ€” it falls to the bottom of the ranked list regardless of quality. The failure is in two places: (1) The candidate generator
### โ˜…โ˜…โ˜… _(Databricks, Meta)_ **Q:** Databricks asks: how do you structure the ML training pipeline so that a new ranker version can be shadow-tested, compared to the champion, and promoted โ€” without taking the feature store offline or requiring a full data re-backfill?
Answer The pattern is a champion/challenger shadow pipeline. (1) Feature store versioning: features are versioned by name + version tag (e.g.,
## Further Reading - [Eugene Yan โ€” Patterns for Personalization in Recommendations](https://eugeneyan.com/writing/patterns-for-personalization/) Practitioner-grade breakdown of retrieval, ranking, and re-ranking patterns at scale. The canonical starting point for recsys system design. - [Covington et al. โ€” Deep Neural Networks for YouTube Recommendations (RecSys 2016)](https://research.google/pubs/pub45530/) The paper that introduced the two-tower architecture for candidate generation at scale. Still the reference implementation for user-tower + item-tower + ANN retrieval. - [Chip Huyen โ€” Designing Machine Learning Systems (O](https://www.oreilly.com/library/view/designing-machine-learning/9781098107956/) Chapter 6 on feature engineering and chapter 9 on the feedback loop are directly relevant to the online/offline feature-store parity problem and counterfactual logging. - [Tecton โ€” The Feature Store Explained](https://www.tecton.ai/blog/what-is-a-feature-store/) The clearest public explanation of the online/offline feature store architecture, backfill strategies, and train/serve skew. Written by practitioners who built the Uber Michelangelo feature store. - [Pinterest Engineering โ€” Pinnability: Machine Learning in the Pinterest Home Feed](https://medium.com/pinterest-engineering/pinnability-machine-learning-in-the-home-feed-64be2074bf60) A real-world case study of the explore/exploit tradeoff, diversity re-ranking, and the product/ML boundary in a large-scale feed system. - [Instagram Engineering โ€” Powered by AI: Instagram](https://ai.meta.com/blog/powered-by-ai-instagrams-explore-recommender-system/) Meta ## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Case: Design an Embeddings Platform" part: "Design Reviews" number: 73 emoji: "๐Ÿงญ" subtitle: "Pinterest-style โ€” backfill, drift, model upgrades, serving with HNSW" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿงญ Case: Design an Embeddings Platform > Pinterest-style โ€” backfill, drift, model upgrades, serving with HNSW > [!question] Key Question > The day you change your embedding model, every index goes stale โ† Case: Design TikTok For-You Ranking | โ†’ Case: Design Llama Training Infra ## Key Insights > [!tip] Insight > The migration SLO is the least obvious. Without a 7-day budget cap, teams underestimate dual-index storage costs.{" "} 10M items ร— 768 dims ร— 4 bytes ร— 2 indexes โ‰ˆ 60GB {" "} โ€” manageable. At 1B items, that is 6TB of extra storage that must be provisioned, warmed, and then decommissioned in a bounded window. The feed ranking pipeline that consumes these embeddings is covered in the{" "} Feed Ranking case study . For the retrieval-augmented generation pattern that uses embedding lookup at inference time, see the{" "} RAG comparison module . > [!tip] Insight > Eval-first discipline pays off most during rollback. {" "} If the migration goes wrong mid-way, the eval harness determines exactly which consumer crossed the recall regression threshold, enabling selective rollback (roll back ads but keep recsys on the new index) rather than a full revert. > [!tip] Insight > Storage math matters more than compute here.{" "} At 10M items/day ร— 768 dims ร— 4 bytes = 30GB of new vectors per day. After 1 year that is ~10TB. {" "} During a 7-day migration window, dual-write adds another ~210GB of temporary storage. Budget for this in your capacity plan โ€” it is the infra cost that constrains the migration window, not GPU time. > [!tip] Insight > Interview framing. Every interviewer will ask “how do you upgrade the model?” The wrong answer is “retrain and redeploy.” The right answer starts with: “model upgrade is a migration event โ€” here's the dual-write protocol, here's the backfill SLA, and here's the eval gate that triggers cutover.” > [!tip] Insight > Silent degradation is the hardest incident type.{" "} The platform API returns 200 OK. The embedder is running. The HNSW index is healthy. But recall@k has dropped 15pp because a shard is serving stale embeddings from before the last rebuild. Only the per-consumer recall@k monitor catches this โ€” which is why building that monitor is not optional. ## Interview Questions ### โ˜…โ˜…โ˜… _(Meta, Google)_ **Q:** You upgrade your embedding model. All existing HNSW indexes are now stale. How do you plan the migration without regressing search quality overnight?
Answer The migration has four phases: (1) Dual-write โ€” the Embedder Service begins writing to both the old index and the new index for every incoming item. This prevents the new index from falling behind on fresh content. (2) Backfill โ€” an offline pipeline re-embeds the full corpus with the new model and inserts into the new index; priority queue by item recency so high-traffic items land first. (3) Blended read โ€” the retrieval layer blends results from both indexes with a sliding weight (100% old โ†’ 0% old over ~3 days), controlled by a feature flag per consumer. (4) Cutover โ€” once the new index matches or exceeds the old index on recall@k golden queries for all consumers, the old index is taken offline and the dual-write layer is removed. Failure mode: index divergence during writes (network partition writes to only one index). Mitigation: consistency check job that samples 1% of items per hour and alerts if the two indexes differ by more than 5%.
### โ˜…โ˜…โ˜† _(Meta, Anthropic)_ **Q:** You have four internal consumers (search, recsys, ads, dedup) sharing the same embedding platform. How do you design SLOs that satisfy all four without over-provisioning for the most demanding one?
Answer Segment SLOs by consumer tier and access pattern. Ads requires the tightest recall@k (0.90) and lowest latency because a missed embedding directly costs revenue; it gets dedicated online capacity with p95 <30ms. Search (recall@k 0.85) and recsys (0.75) share an online serving pool with p95 <50ms โ€” their tolerance for occasional cache misses is higher. Dedup is a batch consumer with no latency SLO; it uses the async endpoint and shares GPU time with the backfill pipeline during off-peak hours. The key design principle: each consumer owns its own HNSW shard replica with the right recall tuning (ef_search parameter), so one consumer
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** The semantic drift monitor fires an alert โ€” the cosine similarity distribution of new embeddings has shifted relative to last month
Answer First question:
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** An interviewer asks why you chose HNSW over an exact k-NN index or a flat FAISS index. Give a number-backed answer.
Answer For a corpus of 10M+ items, exact k-NN requires O(N) distance computations per query โ€” at 10M items and a 768-dim embedding, that is 10M dot products per query, roughly 10ms on a modern CPU. At 10,000 QPS, you need ~100 CPU cores just for retrieval with zero overhead. HNSW (Hierarchical Navigable Small World) achieves sub-linear query time by building a multi-layer graph; at M=16, ef_construction=200, recall@10 of ~0.95, query latency is ~1ms on a single core (per Malkov & Yashunin 2018, Table 2 โ€” https://arxiv.org/abs/1603.09320). The tradeoff is memory: HNSW stores the graph structure at ~100 bytes/item overhead beyond the raw vectors. At 10M items ร— (768 dims ร— 4 bytes + 100 bytes overhead) = ~34GB โ€” fits on one 40GB GPU or a couple of CPU nodes. FAISS flat is appropriate for corpora under 1M items or for offline eval; HNSW is the standard choice for online serving at Pinterest/Meta scale.
### โ˜…โ˜…โ˜… _(Meta, OpenAI)_ **Q:** Describe the
Answer When a viral item category spikes (e.g., a breaking news event), queries cluster around a narrow region of the embedding space. If the HNSW index is partitioned by item type or topic cluster, one shard receives a disproportionate fraction of QPS while others sit idle. The hot shard
## Further Reading - [Pinterest Engineering โ€” Unifying Visual Embeddings for Visual Search at Pinterest](https://medium.com/pinterest-engineering/unifying-visual-embeddings-for-visual-search-at-pinterest-74ea7ea103f0) Primary source for Pinterest - [Malkov & Yashunin โ€” Efficient and Robust Approximate Nearest Neighbor Search Using HNSW (2018)](https://arxiv.org/abs/1603.09320) The foundational HNSW paper. Read Section 4 on layered graph construction and Section 5 on query complexity โ€” essential for justifying M, ef_construction, and ef_search tradeoffs in an interview. - [Eugene Yan โ€” Patterns for Building LLM-Based Systems & Products](https://eugeneyan.com/writing/llm-patterns/) Eugene - [Chip Huyen โ€” Designing Machine Learning Systems (O](https://www.oreilly.com/library/view/designing-machine-learning/9781098107956/) Chapter 7 on feature pipelines and Chapter 10 on infrastructure cover the embedding lifecycle โ€” freshness, serving, versioning โ€” at the right abstraction level for a senior design interview. - [Weaviate Engineering Blog โ€” HNSW vs. Flat Index Performance](https://weaviate.io/blog/ann-algorithms-vamana-vs-hnsw) Benchmark-grounded comparison of ANN algorithms with real recall/latency/memory numbers. Use this to back up the HNSW justification in the architecture deep dive. - [Shreya Shankar โ€” Who Validates the Validators? Verifying Parity in ML Pipelines](https://www.shreya-shankar.com/rethinking-ml-monitoring/) The argument that online/offline parity is the hardest SLO to enforce in an embedding platform. Directly relevant to the eval and canary sections of this module. ## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Case: Design Llama Training Infra" part: "Design Reviews" number: 74 emoji: "๐Ÿ”ฅ" subtitle: "Data pipeline + checkpoint management + failure-tolerant orchestration" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ”ฅ Case: Design Llama Training Infra > Data pipeline + checkpoint management + failure-tolerant orchestration > [!question] Key Question > At 16K GPUs, a GPU fails every 3 hours โ€” design for it โ† Case: Design an Embeddings Platform | โ†’ Case: Design an Agent Platform ## Key Insights > [!tip] Insight > Goodput is not GPU utilization. A GPU running at 100% utilization on repeated identical micro-batches because the data pipeline stalled has 0% goodput for those steps. Goodput counts only steps whose output advances the accepted training trajectory โ€” it penalizes failures, stalls, and corrupted batches equally. > [!tip] Insight > Eval-before-commit is load-bearing. The eval fleet gates checkpoint promotion โ€” it is not a reporting dashboard. A checkpoint that writes successfully to the object store but has not passed the eval harness is stored as{" "} pending, not{" "} committed . Recovery rolls back only to the last{" "} committed {" "} checkpoint, avoiding the silent-corruption failure mode. > [!tip] Insight > Goodput is the real axis. A{" "} 16K H100 cluster at{" "} $3.50/GPU-hour (spot-market estimate, 2024; on-demand rates higher) {" "} costs ~$57,500/hour. The difference between 70% and 90% goodput on a 90-day run is roughly 4,320 wasted GPU-hours ร— 16,384 GPUs ร— $3.50 = on the order of tens of millions of dollars. This is why the SLO table lists goodput first, before any latency metric. > [!tip] Insight > Both components are operationally invisible when working. {" "} The training researcher sees a smooth loss curve and doesn't know that the ring-health monitor replaced three nodes overnight and the async checkpoint offloader ran 840 saves without pausing training. This is the correct outcome โ€” failure should be handled below the researcher's attention layer. The cost of getting it wrong is that the researcher notices, which means days of investigation and millions of dollars of wasted compute. > [!tip] Insight > Fast-detected failures are cheap; slow-detected failures are catastrophic. {" "} The rack event and the cluster hang cost roughly the same at 3 hours of undetected failure. But the rack event with good detection costs less than $30K. The asymmetry is not the failure mode โ€” it is the detection window. Every architectural choice that shrinks detection latency (ring-health monitor, continuous loss alerting, parity monitoring) is actually a cost-reduction investment, not an operational overhead. ## Interview Questions ### โ˜…โ˜…โ˜… _(Meta, OpenAI)_ **Q:** You
Answer With synchronous all-reduce, the 64 lost ranks cause every other rank to hang waiting for the collective to complete. The ring-health monitor must detect the missing ranks within 30โ€“60 seconds (not 3 hours) and signal the orchestrator. Recovery: (1) roll back to the last committed checkpoint in the object store โ€” the most recent successfully eval
### โ˜…โ˜…โ˜† _(Meta, Google)_ **Q:** Your team is debating checkpoint frequency: every 100 steps vs every 500 steps on a 16K H100 cluster. How do you decide?
Answer The decision is a recovery-cost calculation. Recovery cost = (tokens between checkpoints) ร— (GPU-hours per token) ร— (GPU cost per hour). At 16K H100s running ~$3.50/GPU-hour, a 500-step gap with micro-batch 4M tokens/step means 2B tokens of re-computation at roughly $175K per wasted hour. The checkpoint write time is a fixed overhead per save โ€” with async CPU offload + streaming to object store, this is typically 2โ€“5 minutes per checkpoint for a 70B model. So: if failures happen every 6 hours and checkpoints take 3 minutes to write, a 100-step cadence adds ~1% overhead for async offload but halves expected re-computation. The asymmetric cost (small overhead vs catastrophic re-compute) almost always favors more frequent checkpoints. The right answer is: set cadence such that expected recovery cost โ‰ค 2ร— the checkpoint overhead cost.
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Your loss curve shows a sharp spike at step 48,000, then returns to trend. The checkpoint at step 47,900 looks clean. What do you investigate and in what order?
Answer A transient spike that resolves suggests a bad batch, not a corrupted model. Investigation order: (1) Data pipeline: inspect the batch at step 48,000 โ€” high loss often comes from a tokenizer bug that introduced garbled sequences, repeated content, or wrong language distribution. Grep for outlier token IDs, unusually long sequences, or domain-distribution jumps in that batch. (2) Numeric stability: check for NaN/Inf in loss, gradient norms, and activations at that step. A nan that resolves suggests a single bad sequence was responsible. (3) Learning-rate schedule: was there a warm-up/cool-down boundary, or a scheduled LR spike at that step? (4) Hardware: did any rank show elevated error-correction counts (GPU ECC) at that step? A single bit-flip in activations produces exactly this signature. The checkpoint at 47,900 being clean is your recovery anchor โ€” if you can replay step 48,000 deterministically with the same seed and reproduce the spike, it
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** An interviewer asks:
Answer Data parallelism alone fails at two limits: (1) Memory โ€” a 70B model in bf16 needs ~140 GB for parameters + ~560 GB for Adam optimizer states. That doesn
### โ˜…โ˜…โ˜† _(Meta, Anthropic)_ **Q:** A research engineer says:
Answer Both are wrong. Goodput (effective training flops / theoretical peak flops ร— time) is not binary โ€” it has a cost-optimal point that depends on the economics of the cluster. Goodput < 85% is typically a red flag because the re-computation cost from failures + checkpoint overhead + pipeline bubbles together usually stays under 15% on a well-tuned cluster. At 72%, there
## Further Reading - [Meta โ€” Llama 3 Herd of Models (Dubey et al., 2024)](https://arxiv.org/abs/2407.21783) The primary source for Llama-scale training infrastructure at Meta. Section 3 on pre-training covers the 3D-parallel strategy, checkpoint policies, and failure-recovery design that this case study is grounded in. - [Megatron-LM: Training Multi-Billion Parameter Language Models (Narayanan et al., 2021)](https://arxiv.org/abs/2104.04473) The paper that systematized 3D parallelism (DP ร— TP ร— PP) for large-scale training. Essential reading for the orchestration and tensor-parallelism sections of this module. - [PyTorch FSDP: Fully Sharded Data Parallel (Zhao et al., 2023)](https://arxiv.org/abs/2304.11277) The engineering paper behind PyTorch FSDP. Covers the ZeRO-3 sharding strategy, memory savings, and communication overlap that complement 3D parallelism. - [PyTorch Distributed โ€” Official Docs](https://pytorch.org/docs/stable/distributed.html) Reference for torch.distributed, NCCL backend, process groups, and the DDP/FSDP/RPC APIs that underpin every production training stack. - [Chip Huyen โ€” Large Language Model Training at Scale](https://huyenchip.com/2023/05/02/rlhf.html) Practitioner overview of the economic and operational realities of large-scale training โ€” goodput, failure modes, and the org structure implications of running a cluster at this scale. ## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Case: Design an Agent Platform" part: "Design Reviews" number: 75 emoji: "๐Ÿ—๏ธ" subtitle: "Multi-agent infra โ€” sandboxing, tool registries, trajectory eval, spend control" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ—๏ธ Case: Design an Agent Platform > Multi-agent infra โ€” sandboxing, tool registries, trajectory eval, spend control > [!question] Key Question > An agent that spawns agents โ€” where does the budget live? โ† Case: Design Llama Training Infra | โ†’ Case: Design Gemini ## Key Insights > [!tip] Insight > Why sandbox escape is existential but slow start is P2. {" "} An agent platform is a multi-tenant system. A sandbox escape lets one tenant read another's trajectory store, tool credentials, or model outputs โ€” this is a data breach. The company does not survive this as a hosted platform. Slow start (2.5s instead of 2s) is annoying; a sandbox escape is company-ending. Prioritization by blast radius, not by technical difficulty. > [!tip] Insight > Why trajectory eval, not final-answer eval. An agent that succeeds by burning 10ร— more tokens than necessary, or that selected the correct answer after four wrong tool calls, looks perfect on final-answer eval. Trajectory eval catches it: tool-call P/R is low, spend efficiency is low. These are the agents that blow past budgets in production. Hamel Husain's core argument: measuring only the output is measuring only the last inch of a mile-long run. > [!tip] Insight > The amplification trap. Every capacity plan for an agent platform that starts from “user tasks per second” is wrong by the average tool-call depth. For 1,000 concurrent agents with 20 tool calls each, the real LLM QPS is 20,000 โ€” before accounting for sub-agents. A naive design that provisions for 1,000 QPS at the LLM gateway will brown out immediately. Always derive LLM gateway capacity from{" "} user_tasks ร— avg_llm_calls_per_task ร— (1 + avg_child_agent_depth) . > [!tip] Insight > The recursive-agent trap. The most common spend-control bug on agent platforms: the parent agent spawns 50 child agents to parallelize a research task. Each child is below the per-trajectory cap. The parent has not been charged for child spend because child budgets were tracked independently. Total cost: 50 ร— per-child budget, which far exceeds the parent's cap. Fix: always roll child spend into the parent's envelope before the child is dispatched, not after it returns. > [!tip] Insight > Agent platforms amplify blast radius. A traditional LLM API: one bad request โ†’ one bad response. A hosted agent platform: one bad task dispatch โ†’ 50 child agents โ†’ 1,000 model calls โ†’ $200 in unaccounted spend, all before the user sees an error. The spend-control and sandboxing SLOs in this module are P0 specifically because the amplification factor makes every latent failure catastrophically larger than it would be in a stateless API. ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** An interviewer asks:
Answer The right unit is the trajectory boundary โ€” the cost of the current user-facing task. Per-model-call enforcement is too fine: a single task issues 20โ€“100 model calls, and a cap that fires per call kills the task prematurely, arbitrarily, and repeatedly. Per-tool-call is too coarse: tools vary from a cheap grep to an expensive sub-agent spawn. The trajectory is the unit the user actually cares about:
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** Design the capability-token scheme for a tool registry on a multi-tenant agent platform. What does a token contain and how does the runner validate it?
Answer A capability token is a short-lived signed credential (HMAC-SHA256 or similar) that contains: (1) tenant ID, (2) tool ID and allowed parameter schema, (3) expiry (e.g., 5 minutes), (4) trajectory ID it was issued for. The agent runner presents the token when invoking a tool; the tool registry validates the signature, checks expiry, and confirms the trajectory ID matches the current session. Tokens are issued by the Trajectory Orchestrator at session start โ€” the agent never sees raw credentials for the underlying tool APIs. Failure mode without this: a prompt-injected agent extracts the raw AWS credentials embedded in a tool and exfiltrates them. With capability tokens, the worst a compromised agent can do is invoke the permitted tools within the current session window.
### โ˜…โ˜…โ˜† _(Anthropic, OpenAI)_ **Q:** Your trajectory store goes down during a P1 incident. What are the three compounding effects, and how does each one extend MTTR?
Answer (1) Incident replay is blocked โ€” the on-call engineer cannot reconstruct the agent
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** A Google interviewer asks:
Answer Yes, with two caveats. The 125ms boot cost is a one-time cost per session โ€” it hits session-start latency, not per-tool or per-step latency. For a session that runs 20+ tool calls over several minutes, 125ms amortizes to noise. The SLO is <2s to first tool call, which leaves 1.875s after the 125ms boot for the orchestrator โ†’ runner โ†’ LLM โ†’ first tool call sequence; that
### โ˜…โ˜…โ˜… _(Meta, Anthropic)_ **Q:** Meta
Answer Trajectory eval for multi-agent systems must be hierarchical. Leaf evals measure end-to-end task success (did the root agent return a useful result?), tool-call correctness (precision/recall on tool selections vs. a golden trajectory), and spend efficiency (useful-work-$ / total-$ where useful-work is measured by a task-success judge). But leaf success can mask intermediate failures: a parent agent that succeeded only because a child agent hit a lucky path. Add intermediate eval: for each sub-agent invocation, record the child
## Further Reading - [Anthropic โ€” Building Effective Agents](https://www.anthropic.com/research/building-effective-agents) Anthropic - [ReAct: Synergizing Reasoning and Acting in Language Models (Yao et al., 2022)](https://arxiv.org/abs/2210.03629) The paper that formalized the observe-think-act loop underpinning every agent on a hosted platform. The trajectory concept in this module maps directly to a ReAct episode. - [Firecracker: Lightweight Virtualization for Serverless Applications (Agache et al., 2020)](https://www.usenix.org/conference/nsdi20/presentation/agache) AWS - [E2B โ€” Secure Open-Source Cloud Runtime for AI Agents](https://e2b.dev/blog/how-we-built-e2b) E2B - [Hamel Husain โ€” Your AI Product Needs Evals](https://hamel.dev/blog/posts/evals/) The practitioner post that reframed eval-first design for the AI engineering generation. The trajectory eval section of this module follows Hamel - [LangSmith โ€” Tracing and Evaluation for LLM Applications](https://docs.smith.langchain.com/) LangSmith ## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Case: Design Gemini" part: "Design Reviews" number: 76 emoji: "๐Ÿ’Ž" subtitle: "Multi-modal frontier serving โ€” TPU stack, 1M-token attention, safety classifier chain" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ’Ž Case: Design Gemini > Multi-modal frontier serving โ€” TPU stack, 1M-token attention, safety classifier chain > [!question] Key Question > 1M-token context is cheap to promise, expensive to serve โ€” here's the bill โ† Case: Design an Agent Platform | โ†’ Case: Design NotebookLM ## Key Insights > [!tip] Insight > Non-obvious SLO choice: separate latency targets by context length. {" "} Most serving systems define a single p99 TTFT. Gemini cannot โ€” the difference between a 1K-token and 1M-token query is three orders of magnitude in prefill compute. A single p99 number would be dominated by the long-context tail and would mask regressions on the short-context path that serves 95%+ of queries. The right design is separate SLO buckets: <32K, 32Kโ€“128K, 128Kโ€“1M. This follows directly from the Google SRE Book (Chapter 4) recommendation to define SLOs for distinct user populations and workload classes, not aggregate service behavior. > [!tip] Insight > Assumptions in the above table: All compute efficiency figures are (community estimate) derived from public TPU v5p specs and measured API latencies. Google's actual cost structure is proprietary. The implied margin is a floor โ€” it does not include networking, cooling, datacenter amortization, or team costs. The table's value is the relative magnitudes and sensitivities, not the absolute numbers. > [!tip] Insight > Cross-study connections. This module connects directly to the{" "} NotebookLM case study {" "} (long-context retrieval augmentation, same 1M-token window applied to document Q&A) and the{" "} Sora case study {" "} (multi-modal generation โ€” video tokens as first-class inputs, same patch-grid tokenization math applied to video). If you've studied all three, you can describe Google's multi-modal strategy as a coherent stack: Gemini as the reasoning layer, NotebookLM as the long-context application layer, and the video understanding capability as the sensory input layer. ## Interview Questions ### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** Gemini's 1M-token context window is real but serving it profitably is hard. Derive the minimum prefix-cache hit rate needed so the cost per 1M-token query stays below $10 (use publicly available API pricing as a reference point). What architectural components make or break that number?
Answer Using current public Gemini 2.5 Pro standard pricing as a reference point (Google AI for Developers pricing page, April 2026), prompts above 200K tokens are priced at $2.50 per 1M input tokens and cached input at $0.25 per 1M. That means a 1M-token cold query costs about $2.50 before output tokens โ€” already below $10. The real problem is repeated turns: if a session resends the same 900K-token prefix five times with no caching, you pay about $12.50 in repeated input cost. With a 90% cache hit on that 900K prefix, the repeated-turn input cost becomes roughly 100K uncached ร— $2.50/M + 900K cached ร— $0.25/M = $0.25 + $0.225 = $0.475 per turn. The load-bearing components are therefore: (1) a stable context hash so repeated prefixes actually hit the cache, (2) a serving path that keeps long prefixes warm on the same worker or a recoverable external cache, and (3) admission control so 1M-token sessions do not evict each other. The interview-safe conclusion is that long context is economically viable under current public pricing, but only if the cache hit path is treated as the default path rather than an optimization.
### โ˜…โ˜…โ˜† _(Google, Meta)_ **Q:** A Gemini multi-modal query arrives with a 10-image product catalog (each ~512KB JPEG). Walk through the full serving path, identifying the two highest-latency steps and how you bound them.
Answer Per the Google blog on Gemini image tokenization, each image is converted to roughly 258 tokens by the multimodal encoder (variable based on resolution, but 258 is the documented canonical value for standard inputs). Ten images = ~2,580 image tokens added to the context. The two highest-latency steps are: (1) Image encoding โ€” the SigLIP/ViT encoder processes each image into patch embeddings before the language model sees them. At batch size 1 on TPU v5p, encoding a 512KB JPEG takes on the order of 5โ€“15 ms per image (inferred from ViT-L benchmarks on comparable accelerators); ten images serial = 50โ€“150 ms. Bound this by parallelizing encoding across the 10 images โ€” independent inputs, embarrassingly parallel. At batch 10, total encoding drops to the single-image time (15 ms) plus scheduling overhead. (2) Prefill for the full prompt โ€” 2,580 image tokens + N text tokens must be prefilled on the generation model. At 1K token/ms prefill throughput on H100/TPU equivalent, 3K tokens = ~3 ms prefill โ€” fast. But if the user has a long conversation history in the 1M-context window, the prefill cost dominates (1M tokens / 1K tokens/ms = 1 second, minus any KV cache hits). Bound this with prefix caching on the conversation history and chunked prefill so the image tokens do not block decode slots for other users. The multimodal encoder path must complete before the language model starts prefill โ€” this is the hard dependency. If the encoder is on a separate TPU slice, ensure the embedding tensor is co-located (or transferred via NVLink-equivalent ICI) to avoid a D2D copy penalty.
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** Gemini 2.5 Thinking charges separately for thinking tokens. Design the serving-side token budget enforcer: what does it check, when does it fire, and what happens if the model tries to exceed the budget mid-generation?
Answer The thinking budget is a per-request parameter (e.g., max_thinking_tokens: 8192). The enforcer lives as a generation wrapper around the TPU decoding loop. On each forward pass it maintains a running count of emitted thinking tokens (tokens inside the model's internal reasoning scratchpad, delimited by a special token pair). When the running count reaches the budget cap, the enforcer injects a “stop thinking” control token that signals the model to transition to the output phase. Three checks required: (1) Token classification โ€” thinking tokens use a reserved token range or are wrapped in special delimiters; the enforcer must correctly distinguish thinking tokens from output tokens to avoid counting output against the budget (which would truncate the actual response). (2) Mid-generation preemption โ€” if the model exceeds the budget before completing its reasoning, the enforcer must inject the stop-thinking signal without corrupting the KV cache state; the model must have been trained to handle a budget-exceeded interrupt gracefully. (3) Billing accuracy โ€” thinking tokens consumed must be recorded per-request before the KV cache entry is written, so a node crash after generation but before billing does not silently undercount. The worst failure mode is a classifier bug that mistakes output tokens for thinking tokens and truncates the response when it hits the budget ceiling โ€” this manifests as abruptly cut-off answers that pass safety checks but are incoherent. Detection: monitor response-length distribution; a sudden left-shift (short answers) after a thinking-classifier deploy is the signal.
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** Your team's safety post-classifier has a 2% false-positive rate on medical queries. That means 2% of legitimate doctor-patient research questions are refused. At 50K QPS and 5% medical query share, how many users per hour are wrongly blocked? What is the right architectural fix?
Answer Arithmetic: 50,000 QPS ร— 5% medical share = 2,500 medical QPS. 2% false-positive rate ร— 2,500 = 50 wrong refusals per second. Per hour: 50 ร— 3,600 = 180,000 users per hour wrongly blocked. That is not a rounding error โ€” it is a service-level failure on a user segment that includes healthcare professionals. The right architectural fix has two layers: (1) Calibrated fallback classifier โ€” instead of a single binary classifier, use a three-outcome model: BLOCK, ALLOW, and UNCERTAIN. For UNCERTAIN results (~5% of edge cases), route to a more expensive but more accurate secondary classifier or a human review queue. This reduces the hard false-positive rate at the cost of latency on the uncertain slice, which is acceptable because users who receive UNCERTAIN-routed queries are presumably not in the critical streaming path. (2) Query-type context signal โ€” feed the router's inferred query type (medical, legal, security, code) as a feature to the safety classifier. A query with strong medical intent markers (ICD codes, drug names, clinical terminology) should have a lower false-positive prior, not a higher one. The current failure mode is a context-free classifier that treats “what is the lethal dose of acetaminophen” identically whether it comes from a clinical database API or a user account with 50 prior jailbreak attempts. Personalization of the safety threshold based on trust signals is the correct direction (per Google's SafetySettings API, which already exposes per-category thresholds as a first-class feature).
## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Case: Design NotebookLM" part: "Design Reviews" number: 77 emoji: "๐Ÿ““" subtitle: "Long-context RAG over user docs โ€” source-pinned citations, audio-overview pipeline" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿ““ Case: Design NotebookLM > Long-context RAG over user docs โ€” source-pinned citations, audio-overview pipeline > [!question] Key Question > Upload 50 PDFs, ask one question โ€” which half of the stack wins? โ† Case: Design Gemini | โ†’ Case: Design Sora ## Key Insights > [!tip] Insight > Why citation precision, not accuracy, is the primary SLO. {" "} NotebookLM does not claim to be factually accurate in the world-knowledge sense โ€” it claims to be accurate relative to the sources you uploaded. A user uploading a wrong paper gets wrong citations, and that is correct behavior. The SLO is about fidelity to source, not fidelity to ground truth. This is why the system should never supplement user sources with model training memory, even when sources are sparse โ€” doing so would violate the core contract. > [!tip] Insight > The silence failure mode. When a query asks about something not in the user's sources, the correct behavior is an explicit “not found in your sources” response, not a confident answer from training memory. Measure the rate of out-of-scope answers that cite non-existent paragraphs โ€” this is the most trust-destroying failure because the user cannot detect it without reading the original document. > [!tip] Insight > The KV-cache is the business model. Without prefix caching on static document content, NotebookLM's long-context serving cost would be prohibitive at free-tier scale. The insight is that user-uploaded documents are {'"'}static prefixes{'"'} โ€” they do not change between queries. Any inference engine that supports prefix KV-cache reuse (as Gemini 1.5 does, per Google's published{" "} context caching docs ) turns the 10x{" "} input-token cost reduction into a direct margin improvement for every query after the first in a session. > [!tip] Insight > The silent-wrong-citation failure is the worst. A system outage is visible โ€” users see an error page. A citation that links to a plausible but incorrect paragraph is invisible. The user clicks it, sees related (but wrong) text, and trusts the answer anyway. This is how source-grounded AI systems erode trust: not through obvious failures, but through calibration failures that look correct on the surface. The citation assignment eval is the only defense. ## Interview Questions ### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** NotebookLM offers a free tier with no clear monetization path. Long-context inference over a 200-page PDF is expensive. How does the system serve the free tier profitably, or at least sustainably?
Answer Three interlocking mechanisms keep the free tier viable. First, KV-cache reuse on the static document prefix is the primary lever. Because a user's uploaded sources rarely change between queries, the tokenized document representation can be prefix-cached on the Gemini fleet. Using current public Gemini 2.5 Flash paid pricing as a proxy (April 2026), cached input is 10x cheaper than uncached input: $0.03/M tokens cached vs $0.30/M uncached. A 200-page PDF at ~100K tokens therefore costs roughly $0.03 cold and $0.003 on warm turns before output tokens. Second, quota throttling limits worst-case cost per user: Google's current NotebookLM Help documentation allows up to 50 sources per notebook and up to 500,000 words per source, so the real control surface is query volume and feature gating rather than tiny source caps. Third, free-tier usage likely generates training signal and product-discovery value beyond the marginal serving cost. The structural bet is still freemium: most users ask a few questions, while cache reuse compresses the cost of engaged users who ask many questions over the same notebook.
### โ˜…โ˜…โ˜… _(Google)_ **Q:** A user queries across 10 uploaded PDFs. Gemini's 1M-token context window can fit them all. When should NotebookLM use full-context (all docs in the prompt) vs. RAG (retrieve top-k chunks first)?
Answer The tradeoff is cost vs. recall completeness. Full-context gives the model access to every sentence in every document โ€” ideal for queries requiring synthesis across many non-obvious locations (e.g., “find all instances where authors disagree about X”). Using Gemini 2.5 Flash paid pricing as a public proxy, a 500K-token cold context costs about $0.15 (500K ร— $0.30/M). A RAG path that retrieves top-20 paragraphs (~10K tokens total) costs about $0.003 on the input side โ€” still roughly 50x cheaper. The decision rule should be signal-driven: use a query complexity classifier to route. Narrow factual queries (“what is the author's definition of X?”) route to RAG; synthesis queries (“compare the methodologies across all papers”) route to full-context. Cache state is also a signal: if the user's document set was queried recently and the prefix is likely warm, full-context input cost drops another 10x and the tradeoff swings toward full-context. NotebookLM's architecture (community estimate, per reverse-engineered behavior) appears to lean heavily on full Gemini context for source-grounded synthesis, betting that cache reuse makes this economically viable for engaged users.
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** At Google, you're reviewing the eval spec for NotebookLM's citation correctness. What are the two most important eval dimensions and how do you measure them?
Answer Citation correctness has two distinct failure modes requiring separate evals. The first is source-paragraph entailment: does the cited paragraph actually support the generated claim? Measure with an NLI model over (claim, cited-paragraph) pairs, sampling 5% of production query-answer pairs daily. Target: โ‰ฅ92% entailment rate. The failure mode here is the model making a plausible claim from training memory and hallucinating a source paragraph that doesn't say that. The second dimension is citation assignment: when a claim is supported by source material, is it assigned to the correct document and paragraph among the user's uploaded sources? Mis-assignment is distinct from non-entailment โ€” the system might correctly identify that a claim is supported somewhere, but link it to the wrong paragraph, violating the user's trust in the navigation (clicking a citation should take them to the exact sentence). Measure with a golden query set (100+ hand-annotated Q&A pairs where correct source paragraphs are labeled) run offline on every model update. An LLM judge evaluating citation correctness itself needs calibration against the human-labeled set โ€” Shreya Shankar's EvalGen work (arXiv:2404.12272) shows uncalibrated LLM judges systematically over-report entailment by 8โ€“12 pp on grounding benchmarks.
### โ˜…โ˜…โ˜† _(Google, Anthropic)_ **Q:** The Audio Overview feature generates a two-speaker podcast from user-uploaded documents. What are the two safety failure modes unique to this feature, and how do you architect the mitigation?
Answer Two failure modes are unique to Audio Overview and absent from the text-query path. The first is voice-cloning abuse: a user could upload an audio recording of a real person (e.g., an executive's earnings call transcript with speaker audio) and attempt to get the TTS pipeline to synthesize content in that person's voice. Mitigation: the TTS models must use fixed synthetic voices that are not conditioned on user-uploaded audio. Google's publicly announced Audio Overview uses two fixed synthetic host voices (per Google Labs blog, 2024). The pipeline must include a speaker-identity guard that confirms the synthesis request routes only to pre-approved voice IDs, never to a user-supplied voice embedding. The second failure mode is PII amplification: user-uploaded documents may contain sensitive data (medical records, personal emails, internal financial docs). The two-speaker dialogue script generated from those docs could surface PII in a more legible, memorable form โ€” a podcast version of a medical record is a greater privacy risk than the PDF. Mitigation: run the script through a PII detector before TTS synthesis; redact or paraphrase PII-containing spans before audio generation. Both mitigations should be in-pipeline, not advisory โ€” the audio is not generated if the safety checks fail, with a user-facing error that explains why.
## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Case: Design Sora" part: "Design Reviews" number: 78 emoji: "๐ŸŽฌ" subtitle: "Text-to-video at scale โ€” diffusion transformer GPU economics, safety on generative video" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐ŸŽฌ Case: Design Sora > Text-to-video at scale โ€” diffusion transformer GPU economics, safety on generative video > [!question] Key Question > A 10-second clip costs more GPU-hours than your laptop's lifetime โ† Case: Design NotebookLM | โ†’ Case: Design Character.ai ## Key Insights > [!tip] Insight > Non-obvious SLO: track queue wait and denoising latency separately. {" "} A p95 end-to-end SLO miss of 120 s can mean either the queue backed up (capacity problem โ€” add GPUs or shed load) or individual denoising runs are slow (GPU health problem โ€” inspect node metrics). Collapsing them into one number sends you to the wrong remediation. Additionally, track first intermediate frame latency as a separate SLO (target: <20 s) โ€” even if the final clip takes 90 s, a low-resolution preview after step 10 dramatically reduces perceived wait time. > [!tip] Insight > Video golden sets need 4ร— more clips than image sets. {" "} Inter-rater agreement for temporal coherence (~60%) is lower than for image aesthetic quality (~70%), which is already lower than text quality (~85%). At 60% agreement, a set of 200 clips gives a 95% confidence interval of roughly ยฑ7 percentage points on a binary coherence metric โ€” borderline usable. Target 500+ clips for a production-grade video quality eval. Budget proportionally. > [!tip] Insight > Three deep dives, not four โ€” latency budget was the constraint. {" "} Priority queue design for video follows the same three-lane weighted-fair-share pattern as image-gen (see{" "} Image-Gen Design Review, Deep Dive A ) with one addition: jobs have a “generation budget” in GPU-seconds at admit time so the scheduler can estimate when capacity will free up. The cost model comparison ( SLO vs Cost tradeoffs ) covers the queue scheduling math in more depth. > [!tip] Insight > Detection-window sensitivity dominates incident cost for video. {" "} The 15ร— cost delta between a 2-minute and 30-minute detection window (from the NaN explosion scenario above) holds across all three incident types. Invest in alarm sensitivity โ€” a per-tier queue-depth alarm that fires within 2 minutes of a breach, a NaN-rate alarm that fires within 1 minute โ€” before investing in faster incident response. The cheapest hour is the one you catch in the first 2 minutes. ## Interview Questions ### โ˜…โ˜…โ˜… _(OpenAI, Google)_ **Q:** A Sora generation fails at step 48 of 50, consuming almost full GPU budget with no deliverable. Walk through two structural mitigations and quantify the expected wasted GPU-seconds saved by each.
Answer Mitigation 1 โ€” text-level pre-filter: adversarial prompts are the dominant source of late-stage failures because they tend to trigger policy violations discovered only after generation completes. A fast text classifier (sub-100 ms, CPU-only) that rejects known-bad patterns before GPU allocation eliminates the spend entirely for that class. At a 2% adversarial traffic rate and 50 QPS, the pre-filter saves ~1 QPS ร— 50 steps ร— ~2.4 GPU-s/step = ~120 GPU-seconds per second of traffic โ€” roughly $7/min in saved H100 time at $3.50/hr. Mitigation 2 โ€” step-level checkpointing: saves the intermediate latent tensor every 10 steps. A failure at step 48 restarts from step 40, costing only 8 steps instead of 48 โ€” an 83% reduction in wasted compute for that job. At a GPU fault rate of 0.1% per generation and 20 QPS, checkpointing saves roughly 0.001 ร— 20 ร— (48โˆ’8) steps ร— 2.4 GPU-s/step โ‰ˆ 1.9 GPU-seconds per second of traffic โ€” a smaller saving than pre-filtering, but critical during hardware instability events when fault rates spike to 1โ€“5%.
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** Why is a 120-second p99 generation latency for Sora not directly comparable to a 120-second p99 for a long-document LLM response, and how should you design the UX and SLO differently?
Answer An LLM streaming a 120-second response is delivering tokens continuously โ€” the user sees output within the first 300โ€“500 ms and gets progressive value throughout. Sora produces nothing until all 50 denoising steps complete: the user waits 120 seconds on a progress bar before seeing any output. This makes Sora psychologically closer to a file download than a chat response, which has two architectural implications. First, SLO design: track queue wait and denoising latency separately. A 120 s total that is 5 s queue + 115 s denoising is very different from 90 s queue + 30 s denoising โ€” the latter signals a capacity crisis. Second, UX design: show a real-time denoising preview (a lower-resolution or coarser-step intermediate frame) every 10 steps so users get feedback that work is progressing. This is similar to how DALL-E shows a blurry preview before the final image. The SLO for the preview stream (e.g., first intermediate frame within 15 s) should be tracked separately from the final-clip SLO, because a broken preview pipeline is a user-experience failure even when the final clip succeeds.
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** Design the safety stack for a service that generates realistic human faces in video. What are the three hardest failure modes, and how do you detect each before a public incident?
Answer The three hardest failure modes for face-in-video generation: (1) Celebrity likeness generation โ€” a prompt that does not mention a celebrity by name but uses sufficiently specific descriptors to produce a recognizable likeness. Text-level pre-filters miss this because the violation is in the output, not the input. Detection: a frame-level celebrity-likeness classifier on every generated frame, with a known-celebrities embedding index (perceptual hash + face embedding) built from opt-out databases and updated weekly. Alert threshold: any frame scoring above 0.85 cosine similarity to an indexed celebrity face triggers hold-and-review before delivery. (2) CSAM generation โ€” even non-explicit prompts can produce frames involving minors in ambiguous contexts when combined with adversarial suffixes. Detection: a dedicated CSAM classifier running on every frame as a mandatory post-filter gate โ€” this is non-negotiable, and its false-negative rate must be tracked on a red-team golden set updated monthly. (3) Non-consensual intimate imagery (NCII) โ€” realistic face-swap or de-clothing artifacts can emerge from benign-looking prompts. Detection: a multi-class intimacy classifier that separately scores (a) nudity presence and (b) face-in-frame, and blocks any clip where both are above threshold. Each classifier runs in parallel on sampled frames (every 5th frame for efficiency) with a final pass on the first and last frame of every clip regardless.
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** The Sora team proposes shipping a free-tier that allows unlimited generations but enforces a 480p resolution cap and a 5-second duration cap. As the infra lead, what do you push back on, and what do you add?
Answer Push back on “unlimited generations.” Even at 480p and 5 s, each generation runs the full 50-step DiT denoising loop โ€” the cost reduction from resolution and duration limits is roughly 4ร— (resolution) ร— 2ร— (duration) = 8ร— cheaper than a full 1080p/10s clip, but still on the order of $0.10โ€“0.20 per generation (community estimate). At meaningful free-tier scale (1M users ร— 5 generations/day = 5M generations/day), that is $500Kโ€“$1M/day in raw GPU cost with zero revenue. The right answer is a daily generation credit, not unlimited. What to add: (1) Per-IP and per-account burst limits enforced at the API gateway to prevent batch abuse. (2) A prompt-complexity classifier that estimates generation cost (high-motion scenes are harder than static landscapes) and charges more credits for complex prompts โ€” this caps the adversarial case of a free-tier user maximizing GPU burn with complex prompts. (3) A queue priority tier below paid users: free-tier generations get best-effort throughput and are the first lane shed under capacity pressure โ€” explicit in the ToS so users do not treat free-tier latency as an SLO.
## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Case: Design Character.ai" part: "Design Reviews" number: 79 emoji: "๐ŸŽญ" subtitle: "Consumer LLM at scale โ€” MQA, int8, trained-from-scratch, sub-$1/user/month cost floor" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐ŸŽญ Case: Design Character.ai > Consumer LLM at scale โ€” MQA, int8, trained-from-scratch, sub-$1/user/month cost floor > [!question] Key Question > 20B tokens served per day on a consumer-priced subscription โ€” how? โ† Case: Design Sora | โ†’ Compare: RAG Systems ## Key Insights > [!tip] Insight > The looser p99 TTFT is a cost-engineering instrument, not a product shortcut. {" "} A 2,500 ms p99 vs. ChatGPT Plus's{" "} ~800 ms p95 sounds like a worse product. But it directly enables larger batch sizes in the serving engine. At p99 2,500 ms, the scheduler can accumulate requests for up to 1.5 additional seconds before dispatching a batch โ€” increasing average batch size from ~16 to ~48 at 50K QPS. Throughput scales approximately linearly with batch size in the decode phase (per vLLM continuous batching benchmarks, arXiv:2309.06180). The cost per token drops proportionally. This single SLO choice is worth roughly 3x in effective GPU utilization compared to a ChatGPT-Plus-equivalent SLO. Character.ai's consumer positioning enables a cost structure that a premium assistant product cannot access. > [!tip] Insight > Why cache hit rate belongs in the eval harness. At 50K QPS and a{" "} 60% cache hit rate , only 20K QPS reaches the GPU for full prefill computation. If a deploy drops hit rate from 60% to 30%, effective prefill QPS jumps from 20K to 35K โ€” a 75% increase in GPU prefill load that is not visible in latency metrics during off-peak but blows the cost SLO by end of month. The eval harness catches it in CI before the deploy lands. > [!tip] Insight > Original research caveat. Character.ai has not published per-message cost figures. The table above is a reverse-engineered estimate from the publicly disclosed fleet size (~3,000 GPUs, per the cost-engineering blog), published subscription pricing ($10/mo), and reported DAU metrics. All derived values are labeled accordingly. The exercise is useful for interviews because it demonstrates reasoning from first principles to a defensible cost model โ€” not because the exact numbers are correct. ## Code Examples ```python import torch import torch.nn.functional as F def mha_attention(x, Wq, Wk, Wv, num_heads, head_dim): """Standard multi-head attention โ€” separate K, V per head.""" B, T, D = x.shape # Project: each of num_heads heads gets its own K and V Q = (x @ Wq).view(B, T, num_heads, head_dim).transpose(1, 2) # (B, H, T, d) K = (x @ Wk).view(B, T, num_heads, head_dim).transpose(1, 2) # (B, H, T, d) V = (x @ Wv).view(B, T, num_heads, head_dim).transpose(1, 2) # (B, H, T, d) # KV cache memory: B * num_heads * T * head_dim * 2 bytes * 2 (K+V) scale = head_dim ** -0.5 attn = F.softmax(Q @ K.transpose(-2, -1) * scale, dim=-1) return (attn @ V).transpose(1, 2).reshape(B, T, -1) def mqa_attention(x, Wq, Wk_shared, Wv_shared, num_heads, head_dim): """Multi-query attention โ€” single shared K, V for all query heads.""" B, T, D = x.shape Q = (x @ Wq).view(B, T, num_heads, head_dim).transpose(1, 2) # (B, H, T, d) # K and V are shared: only 1 head's worth of K and V stored K = (x @ Wk_shared).view(B, T, 1, head_dim).transpose(1, 2) # (B, 1, T, d) V = (x @ Wv_shared).view(B, T, 1, head_dim).transpose(1, 2) # (B, 1, T, d) # KV cache memory: B * 1 * T * head_dim * 2 bytes * 2 (K+V) โ€” 32x smaller! K = K.expand(-1, num_heads, -1, -1) # broadcast to all query heads at attention time V = V.expand(-1, num_heads, -1, -1) scale = head_dim ** -0.5 attn = F.softmax(Q @ K.transpose(-2, -1) * scale, dim=-1) return (attn @ V).transpose(1, 2).reshape(B, T, -1) ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** Character.ai serves millions of users chatting with the same popular character. Describe how you would architect prefix caching to exploit this, what the cache hit rate ceiling is, and what breaks the cache.
Answer A popular character's personality prompt is 4โ€“16K tokens shared across potentially millions of simultaneous conversations. The key insight is that the shared personality prefix is reusable across users, while the per-user dialogue suffix is not. Architecturally, that means: prefill the shared prefix once, hash it, keep the KV cache resident on the sticky serving shard, and route subsequent turns for that dialogue back to the same shard. The token-level ceiling for savings depends on how much of a typical request is shared prefix versus user-specific suffix: with a 4K shared prefix and a 2K user suffix, the shared fraction is 4/(4+2) = 67%. Character.AI's June 2024 inference post reports a much higher 95% fleet-level cache rate because they also reuse inter-turn dialogue prefixes with longest-prefix matching, not just the static character preamble. What breaks the cache: (1) personality prompt version bumps โ€” even a whitespace change invalidates the prefix hash; treat prompt text as a deployment artifact. (2) Loss of shard affinity โ€” once dialogue turns stop landing on the same server, the warm KV state becomes useless. (3) Checkpoint or quantization changes โ€” a serving image update that changes KV layout requires invalidating old cache entries. The important interview move is distinguishing token-level shared-prefix savings from fleet-level query cache rate; they are related, but not the same metric.
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** Multi-query attention (MQA) is cited in the Character.ai cost blog as a key memory-saving technique. Explain the mechanism, quantify the KV cache memory reduction versus multi-head attention, and describe what you give up.
Answer Standard multi-head attention (MHA) keeps separate K and V tensors for every head. For one transformer layer with 32 heads, 4,096 tokens, 128 dims/head, and fp16 KV, the cache size is 2 (K+V) ร— 32 ร— 4,096 ร— 128 ร— 2 bytes = 67,108,864 bytes, or 64 MiB per layer. MQA (Shazeer, 2019, arXiv:1911.02150) shares K and V across heads, so the same layer drops to 2 MiB โ€” a 32x reduction versus MHA for this geometry. Character.AI's June 2024 inference post says they use MQA in all attention layers and combine it with hybrid attention horizons plus cross-layer KV sharing to reduce KV-cache size by more than 20x without quality regression; that is the public source you should cite rather than reverse-engineering the whole fleet. What you give up is representational flexibility: GQA ablations (Ainslie et al., 2023, arXiv:2305.13245) show that more aggressive KV sharing can trade away some reasoning quality versus full MHA. The interview-safe framing is: MQA is a training-time architectural choice that buys huge memory savings, but you only take it when your product economics care more about batchable long-dialogue serving than squeezing out every last bit of head specialization.
### โ˜…โ˜…โ˜… _(Google)_ **Q:** You are a Google DeepMind interviewer. Character.ai was acquihired by Google in 2024. The Character.ai team proposes to migrate the serving infrastructure to Google's TPU v5e fleet. What are the top three integration risks, and how do you mitigate each?
Answer Risk 1: int8 quantization incompatibility. Character.ai's model uses int8 attention matmul and int8 KV cache calibrated for NVIDIA A100/H100 tensor core layouts. TPU v5e uses bfloat16 as its native compute type with limited int8 support in the matrix multiply units. The migration requires either (a) re-calibrating the model in bfloat16 โ€” which likely recovers the ~1โ€“2% quality gap sacrificed for int8 on GPU but costs more memory and thus requires more TPU chips โ€” or (b) implementing custom int8 kernels in JAX/XLA for the specific attention pattern. Risk: either path takes 3โ€“6 months and carries regression risk on persona consistency. Mitigation: run A/B traffic on GPU vs. TPU with identical prompts and track the persona-judge score daily before cutting over more than 5% of traffic. Risk 2: prefix cache architecture mismatch. vLLM-style prefix caching relies on GPU HBM being addressable as a hash table keyed on token hash. TPU memory management under JAX/XLA is less flexible โ€” tensor shapes must be static at compile time. Replicating the dynamic prefix caching behavior requires engineering a custom TPU serving layer (similar to what Google did for PaLM serving). This is solvable but not trivial; budget 6+ months. Risk 3: character-to-shard affinity routing. Character.ai routes conversations to the GPU shard holding the warm KV states for the target character. Google's TPU Borg scheduler is optimized for batch training, not request-affinity routing at LLM serving latency. A custom Borg job configuration or a sidecar routing layer is required. If the routing layer is not ready at migration time, cache hit rate drops to near zero and GPU-equivalent cost increases 2โ€“3x, blowing the economics of the migration.
### โ˜…โ˜…โ˜† _(Meta, Anthropic)_ **Q:** Character.ai must enforce safety for minors at consumer scale. A naive keyword filter fails; a full LLM safety judge per message is too slow. Design a tiered safety architecture that hits p99 <2,500 ms TTFT while protecting under-18 users.
Answer The architecture has three tiers, each gating the next more expensive tier. Tier 1 โ€” sub-millisecond lexical + embedding gate: a pre-trained embedding classifier (BERT-small equivalent, ~12M params, runs in <2 ms on CPU) scores the user message for obvious harm signals and age-specific risk indicators. Hit rate on clear-positive blocks: ~40% of all policy-violating content. Cost: essentially free per request. Tier 2 โ€” 50 ms risk classifier: a fine-tuned 125M-param model specialized for character.ai's taxonomy (NSFW roleplay, self-harm, CSAM adjacent). Runs on GPU in a dedicated safety cluster on the 60% of messages that pass Tier 1. This classifier was trained specifically on roleplay context โ€” generic classifiers trained on social media text dramatically under-perform on fictional framing (e.g., “my character asks how to...” bypasses most off-the-shelf models). Hit rate on remaining violations: ~85%. Tier 3 โ€” post-generation LLM judge: runs after the character model generates a response, on the 5โ€“10% of outputs that produced a high-risk activation in the post-processing hook. This judge has up to 500 ms budget. Age-gating layer: the gateway attaches an age-tier flag (inferred at account creation) to every request. For accounts flagged as under-18 or age-unverified, the Tier 2 classifier threshold is tightened (lower logit threshold for blocking), and the Tier 3 judge runs on a larger sample (20% vs. 5% for adult accounts). The core engineering insight: the expensive safety compute is not flat across all users โ€” it is concentrated on the highest-risk (under-18, unverified) user segment. By tiering the compute and routing only the high-risk segment to the expensive judge, you achieve comparable safety outcomes at 30โ€“40% of the flat-cost alternative.
## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Compare: RAG Systems" part: "Design Reviews" number: 80 emoji: "๐Ÿงฎ" subtitle: "Perplexity vs NotebookLM vs ChatGPT-search vs Phind โ€” retriever, grounding, citation side-by-side" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿงฎ Compare: RAG Systems > Perplexity vs NotebookLM vs ChatGPT-search vs Phind โ€” retriever, grounding, citation side-by-side > [!question] Key Question > Same question, four systems, four answers โ€” whose retriever wins? โ† Case: Design Character.ai | โ†’ Compare: SLO โ†” Cost ## Key Insights > [!tip] Insight > The hierarchy interviewers test. Citation precision is the “trust surface” โ€” users experience the system through citations, not raw text. Groundedness is the “silent killer” โ€” it degrades without any UI signal until user satisfaction collapses. Freshness is the “loudest failure” โ€” users notice immediately. Rank your attention in that order. > [!tip] Insight > Interview trap. Interviewers at Google frequently ask “which system has the best grounding?” expecting you to say Perplexity. The correct answer is NotebookLM โ€” its controlled corpus and explicit document-fidelity objective yield lower estimated hallucination rates ( ~2โ€“4%) than Perplexity (~5โ€“8%) on document-answerable questions. But NotebookLM has no freshness, so the question is under-specified. Always ask: “On which query class?” > [!tip] Insight > The pattern. All four retrievers reflect their corpus constraints. Perplexity owns the corpus โ†’ controls freshness and chunking. NotebookLM's corpus is user-defined โ†’ small enough to skip chunking. ChatGPT outsources the corpus โ†’ trades control for scale. Phind narrows the corpus โ†’ trades breadth for depth. In a design interview, the retriever choice is the first question: “What corpus are you serving? Who controls it? What's the freshness requirement?” > [!tip] Insight > ChatGPT Search's single point of failure. Bing API downtime is not a degraded state for ChatGPT Search โ€” it is a total retrieval failure. Perplexity can degrade to its vector index if the live-fetch path fails. NotebookLM can brute-force search if Matching Engine degrades. ChatGPT has no fallback corpus. This is the most important architectural difference in the comparison, and Google interviewers regularly probe it. ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** You're designing the eval harness for a new RAG product that competes with Perplexity and NotebookLM. You have 2,000 human-labeled examples. How do you allocate them across eval axes, and what does your offline-to-online correlation strategy look like?
Answer Allocate by axis risk, not evenly. Suggested split: 600 examples for citation precision (the trust metric โ€” wrong citations destroy the product immediately), 500 for groundedness (LLM-drifts-to-memory is invisible until measured), 400 for freshness accuracy (freshness-sensitive query cohort only), 300 for recall@K (retrieval coverage on head vs. tail queries), 200 for refusal/disclosure behavior on low-evidence queries. Offline-to-online correlation: instrument a 5% production sample for each axis using the same eval logic โ€” track the online-offline gap monthly. If offline groundedness says 92% but online thumbs-down on factual queries says 15%, the gap is real and the eval is not measuring what users experience. Calibrate LLM judges quarterly against a human-labeled subsample (Shankar et al., 2404.12272).
### โ˜…โ˜…โ˜… _(Google, OpenAI)_ **Q:** NotebookLM uses Gemini 1.5 Pro with 1M-token context instead of a traditional chunked RAG pipeline. When does this architectural choice hurt, and how would you fix it?
Answer It hurts in three scenarios: (1) Cost at scale โ€” a 128K-context Gemini call costs significantly more than passing top-5 chunks to a smaller model. At 10K QPS, the per-query cost difference compounds to millions per month. (2) Latency ceiling โ€” long-context inference latency scales roughly linearly with context length; at 500K tokens, TTFT can exceed 5s even with KV cache. (3) Needle-in-haystack degradation โ€” Gemini's attention is not uniformly strong across 1M tokens; claims from the middle of a large document are under-attended (per Kamradt's NIAH benchmark). Fix: introduce a two-stage retrieval path โ€” semantic search retrieves the top 20 passages, Gemini synthesizes over those 20, keeping context under 50K tokens while preserving the “no explicit re-ranker” property. This cuts cost ~5x at a small quality cost on cross-document synthesis tasks.
### โ˜…โ˜…โ˜† _(OpenAI, Anthropic)_ **Q:** Phind's citation rate is lower than Perplexity's on code queries โ€” instead of citing every sentence, it cites at the function level. A product manager wants Phind-style citations. How do you defend or reject this?
Answer Defend if the content is primarily code, reject if it is primarily prose. The reason: sentence-level citations for code are semantically wrong โ€” a single function spans many sentences and the citation unit is the function, not the sentence. Phind's function-level citations match developer mental models (I want to see which package/file this pattern came from, not which line). Conversely, for prose claims about APIs or behavior, sentence-level is more precise and catches grounding failures at finer granularity. The architectural choice: add a query-type classifier that routes code-heavy queries to function-level citation mode and prose queries to sentence-level. Eval separately โ€” citation precision on code queries and citation precision on prose queries should have separate thresholds.
### โ˜…โ˜…โ˜… _(Google, Meta)_ **Q:** A senior interviewer at Google asks: 'Vertex AI Matching Engine vs. HNSW-backed self-hosted ANN โ€” which would you choose for a 50B-passage production RAG system, and why?'
Answer Vertex AI Matching Engine for a team without dedicated ANN infrastructure expertise; self-hosted HNSW (via Weaviate, Vespa, or Milvus) for a team with retrieval engineers and a need for custom scoring. Trade-offs: Vertex offers managed scaling, SLA-backed availability, and native Google Cloud IAM integration โ€” reducing operational burden but limiting control over the ANN graph construction, quantization settings, and filtering logic. Self-hosted HNSW gives you control over ef_construction, M (max connections per node), and hybrid sparse-dense scoring โ€” critical for retrieval systems that need query-time filtering (e.g., filter by domain, date range, or language) without post-filter recall collapse. At 50B passages, index sharding becomes the primary design problem regardless of backend โ€” plan for 20โ€“50 shards with a scatter-gather query fan-out. The deciding factor is query-time filter complexity: if you need more than 2โ€“3 filter dimensions at ANN time, self-hosted Vespa or Weaviate with native filter support outperforms Vertex's post-filter approach by 30โ€“60% recall at the same latency budget (per Weaviate benchmark, 2023).
## Further Reading - [Perplexity Engineering Blog โ€” How Perplexity Builds Its Products](https://www.perplexity.ai/hub/blog) Primary source for Perplexity's retrieval architecture, freshness design, and citation strategy. The most candid engineering disclosure from any answer engine. - [Google NotebookLM โ€” Product Changelog & Architecture Notes](https://notebooklm.google.com/) Product-level documentation for NotebookLM's Gemini 1.5 Pro long-context approach. Pair with Google I/O 2024 talks on Vertex AI Matching Engine. - [Phind Engineering Blog โ€” How We Built a Code Search Engine](https://www.phind.com/blog) Phind's description of their code-specialized retrieval pipeline, domain-weighted re-ranking, and function-level citation design. - [RAGAS: Automated Evaluation of Retrieval Augmented Generation (Es et al., 2023)](https://arxiv.org/abs/2309.15217) The evaluation framework for RAG systems โ€” faithfulness, answer relevance, context precision, context recall. The eval metrics used in the cross-system comparison in this module are grounded in RAGAS. - [Lilian Weng โ€” Retrieval-Augmented Generation for LLMs](https://lilianweng.github.io/posts/2023-10-02-rag/) The canonical survey of RAG architectures โ€” covers bi-encoders, cross-encoders, fusion-in-decoder, and long-context approaches. Essential background for defending any retrieval design choice. - [Dense Passage Retrieval for Open-Domain QA (Karpukhin et al., 2020)](https://arxiv.org/abs/2004.04906) The DPR paper that defined the dual-encoder retrieval baseline. Understanding why DPR works is prerequisite to understanding why every system here extends or departs from it. - [Shreya Shankar โ€” Who Validates the Validators? Towards LLM-Assisted Evaluation](https://arxiv.org/abs/2405.03600) The foundational paper for cross-system eval design โ€” explains why LLM-judge calibration is not optional and how to measure judge-to-human agreement across RAG eval axes. ## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Compare: SLO โ†” Cost" part: "Design Reviews" number: 81 emoji: "โš–๏ธ" subtitle: "Interactive sensitivity โ€” slide p99, watch GPU count, $/req, and cache hit-rate move together" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # โš–๏ธ Compare: SLO โ†” Cost > Interactive sensitivity โ€” slide p99, watch GPU count, $/req, and cache hit-rate move together > [!question] Key Question > Cut p99 latency in half โ€” how much more expensive does it get? โ† Compare: RAG Systems | โ†’ Compare: Failure-Mode Taxonomy ## Key Insights > [!tip] Insight > Measure the right thing. Gil Tene's “How NOT to Measure Latency” (QCon 2015) makes the point explicitly: coordinated omission in latency benchmarks causes p99 to look like p50. If your load generator doesn't account for back-pressure, every latency histogram you publish is a lie. The fix is HDR histograms with coordinated-omission correction โ€” the standard in production SLO tooling since circa 2016. > [!tip] Insight > The cross-system comparison. The three sandboxes reveal the cost-SLO slope difference: consumer chat has a moderate slope (high baseline cache saves most from cache hits), search/RAG has a steeper slope (low baseline cache, bigger cache investment payoff), and image gen has a near-vertical p99 slope (no cache benefit, pure latency-cost tradeoff). Designing across all three in a single interview shows range โ€” most candidates only know the chat model. > [!tip] Insight > Interviewer trap. “Our p95 is 500 ms, so p99 should be around 600 ms.” This is only true for near-Gaussian distributions. LLM serving latency is heavy-tailed due to variable output length and prefill interference. In practice, p99 is often 3–8ร— p95 for serving workloads with long-context requests in the batch. Always ask for the histogram, not the point estimate. > [!tip] Insight > The cache-hit cost curve bends at 60%. Below 60% cache hit, each 10 pp increase saves roughly linearly in GPU costs. Above 60%, the marginal gain starts to diminish because you're already deflecting most of the cheaply cacheable traffic โ€” the remaining misses are long-tail queries with inherently low reuse. The investment threshold for semantic caching infrastructure is when your query distribution has identifiable clusters (FAQ, support topics, similar intents). If your query distribution is uniform (open-ended chat, creative writing), semantic cache ROI is poor. ## Code Examples ```python import time def compute_burn_rate( error_count: int, # errors in window total_requests: int, # requests in window slo_target: float, # e.g. 0.999 for 99.9% window_seconds: int, # observation window (e.g. 3600 = 1h) budget_seconds: int = 2_592_000, # 30-day month ) -> float: """ Burn rate > 1.0 means budget is draining faster than it refills. Burn rate > 14.4 means the full monthly budget is exhausted in 2 days. Matches the multi-window alerting scheme from the Google SRE Workbook. """ error_rate = error_count / max(total_requests, 1) allowed_error_rate = 1 - slo_target # 0.001 for 99.9% burn_rate = error_rate / allowed_error_rate return burn_rate # Example: 50 errors in 10k requests over 1h, 99.9% SLO rate = compute_burn_rate(50, 10_000, slo_target=0.999, window_seconds=3600) print(f"Burn rate: {rate:.2f}x") # 5.00x โ€” page immediately ``` ```python import math def gpu_cost_after_slo_tightening( baseline_gpus: int, baseline_p99_ms: float, target_p99_ms: float, gpu_hourly_usd: float, hours_per_month: float = 730.0, ) -> dict: """ Estimate GPU fleet delta when tightening p99 latency SLO. Uses the sqrt(latency) empirical exponent from the transition regime between weight-bound and KV-cache-bound decode. Cite: Pope et al. 2022 (PaLM inference) + SloCostSandbox empirical fit. """ latency_ratio = baseline_p99_ms / target_p99_ms gpu_scale_factor = math.sqrt(latency_ratio) new_gpus = math.ceil(baseline_gpus * gpu_scale_factor) baseline_monthly = baseline_gpus * gpu_hourly_usd * hours_per_month new_monthly = new_gpus * gpu_hourly_usd * hours_per_month return { "baseline_gpus": baseline_gpus, "new_gpus": new_gpus, "gpu_scale_factor": round(gpu_scale_factor, 3), "baseline_monthly_usd": round(baseline_monthly, 0), "new_monthly_usd": round(new_monthly, 0), "delta_monthly_usd": round(new_monthly - baseline_monthly, 0), } # Consumer chat: 5,000 H100s at $3.50/hr, p99: 3000ms -> 1500ms result = gpu_cost_after_slo_tightening(5000, 3000, 1500, 3.50) print(result) # {'baseline_gpus': 5000, 'new_gpus': 7072, 'gpu_scale_factor': 1.414, # 'baseline_monthly_usd': 12775000.0, 'new_monthly_usd': 18073040.0, # 'delta_monthly_usd': 5298040.0} # Cutting p99 in half costs +$5.3M/month on a $12.8M baseline โ€” +41%. ``` ```python def mm1_wait_factor(utilization: float) -> float: """ Mean queue wait time as a multiple of mean service time. M/M/1 queue formula: rho / (1 - rho). Diverges as utilization -> 1.0. """ assert 0 < utilization < 1.0, "Utilization must be in (0, 1)" return utilization / (1 - utilization) for rho in [0.5, 0.6, 0.7, 0.8, 0.9, 0.95]: print(f" ฯ={rho:.2f} wait_factor={mm1_wait_factor(rho):.2f}x") ``` ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** An interviewer asks:
Answer The โˆš(latency) batch-size rule: halving p99 latency forces batch size to shrink by roughly โˆš2 โ‰ˆ 1.41ร—, so throughput drops by the same factor. To sustain the same QPS, you need โˆš2 more GPUs โ€” approximately 41% capacity increase. For a fleet of 5,000 H100s at $3.50/hr: baseline monthly burn = 5,000 ร— $3.50 ร— 730 = $12.775M. After SLO tightening: 5,000 ร— 1.41 ร— $3.50 ร— 730 โ‰ˆ $18.01M โ€” a $5.24M/month increment, or ~41%. The non-obvious piece: the โˆš exponent comes from the relationship between GPU decode throughput and batch-level memory bandwidth saturation; it is not a linear relationship. Cite the memory-bandwidth-bound decode argument from Pope et al. 2022 (PaLM inference paper) for credibility.
### โ˜…โ˜…โ˜… _(OpenAI, Meta)_ **Q:** Your cache hit rate drops from 55% to 20% overnight. How does that change your GPU fleet sizing, and what caused it?
Answer Effective QPS hitting the GPU path = total QPS ร— (1 โˆ’ cache hit %). At 55% hit: effective QPS = 0.45 ร— total. At 20%: effective QPS = 0.80 ร— total. Ratio = 0.80 / 0.45 โ‰ˆ 1.78ร—, so you need ~78% more GPUs to sustain the same p99 SLO. Root causes: (1) system prompt format changed, busting prefix cache keys; (2) a new feature added personalization tokens at the start of the prompt (prefix keys now per-user, not per-product); (3) a rollout changed the prompt template hash; (4) semantic cache TTL expired or was flushed. The correct first diagnostic step is plot cache-key distribution โ€” if cache hit is spreading across 10ร— more unique keys, it is a prefix-key churn event, not a traffic spike.
### โ˜…โ˜…โ˜… _(Google, Anthropic)_ **Q:** At 80% GPU utilization, p99 latency is 2.2ร— p50. At 50% utilization, it is 1.3ร—. Why, and what is the threshold you should design around?
Answer Queuing theory: at utilization ฯ, mean wait time in an M/M/1 queue scales as ฯ / (1 โˆ’ ฯ). At ฯ = 0.8: factor = 0.8 / 0.2 = 4. At ฯ = 0.5: factor = 0.5 / 0.5 = 1. The tail (p99) is dominated by queuing wait, not service time. The empirical design threshold is ฯ โ‰ค 0.7 for serving workloads where p99 โ‰ค 2ร— p50 is the SLO; above 70%, p99 climbs super-linearly and any burst crosses SLO. Google SRE book codifies this as “error budget consumption accelerates non-linearly above 70% utilization” โ€” it is not an opinion, it is the M/M/1 formula.
### โ˜…โ˜…โ˜† _(OpenAI, Google)_ **Q:** You have a system with p99 = 120 s and high variance (image generation). The product team wants a p99 SLA commitment. How do you price and architect it?
Answer High-variance workloads like image/video gen are fundamentally different from chat: the distribution is multi-modal (fast 30s generations vs. slow 180s for complex scenes). Steps: (1) Instrument the full empirical distribution, not just mean. (2) Offer the SLA on a percentile the system can actually hold โ€” p95 at 150 s is defensible; p99 at 120 s probably requires 2ร— GPU buffer. (3) Price the SLA tier to cover the buffer: if p99 requires 40% more fleet headroom, the guaranteed tier price must cover the cost difference. (4) For Sora-class workloads, the cost-optimal architecture separates fast and slow jobs (latency disaggregation): fast jobs run on a smaller dedicated pool, slow jobs fill capacity gaps. Without job-class routing, slow jobs block the fast pool and SLO breaches are correlated.
## Further Reading - [Gil Tene โ€” How NOT to Measure Latency (QCon 2015)](https://www.youtube.com/watch?v=lJ8ydIuPFeU) The canonical talk on why averages and even p95 lie, and why p99/p99.9 are the only metrics that capture the user's experience. The HDR histogram argument is mandatory background for SLO design. - [Amazon DynamoDB โ€” Dynamo: Amazon's Highly Available Key-value Store (DeCandia et al., SOSP 2007)](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf) The paper that defined SLO-driven design at scale. Section 4 on the latency-at-p99.9 requirement and its architectural implications is the playbook this module derives from. - [Google SRE Book โ€” Chapter 20: Load Balancing at the Frontend](https://sre.google/sre-book/load-balancing-frontend/) The M/M/1 queueing argument and the 70% utilization cap are made explicit here. The error-budget math in the SLO chapter pairs with this module's queueing deep dive. - [Lilian Weng โ€” Large Transformer Model Inference Optimization](https://lilianweng.github.io/posts/2023-01-10-inference-optimization/) The best single reference for how batch size, memory bandwidth, and latency interact at the hardware level โ€” the physical grounding for the โˆš(latency) derivation. - [Pope et al. โ€” Efficiently Scaling Transformer Inference (Google, 2022)](https://arxiv.org/abs/2211.05100) First-principles analysis of memory bandwidth vs. compute bottlenecks in large model serving. The paper that grounds the batch-size/latency tradeoff in hardware arithmetic. ## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor --- --- title: "Compare: Failure-Mode Taxonomy" part: "Design Reviews" number: 82 emoji: "๐Ÿงฏ" subtitle: "One master table of every failure mode across 14 real systems โ€” with detectโ†’escalateโ†’rollback playbooks" tags: ["designreviews", "ml", "ai-engineering", "interview-prep", "transformer"] --- # ๐Ÿงฏ Compare: Failure-Mode Taxonomy > One master table of every failure mode across 14 real systems โ€” with detectโ†’escalateโ†’rollback playbooks > [!question] Key Question > The 3am page happens โ€” you have 30 seconds to pick the right lever โ† Compare: SLO โ†” Cost ## Key Insights > [!tip] Insight > The scoring rubric: interviewers are listening for (1) blast radius quantified, (2) how fast you detected it, (3) whether your rollback was principled or lucky, and (4) whether the post-incident action prevents recurrence. A taxonomy gives you a mental checklist to tick off in real time. > [!tip] Insight > The dark pattern: post-mortems that produce action items with no owner and no deadline. Every action item must have a DRI and a due date. The taxonomy table is only useful if the “detection” column is wired to a real alert. > [!tip] Insight > Interview move: when asked “how do you handle incidents?”, lead with “we separate detection, escalation, and rollback SLOs.” Then quantify each. This signals L6 thinking immediately โ€” most candidates describe a single MTTR without breaking it down. > [!tip] Insight > The key insight for interviewers: prompt injection is not a content-moderation problem โ€” it is a trust-boundary problem. The fix is architectural (separate trust levels) not just a better content filter. Candidates who say “add a content filter” as the only mitigation are missing the structural issue. > [!tip] Insight > The L6 answer on model-swap risk:  “We never skip the canary phase, even under competitive pressure. The canary phase is cheap โ€” it costs 1% of traffic and 24 h. An incident from a rushed model swap costs weeks of MTTR and potentially months of user trust recovery.” > [!tip] Insight > The universal opener: regardless of company, start with the blast radius in one sentence, then the detection speed, then the resolution. This hits the primary scoring axis for every company (scale at Google, speed at OpenAI, revenue at Meta) and buys you time to tailor the rest. ## Interview Questions ### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** Walk me through an incident where a model swap caused a quality regression that wasn
Answer The shadow-eval gap is the root cause. The fix: (1) run a canary eval on the new model against a golden set BEFORE traffic migration; (2) gate on per-cohort pass rates, not just aggregate โ€” a new model can improve average quality while degrading safety-sensitive or edge-case cohorts; (3) route 1% of live traffic through the new model for 24 h before full rollout, with a kill switch on thumbs-down rate > baseline + 3 pp. The non-obvious lesson: most model-swap regressions appear in latency tail (p99 TTFT), not average quality, because the new model has a different speculative-decode profile. Instrument p95/p99 TTFT separately from average quality in your shadow period.
### โ˜…โ˜…โ˜… _(Anthropic, Google)_ **Q:** You
Answer Immediately: (1) check if a classifier config was pushed in the last 30 min โ€” a threshold change or model swap is the most likely cause; (2) check if the spike is correlated with a specific topic cluster (news event, trending query) vs. uniform across all categories โ€” uniform = classifier issue, topic-specific = distribution shift; (3) measure revenue impact: at 18% refusal, every 10 min = ~X% of daily active users hitting a wall, price it immediately for incident severity. Rollback path: classifier config has a feature flag โ€” revert to previous config in < 5 min. Post-incident: add a canary that runs the classifier on a fixed 200-sample distribution probe every 5 min and pages on deviation > 2 pp.
### โ˜…โ˜…โ˜† _(Google, Meta, Anthropic)_ **Q:** Describe the difference between how Google and Anthropic interviewers ask about production incidents in the behavioral round. How does your answer change?
Answer Google (L6 SWE/MLE): wants the STAR format with emphasis on SCOPE (how many users affected), SPEED (how fast did you detect and mitigate), and SYSTEMIC FIX (what monitoring did you add). They prize quantitative blast radius. Anthropic: wants you to surface the reasoning behind your safety trade-offs โ€” specifically, what did you do when the right call was ambiguous? They care about the principle you applied, not just the outcome. Meta: wants the business impact number immediately, then the technical root cause โ€” revenue first, architecture second. Answer template: Lead with blast radius (N users, $X revenue at risk), then detection speed, then root cause, then the durable fix that made the post-mortem unnecessary to repeat.
### โ˜…โ˜…โ˜… _(Anthropic, OpenAI)_ **Q:** A prompt-injection attack is discovered in your RAG pipeline: a retrieved document contains instructions that override the system prompt. What are your defense layers?
Answer Defense in depth with four layers: (1) Input sanitization โ€” strip known injection patterns before retrieval (<system>, IGNORE PREVIOUS, etc.); (2) Retrieval-path trust โ€” treat retrieved documents as untrusted user input, never as system-level context; use a separate system prompt section that is not part of the retrieved context window; (3) Output monitoring โ€” safety classifier on the output looks for instruction leakage signals (e.g., the model repeating back injected instructions verbatim); (4) Rate limiting on semantic similarity to known injections โ€” embed the query against a library of known injection patterns and block above a cosine similarity threshold. Real-world example: Simon Willison documented the Bing Chat indirect injection in Feb 2023 where a malicious web page caused Bing to reveal its system prompt via a retrieved context window.
## Related The Design Doc ยท Cost Accounting & Eval-Driven Design ยท Case: Design ChatGPT ยท Case: Design Perplexity ยท Case: Design Claude Code / Cursor ---