Changelog
What changed in the wiki. SOTA additions get a 🚀 SOTA tag — keep an eye on these to stay current. Fundamentals (intuition, math, derivations) stay stable; new techniques are added as sub-sections within existing modules.
13 entries · LLM-friendly raw feed at /changelog.json
May 6, 2026
- 🚀SOTA
Reasoning cluster — added SOTA 2024-2025 sub-sections covering test-time compute scaling, DPO successors (SimPO/ORPO/KTO/GRPO/DAPO), o3/o4-mini/Claude 3.7/Gemini 2.5 Deep Think benchmarks, ProcessBench/ThinkPRM, AlphaProof+AlphaGeometry-2 IMO silver. 12 new Claim ledger entries with source URLs.
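The DPO successors listed above differ mainly in how the preference loss is built. For orientation, the DPO and SimPO objectives as typically stated in the literature (with $y_w$ the preferred and $y_l$ the dispreferred response, $\beta$ a scale, $\gamma$ a target reward margin):

```latex
\mathcal{L}_{\text{DPO}}
  = -\,\mathbb{E}\!\left[\log\sigma\!\left(
      \beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\text{ref}}(y_w\mid x)}
    - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\text{ref}}(y_l\mid x)}
    \right)\right]

\mathcal{L}_{\text{SimPO}}
  = -\,\mathbb{E}\!\left[\log\sigma\!\left(
      \frac{\beta}{|y_w|}\log\pi_\theta(y_w\mid x)
    - \frac{\beta}{|y_l|}\log\pi_\theta(y_l\mid x)
    - \gamma\right)\right]
```

SimPO's two practical changes are visible term by term: it drops the reference model $\pi_{\text{ref}}$ entirely and length-normalizes the log-probabilities by $|y_w|$ and $|y_l|$.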
- ⚙️Infra
LLM-consumable infrastructure: llms.txt + llms-full.txt (686 KB, 14K lines), 83 raw markdown files at /raw/<id>.md, a robots.txt allowing GPTBot/ClaudeBot/Google-Extended/PerplexityBot, Schema.org LearningResource JSON-LD on every page, and a sitemap.xml listing all 83 modules.
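A per-page LearningResource JSON-LD block of the kind described above might look like this (the concrete name, description, URL, and site title here are illustrative placeholders, not the wiki's actual values):

```json
{
  "@context": "https://schema.org",
  "@type": "LearningResource",
  "name": "Attention Mechanisms",
  "description": "Intuition, math, and derivations for scaled dot-product attention.",
  "learningResourceType": "Reading",
  "inLanguage": "en",
  "url": "https://example.com/modules/attention",
  "isPartOf": { "@type": "WebSite", "name": "LLM Study Wiki" }
}
```

Embedded in a `<script type="application/ld+json">` tag, this gives crawlers and LLM agents a machine-readable summary of each module without touching the rendered HTML.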
- ✨Feature
Personal study notebook UX: localStorage progress tracking (Studying/Studied/Mastered), sidebar progress dots, Recently Visited + Resume Studying sections on homepage, 'Ask Claude / ChatGPT / Gemini about this module' deep-link buttons on every module.
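The progress-tracking part of this reduces to a small read-modify-write over one localStorage key. A minimal sketch, assuming a single JSON map keyed by module id (the key name and status labels are hypothetical; an in-memory stub stands in for the browser's localStorage so the sketch runs outside a browser):

```typescript
type Status = "studying" | "studied" | "mastered";

// Stand-in for window.localStorage, so this sketch runs in Node too.
const store = new Map<string, string>();
const storage = {
  getItem: (k: string) => store.get(k) ?? null,
  setItem: (k: string, v: string) => void store.set(k, v),
};

const KEY = "module-progress"; // hypothetical storage key

// Read the whole progress map, update one module, write it back.
function setProgress(moduleId: string, status: Status): void {
  const all: Record<string, Status> = JSON.parse(storage.getItem(KEY) ?? "{}");
  all[moduleId] = status;
  storage.setItem(KEY, JSON.stringify(all));
}

function getProgress(moduleId: string): Status | undefined {
  const all: Record<string, Status> = JSON.parse(storage.getItem(KEY) ?? "{}");
  return all[moduleId];
}
```

The sidebar dots and "Resume Studying" section can then derive entirely from this one map, with no server-side state.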
- ⚙️Infra
Shiki bundle 10-18 MB → 0.46 MB (~22-39× reduction). Switched from import('shiki') (which pulls in all 60+ TextMate grammars) to fine-grained @shikijs/core with per-language imports for the 6 languages actually used. Faster initial load on every module with code blocks.
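The fine-grained setup looks roughly like this, following the shiki docs' core-package pattern (theme and language choices here are illustrative, not the site's actual list):

```typescript
import { createHighlighterCore } from '@shikijs/core'
import { createOnigurumaEngine } from '@shikijs/engine-oniguruma'

// Only the grammars actually imported get bundled, instead of all 60+.
const highlighter = await createHighlighterCore({
  themes: [import('@shikijs/themes/github-dark')],
  langs: [
    import('@shikijs/langs/typescript'),
    import('@shikijs/langs/python'),
    // ...one dynamic import per language the site actually renders
  ],
  engine: createOnigurumaEngine(import('shiki/wasm')),
})

const html = highlighter.codeToHtml('const x = 1', {
  lang: 'typescript',
  theme: 'github-dark',
})
```

The bundle saving comes purely from the import graph: the bundler can no longer see (and therefore ship) the grammars that are never referenced.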
- 🔧Fix
A11y critical: dark-mode active section pill contrast 2.52:1 → 8.67:1 (WCAG AAA) by adding @custom-variant dark to the Tailwind config. Also fixed: mobile search icon, tx3 contrast 3.83:1 → 5.3:1, 44px touch targets, sidebar scroll-to-active, and search modal aria-modal.
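@custom-variant is Tailwind CSS v4 syntax; assuming the site toggles dark mode via a .dark class on an ancestor element, the fix reduces to one directive in the global stylesheet:

```css
/* Make dark: utilities follow a .dark class on an ancestor
   instead of only the prefers-color-scheme media query. */
@custom-variant dark (&:where(.dark, .dark *));
```

Without this, dark: utilities in Tailwind v4 respond only to the OS color-scheme preference, so class-toggled dark mode silently keeps the light-mode colors (and their contrast ratios).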
- 📚Content
13 module hooks rewritten to strawberry-pattern (specific number + counterintuitive paradox). Examples: attention now hooks with 'GPT-4 reads The trophy didn't fit in the suitcase because it was too big — what does it refer to?'; rlhf now hooks with 'OpenAI fine-tuned InstructGPT on 13K labeled examples and beat GPT-3 175B on human preference.'
- ✨Feature
9 new SVG diagrams across MoE / Coordinator-Worker / SAE / Sub-Agent topology / Command Registry / MCP Transport / State Dual System / Memory Layer / Error Recovery Flow. All server-rendered (no use-client), CSS-variable colors for dark-mode adaptability.
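The CSS-variable approach means the diagrams reference theme tokens instead of hard-coded hex colors, so the same server-rendered markup adapts when dark mode swaps the variable values, with no client JS. A hypothetical fragment (the token names here are illustrative):

```html
<svg viewBox="0 0 120 40" role="img" aria-label="Expert block">
  <!-- fill/stroke resolve from the page theme at paint time -->
  <rect x="4" y="4" width="112" height="32" rx="6"
        fill="var(--surface)" stroke="var(--border)" />
  <text x="60" y="25" text-anchor="middle" fill="var(--tx1)">Expert</text>
</svg>
```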
- 🔧Fix
Codex + tier-1 review cleanup: 7 BLOCKERs fixed (Embeddings 394M→788M params, FFN Llama-2 stem reword, DrCaseGemini KV math 1 KB→944 KB raw + INT4 quantization framing, video math 86,400→371,520 tokens, DRAM tier 1s→25s, $0.067/M→$0.111/M, ToolUse empty <text> orphan). Plus internals fact-check: AgentHarness exit-0 semantics + session path, CommandsSkills YAML key when_to_use.
- 🔧Fix
7 SVG layout bugs: RLHFPipelineDiagram title/legend overlap, ScalingLawsDiagram legend overflow (2×2 grid), FlashAttentionDemo legend truncation, EvalPipelineDiagram white-on-cream contrast, BridgeArchitectureDiagram label collision, RAGPipelineDiagram label clipping, StreamingPipelineDiagram queryModelWithStreaming() too long.
- ⚙️Infra
CI hardening: lint:drift + lint:svg-fonts wired in as CI gates. Verify CONCURRENCY lowered 5→3 and a completed-count guard added, fixing false-green CI on partial OOM runs. Stale 'Challenge answer revealed' selector fixed. Error boundaries (error.tsx + global-error.tsx) now catch runtime exceptions instead of showing a blank screen.
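The completed-count guard idea can be sketched as follows (a minimal sketch with hypothetical names, not the actual verify script): run checks with bounded concurrency, count completions explicitly, and fail loudly if fewer checks finished than were scheduled, rather than reporting green on whatever subset survived:

```typescript
// Run tasks with at most `limit` in flight; count actual completions.
async function runWithConcurrency<T>(
  tasks: Array<() => Promise<T>>,
  limit: number,
): Promise<{ results: T[]; completed: number }> {
  const results: T[] = [];
  let next = 0;
  let completed = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // safe: sync read-increment between awaits
      results[i] = await tasks[i]();
      completed++;
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return { results, completed };
}

async function verifyAll(
  tasks: Array<() => Promise<boolean>>,
): Promise<boolean> {
  const { results, completed } = await runWithConcurrency(tasks, 3); // CONCURRENCY = 3
  // Guard: a partial run (e.g. an OOM-killed worker) must not read as green.
  if (completed !== tasks.length) {
    throw new Error(`only ${completed}/${tasks.length} checks completed`);
  }
  return results.every(Boolean);
}
```

The guard matters because "every result that came back passed" and "every scheduled check passed" are different claims; only the second one should gate CI.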
- 📚Content
16 cross-module Link bridges added across Parts 1-4 (attention↔kv-cache↔flash-attention↔quantization, rlhf↔dpo, mech-interp↔interpretability, etc.). Reduces the 'isolated islands' problem flagged in the Editor review.
- 📚Content
28 new MCQs added across 7 Part 8 engineering modules (sub-agents, commands-skills, plugins-mcp, state-management, terminal-ui, memory-system, error-recovery), with balanced A/B/C/D correct-answer positions and length-balanced labels.
- 📚Content
Weakest content modules improved: streaming-api replaced 'illustrative' numbers with figures sourced from the Anthropic docs, transformer-overview wrapped its challenges section with a proper id and scroll-mt-24, data-curation gained a HuggingFace + MinHash LSH PyTorch snippet and the FineWeb pedagogical resource.
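The MinHash part of that dedup snippet is compact enough to sketch here (a TypeScript re-sketch of the idea; the wiki's own snippet uses PyTorch, and the hash function below is an illustrative FNV-style mix, not a production choice). Each of k seeded hash functions keeps the minimum hash over a document's shingle set; the fraction of matching minima between two signatures estimates Jaccard similarity:

```typescript
// Seeded 32-bit string hash (illustrative, not cryptographic).
function hash(s: string, seed: number): number {
  let h = 2166136261 ^ seed;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

// Signature: per hash function, the minimum hash over all shingles.
function minhashSignature(shingles: Set<string>, k = 64): number[] {
  const sig = new Array(k).fill(Infinity);
  for (const s of shingles) {
    for (let j = 0; j < k; j++) sig[j] = Math.min(sig[j], hash(s, j));
  }
  return sig;
}

// Fraction of positions where the minima agree ≈ Jaccard(A, B).
function estimatedJaccard(a: number[], b: number[]): number {
  let same = 0;
  for (let j = 0; j < a.length; j++) if (a[j] === b[j]) same++;
  return same / a.length;
}
```

LSH then bands each signature (e.g. 16 bands of 4 values) so near-duplicate documents collide in at least one band's bucket, avoiding the all-pairs comparison that makes naive dedup quadratic.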