SWE Interview Playbooks by Company

Amazon grades the same LeetCode medium differently than Google. Company-specific interview playbooks for software engineers — loop format, rubric signals, what actually gets you hired.

There is a myth in SWE interview prep that companies are basically interchangeable — grind LeetCode, memorize a system design template, show up, perform. If you have interviewed at more than one FAANG in the last three years, you know this is not true. The loop format, the rubric, the signals each round is scoring, and the committee that ultimately decides your fate all look materially different from company to company.

This pillar is the company-by-company translation layer.

Why the same question gets graded differently

Take “design a URL shortener.” At Amazon, the bar raiser wants you to open with scale assumptions, justify your database choice on cost-per-request, and talk about how you would actually operate the service at 3am. At Google, the same question is graded on how rigorously you reason about the coding component — hash collisions, ID generation, complexity — and your senior signal shows up in capacity planning. At Meta, if you cannot explain the product surface — who uses this, what happens when a link rots, how do you measure success — you are capped at mid-level regardless of your architecture.
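To make the coding component concrete: the ID-generation piece Google probes on can be sketched in a few lines. This is an illustrative, in-memory sketch only (the `Shortener` class and its dict-backed store are hypothetical stand-ins for a real key-value store), showing base62 encoding and one simple collision strategy.

```python
import hashlib

BASE62 = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62(n: int) -> str:
    # Encode a non-negative integer in base 62 (denser than hex, URL-safe).
    if n == 0:
        return BASE62[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(BASE62[r])
    return "".join(reversed(out))

class Shortener:
    # Hypothetical in-memory stand-in for the real datastore.
    def __init__(self):
        self.codes = {}  # code -> url

    def shorten(self, url: str, length: int = 7) -> str:
        # Hash the URL, keep the first `length` base62 chars,
        # and re-salt on collision with a *different* URL.
        salt = 0
        while True:
            key = url if salt == 0 else f"{url}#{salt}"
            digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
            code = base62(digest)[:length]
            if code not in self.codes or self.codes[code] == url:
                self.codes[code] = url
                return code
            salt += 1
```

The interview signal lives in the follow-ups this sketch invites: why base62, what a 7-character keyspace buys you, and why collision probing against a single dict does not survive a distributed deployment.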

The question is the same. The rubric is not.

The same principle applies to coding rounds. A clean O(n log n) solution with tests gets different treatment depending on whether the interviewer is scoring Customer Obsession (Amazon), coding complexity rigor (Google), or speed-of-execution and edge-case coverage (Meta). The algorithm is table stakes. The signal layered on top is what moves your packet.
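For reference, "a clean O(n log n) solution with tests" looks something like this merge-intervals sketch (a standard problem chosen here for illustration, not tied to any one company's question bank): complexity stated up front, edge cases covered without prompting.

```python
def merge_intervals(intervals):
    # Sort by start, then sweep once: O(n log n) sort + O(n) merge.
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # overlap: extend the last interval
        else:
            merged.append([start, end])
    return merged

# Edge cases an interviewer expects you to raise yourself.
assert merge_intervals([]) == []
assert merge_intervals([[1, 3], [2, 6], [8, 10]]) == [[1, 6], [8, 10]]
assert merge_intervals([[1, 4], [4, 5]]) == [[1, 5]]  # touching endpoints merge
```

Writing this is the baseline. Narrating the cost tradeoff of the sort, or connecting the empty-input case to a customer-facing failure mode, is where the company-specific signal gets layered on.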

How to use this pillar

Each company playbook follows the same structure: how the loop is actually run, what each round is scoring, what common rejection patterns look like, and how to calibrate your prep to the rubric. Start with the company where you are farthest along in the pipeline — that is where the highest-leverage reading sits.

Pair each playbook with a mock interview calibrated to that company’s rubric. Reading about Amazon’s bar raiser is not the same as having one ask you a follow-up on your weakest LP story.

Frequently asked questions

Is company-specific prep actually worth it, or is LeetCode enough?
LeetCode gets you past the phone screen. It does not get you the offer. The onsite loop is graded against a company-specific rubric — Amazon’s bar raiser is scoring Leadership Principles, Google’s hiring committee is scoring signal consistency across rounds, Meta is scoring product sense alongside coding. Same algorithm question, three different rubrics. If you skip this layer you end up in the pile of “strong coder, no offer” candidates.
How much of the interview is actually standardized across companies?
The surface layer — two coding rounds, one design round, one behavioral — looks similar everywhere. The scoring rubric underneath is not. At Amazon every answer is expected to map to an LP. At Google every coding answer is expected to discuss complexity up front. At Meta a system design that ignores product tradeoffs fails even if the architecture is clean. The loop format is not the signal; the rubric is.
Which company should I target if I’m coming off a layoff?
Target the company whose rubric your last two years of work already matches. If you spent three years on AWS infrastructure, Amazon’s ops-heavy design round is a home game. If you led a zero-to-one consumer feature, Meta’s product-sense round will feel natural. Matching your actual experience to the rubric is a much higher-leverage move than grinding a company you admire but have no story for.