Milgram Obedience Experiment: Would You Shock a Stranger? Most People Did

Imagine sitting in a laboratory. A researcher in a gray coat hands you a script and tells you to read questions to another person in a separate room. Every time that person answers incorrectly, you are instructed to press a button that delivers an electric shock. The shocks increase in voltage with each wrong answer, from a mild 15 volts all the way to a potentially lethal 450 volts. The person in the other room starts screaming. They pound on the wall. They beg you to stop. The researcher, calm and firm, says, “Please continue.”

What would you do?

Before Stanley Milgram ran his famous obedience studies in the early 1960s, he asked psychiatrists, students, and middle-class adults the same question. Almost everyone predicted they would stop well before reaching dangerous voltage levels. The actual results told a very different story — one that has been rattling psychologists, ethicists, and curious humans ever since.

The Setup: How the Experiment Actually Worked

Milgram designed the experiment at Yale University, beginning in 1961, motivated in part by the question of how ordinary German citizens could have participated in Nazi atrocities. He wanted to test, in a controlled setting, how far people would go in obeying authority when that authority asked them to harm someone else.

Participants were recruited through newspaper advertisements for a “study of memory and learning.” They arrived at the lab and were introduced to another person who was, unbeknownst to them, a confederate — an actor working with the research team. A rigged drawing always assigned the real participant to the role of “teacher” and the confederate to the role of “learner.” The learner was taken to an adjacent room, where the participant could hear but not see them, and was strapped to a chair with electrodes attached to their wrist.

The teacher sat in front of a shock generator with 30 switches, labeled in increasing 15-volt steps from 15V to 450V. The descriptive labels escalated from “Slight Shock” through “Danger: Severe Shock” to a chilling “XXX.” Every time the learner gave a wrong answer, the teacher was instructed to administer the next shock level and announce the voltage aloud.
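
To make the arithmetic of the panel concrete, here is a minimal Python sketch of the switch ladder, assuming only what is described above: 30 switches rising in 15-volt steps from 15V to 450V. The placement of the three quoted labels is an illustrative assumption, not Milgram’s published panel layout.

```python
# Illustrative sketch of the shock generator's switch ladder.
# Assumes only what the text states: 30 switches, 15 V apart, 15 V to 450 V.
shock_levels = list(range(15, 451, 15))
assert len(shock_levels) == 30  # 450 / 15 = 30 switches

for volts in shock_levels:
    # The band boundaries below are assumptions for illustration;
    # only the three label strings themselves appear in the text.
    if volts <= 60:
        label = "Slight Shock"
    elif volts >= 435:
        label = "XXX"
    elif volts >= 375:
        label = "Danger: Severe Shock"
    else:
        label = "(intermediate label)"
    print(f"{volts:>3} V  {label}")
```

Laid out as a list, the fine grain of the ladder stands out. This is the 15-volt incrementalism that the escalation argument later in the piece turns on.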

The learner’s responses were pre-recorded. At 75 volts, he grunted. At 120 volts, he shouted that the shocks were painful. At 150 volts, he demanded to be released. At 285 volts, he let out an agonized scream. Beyond 300 volts, he went ominously silent.

When participants hesitated or tried to stop, the experimenter used four standardized prods: “Please continue,” “The experiment requires that you continue,” “It is absolutely essential that you continue,” and “You have no other choice, you must go on.” No threats, no rewards — just a calm, insistent voice in a lab coat.

The Results That Shocked the World More Than the Voltage

In Milgram’s baseline condition — where participants could hear but not see the learner — 65% of participants administered the full 450 volts. Every single participant went past 300 volts. Let that sink in for a moment. These were not sadists or extremists. They were postal workers, teachers, engineers, and salespeople. Ordinary people, delivering what they believed were potentially lethal shocks to a screaming stranger, because someone in a lab coat told them to keep going.

As Milgram (1963) reported in his original paper, participants showed genuine distress throughout the procedure — sweating, trembling, laughing nervously, asking repeatedly whether the learner was okay — yet the majority continued anyway. The distress was real. The compliance was real. Both happened simultaneously, which is part of what makes the findings so psychologically important.

Milgram ran numerous variations to identify what factors increased or decreased obedience. When the participant had to physically hold the learner’s hand onto a shock plate, obedience fell to 30%. When the experimenter gave instructions by phone rather than in person, it dropped to about 20%. And when two confederate teachers rebelled and refused to continue, only 10% of real participants went to the maximum voltage. Proximity to the victim and the presence of defiant peers both dramatically reduced compliance (Milgram, 1974).

The location mattered too. When the experiment was moved from the prestigious Yale campus to a rundown office building in Bridgeport, Connecticut, stripped of institutional authority, obedience still reached 47.5%. Authority did not require Yale’s marble walls to function.
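
For side-by-side comparison, here is a small sketch gathering the obedience rates quoted in this section. The condition descriptions are paraphrases, and the phone figure is the approximate value given above.

```python
# Obedience rates (percent administering the full 450 V) quoted in this
# section, gathered for comparison. Condition labels are paraphrases.
obedience_by_condition = {
    "baseline: learner heard but not seen": 65.0,
    "rundown Bridgeport office, no Yale backdrop": 47.5,
    "teacher holds learner's hand on shock plate": 30.0,
    "experimenter gives orders by phone": 20.0,  # "about 20%" in the text
    "two peer teachers refuse to continue": 10.0,
}

for condition, rate in sorted(obedience_by_condition.items(),
                              key=lambda item: -item[1]):
    print(f"{rate:5.1f}%  {condition}")
```

Sorted this way, the pattern is hard to miss: every change that makes the victim more present or the authority less present cuts compliance, which is exactly the lesson the practical sections below build on.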

Why Do People Obey? The Psychology Behind Compliance

Milgram himself proposed what he called the “agentic state” theory. The idea is that humans have evolved — culturally if not biologically — to function within hierarchical social structures. When we enter a legitimate authority system, we experience a psychological shift: we begin to see ourselves as agents executing someone else’s wishes rather than as autonomous individuals responsible for our own actions. Responsibility feels like it transfers upward to the authority figure.

This is why participants frequently said things like, “I was just following orders,” or, “If anything happens, it’s his fault.” The agentic shift is not an excuse, but it is a psychological reality. We genuinely feel less culpable when operating under explicit authority, and that reduction in felt responsibility loosens our behavioral brakes. [2]

There’s also the incremental escalation problem. Nobody started by pressing the 450V button. The sequence began at a trivially low voltage and increased in small 15-volt steps. By the time the shocks were genuinely dangerous, participants had already committed to a pattern of behavior. Stopping meant confronting the fact that they had already harmed someone — and cognitive dissonance makes that admission painful. It is far more comfortable to keep going and reframe the situation (“the experimenter knows what he’s doing”) than to stop and confront what you have already done. [1]

Burger (2009) replicated key aspects of the Milgram experiment with modern ethical constraints and found that obedience rates at the 150V mark — the point at which the learner first demands to be freed — were statistically similar to Milgram’s original findings, suggesting that the phenomenon is not a historical artifact of a more deferential era. People today are not meaningfully different in their susceptibility to authority pressure.

What This Means for Knowledge Workers

If you work in an organization — and most people reading this do — the Milgram findings are not just an interesting psychological curiosity. They are a mirror you should look into regularly.

Knowledge work is full of incremental compliance. You approve a report that contains a small exaggeration. Your manager asks you to frame data in a way that is technically accurate but misleading. You go along with a decision you know is wrong because a senior leader made it with visible confidence. Each step is small. Each step has a “please continue” from someone with more power than you. The mechanism is identical to what Milgram documented, just dressed in business casual.

The research on organizational misconduct consistently shows that most corporate scandals are not built on individual rogue actors making dramatic choices. They are built on many ordinary people making small, incremental decisions to comply, each one justifiable in isolation, collectively catastrophic (Tenbrunsel & Messick, 2004). The incrementalism is not a bug in Milgram’s design — it is the entire point. Real harm rarely arrives as a single dramatic invitation. It arrives as a long series of small steps, each one pulling you slightly further from where you would have chosen to start.

The agentic state is particularly easy to enter in professional environments because professional hierarchies are explicitly legitimized. Your manager has a title. The organization has policies. There are performance reviews. The authority is not a stranger in a lab coat — it is woven into your career trajectory, your salary, your sense of professional identity. That makes the psychological pull considerably stronger, not weaker.

The Variables That Reduce Obedience: Practical Lessons

Here is the genuinely useful part. Milgram did not just document the problem — his variations identified the specific conditions that break the spell of obedience. Understanding them gives you something to work with.

Physical and Psychological Proximity to the Victim

When participants could see the person they were harming, obedience dropped dramatically. When they had to make physical contact, it dropped further. The mechanism is straightforward: proximity makes the harm real and personalized in a way that abstract distance does not. In organizational settings, this means that decisions made by committee, or abstracted through spreadsheets and reports, are far more likely to cause harm than decisions made while looking at the people they affect. Deliberately increasing your proximity to consequences — meeting the people impacted by your decisions, not just reading summaries about them — is not just an ethical nicety. It is a structural defense against the agentic state.

Social Proof of Defiance

The single most powerful variable Milgram found was the presence of peers who refused to comply. When two other “teachers” (confederates) refused to continue, obedience plummeted to 10%. You do not need to be the first person to say no. You need to know that saying no is possible — and seeing someone else do it first activates that knowledge viscerally. This is why building genuine collegial relationships where honest dissent is normalized is not a soft skill. It is literally the most effective known buffer against harmful obedience. The colleague who quietly tells you “I wouldn’t sign off on that” is performing a significant psychological service.

Distance from Authority

When the experimenter was not in the room, obedience dropped substantially. This suggests that if you can create psychological or physical distance from the authority pressure — taking time before responding, consulting someone outside the immediate hierarchy, sleeping on a decision — you recover more of your autonomous judgment. The knee-jerk “yes” in a meeting with a senior leader is the most dangerous decision you can make. The considered response, even 24 hours later, draws on a very different psychological mode.

Naming the Process

There is solid evidence from decision science that labeling what is happening — recognizing “I am in a situation where I am being incrementally pressured to comply with something I find uncomfortable” — disrupts automatic compliance (Tenbrunsel & Messick, 2004). This is why understanding the Milgram experiment at a mechanistic level, not just a trivia level, genuinely matters. When you can see the pattern, you are far less likely to be swept along by it without noticing. [3]

The Ethics of the Experiment Itself

It would be strange to discuss Milgram without acknowledging that the experiment was ethically controversial in ways that cannot be entirely dismissed. Participants experienced genuine psychological distress. Some reported lasting anxiety about what they had been capable of. The deception involved was substantial. The study could not be replicated in its original form today under any major ethics board’s approval.

Perry (2012) conducted extensive archival research on the Milgram studies and found additional complications — that the standardized prods were not always used as described, that some experimenters improvised in ways that may have increased pressure beyond the published protocol, and that follow-up debriefing was less thorough than Milgram claimed in publications. These are legitimate critiques of the methodology, and they matter for how precisely we should interpret the numbers.

What they do not do is overturn the basic finding. Burger’s (2009) partial replication under modern ethical constraints, combined with decades of field research on organizational behavior, military psychology, and historical documentation of atrocities, all converge on the same conclusion: ordinary people, under ordinary social pressure from recognized authority, will do things they would never choose independently. The specific percentage who go to 450V is less important than the direction of the effect and its robustness across contexts.

The Uncomfortable Personal Question

The reason the Milgram experiment has stayed in the cultural consciousness for over sixty years is not because it is surprising that some people obey authority. It is because of the specific proportion that did — and because every person who reads about it immediately wonders, quietly, which side of that statistic they would land on.

The honest answer, supported by the data, is that your confident prediction of early refusal is probably not accurate. Not because you are a bad person. Not because you lack moral awareness. But because the psychological mechanisms Milgram identified — agentic shift, incremental escalation, institutional legitimacy, absence of visible peers who defy — operate below the level of conscious moral reasoning. They work on you before you notice they are working.

The most rational response to this knowledge is not despair. It is structural. Build the conditions in your professional and personal life that Milgram’s own variations showed reduce obedience: genuine proximity to the consequences of your decisions, collegial relationships where dissent is normalized and modeled, deliberate pauses before compliance with uncomfortable directives, and a practiced habit of naming authority pressure when you feel it. None of this is foolproof. But it is not nothing, either. The gap between 65% compliance and 10% compliance is entirely made up of those structural differences — and that gap is where the work is.



