I remember sitting in a Seoul coffee shop on a Tuesday morning when my student’s mother broke down in tears. Her 14-year-old daughter had just scored 98th percentile on the Korean national exam. Most people would call that a triumph. But the girl was exhausted, anxious, and had stopped sleeping properly three months earlier. This moment crystallized something I’d been noticing for years: the Korean education system delivers world-class test scores while hiding a deeper cost nobody talks about.
When international assessments like PISA (Programme for International Student Assessment) rank countries, South Korea consistently appears near the top. The numbers are stunning: Korean students routinely place among the best in the world in both mathematics and science. But those PISA scores don’t tell you about the 10 p.m. cram sessions, the weekend hagwon (private academy) classes, or the psychological toll many students experience. They don’t capture what parents and educators actually live with every day.
As a teacher who’s worked across different education systems, I’ve learned that standardized metrics reveal only part of the story.
Understanding Korea’s PISA Performance: The Numbers Game
South Korea’s PISA results are genuinely impressive. In 2022, Korean students ranked 7th globally in mathematics, 10th in reading, and consistently in the top 15 for science across multiple assessment cycles (OECD, 2023). These aren’t marginal advantages—they represent students who can solve complex problems, think critically, and demonstrate subject mastery that many wealthy nations can’t achieve at scale.
Here’s what’s crucial to understand: those PISA scores represent real capability. Korean students do learn deeply. The system produces engineers, scientists, and technologists who drive innovation globally. Samsung, LG, and POSCO didn’t become world leaders by accident. The education pipeline that feeds them actually works.
But PISA measures only certain competencies—problem-solving in tested domains, specific cognitive skills, and measurable knowledge. It doesn’t measure well-being, intrinsic motivation, creativity in unstructured settings, or joy in learning. It’s like measuring a car’s success by its 0-60 time while ignoring fuel efficiency, safety, and whether the driver wants to be in that car (Lui & Macaro, 2020).
The Architecture of Academic Pressure: How the System Creates Excellence (and Stress)
The Korean education system didn’t emerge randomly. It’s the product of deliberate design choices that prioritize meritocracy, standardization, and measurable outcomes. Understanding this architecture helps explain why pressure exists and why it produces results.
South Korea’s equivalent of China’s gaokao is the College Scholastic Ability Test—the Suneung. This single test, administered once per year, determines university placement for most students. Imagine if your entire academic future depended on one day’s performance. That structural reality cascades backward through the entire system, creating pressure at every level. Middle school feeds into high school. High school feeds into the Suneung. Everything is optimized for that endpoint.
I taught a student named Min-jun who was genuinely brilliant—curious, creative, interested in environmental science. But between school and two hagwon academies, he had time for neither sleep nor genuine inquiry. His creativity became strategic: understanding what teachers valued and delivering exactly that. He wasn’t learning to become an environmental scientist. He was learning to pass tests. When he aced the Suneung and gained admission to Seoul National University’s environmental science program, we both felt conflicted. He’d achieved the system’s goal perfectly. But somewhere along the way, his actual passion had been commodified into test strategy.
This isn’t unique to Korea. It’s an extreme version of dynamics present in competitive education systems globally. But Korea’s particular combination of Confucian cultural values, family-centered ambition, and high population density in competitive metros creates an unusually intense pressure environment (Park & Cho, 2021).
The Hidden Costs: What PISA Scores Miss
Here’s where the narrative shifts from “impressive system” to “system with consequences.” Mental health data tells a different story than PISA rankings.
South Korea has among the highest youth suicide rates in developed nations. Approximately 23% of Korean high school students report severe stress levels. Depression and anxiety diagnoses among students have increased steadily. These aren’t failures of smart, hardworking kids. They’re signals that the system itself creates psychological strain that test scores can’t capture (Kim, Park, & Lee, 2019).
I observed this with a student named Ji-won, who was preparing for the Suneung while her peers were discovering who they wanted to become. Ji-won experienced tremors before major exams—not because she was weak, but because her nervous system was chronically activated. She was 17 and living in what amounted to occupational stress.
The pressure extends to sleep deprivation. Korean studies document that many high school students sleep only 5-6 hours per night during exam preparation seasons. This isn’t just uncomfortable—it actively impairs the cognitive function these students are trying to optimize. Sleep deprivation reduces memory consolidation, emotional regulation, and creative thinking. The system creates a paradox: students sacrifice the sleep their brains need to actually perform well.
Additionally, the intense focus on measurable academics often crowds out other forms of development. Physical activity drops. Hobbies become strategic resume-builders rather than genuine interests. Social connection becomes competitive. The pressure to maintain grades in every subject—even ones students will never use professionally—consumes time and energy that could develop resilience, leadership, or artistic capacity.
What Works: The Legitimate Benefits of High Standards
Before painting the Korean system as entirely problematic, I need to be honest about what it does well. You’re not alone if you’ve wondered whether higher pressure creates better outcomes. The evidence suggests it’s more nuanced than a simple “pressure = success” or “pressure = harm” equation.
When standards are genuinely high and consistently applied, students rise to meet them. Korean students develop genuine subject mastery. They can perform complex mathematics without calculators. They understand scientific reasoning deeply. They can write clearly and argue analytically. These aren’t test-taking tricks—they’re real capabilities that serve them professionally.
The system also created social mobility. Decades ago, educational achievement in Korea opened doors for families regardless of wealth. While that’s less true now (wealth increasingly predicts outcomes in Korea, as elsewhere), the historical commitment to broad-based rigorous education created broader opportunity than many systems.
There’s also something to the cultural value on discipline and deferred gratification. When I compare Korean students to peers in more relaxed systems, the Korean students typically demonstrate stronger work ethic, follow-through, and ability to tackle difficult material. High pressure works if the goal is genuine excellence and self-discipline; lower pressure and more choice produce happier students in the moment but sometimes less depth of skill.
The Broader Pattern: Pressure Doesn’t Scale Equally
Here’s something crucial that rarely gets discussed: the Korean system works differently for different students. High pressure creates excellence for high-achieving students and psychological harm for others, often simultaneously.
Top performers—perhaps 20% of the cohort—genuinely thrive under clear standards and competition. They’re intrinsically motivated, their effort aligns with system rewards, and they experience the pressure as motivating rather than crushing. They gain admission to elite universities and often build successful careers.
Middle-tier students experience pressure without corresponding reward. They work intensely, manage stress, sacrifice sleep and hobbies, and still don’t gain admission to top universities. The system’s promise of meritocracy rings hollow when intelligence, effort, and outcomes don’t align perfectly.
Lower-achieving students often experience the system as punitive. When standardized tests measure only certain types of intelligence and success is publicly ranked, students who don’t excel academically internalize narratives of failure. I’ve worked with brilliant students—phenomenal artists, natural leaders, gifted with practical reasoning—who believed themselves stupid because they didn’t excel at math. The system’s narrow success metrics had closed doors they wanted available.
Lessons for Knowledge Workers and Self-Improvers
You might be reading this because you’re a professional trying to improve yourself, or a parent deciding how much pressure to create in your child’s environment. The Korean education system offers lessons that apply beyond Korea.
First: clarity on standards actually helps. Knowing exactly what excellence looks like, what’s being measured, and how performance will be evaluated reduces anxiety paradoxically. Vague expectations create more stress than clear ones. If you’re trying to develop a skill, studying the exact criteria for success helps.
Second: pressure without purpose creates harm. The Korean system works partly because students understand why they’re working—it matters for university, it matters culturally, it matters for their family’s aspirations. But even genuine purpose, combined with sustained high pressure, can turn toxic. Meaningful reasons for effort sustain people; pure pressure without purpose burns them out. When you’re pursuing growth, ask yourself: am I doing this because it matters, or because I’m supposed to? The difference determines whether effort energizes or exhausts you.
Third: some competition and standards improve performance. The total absence of accountability creates drift. Some stakes create focus. But there’s a point beyond which additional pressure produces diminishing returns. Most research suggests that moderate pressure—enough to motivate without crushing—optimizes performance and well-being (Brown & Ryan, 2003). Extreme pressure, like Korea’s system at its most intense, sacrifices well-being for achievement.
Reimagining Excellence: Moving Beyond PISA
What if we built education systems—or pursued personal growth—around different metrics than test scores?
Some Korean schools are experimenting with this. Schools in Seoul and Busan are implementing curricula that emphasize creativity, collaboration, and emotional learning alongside traditional academics. These schools measure not just knowledge but also curiosity, resilience, and well-being. Student anxiety decreases while academic performance remains solid. It’s not either-or.
For you personally, this means expanding how you measure growth. If you’re learning a language, don’t measure progress only by test scores. Measure conversations you can have, connections you make, joy you experience. If you’re pursuing professional development, track not just credentials but also skills, relationships, and whether your work feels meaningful.
It’s okay to chase excellence. It’s okay to have high standards. But if you notice you’re sacrificing sleep, relationships, or basic joy, the pressure has likely exceeded its useful range. The Korean education system achieved world-class results while creating psychological costs. You don’t have to replicate that trade-off. You can pursue mastery without martyrdom.
Reading this analysis means you’ve already started questioning how pressure functions in your life. That awareness is the first step toward building something better—ambition without anxiety, excellence without exhaustion.
Conclusion: The Complete Picture Beyond Rankings
The Korean education system delivers impressive PISA scores because it’s designed to do exactly that. Students learn deeply in tested domains. They develop discipline and work ethic. They gain capabilities that serve them professionally. But those achievements come paired with high mental health costs, sleep deprivation, lost intrinsic motivation, and psychological pressure that wouldn’t be acceptable in many other developed nations.
The system isn’t broken—it’s optimized for specific outcomes at specific costs. The question isn’t whether Korea’s education works. It clearly does. The question is: what else could work, and what would we optimize for if we cared as much about student well-being as we do about test scores?
Whether you’re a parent, a professional pursuing growth, or someone trying to understand education more deeply, the Korean case teaches something important: results and costs are separable. A system can produce excellence without the psychological toll Korea’s students experience. You can pursue ambitious goals without sacrificing sleep, relationships, or the joy of learning.
The pressure cooker works. It also burns things. Understanding both parts lets you build your own path toward growth that sustains rather than depletes you.
References
- Yoon, J. (n.d.). The IMF Crisis and South Korea’s Hyper-Competitive Childhood. jiwon-yoon.com. Link
- Lee, S. et al. (2026). The effect of parental achievement pressure and self-regulated learning on school adjustment: Mediating effect of self-esteem. Frontiers in Psychology. Link
- OECD (2025). Education at a Glance 2025: Korea. OECD. Link
- Seth, M. J. (2002). Education Fever: Society, Politics, and the Pursuit of Schooling in South Korea. University of Hawaii Press. Link
- Lo, A. S.-Y., & Leung, S. Y.-C. (2021). The influences of family, school, and peers on adolescents’ academic pressure: A comparative study between Hong Kong and mainland China. Frontiers in Psychology. Link
- Kim, H., & Lee, J. (2019). Academic stress, parental pressure, and burnout among Korean high school students. Asia Pacific Education Review. Link
Project-Based Learning Assessment: Why Traditional Grading Fails Real-World Work
When I first started teaching high school science, I did what most educators do: I gave tests, assigned homework, and calculated a grade from a rubric. The numbers looked objective. But something felt wrong. A student who aced the final exam couldn’t troubleshoot a broken experiment. Another who bombed the test solved complex problems during our hands-on projects with remarkable clarity. I realized I was measuring the wrong things.
This disconnect between what we measure and what actually matters is the central problem with how we evaluate learning. Project-based learning assessment—the practice of evaluating real-world work fairly and accurately—requires us to rethink assessment entirely. It’s not just an educational issue. In an economy where 60% of jobs require complex problem-solving and collaboration, how we assess these skills determines whether people develop them (Carnevale & Desrochers, 2003).
Whether you’re a self-taught professional building a portfolio, a manager evaluating team projects, or someone learning new skills outside formal education, understanding how to assess project-based work fairly matters. It changes what you focus on, how you judge progress, and ultimately what skills you actually develop.
The Fundamental Problem: Why Grades Don’t Measure Growth
Traditional assessment relies on a single metric—the grade—that tries to compress complex learning into a number. This approach has deep flaws, especially when applied to real-world work.
First, grades conflate many different skills into one score. A “B” in a project could mean excellent research but weak presentation, or strong collaboration but poor technical execution. The grade tells you almost nothing about which is true. You lose the specificity you need to improve.
Second, traditional grading often measures compliance rather than learning. Did you follow the rubric? Did you hit the deadline? Did you format it correctly? These aren’t irrelevant, but they’re not the same as asking: Did you solve a meaningful problem? Did you think critically? Can you apply this in a new context?
Research on formative assessment—assessment designed to guide improvement rather than just measure achievement—shows that detailed, specific feedback improves learning far more than a letter grade (Hattie & Timperley, 2007). Yet most grading systems provide almost no usable feedback. A student gets an A or C, shrugs, and moves on without understanding what made the difference.
For knowledge workers and professionals, this matters enormously. If you’re learning to lead a team, launch a product, or build a business, you need assessment systems that actually tell you what’s working and what isn’t. A vague sense that something “went well” or “went poorly” isn’t enough.
Project-Based Learning Assessment: The Core Components
Effective project-based learning assessment has several components that work together. Unlike traditional grading, it’s not a single score but a system of specific, actionable information.
Clear, Descriptive Rubrics
A good rubric doesn’t reduce everything to a number. Instead, it identifies specific dimensions of quality and describes what excellent, proficient, and developing work looks like in each dimension. For a business project, dimensions might include: problem definition, research quality, solution feasibility, and communication clarity. For each, the rubric describes observable criteria at different levels.
The magic happens when the rubric is predictive and specific. Rather than saying “analysis is thorough,” you say: “Analysis examines at least three stakeholder perspectives and addresses potential counterarguments” or “Analysis considers one stakeholder perspective without addressing alternatives.” Someone using this rubric—whether it’s you evaluating your own work or others evaluating it—will consistently apply similar standards because the criteria are concrete.
In my experience teaching and in working with professionals, rubrics work best when created before the project begins. This serves a dual purpose: it clarifies expectations and gives learners a target to aim for, not a surprise grading scheme applied retroactively.
Evidence Portfolios
Rather than evaluating a finished project in isolation, effective project-based learning assessment collects evidence of thinking throughout the process. This might include initial research notes, draft versions, decision logs, or reflections on what worked and what didn’t.
A portfolio shows growth. You see where someone started confused and became clear. You see wrong turns and how they recovered. You see the actual work, not the polished final product. For professionals, this looks like maintaining a log of experiments you ran, decisions you made, and outcomes. For students, it’s the research notes behind the final paper.
Research on metacognition—thinking about your own thinking—shows that the act of documenting your process improves learning significantly (Schraw & Dennison, 1994). You learn more deeply when you’re forced to articulate why you made choices and what you’d do differently.
Peer and Self-Assessment
When only an external authority assesses work, learners develop a passive stance: they wait for feedback rather than taking responsibility for quality. Peer and self-assessment flip this dynamic.
Self-assessment using the same rubric you’ll be evaluated on creates immediate accountability. Before you submit, you rate yourself on each dimension. Often, you find gaps you hadn’t noticed. The accuracy of your self-assessment matters less than the act of evaluating yourself against a standard.
Peer assessment does something different: it exposes you to multiple ways of solving the same problem and multiple interpretations of quality. When I ask students to evaluate each other’s projects, they often recognize good work they wouldn’t have produced themselves. They learn what’s possible. Professionally, peer review of work—code reviews, design critiques, strategy sessions—serves the same function.
Moving Beyond Numbers: Qualitative Assessment in Project Work
One of the biggest shifts in effective project-based learning assessment is moving away from the assumption that everything can or should be quantified.
Some of the most important aspects of real-world work are fundamentally qualitative. Can someone ask good questions? Do they collaborate effectively? Can they communicate complex ideas clearly? Do they show intellectual humility—the ability to recognize what they don’t know? Can they pivot when new information contradicts their assumptions?
These aren’t things you rate on a 4-point scale. Instead, effective assessment describes them through structured observation and documented examples. Rather than saying “collaboration: 3/4,” you describe specific evidence: “In the group project, Emma asked clarifying questions when teammates made unsupported claims, and when her approach was questioned, she explained her reasoning and considered alternatives rather than becoming defensive.”
This kind of assessment requires spending time with the work—or in organizational contexts, with the person doing the work. It’s slower and less scalable than bubble tests, but it’s incomparably more useful for actual improvement.
For professionals learning independently, this translates to seeking specific, behavioral feedback from people you trust. Instead of “good work,” ask: “What specifically did I do well here?” and “Where did I miss something?” The specificity is what makes feedback actionable.
Practical Implementation: Project-Based Learning Assessment in Real Settings
How do you actually implement fair and accurate project-based learning assessment? The approach varies by context, but some principles apply everywhere.
For Individual Learning and Skill-Building
If you’re learning a new skill—coding, writing, design, investing—create your own assessment rubric. Identify 4–6 dimensions that matter for quality work in your field. For each, describe what you’re aiming for and what adequate, good, and excellent look like.
Then maintain a portfolio of your work. Keep drafts. Document your thinking. After completing projects, rate yourself against your rubric before any external evaluation. This combination—clarity of standards, evidence of process, honest self-assessment—creates a feedback loop that drives improvement.
When seeking external feedback, be specific: “I’m trying to improve my ability to identify assumptions in technical documentation. Here’s what I wrote. Where did I miss assumptions?” This is far more useful than generic praise or criticism.
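To make that concrete, here is a minimal sketch, in Python, of what a personal rubric and pre-submission self-assessment might look like. The dimensions, level descriptors, and ratings are hypothetical examples, not a prescribed standard.

```python
# A minimal sketch of a personal rubric plus self-assessment.
# Dimension names, level descriptors, and ratings are hypothetical examples.

RUBRIC = {
    "problem_definition": {
        1: "Problem stated vaguely, no success criteria",
        2: "Problem stated, success criteria implied but not explicit",
        3: "Problem and explicit success criteria stated, constraints acknowledged",
    },
    "research_quality": {
        1: "Single source, credibility not evaluated",
        2: "Several sources, little comparison between them",
        3: "Multiple sources compared, counterevidence addressed",
    },
    "communication_clarity": {
        1: "Key claims hard to locate",
        2: "Claims clear, but the structure forces rereading",
        3: "Claims, evidence, and implications easy to follow on first read",
    },
}

def self_assess(ratings):
    """Print the descriptor for your current level and the next level up,
    so the gap you need to close is explicit before you seek feedback."""
    for dimension, level in ratings.items():
        levels = RUBRIC[dimension]
        print(f"{dimension}: level {level} - {levels[level]}")
        if level + 1 in levels:
            print(f"  next target: {levels[level + 1]}")

self_assess({
    "problem_definition": 3,
    "research_quality": 2,
    "communication_clarity": 2,
})
```

The value is not in the numbers; it is in being forced to read the descriptors and locate your own work against them before anyone else does.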
For Teams and Organizations
When evaluating team projects, separate individual contributions from team outcomes. A project can succeed while an individual learns little if they coasted. Conversely, a project can fail while individuals demonstrate excellent problem-solving and collaboration.
One approach is to use both group grades (based on the final product and group assessment rubrics) and individual grades (based on peer evaluations, self-assessment, and individual contributions documented through portfolios). This captures both dimensions of reality.
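As a rough sketch of how those two layers might be combined, assuming a simple 50/50 weighting that any team would adjust for its own context:

```python
def blended_grade(group_score, individual_score, group_weight=0.5):
    """Blend a group-product score with an individual-contribution score.
    The 50/50 weighting is an illustrative assumption, not a recommendation."""
    return group_weight * group_score + (1 - group_weight) * individual_score

# Example: strong group outcome, weaker documented individual contribution.
print(blended_grade(group_score=92, individual_score=74))  # 83.0
```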
Build in structured reflection. After a project concludes, team members identify: What went well? What would we do differently? What did each person learn? What surprised us? This reflection isn’t busywork—it’s where assessment becomes learning. The process of analyzing what happened embeds the lessons more deeply than any external evaluation can.
For Educators and Trainers
If you’re teaching or training people in real-world work, project-based learning assessment means moving from end-of-course evaluation to continuous, embedded assessment. This looks like sharing descriptive rubrics before projects begin, collecting evidence of thinking throughout the process rather than only grading the final product, building in structured peer and self-assessment, and closing each project with reflection on what worked and what to change next time.
References
- Kokotsaki, D., Menzies, V., & Wiggins, A. (2016). Project-based learning: A review of the literature. Journal of Education and Training Studies. https://pmc.ncbi.nlm.nih.gov/articles/PMC12461055/
- Authors Unknown (2024). From Exams to Engagement: Evaluating Project-Based Learning in Biostatistics. PMC/NIH Central. https://pmc.ncbi.nlm.nih.gov/articles/PMC12461055/
- Authors Unknown (2024). Understanding Students’ Experiences with Project-Based Assessment across Educational Levels and Contexts. Journal of Language, Literacy and Learning Studies. https://journal-center.litpam.com/index.php/jolls/article/view/3247
- Chatmaneerungcharoen, S., Sahakit, P., Sookperm, P., & Boonsri, D. (2024). Development of an Integrated Project-Based Learning Model Focused on Building Values, Attitudes, Skills, and Knowledge (VASK) for Multi-Grade Classrooms. Canadian Center of Science and Education. https://files.eric.ed.gov/fulltext/EJ1484958.pdf
- Authors Unknown (2025). A study on the impact of project-based learning on students’ learning motivation. Frontiers in Psychology. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1722170/full
- Divjak, B., Svetec, B., & Pažur Aničić, K. (2025). PBL Meets AI: Innovating Assessment in Higher Education. SCITEPRESS. https://www.scitepress.org/Papers/2025/133317/133317.pdf
Universal Design for Learning: Building Inclusive Lessons from the Ground Up
When I first heard about Universal Design for Learning (UDL) in my teacher training program, I thought it was just another buzzword in education. But after implementing it across my classrooms for over a decade—teaching everything from high school physics to adult professional development—I realized it fundamentally changed how I think about teaching itself. UDL isn’t about retrofitting accommodations for students with disabilities after the fact. It’s about designing lessons so thoroughly and thoughtfully upfront that they work beautifully for everyone: the neurodivergent student, the visual learner, the gifted kid who’s bored, the English language learner, and yes, even the neurotypical student sitting in the middle.
The evidence is compelling. Research shows that when you apply Universal Design for Learning principles, you create classrooms and learning experiences that reduce barriers to instruction, increase student engagement, and improve outcomes across the board (Rose & Gravel, 2010). What’s remarkable is that the accommodations you create for students with the most significant learning differences often benefit everyone. The keyboard shortcut you add for someone with motor challenges? Everyone learns it and saves time. The transcript you provide for a video for a deaf student? English language learners use it too. The multiple ways to demonstrate knowledge that you build in? Anxious students, perfectionists, and kinesthetic learners all thrive.
If you’re a knowledge worker, a manager building team training programs, a parent homeschooling, or anyone responsible for helping others learn something new, understanding and implementing Universal Design for Learning isn’t just ethically sound—it’s pragmatically brilliant. You’ll create better content, reach more people, and paradoxically, make your teaching easier in the long run.
What Universal Design for Learning Actually Is (And Isn’t)
Let me start by clearing up what UDL is not, because misconceptions abound. UDL is not about lowering standards. It’s not about giving everyone the same thing. It’s not about adding accommodations as an afterthought. And it’s definitely not a one-size-fits-all approach—which would be ironic, given what it stands for.
Universal Design for Learning is a framework for designing educational experiences that are accessible and engaging for all learners from the start. It’s built on three core principles, each with specific guidelines:
- Multiple Means of Representation: Provide information in multiple formats so all students can perceive and understand it.
- Multiple Means of Action and Expression: Give students different ways to engage with material and demonstrate their learning.
- Multiple Means of Engagement: Offer choices that sustain motivation and foster a sense of autonomy and relevance.
The framework originated in architecture—the story goes that when curb cuts were designed to help wheelchair users access sidewalks, parents with strollers, delivery workers, and elderly people on walkers benefited too. Architect Ronald Mace coined the term “universal design” to capture that insight, and researchers at CAST later extended the principle to education: design for the full spectrum of human variation from the beginning, and you create something better for everyone. When I redesigned my physics curriculum using UDL principles, I wasn’t thinking primarily about the one student with ADHD accommodations at work (though it helped him tremendously). I was thinking about how to present Newton’s laws so that a visual learner, an auditory learner, a kinesthetic learner, and a reader could all access the same concept at their level of readiness. The result? My students’ test scores improved across all demographic groups (National Center for Universal Design for Learning, 2022).
The Three Pillars: How to Actually Implement Universal Design for Learning
Pillar One: Multiple Means of Representation
This is where most people start with Universal Design for Learning, and for good reason. Many learners struggle not because they can’t learn something but because the way it’s presented doesn’t match how their brain processes information.
When you’re building a lesson or training module, ask yourself: How many different ways am I presenting this core concept?
If you’re teaching someone to analyze financial statements, don’t just show a spreadsheet. Provide a video walkthrough where you narrate what you’re looking for. Create an infographic that shows the relationships between balance sheet, income statement, and cash flow. Build in a hands-on activity where they reclassify line items from a real company’s 10-K filing. Offer written step-by-step guides. Use metaphors: “The balance sheet is a snapshot; the income statement is a movie.” Provide the same information in multiple modalities—text, audio, visual, and experiential.
The science here is solid. Cognitive load theory tells us that we have limited working memory, but we have different channels for processing (Sweller, 1988). When you present information through multiple channels—combining visuals with narration, for example—you actually reduce cognitive load and improve retention. People with dyslexia might struggle with dense text but thrive with visual-spatial information. People with visual processing issues might need audio. Someone with ADHD might need kinesthetic engagement to maintain focus. And neurotypical learners? They benefit from everything—redundancy actually strengthens memory.
Practically, this means: Create a checklist for every learning objective. For each key concept, ask: Can it be presented verbally? Visually? Through text? Through hands-on activity? Through metaphor or analogy? If you’re checking only one or two boxes, you’re leaving learners behind.
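If it helps, here is a tiny coverage check along those lines, sketched in Python. The objectives, modality tags, and the three-modality threshold are placeholder assumptions you would adapt to your own material.

```python
# A minimal sketch of a representation-coverage check for lesson planning.
# Objectives, modality tags, and the threshold below are placeholder assumptions.

MODALITIES = {"verbal", "visual", "text", "hands_on", "metaphor"}
MINIMUM = 3  # aim to hit at least three modalities per key concept

lesson_plan = {
    "Newton's second law": {"text", "visual", "hands_on"},
    "Balance sheet basics": {"text"},
    "Photosynthesis energy flow": {"verbal", "visual", "metaphor", "hands_on"},
}

for objective, covered in lesson_plan.items():
    status = "ok" if len(covered) >= MINIMUM else "needs work"
    missing = ", ".join(sorted(MODALITIES - covered))
    print(f"{objective}: {len(covered)}/{len(MODALITIES)} modalities ({status})")
    if missing:
        print(f"  not yet represented: {missing}")
```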
Pillar Two: Multiple Means of Action and Expression
Here’s where I see the biggest transformation in my students: when you let them show what they know in different ways.
Traditionally, we’ve had a narrow definition of “proof of learning.” You take a multiple-choice test. You write an essay. You present a PowerPoint. But consider: someone with severe anxiety might freeze on a test. Someone with dysgraphia struggles to write fluently but can articulate ideas verbally. Someone with processing differences might need more time. Someone who’s visual might prefer to create an infographic or video to a written report.
When designing assessment or any way learners engage with material, build in options. For a project on sustainable urban design, a student could:
- Write a research paper
- Create a detailed presentation with slides
- Build a scale model or digital 3D rendering
- Produce a video documentary
- Lead a panel discussion with peers
- Design an interactive website
- Create an infographic or poster series
- Develop a podcast episode script
All of these demonstrate the same learning objectives, but they play to different strengths. The student with strong spatial reasoning but weak writing skills isn’t penalized. The introvert who’s a brilliant visual designer isn’t forced into a presentation format. You’re assessing understanding, not compliance with a single arbitrary format.
This also touches on executive function. Some learners need scaffolding and structured steps. Others are paralyzed by too much guidance and need open-ended exploration. Some need intermediate checkpoints; others do better with a single deadline. Universal Design for Learning means building flexibility into the process, not just the product.
Pillar Three: Multiple Means of Engagement
Engagement is the secret sauce. You can have perfect representation and flexible expression, but if learners aren’t motivated, nothing happens. This pillar is about why someone wants to engage with the material in the first place.
There are different levers here. Some learners are motivated by autonomy—they want choice in what they learn and how. Others need clear relevance: “Why does this matter to my real life?” Some respond to social connection: “We’re learning this together.” Others are motivated by mastery and challenge: they want to get better at something they care about. Some need novelty and variety; others do better with routine and predictability (Pink, 2009).
When you’re designing a learning experience, especially if you’re doing Universal Design for Learning properly, you don’t pick one engagement strategy and hope it works for everyone. You layer in multiple approaches:
- Provide choice: In what topic they explore, in what problem they solve, in how they structure their time
- Make the relevance explicit: Connect to their goals, their interests, current events, or real problems they encounter
- Create opportunity for collaboration: Pair work, group projects, peer review, discussion—but also allow for solo work
- Build in success: Start with achievable tasks, provide immediate feedback, celebrate progress
- Manage novelty and routine: Have enough consistency that learners know what to expect, but enough variation that it stays interesting
In my experience teaching adults in professional development settings, the sweet spot for engagement is when people understand that the content matters to a real goal they have, they’ve had input into how they’ll learn it, and they’re getting feedback on their progress. A financial analyst learning new Excel skills is way more engaged when they’re solving an actual analysis problem from their job, when they can choose between video tutorials or text documentation, and when they’re seeing their efficiency improve week to week.
The Practical Architecture: How to Design a Lesson Using Universal Design for Learning
Now let’s get concrete. You don’t need fancy software or extensive training to implement Universal Design for Learning. You just need a design mindset. Here’s a process I use with teachers I mentor:
Step One: Define the learning objective clearly. Not “understand photosynthesis” but “explain the process by which plants convert light energy into chemical energy, and predict how this process would change under different light wavelengths.” Be specific about what you want people to know or be able to do.
Step Two: Map the barriers. For each objective, ask: What are the ways people might struggle to learn this? Someone might struggle because: they can’t see a diagram, they can’t process abstract concepts without concrete examples, they have working memory limitations, they don’t understand the vocabulary, they can’t sit still long enough for the traditional lecture, they don’t see why it matters, they’re embarrassed to ask questions, they don’t have the foundational knowledge, they need to move and talk to think. Write these down. The more you anticipate barriers, the better your design.
Step Three: Design for each pillar simultaneously. Don’t design representation first, then add options later. Design them all at once. For each objective:
- How will I represent this concept in at least three different ways?
- How will learners express or demonstrate understanding in at least two different ways?
- How will I engage motivation through autonomy, relevance, and/or mastery?
Step Four: Test and iterate. Implement it. Watch how learners engage. Ask for feedback. What worked? What fell flat? Where do people get stuck? Use that information to refine. Universal Design for Learning isn’t a blueprint you nail perfectly on the first try—it’s a living design practice.
Why Universal Design for Learning Benefits Everyone (Seriously, Everyone)
There’s something counterintuitive about inclusive design: the accommodations you create for the students with the most obvious needs often improve learning for everyone.
Take captions on videos. Originally, captions were an accommodation for Deaf students. Now, everyone watches videos with captions at the gym, in coffee shops, in open offices. Why? Because when audio is unclear, captions help. When you’re in a noisy environment, captions are essential. When you’re learning about an unfamiliar accent, captions speed comprehension. For ESL learners, captions are transformative—they can see and hear the language simultaneously, which research shows improves both vocabulary and pronunciation (Winke et al., 2010). Video creators who add captions expand their reach dramatically.
The same principle applies across all three pillars. When you provide flexible deadlines and checkpoints (designed for someone with executive function challenges), your anxious students who spiral at the last minute perform better. When you offer verbal, written, and kinesthetic ways to learn a concept (designed for people with different processing strengths), your struggling readers actually pass, your visual learners ace it, and your kinesthetic learners stop being labeled “unmotivated.”
In my current work running professional development for corporate clients, we explicitly design using UDL principles. And here’s what we’ve discovered: not only do we better serve the people who had struggled in traditional training formats—often people with undiagnosed ADHD, dyslexia, or other differences—but we see improved engagement and retention across the board. Why? Partly because people feel respected when learning experiences accommodate how their brain works. Partly because the redundancy and multiple representations actually do improve memory. Partly because choice and autonomy boost motivation.
Common Obstacles and How to Overcome Them
Let me be honest about the challenges I’ve encountered implementing Universal Design for Learning. The first is time. Designing robust, multi-modal learning experiences takes more upfront work than designing a lecture and a standardized test. The good news: once you’ve done it once, you can reuse and iterate. The infographic explaining the water cycle you created? You can use that every year. The multiple choice and performance assessment options you’ve built? You refine them yearly, but the structure is there. The investment pays dividends.
The second is the assumption that Universal Design for Learning means “less rigor.” I push back on this hard. Universal Design for Learning doesn’t lower standards—it clarifies them. When you’re designing, you’re being crystal clear about what people need to know or do. You’re not watering down content; you’re removing barriers to accessing rigorous content. In fact, research shows that well-designed UDL instruction often leads to higher achievement because more learners can actually access the material (Rose & Gravel, 2010).
The third is fear of complexity. “If I offer seven different ways to do something, won’t it be chaos?” Not if you design thoughtfully. The options aren’t random. They’re deliberate paths to the same objective. Think of it like different routes to the same destination—they’re not equally optimal for everyone, which is exactly why you offer them.
Bringing It All Together: Your Next Steps
Universal Design for Learning is ultimately about respect. It’s a commitment to the idea that every person’s brain works, just sometimes in different ways than traditional structures accommodate. As someone who’s taught students ranging from profoundly gifted to significantly disabled, neurotypical to neurodivergent, I can tell you: when you design from the ground up for human variation, you create learning experiences that work for the breadth of humanity.
If you’re designing a training program, rebuilding your course, or even just planning your next lesson, start with this: identify one objective. Map the barriers. Design multiple means of representation. Build in flexible ways to demonstrate learning. Create engagement through autonomy and relevance. Test it. Ask for feedback. Iterate.
Universal Design for Learning isn’t a box you check. It’s a design practice. And like all practices, it gets easier and more effective the more you do it.
References
- Duncan, J. (2025). Uncovering Challenges in Universal Design for Learning in Higher Education. Australasian Journal of Special and Inclusive Education. Link
- Doyle, A. J. (2025). Universal Design for Learning (UDL) in simulation-based education. Advances in Simulation. Link
- Martinez, G. M. B. (2025). The Impact of Universal Design for Learning (UDL) on Inclusive Education: An Analysis of Participation and Academic Performance. Architecture Image Studies, 6(3), 1160-1167. Link
- CAST. (2025). The Benefits of Universal Design for Learning. CAST. Link
- Rappolt-Schlichtmann, G., et al. (2013). Assistive Technology, Electronic Text Accessibility, and the Universal Design for Learning Framework. CAST. Link
- King-Sears, M. E., et al. (2015). Universal Design for Learning and Elementary School Science. Journal of Special Education Technology. Link
Desirable Difficulties in Learning: Why Harder Study Methods Stick Better
There is a deeply uncomfortable truth sitting at the heart of learning science: the methods that feel most productive are often the least effective, and the methods that feel frustrating, slow, and effortful tend to produce the strongest, most durable memories. If you have ever highlighted an entire textbook chapter and felt genuinely accomplished, only to blank on the material two weeks later, you have experienced this mismatch firsthand.
The concept of desirable difficulties was introduced by psychologist Robert Bjork in the 1990s, and it has since accumulated one of the most robust empirical records in cognitive science. The core idea is deceptively simple: certain types of difficulties during learning — ones that slow you down, force errors, and demand more mental effort — actually strengthen the underlying memory traces. Not all struggle is useful, but the right kinds of struggle are not just tolerable. They are necessary.
For knowledge workers in their 20s, 30s, and 40s, this matters enormously. You are not sitting in a classroom with a single subject to master. You are juggling technical documentation, industry reports, new software systems, regulatory changes, and professional development courses, often simultaneously. Understanding which study strategies are genuinely building durable knowledge — versus which ones are just creating a comfortable illusion of competence — is one of the highest-leverage cognitive skills you can develop.
What Makes a Difficulty “Desirable”
Not every form of struggle improves learning. Trying to learn quantum mechanics with no foundation in basic physics is just confusion, not a desirable difficulty. The distinction matters. A difficulty is desirable when it challenges the learner in a way that can actually be resolved through effort, and when that resolution process strengthens encoding and retrieval pathways in long-term memory.
Bjork and Bjork (2011) describe desirable difficulties as conditions that “slow the rate of acquisition, reduce performance during training, or both, yet enhance long-term retention and transfer.” The key phrase there is during training. These methods hurt your performance while you are practicing, which is exactly why they feel unreliable. We conflate current performance with long-term learning, and they are not the same thing at all.
Think about re-reading, which is the single most common study strategy used by students and professionals alike. It is fast, it is easy, it produces a sensation of familiarity, and it does almost nothing for long-term retention. Familiarity is not memory. You can recognize something without being able to retrieve it under pressure, and in most professional contexts, retrieval under pressure is precisely what is required.
The Big Three: Testing, Spacing, and Interleaving
Retrieval Practice: The Testing Effect
If you take away only one principle from learning science, make it this one. Testing yourself on material — before you feel ready, before you are confident, while you are still struggling — is one of the most potent memory interventions known to researchers. Roediger and Karpicke (2006) conducted a landmark study in which participants studied prose passages either by re-reading them or by attempting to recall them from memory. One week later, the retrieval practice group outperformed the re-study group by approximately 50 percent on a final recall test. Fifty percent. From a simple strategy change.
The mechanism here involves something called retrieval-induced potentiation. Every time you successfully pull information out of memory, you strengthen the retrieval pathway. You are not just reviewing the information — you are actively rebuilding the mental route to it. Failed retrieval attempts also help, which is counterintuitive but well supported. Attempting to recall something you cannot quite remember, then checking the answer, produces stronger encoding than simply reading the answer passively (Kornell et al., 2009).
For practical application: close the document, close the slides, and write down everything you remember. Use flashcard systems like Anki that force active recall. After a meeting or a training session, spend five minutes writing a brain dump before you look at your notes. These habits feel inefficient. They are the opposite of inefficient.
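For a very small illustration of the retrieve-then-check pattern (not a replacement for a full system like Anki, which adds scheduling on top), here is a sketch with placeholder cards:

```python
# A minimal retrieve-then-check loop: attempt recall before seeing the answer.
# The cards are placeholder content drawn from this article.

cards = [
    ("Who introduced the concept of desirable difficulties?", "Robert Bjork"),
    ("Which practice beat re-reading by roughly 50% after a week in "
     "Roediger and Karpicke (2006)?", "Retrieval practice (self-testing)"),
]

for question, answer in cards:
    input(f"Q: {question}\n(say or write your answer, then press Enter) ")
    print(f"A: {answer}\n")
```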
Spaced Practice: Fighting the Forgetting Curve
Hermann Ebbinghaus mapped the forgetting curve in the 1880s, and what he found has been replicated so many times it is essentially bedrock: memory decays in a predictable, exponential fashion unless it is reinforced. Massed practice — what most people call cramming — compresses all your learning into a single session and produces sharp initial performance that dissolves quickly. Spaced practice distributes that same amount of study time across multiple sessions separated by intervals, and the retention advantage is dramatic.
Cepeda et al. (2006) conducted a large-scale meta-analysis of spacing research and found consistent, substantial benefits of distributed practice over massed practice across a wide range of materials and populations. The optimal gap between study sessions depends on when you need to remember the material, but a general principle holds: the gap should feel uncomfortably long. If you can still easily remember everything from your last session, you did not wait long enough.
Here is where this gets practically interesting for busy professionals. You do not need more total study time to implement spacing. You need to restructure when you study. Instead of one 90-minute session on a new framework, you could do three 30-minute sessions spread across a week and walk away with substantially better retention. The calendar adjustment is trivial. The cognitive payoff is not.
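To see that calendar restructuring spelled out, here is a minimal sketch of an expanding review schedule. The specific intervals are illustrative assumptions, not values prescribed by the spacing literature, which ties the optimal gap to how far away the test or deadline is.

```python
from datetime import date, timedelta

# A minimal sketch of an expanding review schedule for one topic.
# The intervals (in days after first study) are illustrative assumptions.
REVIEW_INTERVALS = [1, 3, 7, 14, 30]

def review_dates(first_study):
    """Return the dates on which to revisit material first studied on first_study."""
    return [first_study + timedelta(days=gap) for gap in REVIEW_INTERVALS]

for review_day in review_dates(date(2026, 6, 1)):
    print(review_day.isoformat())
```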
Interleaving: Mixing It Up Against Every Instinct
Interleaving is probably the most counterintuitive of the three core desirable difficulties. Conventional study wisdom says to master one topic completely before moving to the next. Practice all the problems of type A, then all the problems of type B, then all the problems of type C. This is called blocked practice, and it feels logical, organized, and productive.
Interleaved practice mixes problem types together — A, C, B, A, B, C — in an apparently random or varied sequence. During practice, interleaving performs worse than blocking. Students make more errors, feel more confused, and generally dislike it. Yet on delayed tests measuring actual learning, interleaving consistently outperforms blocking by meaningful margins (Taylor and Rohrer, 2010). The reason appears to be that interleaving forces learners to actively identify which type of problem they are facing before choosing a solution strategy, which is precisely the skill needed in real-world application where problems do not arrive neatly sorted by category.
If you are learning a new programming language, do not drill all the loops, then all the conditionals, then all the functions in separate blocks. Mix them. If you are studying for a professional certification, randomize practice questions across domains rather than working through one domain completely before starting the next. It will feel messier. The learning will be deeper.
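Here is a small sketch of the difference between blocked and interleaved ordering, using the programming-practice example above; the problem labels are placeholders.

```python
from itertools import chain, zip_longest

# Contrast blocked vs. interleaved orderings of practice items (placeholder labels).
problem_sets = {
    "loops": ["loop_1", "loop_2", "loop_3"],
    "conditionals": ["cond_1", "cond_2", "cond_3"],
    "functions": ["func_1", "func_2", "func_3"],
}

# Blocked: finish every problem of one type before starting the next type.
blocked = list(chain.from_iterable(problem_sets.values()))

# Interleaved: rotate across types, so each item forces you to identify
# what kind of problem it is before choosing a solution strategy.
interleaved = [item for batch in zip_longest(*problem_sets.values())
               for item in batch if item is not None]

print("blocked:    ", blocked)
print("interleaved:", interleaved)
```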
Why We Resist These Methods (And Why That Resistance Is Itself a Signal)
Here is something worth sitting with: the reason most people default to re-reading, blocked practice, and massed studying is not laziness or ignorance. It is a reasonable response to false feedback. When you re-read a chapter, you recognize every sentence. That recognition feels like understanding. When you study in concentrated blocks, performance improves steadily within the session. That improvement feels like progress.
Desirable difficulty methods provide the opposite experience. You test yourself and fail to remember things you thought you knew. You space out your sessions and walk into the second one feeling like you have forgotten everything from the first. You interleave topics and feel lost without the structural scaffold of working through one thing at a time. Every signal your brain sends during these methods says: this is not working. But that signal is wrong, and the long-term data is unambiguous.
As someone with ADHD, I find this especially relevant. The methods that feel productive for my brain — re-reading with a highlighter while music plays, watching the same video lecture twice in a row — are precisely the ones that produce the least learning. My subjective sense of whether I have learned something is not a reliable guide. This is probably true for you as well, ADHD or not. Metacognitive accuracy about learning is surprisingly poor in almost everyone, which is why we need external frameworks rather than just trusting our intuitions about what is working.
Applying Desirable Difficulties in a Real Work Context
After Conferences and Training Sessions
Most professionals sit in a training session, take some notes, file those notes away, and never engage with the material again until they vaguely need to remember it months later. Instead, try this: immediately after the session, close your notes and write from memory everything you can recall. Note what you cannot recall as clearly. Then, two days later, open your notes and test yourself again on the sections that were fuzzy. One week after that, try to reconstruct the key frameworks from scratch without looking at anything. Three exposures, spaced out, with active retrieval each time. The time investment is modest. The retention difference is not.
Reading Technical Material
When you need to actually learn something from a report, paper, or technical document — not just skim it for a meeting, but genuinely internalize it — stop highlighting. Read a section, close the document, and write a short summary in your own words. Not the author’s words. Yours. This forces processing at a deeper level than passive reading. Then, crucially, return to the document and notice where your summary was incomplete or wrong. That comparison is high-value learning, not just a check on comprehension.
Building Skills in New Software or Tools
When your organization rolls out a new tool, most people follow the linear tutorial path, complete it once, and consider themselves trained. A more effective approach: go through the tutorial once for orientation, then close it and try to accomplish real tasks from memory. You will struggle. Look things up as needed, but try to retrieve first. Come back to the core workflows two days later and rebuild them from scratch. The frustration is the point. The frustration means the retrieval system is working.
The Role of Generation and Elaboration
Two additional desirable difficulties deserve mention. The generation effect refers to the finding that information you generate yourself is better remembered than information you passively receive. If you try to predict what a document will cover before reading it, the act of generating those predictions — even incorrect ones — primes the memory system and improves encoding of what actually follows. Similarly, generating an answer to a question before being told the correct answer improves subsequent retention, even when your initial answer is wrong.
Elaborative interrogation is related: asking yourself why something is true, rather than just accepting that it is, forces deeper processing and connects new information to existing knowledge structures. When you read that a certain business strategy failed, do not just accept the conclusion. Ask yourself why it failed, what conditions would have made it succeed, and what other situations are structurally similar. These questions cost cognitive effort. They produce the kind of rich, interconnected memory that transfers to novel situations.
This is the ultimate goal, really. Not just remembering information for a test or a presentation, but building knowledge structures flexible enough to apply in contexts you have never seen before. Desirable difficulties do not just improve retention scores on standardized tests. They improve the quality of thinking that is available to you when the problems are genuinely hard and the stakes are real.
The Meta-Skill: Learning How to Learn
There is a compounding effect that happens when you genuinely internalize the desirable difficulties framework. You stop evaluating study methods by how they feel and start evaluating them by what the evidence says about long-term outcomes. You become comfortable with the discomfort of not knowing, because you understand that struggling to retrieve something is doing useful cognitive work. You develop patience for the messy, non-linear feeling of interleaved practice, because you know the eventual payoff justifies the present confusion.
This shift in orientation — from comfort-seeking to evidence-based learning — is one of the most valuable cognitive habits a knowledge worker can develop. The information landscape is not getting simpler. The rate at which professionals need to acquire, integrate, and apply new knowledge is not slowing down. Given that reality, the people who understand how memory actually works, and who design their learning accordingly, are building a genuine and durable advantage.
The science on this is not new. Bjork has been publishing on desirable difficulties for over three decades. The testing effect was documented more than a century ago. What is surprising is how slowly this knowledge has diffused into actual practice. Most workplaces still organize training as passive information delivery. Most professionals still reach for the highlighter first. You do not have to. The harder path through the material is the one that sticks, and now you know why.
References
- Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In Psychology and the Real World. Worth Publishers. Link
- Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. Metacognition: Knowing about Knowing. MIT Press. Link
- Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249-255. Link
- Kang, S. H. K. (2016). Spaced repetition promotes efficient and effective learning: Policy implications for instruction. Policy Insights from the Behavioral and Brain Sciences, 3(1), 12-19. Link
- Rohrer, D., & Taylor, K. (2007). The shuffling of mathematics problems improves learning. Instructional Science, 35(6), 481-498. Link
- Eich, T. S., et al. (2026). Why Desirable Difficulties ‘Work’: A Review of the Evidence From Cognitive Psychology and Health Professions Education. Medical Education. Link