Applying mastery-based learning in tech-ed

#education
#altEducation
#techEducation
#masteryBasedLearning

In a first-principles approach to education system design, mastery-based learning would be among the first principles on the list.

The primary mechanism of Mastery-Based Learning (MBL) is this: Learners need to demonstrate a certain skill level in a specific learning task before being allowed to move to the next thing. If a learner is struggling with a task, they get additional support. If someone finishes a task early, they are given extra "enrichment" activities to keep them engaged and busy.

[Flowchart: the mastery-based learning loop]
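
Here is a minimal sketch of that loop in code, in case it helps to see it spelled out. Everything in it - the threshold, the task objects, the assess/support/enrich callbacks - is a made-up illustration, not a prescription:

```python
# Minimal, self-contained sketch of the MBL loop. All names and the 0.8
# threshold are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Learner:
    name: str
    completed: list = field(default_factory=list)

MASTERY_THRESHOLD = 0.8  # "demonstrate a certain skill level" before moving on

def advance(learner, tasks, assess, support, enrich):
    """Move a learner through tasks, gating each step on demonstrated mastery."""
    for task in tasks:
        while assess(learner, task) < MASTERY_THRESHOLD:
            support(learner, task)          # struggling: targeted help, then reassess
        learner.completed.append(task)      # mastered: unlock the next task
    enrich(learner)                         # finished early: enrichment, not idle time
```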

MBL is very different from how classrooms typically work in traditional education systems, where learners are given the same lessons at the same pace.

[Flowchart: a traditional cohort-paced classroom]

In this article, I'll discuss the benefits and mechanisms of MBL and how MBL can be applied in code schools. Of course, lessons here are also relevant outside of code schools, but code schools are my thang.

Normal distributions

In a cohort-paced learning environment, education moves at a set pace. Now, learners' learning paces tend to follow a normal distribution:

  • Some learners are slower than the average pace - many would assume this is because they have lower aptitude, but that's a bit reductionistic. Maybe someone is moving slowly because they have problems at home, health issues, or any number of other things.
  • Other learners are capable of moving faster than the average pace. These learners might pursue something constructive in their free time, chill, or get into trouble. Personally, I can say I was bored to tears at school and probably could have benefitted from being pushed to do a bit more.
  • Then there are the average learners moving at the average pace. If the learning tasks are paced out so that the average learner keeps pace, then these learners will do just fine. But deciding how quickly people should work, and tuning the learning tasks and pace to match any learner or group of learners, is non-trivial. The best way to ensure the success of the maximum number of learners is to be conservative about what a course can cover in a period of time. In other words, move slowly. Move slightly slower than the average learner.

This is clearly sub-optimal.
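
To put rough numbers on why (all figures below are invented): if learner paces follow a normal distribution and the course pace is set slightly below the average learner's pace, a large tail of the class still can't keep up, while another chunk is left under-stretched.

```python
# Invented numbers: learner paces ~ Normal(mean=1.0, sd=0.2),
# course pace fixed at 0.9 (slightly slower than the average learner).
import random

random.seed(0)
paces = [random.gauss(1.0, 0.2) for _ in range(10_000)]
course_pace = 0.9

falling_behind = sum(p < course_pace for p in paces) / len(paces)
under_stretched = sum(p > 1.3 * course_pace for p in paces) / len(paces)

print(f"still falling behind the 'slow' pace: {falling_behind:.0%}")   # roughly a third
print(f"moving well ahead and under-stretched: {under_stretched:.0%}") # roughly a fifth
```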

On the other hand, MBL deals with these groups of learners in a totally different way: the learners who are slower than the rest get the time and the TLC they need to master the material before moving forward, and the speedy folks get to crush it.

If you take away a major reason for failure then fewer people fail.

Benefits

Studies show that mastery-based learning has a lot of benefits for learners. And if you think about the mechanisms of MBL, then those benefits make intuitive sense.

Why do learners fail to complete courses?

  • they don't have enough time to solidify their skills
  • they need more TLC than the rest of the group
  • they move forward to more advanced material despite not having mastered prerequisite fundamental skills. Advanced knowledge builds upon the basics

MBL solves all of this. In a cohort-paced learning environment, a learner's likelihood of success strongly correlates with the learner's natural pace of learning. MBL breaks this dependency.

Here are a few reported benefits of mastery-based learning over more traditional cohort-paced classroom setups, as well as some explanations for them:

  • Learners are often more satisfied with the instruction they receive. This makes sense because learners would only receive instruction based on what they need and when they need it. They wouldn't be pushed to try to grasp content they aren't ready for, and they wouldn't be forced to accept instruction on things they don't need help with. If they needed extra assistance to move forward then that is what they would get.
  • Learners display aspects of a growth mindset and an improved academic self-concept. Learners who are required to master a thing before moving forward learn that they can master things. If a learner is forced to move forward when they are not yet ready, then they get set up to struggle and possibly fail completely at advanced concepts. Of course this is likely to affect their self-concept in a negative way, as those needing support are made helpless by the system. Learned helplessness is a whole thing.
  • There is a decreased amount of variability between learner outcomes. This also makes sense if you consider that the slower learners, who would usually be forced to keep pace with a larger group, are actually given a chance to make real, solid progress. With enough time, all learners would achieve mastery of all concepts, assuming that the instruction is of a high enough quality and the learners have enough basic aptitude.
  • Learners are more likely to stay on task. This makes sense because there are tangible benefits to staying on task: once you finish a task, you can move forward. And slower learners are less likely to give up and do something else, because they are given a real chance as well.
  • Advanced concepts become easier to master. An example from coding: a person should master variables before they can master function arguments, and they should master function arguments and return values before they can master things like unit testing, recursion, or the use of existing libraries and frameworks. Advanced knowledge and skill is built upon mastery of fundamental concepts.

In a perfect education system, each learner would be able to move at their optimal pace, and they would get the help they need as they need it. In a perfect world learner achievement would not follow a normal distribution based on how quickly they can work.

Here is what happens if you force all learners to move at the same pace:

[Graph: learner achievement when everyone moves at a uniform pace]

And this is what happens if you get each learner to move at their optimal pace:

[Graph: learner achievement when each learner moves at their optimal pace]

Graphs from: Mastery learning. (2024, January 20). In Wikipedia. https://en.wikipedia.org/wiki/Mastery_learning

Implementation in a code school

If you are teaching a single person to code, or to do anything really, it is operationally straightforward to implement MBL. But things get challenging when there is a group of learners.

Firstly, you need a syllabus that lends itself to being consumed in a self-paced way. This means the syllabus needs to be in a format that lets learners work through things as they need to.

The next thing you need is assessments. Lots and lots of assessments. One of the big challenges with MBL is that it relies so heavily on people proving their skill at different points.

After that you need learners who are motivated to move at a reasonable pace, mechanisms for seeing who needs help, mechanisms for tracking competence across a number of different skills over time...

The rabbit hole goes pretty deep.

Below is a description of some of the challenges of implementing MBL, along with some solutions to those problems.

Learners can feel lost and disconnected

Learners are often used to classroom, group-paced courses where someone tells them what they will do and what is expected daily. MBL puts more control in the hands of the learner. This can make some people uncomfortable at first. Learners can feel alone in their work if peers are not working on the same thing.

Certain aspects of this can be addressed in different ways:

  1. Setting expectations about how hard a person should be working. E.g. "you should be working on this course for x hours per day without distraction"
  2. Having scrum-like daily standups where learners can share their plans and connect with a larger group
  3. Getting more advanced learners to act as tutors and help with giving other learners feedback on tasks. Peer-to-peer teaching actually solves a lot of problems when done right, but it also has its challenges
  4. Learners should see how different tasks are related. Having some kind of learning roadmap helps learners feel less lost

A lot can be solved if you borrow from scrum practices.

Learners might move too slowly

If there is a limited amount of time to finish a course, say 9 months, then learners will still have a minimum pace at which they would need to work.

If you know what tasks make up a course and how long those tasks take the average learner, then you can predict how long any learner will take to finish the whole course. Something like a burndown chart can be useful here.
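
As a rough sketch of that prediction (the task estimates and velocity numbers below are invented), the arithmetic is just "remaining average-learner days divided by this learner's relative velocity":

```python
# Hypothetical course data: average days per task and a learner's relative velocity.
AVG_DAYS_PER_TASK = {"html-basics": 3, "css-layout": 4, "js-fundamentals": 7, "first-api": 10}

def projected_days_remaining(remaining_tasks, velocity):
    """velocity is the learner's pace relative to average (1.0 = average, 0.5 = half speed)."""
    avg_days = sum(AVG_DAYS_PER_TASK[t] for t in remaining_tasks)
    return avg_days / velocity

remaining = ["js-fundamentals", "first-api"]
days_left_in_course = 30
if projected_days_remaining(remaining, velocity=0.5) > days_left_in_course:
    print("flag: at the current pace, this learner will not finish in time")
```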

Again, mechanisms associated with Agile are useful.

How to choose and tune the "enrichment" activities

In traditional MBL, learners are made to sync up at regular intervals: The slower learners are given time to catch up, and the more advanced learners are given new material and tasks to stretch themselves.

Having regular syncs does simplify some of the operational aspects of the teaching process, but it does mean that the group needs to wait for the slower learners, and it is still possible that the group would need to move forward before the slowest learners are able to. Imagine if an already slow learner takes sick leave - suddenly they are suffering from catch-up-itis.

One thing that can be done to solve this is to give the faster learners very well-chosen enrichment activities so that the slower learners can take all the time they need. But if these enrichment activities take the faster learners away from the next critical tasks in the syllabus then that is also not great.

The ideal solution is to not require learners to sync up at all - let the faster learners move forward with their syllabus, and possibly stack a whole lot of non-critical enrichment tasks at the end of the syllabus. If those enrichment activities touch on and expand concepts covered earlier in the course all the better - this would get learners to recall and apply skills they learned before and solidify earlier knowledge.

Faster learners can also be enriched by taking on roles in peer-to-peer learning mechanisms such as reviewing projects and tutoring others. In this way the faster learners get to revise work they have done before and harden their knowledge. They also get to practice different "soft" skills critical for teamwork.

Accountability

Online, self-paced courses typically have very low completion rates. There are many reasons for this. One reason is that there is no accountability and little support.

It is still necessary to have some kind of accountability mechanism built into the system. But accountability doesn't need to be based on group timelines. Again, scrum-like standups work quite well - if learners talk about their plans, progress and problems in a group with a trained facilitator, then that has the effect of:

  • surfacing issues
  • creating accountability
  • building connection between learners

And if an issue surfaces, an advanced learner can take part in helping to solve the problem.

Repeated interventions

If the pace of the different learners in a group is very different, then they will be in different places in their course. That means if multiple learners struggle with a specific piece of coursework, they will struggle at different times. This would mean they would need to be helped with the same thing at different times.

Instead of doing one lecture covering a topic in bulk for a bunch of learners, it might be necessary to do multiple individual interventions on that one topic over the course of weeks or months. This is expensive.

This is a big problem, but it can also be an opportunity - if an educator's role is to help a learner master a concept, then that educator will need to pay very close attention to the learner and their struggles. The educator will get feedback if their teaching techniques aren't working.

This means that educators will develop a better mental model of how learners progress to mastery. In the short term, this would mean that the educators level up their teaching game. It also means that repeated patterns of misunderstandings can be noted - syllabus content and teacher training can be upgraded.

Teaching to the test

MBL is very assessment-heavy. A potential pitfall of this is that some teachers might "teach to the test" to get learners to move forward. If a learner is consistently failing to move forward under the tutelage of a specific educator, then that educator might start to optimize for the wrong thing.

And if a learner is struggling to master a concept and doesn't even know what mastery means, then they might also optimize for passing over mastery. For example, in tech, a learner might end up cargo-culting in their projects - writing code in a certain way simply because someone told them to (more on this later).

Here are a few things that can be done to alleviate this problem:

  • train the teachers well
  • monitor the teachers - if you can build up a peer review system among teachers, then problems can be noticed and addressed. Of course this requires having a few teachers floating around, so you need a certain minimum scale to get it right
  • monitor project and assessment submissions from learners - if they are starting to look eerily similar in weird ways, for example, if all the learners are applying an unusual algorithm in the same place or organizing their code in a specific non-obvious way, then something has gone wrong with the peer-to-peer learning mechanism and needs to be addressed. This is critical. Cargo-culting kills mastery.
  • design projects and assessments well - inject nuance and problem-solving into assessments so that there is not just one right answer
  • have multiple mechanisms to assess the same thing. If there is a library of assessments that assess the same skill in different ways, and the learners are given assessments at random, it makes teaching to the test much more challenging (see the sketch after this list).
  • spaced repetition: trying to build the perfect assessment mechanism is quite hard - there are diminishing returns. At some point you need to say the assessment is good enough. By revisiting concepts in different ways during a course, learners get multiple chances to prove their mastery while not being held back. This does take a bit of craftsmanship.
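
The random-assignment idea from the list above is mechanically simple. A sketch, with an entirely made-up assessment pool:

```python
# Hypothetical pool: several assessments per skill, each testing it a different way,
# handed out at random so there is no single "test" to teach to.
import random

ASSESSMENT_POOL = {
    "js-functions": ["refactor-to-functions", "build-a-calculator", "debug-argument-passing"],
    "sql-joins":    ["reporting-query-project", "fix-the-broken-join", "schema-design-quiz"],
}

def pick_assessment(skill, already_attempted):
    """Choose a fresh variant for this skill, avoiding repeats where possible."""
    fresh = [a for a in ASSESSMENT_POOL[skill] if a not in already_attempted]
    return random.choice(fresh or ASSESSMENT_POOL[skill])

print(pick_assessment("js-functions", already_attempted={"build-a-calculator"}))
```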

Detecting struggle

If a learner is falling behind their peers, then there can be many different causes:

  • maybe they had some sick leave
  • maybe they struggled with understanding some earlier concepts and they are now happily speeding up
  • maybe they are struggling with the work at hand
  • maybe they are taking part in a high-stakes tango competition

In MBL courses, learners who are genuinely struggling can get stuck - their progress can get blocked. It therefore becomes necessary to find ways to detect who is struggling so that they can get the help they need when they need it. Making a spreadsheet and tracking a bunch of percentage scores will no longer cut it.

It becomes helpful to track:

  • task duration per learner: This can be compared to the average pace
  • number of failed assessments per task per learner: For example, if a learner keeps trying to submit a project and is never quite over the line, that needs to be addressed. Maybe for certain tasks it is normal to prove mastery on the first attempt, while others tend to take more attempts and need more support
  • progress towards a goal: For example, in a coding course, you might want to track whether learners are making commits on their code; if they are not, then they might be stuck

With all this information, you would be able to see who is struggling, on what, and for how long. This makes it possible to give learners just-in-time help. It also makes it possible to triage that help.
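
As a minimal sketch of what such a check could look like (the thresholds, field names and averages are all invented; real data would come from your assessment platform and git hosting):

```python
# Hypothetical struggle-detection pass over a learner's progress record.
from datetime import date

AVG_TASK_DAYS = {"js-fundamentals": 7}   # invented course-wide average durations

def struggle_flags(learner, today):
    flags = []
    task = learner["current_task"]
    days_on_task = (today - learner["task_started"]).days
    avg = AVG_TASK_DAYS.get(task)
    if avg and days_on_task > 1.5 * avg:
        flags.append(f"{days_on_task} days on '{task}' vs an average of ~{avg}")
    if learner["failed_attempts"] >= 3:
        flags.append(f"{learner['failed_attempts']} failed attempts on '{task}'")
    if (today - learner["last_commit"]).days > 2:
        flags.append("no commits in the last 2 days - possibly stuck")
    return flags

learner = {"current_task": "js-fundamentals", "task_started": date(2024, 3, 1),
           "failed_attempts": 3, "last_commit": date(2024, 3, 10)}
print(struggle_flags(learner, today=date(2024, 3, 14)))
```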

It also becomes possible to get good metrics about the course material itself. This is very useful. For example, if you knew that people typically struggle on a specific learning task then the content for that task could be upgraded in many different ways.

Teachers need more skill and focus

Mastery-based learning requires teachers who can impart mastery to struggling learners. This is a lot harder than running a classroom and accepting a minimum passing grade. It takes more domain expertise, interpersonal finesse, and teaching skills than standard cohort-paced teaching mechanisms.

On the other hand, MBL also helps to create more skilled teachers and better courses because:

  • teachers become very aware of all the ways a learner can misunderstand a task; this comes from repeated bouts of focused, personalized teaching. The teachers get focused practice
  • teachers are able to learn how specific learning tasks cause learners to struggle, and can tune and supplement those tasks so that learners are more able to learn what they need without help. This can be a bit of a balancing act - you never want to spoonfeed a learner or set them up for pointless struggle (not all struggle is pointless)

Low-aptitude resource sinks

If you accepted a literal brick into a mastery-based course, then all the interventions in the world would not help it to achieve mastery. Ag shame.

If you accept a particularly low-aptitude learner, then, of course, they will end up needing more TLC than a high-aptitude learner. This can lead to a lot of resources being poured into people who are unlikely to succeed.

There are two main mechanisms to address this:

  1. be strict on who gets accepted to the course
  2. put a cap on how much staff-time each learner can get in a given period. For example, a maximum of three staff-led tutoring sessions per week

Besides that, it is always worth working towards:

  1. Better teachers: teaching is a skill that can be learned. Investing in this skill makes teachers more able to efficiently help people who can't teach themselves
  2. Better content: sometimes learners get stuck because things were stated in a confusing way, or they were asked to do too much too soon. Content improvements can permanently squash certain problems

Less attention is given to the high-aptitude learners

The struggling learners get all the time and attention of the educators. The learners who don't need help might get ignored for the most part - they are doing fine so they don't need anything.

On the one hand, this is good - faster learners don't get their time wasted by being forced to attend classes they don't need, and the people who need help get it. On the other hand, higher-aptitude learners can feel disconnected. They might feel like they are not getting the attention they signed up for.

This is a tricky problem to solve. Classic MBL says that the faster learners can be given "enrichment" activities to keep them feeling engaged and to reward their progress. I argued earlier that it often makes sense to let faster learners move forward with the course and then stack educational enrichment tasks at the end.

Here are a few things that work quite well. Faster learners can:

  • assist with daily standups and get a bit more hands-on experience with agile practices
  • act as a tutor to level up their communication skills while refreshing and hardening their knowledge
  • assist with preliminary reviews of other learners' projects; this will expose them to different ways that people solve similar problems and, again, act as a refresher of earlier material
  • simply finish the course quickly and move on to other opportunities - that's not exactly a bad thing

Diminishing returns

MBL puts a lot of emphasis on assessment. If a learner does not prove "mastery" then they don't get to move forward. But "mastery" is not a fixed target.

If an educator tries to make the perfect assessment, then it could have a lot of moving parts - tests, projects, interviews, group and individual work... Getting it perfect is expensive for educators and learners alike.

It is actually useful to be intentional about accepting a limit on how effective an assessment is. You need to decide how good is "good enough".

That might not sound a lot like mastery, but it is alright because:

  • Knowledge builds on knowledge - concepts covered earlier in a course should show up later on in different ways, because advanced concepts contain the basic ones. So many things get assessed again by default. If it looks like a learner has probably mastered a skill and then they can't apply that skill later on, they get more support from teachers
  • Interleaving is known to improve retention, so it is useful to intentionally revisit earlier concepts in different ways regardless. Assessing a person's skill once some time has passed is good for them.

Even with this, it is a challenge to decide exactly how well a learner should score before moving forward. Setting the threshold too low would result in struggles later on in the course - but at least those problems would become visible, and it would then be possible to spend time with the struggling learners and tune the earlier assessments.

On the other hand, if the assessment is too strict, then it can lead to other problems. Inexpert teachers can end up "teaching to the test" instead of teaching for mastery, and learners can also become rigid in their ideas of what is good enough.

Cargo-culting

Cargo-culting is a common problem in software development. TL;DR: it means that some people write code in a specific way because they are mimicking something they have seen before, rather than because they thought through what should be written. It's almost ritualistic and is a symptom of a lack of understanding.

In an assessment-heavy and project-based software development course it is possible that many learners will misunderstand feedback on their work. They might get feedback on a project and come to the conclusion that "you should write your code like this because the teacher said so" instead of actually mastering the reasons behind the feedback.

This can become a hard problem because it can be difficult to detect in an individual. And it is made worse if learners take part in any peer-to-peer project reviews and training. A learner might tell another learner to do their project in a specific way so that they can pass instead of conveying why something is considered good or bad practice.

There are a few ways to address this:

Don't expect perfection from learners when they hand in projects. Have a certain "good enough" measure that doesn't leave the learners feeling helpless. If a learner hands in less-than-perfect code, gets push-back, and is told to fix five different things, and the reasons behind those things are not explained well, then the learner might slip into a mode of just doing what they are told to do.

A tactic that works for software courses is to build learner projects in multiple parts. In the first part of a project the learner can write code that "just works". And then, in the next iteration, the learner can be asked to make the code "professional." Early in a course, separating the assessment of "functionality" and "style" helps the learner categorize and apply any feedback given.

Of course, in advanced projects the "style" of the code matters more and more - if a learner writes code that is tightly coupled and lacking in cohesion then they will struggle.

Another thing that can be done: teachers can be coached not to say things like "do it like this" but rather "read this article" or "spend some time finding out about best practice for ...". This can take a bit more time and care.

Lastly, it's possible to detect cargo-culting in a large enough group. If you review a few learner projects and those projects all do something weird and weirdly similar then it could mean that some "do it this way" lesson has been learned and shared among a group.

Marking bottlenecks

If a learner needs to demonstrate mastery by doing a project, then that project needs to be marked before the learner can move forward. This puts a lot of time pressure on educators. Learners get frustrated if they are made to wait for feedback.

There are a few solutions to this as well:

  • courses need not be strictly linear. Learning tasks can be more like a directed graph than a straight line. This means that if a learner is blocked on one task, there should be other tasks that they are not blocked on. For example, if a learner has just submitted a project demonstrating mastery of fundamental express.js usage and is waiting for feedback, they can start working on something from another branch of the skill tree, such as SQL. Interleaving different areas of knowledge is useful for the learner (see the sketch after this list)
  • automate marking as much as possible: in software development courses, it is often possible to speed up marking by running automated tests and static code analysis. It is still often useful to have a human in the loop, especially for more advanced projects
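
Here is a sketch of the "directed graph" idea from the list above. The prerequisite graph is invented; the point is that a learner whose submission is awaiting review can always pick up another task whose prerequisites they have already mastered:

```python
# Hypothetical prerequisite graph: task -> prerequisites that must be mastered first.
PREREQS = {
    "js-fundamentals": [],
    "express-basics":  ["js-fundamentals"],
    "express-project": ["express-basics"],
    "sql-basics":      [],
    "sql-joins":       ["sql-basics"],
}

def available_tasks(mastered, awaiting_review):
    """Tasks the learner can start now: prerequisites mastered, not already done or pending."""
    blocked = set(mastered) | set(awaiting_review)
    return [task for task, reqs in PREREQS.items()
            if task not in blocked and all(r in mastered for r in reqs)]

mastered = {"js-fundamentals", "express-basics", "sql-basics"}
print(available_tasks(mastered, awaiting_review={"express-project"}))
# -> ['sql-joins']: interleave some SQL while the express project is being marked
```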

Dear educators

When first hearing about MBL, it's easy to jump to the conclusion that everyone should be doing it. It's so very good for the learners. But it is quite tricky to pull off, especially at scale.

A lot of MBL's early successes came from situations where it was applied in a blended way - for example, by allowing learners only a limited number of reassessments on each skill, or by getting a larger cohort to sync up periodically and adding extra optional material to keep the faster learners busy.

For educators who are working in a cohort-paced environment, I would suggest trying to incorporate elements of MBL into your training. Going full MBL in one swell foop is hard, but experimenting and moving in that direction gradually is quite worthwhile.


Did you find this useful? Do you have something to say?

Feedback makes my heart go boom, and I would love to hear from you if you want to talk about this!

Hit me up on the socials :)