Episode #387: The Math Screener Struggle Is Real | Why & How To Use Math Screeners With Impact
Math Screeners… Diagnostics… Assessments… It’s easy to get lost in the language of math data. In this episode, we unpack the similarities and differences between math screeners and diagnostics, share where we stand on the topic, and offer insights from real classroom experiences.
We’ll talk about the benefits we’ve seen, the challenges that can trip up implementation, and how to use these tools to support—not overwhelm—math instruction. Most importantly, we want to hear from you! What’s been your experience with math screeners and diagnostics? The good, the bad, and the ugly—we’re here for all of it.
In this episode, you’ll discover:
- The key differences between math screeners and diagnostics
- When and how to use each tool to support student learning
- Common challenges educators face with implementation
- Insights from real classrooms on what works (and what doesn’t)
- An invitation to share your own experiences to help grow our community knowledge
Attention District Math Leaders:
Not sure what matters most when designing math improvement plans? Take this assessment and get a free customized report: https://makemathmoments.com/grow/
Ready to design your math improvement plan with guidance, support, and structure? Learn how to follow our 4-stage process. https://growyourmathprogram.com
Looking to supplement your curriculum with problem based lessons and units? Make Math Moments Problem Based Lessons & Units
Be Our Next Podcast Guest!
Join as an Interview Guest or on a Mentoring Moment Call

Apply to be a Featured Interview Guest
Book a Mentoring Moment Coaching Call
Are You an Official Math Moment Maker?

FULL TRANSCRIPT
Yvette Lehman: Today we’re going to go for a hot topic. I feel like it’s a divisive topic in the math education world. And I guess I should preface that. So today we’re talking about screeners and diagnostics and where they live in the math classroom. And I think I should preface this conversation by saying I spent the last three years deeply immersed in the science of reading literacy journey. So I think that my opinion is maybe biased or influenced by that learning journey.
But I’m curious, John, what do you think about screeners and diagnostics in the math classroom? Do you have a strong opinion about it? Where’s your head at currently? What’s your current belief?
Jon Orr: Mm hmm. Good question. You know, teaching high school math, the lowest I ever taught was seventh grade, and the highest was obviously the highest you can go in high school, and some in college. Now, where I always stood in my 19 years of teaching was that you would be handed a screener or a diagnostic from the curriculum or the textbook company. If you looked at the resources that were provided, sometimes they included that, and I remember seeing a number of them when resources were sent over to me.
I was always of the opinion that, okay, I could do this and I could learn by giving these. Now, every time I saw one of those, it was very much naked problems, or some context problems and word problems, if you want to call them that. And in my opinion, and maybe you are going to sway my opinion here, I always wanted students to have a different relationship with math, one that wasn't perpetuating the idea that math is just a worksheet, just doing these problems. And I felt like with most diagnostics, most screeners I saw, especially if you were introducing them at the beginning of a unit or the beginning of the year, your first experience with math was going to be a worksheet, a sheet of paper where I was asking students to demonstrate some sort of prior knowledge just so I could learn, right? The purpose, the way I was viewing it, was so that I could learn where they were with certain skills prior to the unit, so that I could make adjustments on the fly to what I was going to be teaching the next day or the next week. Now, that was my understanding of screeners and diagnostics, and I know there's a difference, and I know you're going to get into it. Now, what I was doing instead,
Jon Orr: I was of the opinion that I could engage my students in a thinking-type task, a task that was context driven but also provided different opportunities for students to discuss problems and put them in situations where they were using problem-solving techniques. And because I was using experiences like that from day one, I was also trying to set the stage: this is what mathematics is going to be here. It's not this over here; it's this. So I'm trying to introduce the norm, this is what we're trying to do. But at the same time, I felt like I was learning a lot about where my students were with certain skills and ideas, because of the task I picked and the structure of how the task unfolded during the lesson, which provided me the evidence I needed on the skills we were going to take on. So for me, because I felt like I was learning so much from my students interacting with the mathematics and with each other, I learned enough to pivot when I needed to pivot for the rest of the unit, to get to my learning goals and outcomes, and to structure where we were going to go in that unit or course. So that was the long way of saying, Yvette, I didn't use them. My tasks were my screeners.
Yvette Lehman: Okay, for sure. And coming from the elementary panel, I think there's a distinction, because particularly in K-6, you don't always have content specialists. So you don't necessarily have facilitators who can do dynamic assessment in the moment, who are so well versed in the curriculum, the content, and math development that, without a tool, they're able to notice and name student behaviors, place them on a developmental trajectory, and know what the next step is.
Jon Orr: Yeah, I think you're right. You're saying, if I've got an educator I'm supporting, or I am an educator, who is just not comfortable or familiar with the grade-level standard of where students are and where they need to go in that roadmap, that pathway, thinking about the grade levels above and the grade levels below, and I don't have that sound foundation, then I need something, maybe, to assess what grade level looks like.
Yvette Lehman: So you mentioned it, let's talk about the difference between a screener and a diagnostic. Now that I think I know, they seem so clear, but I remember when we were supporting the shifts in literacy instruction how much time it took to consolidate, to internalize, the distinction between the two. So I'm going to do my best to explain this clearly. Here we go. Yeah, exactly. Hopefully, you know, bear with me. A screener.
Jon Orr: Go for it. Yeah, you've only got two minutes to do it. It took you a long time.
Yvette Lehman: The analogy is it’s like taking somebody’s temperature. It’s supposed to be quick. It’s supposed to be a snapshot indicator of okayness. Okay. So the reason I love a screener, I use screeners with my own child because for me, a screener in math and literacy is just one more piece of evidence to suggest that he’s on a good trajectory based on
a meta analysis of thousands or hundreds of thousands of his same grade level peers. So a really good screener is rooted in extensive research and it is just a piece of evidence to suggest that for that child’s age and grade, we’re comfortable with where they are currently along their developmental pathway.
Jon Orr: Let me ask you this then. If I'm looking at the results of a screener, don't I still have to be that qualified educator who knows above and below, just to even interpret whether this is actually at grade level or not? So does it defeat the purpose? Based on what I was saying before, the whole point is to gain the information you need to make choices, but if I have to correctly interpret what I'm seeing on the paper anyway so that I can make decisions, am I wasting time?
Yvette Lehman: So screeners are typically very easy to score. They're quick to score, and you don't interpret anything. They produce a result; it's just an indicator of okayness. So basically you enter your data and, typically with these well-researched screeners, you get this indicator. No, no, no. A screener should be universal and it should be research-based.
Jon Orr: So a screener is not something you're making up as a classroom teacher.
Yvette Lehman: Right. And again, it's just an indicator of okayness. It's not the whole picture, and it should never be used for the purpose of evaluating a student. It basically validates what you already believe about students sometimes. So for example, with my own son, I typically would believe that he's progressing well above, but I'm also biased.
And so sometimes when I want to step away from my own bias, I’ll use a screener just as a validation of what I already believe to be true, but I’m taking my bias out of it.
Jon Orr: Okay, so when are you using a screener?
Yvette Lehman: So typically screeners are administered three times a year, but sometimes what they say is you can start the year by administering the screener with everybody, and then with the students who are above or well above, you may not administer it three times; you may only do it twice. Because again, and this won't surprise you, Jon, what I've seen when I've used screeners in the past is that sometimes a student will start the year well above or above, but then they don't experience a year's worth of growth, 10 months' worth of growth, for whatever reason, right? There may have been high absenteeism that year, there may have been a challenge with the dynamic, whatever happened, we thought they were okay, and now we have one piece of evidence to suggest that in those 10 months, they didn't follow the trajectory we were hoping for for that child. Again, not to be used on the report card, not for any purpose like that, but essentially,
ensuring that no students are falling through. It removes the entire onus from the classroom teacher to make that judgment alone. And remember, these might be classroom teachers who are brand new to that grade. They've never taught that curriculum before. They may not have a ton of confidence in math instruction themselves. And so a universal screener is just one other validated piece of evidence to confirm
what we believe to already be true or to help us remove our bias from our understanding of where students are developmentally.
Jon Orr: Are they doing it at the beginning? So help me then. Help me with the one, you know, there were really only two main components of my reasoning for not using them. And one of them was this connection, this belief about what math is really about, and how do we not continually perpetuate that math is just, hey, do this worksheet and I'm going to see how you do on it?
Yvette Lehman: Yes, yeah. So typically, yes. Sure. This is probably the biggest argument in math education, and I also share this concern. And I'm going to preface this: I haven't seen every screener out there. I've seen a handful of math screeners, I've experienced a handful of math screeners, and I have not found one I like yet. And the reason for that is, I love the idea of them. I think there's a place for them in education. I just haven't found the tool that feels right for me.
Jon Orr: Here's the question it brings up: why would there even be one? Why would there be one that's actually the best one? It doesn't sound like there should be.
Yvette Lehman: I mean, there probably isn't. There's a variety, and it really depends, and I think you mentioned it already: we evaluate what we value. I've also been apprehensive about some of these screeners because, for example, on the computation part, they often have the questions represented on the page in a stacked, standard-algorithm representation. And to me, already you're funneling students to that standard algorithm. That's my big argument against the ones I've seen. I actually just started investigating one, and I'm not going to name it because I don't know enough about it yet, but it's actually one of our districts, Jon, who's using it that put it on my radar, and they're speaking highly of it. It doesn't look like that at all. It's interview style, it's pictorial, there are models. It's not just a bunch of stacked computation on a page where kids are timed and you use that data. Now, I'll go back, though, because that's a whole other conversation; we can have a whole other episode once we've done more investigation into screeners, and we can talk about what we like about different ones. But what I'll say is I think there's a place for screeners. What I mentioned before our call is that sometimes, when we're in a school environment for many, many years, we start to set the benchmark based on the students in front of us, or we set the benchmark
not fully understanding the critical key concepts at that grade level, because we haven't developed curriculum mastery at that grade level. A screener helps us get a level playing field. It's really helpful for an RTI model. So when you're trying to plan for tiered instruction and you're allocating additional supports at a district level, when we all collect the exact same data in the exact same way across an entire system, and we're making informed decisions about which sites are going to receive more intervention support or learning support, you now have data that's consistent, because we know that report card data is not. It's not an indicator on its own. We would also use report card data, we would also use our provincial assessment data, but this is just one other piece of evidence to confirm what we already suspect to be true; it helps us validate.
Jon Orr: Right, so you have more standardization. So you think everyone should be using a screener at every grade level? Is there a place where you're like, nah, we're not going to need it there?
Yvette Lehman: I think in our Ontario model, where we generally don't have middle school and we don't have content specialists teaching math, they should be used K-8.
Jon Orr: So if you feel like you have a content specialist teaching math, then no need for a screener.
Yvette Lehman: I mean, I still think there's value to them, truthfully. If I was in the classroom and I found a good one, one I believed in that matched my own beliefs about what we value in mathematics education, I would 100% use a screener. Again, not for report card data, not to put students in buckets; it's just to gather evidence of a level of okayness that I might need to be aware of.
Jon Orr: And you think still that’s the best way to gather that evidence.
Yvette Lehman: I think it’s the most equitable way to gather that evidence when you don’t have content specialists in every classroom. And it also allows for common conversations amongst grade level teams because they’re administering something that’s consistent.
Jon Orr: Got it. Got it. I like that. What are going to be some horror stories around screeners? Where are we going to go wrong here?
Yvette Lehman: There are lots. A couple of things that come with screeners we've already mentioned: does the screener match your beliefs about the experience you want for your students and what you value in the math classroom? Because when you pick a screener and you share it with an entire system, it sends a message about what we value. So if you don't value the standard algorithm and speed, then you don't want a screener that's based on those.
Jon Orr: So it’s got to align to the vision. It’s got to align to your objectives. It’s got to align to what we are holding true about the high quality math instruction that we are trying to provide our students. So that makes sense. I guess I’m sure there’s some folks out there right now going, I’m a classroom teacher. I’m going to use it the right way. But the classroom next door to me is just going to use it as another quiz.
Yvette Lehman: Well, and I was just going to say another challenge with the implementation of a standardized screener is implementation with fidelity. And that is a challenge because the data isn’t valid if it’s not administered with the same guidelines. So if we are going to go down this road, we need to position ourselves to support implementation and likely it’s not going to be a single day of pullout PD. It’s going to require ongoing support and feedback.
Jon Orr: For sure, for sure. And oftentimes in our leadership positions we think, and this is the funny, I guess ironic, part: with students, we know that "I taught it, therefore kids learned it" is exactly what we don't want our classrooms to be doing. We're trying to move away from that model, to assess on the fly and make sure we can answer, how do you know that students have learned this learning goal? But when we're supporting educators, that's still the model: I've taught it, therefore you are doing it, or you are supposed to do it, even though there's so much more learning to do, especially with any pedagogical shift you're trying to make in the classroom, and specifically with a screener here. I've introduced the screener, I showed you how to do it, done, check it off our list, therefore it's going to work. But like you're saying, we need to look at what it looks like when it's done well, and what it looks like when we're not implementing these things well. How do I know whether my teachers are at this end of the spectrum or that end? In any shift of instruction or shift of resource you're using, we have to be continually monitoring and continually providing follow-up support. Otherwise, those shifts just don't happen.
Yvette Lehman: Agreed. And I don't want to get into too much detail. If anybody's listening right now and they're like, I would love to know what implementation of a universal screener looks like, reach out. We can have a whole other conversation about that, because there are key moves to ensuring that a greater percentage of teachers are using the screener with fidelity. I know I've said it already, but I'm going to say it again: screeners should never be used for evaluation. It is never something we would use to then create a mark. And something other people will say is, well, this student in particular doesn't test well, or the timing. And it's like, that's why we validate. This is just one piece of evidence that we then have to validate against the other evidence we have about that student. So it is not.
Jon Orr: Right, kids won’t do it unless I tell them it’s for marks.
Yvette Lehman: Right, yeah. And like I said, this is just an indicator of okayness. It's helpful when you have tough conversations, to be honest, because it's no longer just "I said so," right? When I'm having a difficult conversation with a parent about my concerns with their child's achievement, when I'm suspecting there's a two- or three-year gap for this child,
And I need to have that difficult conversation. I have one other piece of unbiased evidence to articulate why we are concerned and why this child is at risk and how we know.
Jon Orr: Yeah, yeah. And I guess, like, I've always trusted that you are a professional educator. You have professional judgment. The school boards pay you for that professional judgment. You're supposed to be an expert on that. But I think what you're saying is, not everyone feels like that. So having this piece of evidence to back you up can be helpful in those tough conversations. Whereas an educator who feels confident, knows they're confident, and has evidence that their professional judgment has been validated over the years on, say, the learning goals, that teacher might not need that piece of evidence.
Yvette Lehman: Absolutely. All right, let’s talk about diagnostics. Okay, so remember how I said that a screener is taking somebody’s temperature. It is a very initial indicator that something might be wrong. The diagnostic is where we start to actually peel back the layers to find the cause of the fever. Like we are actually now diagnosing where we need to intervene.
Jon Orr: Okay, yeah, I was gonna say, tell me the difference now. Got it. So it’s like, now we’re asking, if I’m the doctor, now I’m asking all the questions at this point. Whereas the screener was taking the temperature. Now I’m like, there’s something wrong. Let’s find the root.
Yvette Lehman: Right, so diagnostics are typically, I would say, more intensive, because they're more targeted around a particular concept or skill. A screener might be very, very broad; a diagnostic is going to be a lot more content specific. And we're going to be able to really pinpoint. Let's say, for example, the screener revealed that the student may have had some challenges with subtraction. My diagnostic is going to put them in a variety of situations: they're looking at removal subtraction, they're looking at comparison subtraction, they're looking at missing addend, missing subtrahend. I'm asking them to write a story. I am really now digging into subtraction: counting, counting up, counting back.
Jon Orr: So what do the questions look like on a screener versus a diagnostic? People were probably imagining that, okay, the screener is a bunch of questions about the math topics that are coming up, but then they're like, isn't the diagnostic also doing that? But you're saying at a deeper level. So where's the changeover? What does it typically look like?
Yvette Lehman: Sure. So I think there are maybe two differences. And this is not true for all screeners, so I'm going to preface this: like I said, I've investigated a handful of math screeners; I'm not an expert at all in math screeners. Typically, a math screener is more general, and it's usually timed. So there is an element of, and I don't even want to call it speed, I want to call it efficiency. There is an element of efficiency in that timed assessment, typically, and again, it's going to cover a range of concepts. You're going to have possibly all four operations on it; if it's a younger screener, you might have all the counting principles on it. It's going to be much broader, and then the screener might give you your areas of need, right? The data will typically show you that these are the areas this child needs to strengthen. But that area is a concept we then need to really unpack, to ask, what are the foundational skills and understanding that students need to get them from where they are to grade-level rigor? So I'll give you an example of one that I'm very familiar with.
And I'm going to preface this by saying I actually really like this one. It's not a universal screener in the same sense as some of the other research-based ones, but the comparison I'm going to give, in Ontario, if anybody has ever used Marian Small's PRIME versus her Leaps and Bounds: PRIME is a big assessment you do at the beginning of the year that helps you indicate where a child is along their developmental trajectory, but then Leaps and Bounds goes deeper. So on the actual original screener, you might have one or two subtraction questions; on the diagnostic, there are 12 subtraction questions. It is really now unpacking subtraction and going back several years to understand where the student is entering as far as their ability to understand problems involving subtraction. Can this student count? Can this student count forward, count backward? Can this student represent situations that are removal, comparison, missing addend? It's that next layer of understanding. And what happens then, and I've used Leaps and Bounds for this purpose myself when I've done intervention, is it tells me what I'm doing in small group for tier three or tier two. It's really helping me understand the skills I need to address, or when I'm working with my
family partners at home who are supporting the student who's trying to close gaps. The tier one train is moving; it's moving along, and there's already more than 10 months' worth of learning to do in 10 months. But now I have a student with one, two, three years of gaps, for whatever reason. So I'm trying to provide additional opportunities for remediation of those gaps, which often means additional practice, additional support, more targeted, explicit instruction. But I need to know what to target to be most impactful. I don't want to waste time trying to figure it out, because I need the intervention to have started yesterday. So a diagnostic helps me pinpoint how to maximize my time.
Jon Orr: Right. And you can imagine that everyone's got a different pinpoint. So what would you recommend? Imagine you're getting to tier two, you're going to run your small group, and you're remembering that for this student, we've got to pinpoint this part from the diagnostic. Is there a tool you're bringing along to help you track and remember these types of things?
Yvette Lehman: Well, you just brought up the other aspect of RTI. We talked about screeners, we talked about diagnostics, and the next piece is progress monitoring. So now I have a map for all of my at-risk students. I've identified students who, without additional support and targeted instruction or intervention, may find it challenging to access grade-level content. Now I have a plan for these kids, and I've met with their other supports. I've worked with the LSTF, talked to the families at home who will be supporting them, and we all have a plan for how we're going to try to close these gaps for this child so we can get them back to grade-level content as quickly as possible. And then, like you mentioned, now I have to progress monitor. I have to know that the intervention we've implemented is actually supporting that student in closing the gap, and that they're retaining it, which means the progress monitoring doesn't just end at the end of this cycle. It's cumulative, right? I need cumulative monitoring and tracking, and updating with families to say, this is the work we're doing, and we know it's working. Not to go on too long about this, but I think there's a place for all of this when it comes to ensuring every child has the opportunity to access grade-level content.
Jon Orr: Right. Hey, I just did it, but move on. For sure.
Yvette Lehman: I think there's a place for it as well in my world, where our elementary teachers are not all content specialists and may just not know what to do. We end up wasting a lot of time trying to figure out what to do and who to work with, when there are tools that exist that can help us map this out, with support from a knowledgeable other who's well versed in the use of those tools. We had a kindergarten teacher from one of our districts on a call recently, and they use a system like this that has a screener, diagnostic, and progress tracking. I don't remember her exact quote, but I think we asked her for a win, and her win was all of her kindergarten students. She feels confident that they're ready to go to grade one, because she uses this system that's constantly collecting data on individual student progress, so she knows what to do and can get as many students as possible to the target by the end of the year, knowing she's not a content specialist. She actually told us she's a literacy specialist, but she's using some of what she knows about RTI, with support from this particular screener, diagnostic, and progress monitoring platform, to be really intentional with the targeted instruction she gives every student.
Jon Orr: Yeah, I think if you don't know what to do to get a student from where they are to where you know they should be, or to their next step, and you're also trying to answer, how do I know what to do? then these tools are essential, because they give you not only the roadmap, but they're almost like the gate that opens the door to go down those roadmaps.
Yvette Lehman: The thing is, if it's a good one, and again, hit me up, everybody. Like I said, I've explored five. Anybody who's using a screener or diagnostic out there in the math education world, please email us and tell us which one you're using so we can investigate. Because ideally, if it's solid, it should actually build the capacity of the teacher. If it's a high-quality instructional material, then by actually using the suggested interventions, the teacher will be better positioned the next time they see that behavior. They can actually learn by using the tool.
Jon Orr: All right, you heard Yvette. If you are using a screener or diagnostic, let us know. Just reply to any of the emails you get from us. If you're not getting emails from us and you would like to, head on over to makemathmoments.com, get into any one of the free online courses we currently offer, or get over to our tasks page and download one of our free classroom-ready tasks to use in the classroom. Then you'll be on the email list, and it's just a reply, and all of a sudden you can share that with us, because we want to learn too. We're here to learn and share that learning back with this community. So thanks for joining us on this episode as we unpacked screeners and diagnostics in the math class.
Thanks For Listening
- Book a Math Mentoring Moment
- Apply to be a Featured Interview Guest
- Leave a note in the comment section below.
- Share this show on Twitter, or Facebook.
To help out the show:
- Leave an honest review on iTunes. Your ratings and reviews really help and we read each one.
- Subscribe on iTunes, Google Play, and Spotify.
DOWNLOAD THE 3 ACT MATH TASK TIP SHEET SO THEY RUN WITHOUT A HITCH!
Download the 2-page printable 3 Act Math Tip Sheet to ensure that you have the best start to your journey using 3 Act Math Tasks to spark curiosity and fuel sense making in your math classroom!

LESSONS TO MAKE MATH MOMENTS
Each Make Math Moments Problem Based Lesson consists of a Teacher Guide to lead you step-by-step through the planning process to ensure your lesson runs without a hitch!
Each Teacher Guide consists of:
- Intentionality of the lesson;
- A step-by-step walk through of each phase of the lesson;
- Visuals, animations, and videos unpacking big ideas, strategies, and models we intend to emerge during the lesson;
- Sample student approaches to assist in anticipating what your students might do;
- Resources and downloads including Keynote, Powerpoint, Media Files, and Teacher Guide printable PDF; and,
- Much more!
Each Make Math Moments Problem Based Lesson begins with a story, visual, video, or other method to Spark Curiosity through context.
Students will often Notice and Wonder before making an estimate to draw them in and invest in the problem.
After student voice has been heard and acknowledged, we will set students off on a Productive Struggle via a prompt related to the Spark context.
These prompts are given each lesson with the following conditions:
- No calculators are to be used; and,
- Students are to focus on how they can convince their math community that their solution is valid.
Students are left to engage in a productive struggle as the facilitator circulates to observe and engage in conversation as a means of assessing formatively.
The facilitator is instructed through the Teacher Guide on what specific strategies and models could be used to make connections and consolidate the learning from the lesson.
Oftentimes, animations and walk-through videos are provided in the Teacher Guide to assist with planning and delivering the consolidation.
A review image, video, or animation is provided as a conclusion to the task from the lesson.
While this might feel like a natural ending to the context students have been exploring, it is just the beginning as we look to leverage this context via extensions and additional lessons to dig deeper.
At the end of each lesson, consolidation prompts and/or extensions are crafted for students to purposefully practice and demonstrate their current understanding.
Facilitators are encouraged to collect these consolidation prompts as a means to engage in the assessment process and inform next moves for instruction.
In multi-day units of study, Math Talks are crafted to help build on the thinking from the previous day and build towards the next step in the developmental progression of the concept(s) we are exploring.
Each Math Talk is constructed as a string of related problems that build with intentionality to emerge specific big ideas, strategies, and mathematical models.
Make Math Moments Problem Based Lessons and Day 1 Teacher Guides are openly available for you to leverage and use with your students without becoming a Make Math Moments Academy Member.
Use our OPEN ACCESS multi-day problem based units!
Partitive Division Resulting in a Fraction
Equivalence and Algebraic Substitution
Represent Categorical Data & Explore Mean
Downloadable resources including blackline masters, handouts, printable Tips Sheets, slide shows, and media files do require a Make Math Moments Academy Membership.
ONLINE WORKSHOP REGISTRATION

Pedagogically aligned for teachers of K through Grade 12 with content specific examples from Grades 3 through Grade 10.
In our self-paced, 12-week Online Workshop, you'll learn how to craft new and transform your current lessons to Spark Curiosity, Fuel Sense Making, and Ignite Your Teacher Moves to promote resilient problem solvers.