Episode #35 – Visible Learning: An Interview with John Hattie

Jul 29, 2019 | Podcast | 2 comments


This week we speak with John Hattie, co-author of the Visible Learning book series, which has been a foundation for effective teaching practice and change in so many districts across the world. We chat with John about the research into good learning strategies, how he conducted his research, and how he uses effect size to compare teaching and learning strategies. We learn about the difference between surface learning, deep learning, and transfer learning, and the difference between focusing questions and funneling questions.

You’ll Learn

  • How John conducted his widely adopted research; 
  • How he uses effect size to compare different teaching strategies; 
  • The difference between surface, deep, and transfer learning; and, 
  • What the difference is between focusing questions and funneling questions. 




John Hattie: Well, firstly, let’s start with the evidence. The evidence says the average effect size of reducing class size is about 0.1, 0.2. And what that means, guys, is that reducing class size enhances achievement. It’s a positive effect size. And anybody who argues that they should increase class size is ignoring the evidence. The only reason you could do that is if the effect size were negative. And it is not.

John Hattie: The second part of it is [crosstalk 00:00:19].

Kyle Pearce: That there is the one and only Dr. John Hattie. John is the co-author of the Visible Learning book series, which has been a foundation for effective teaching practice and change for so many districts across the world.

Jon Orr: We chat with John about the research into good learning strategies, how he conducted his research, and how he uses effect size to compare teaching and learning strategies. We learn about the difference between surface learning, deep learning, and transfer learning, and what is the difference between focusing questions and funneling questions.

Jon Orr: But before we get to all that, hit it.

Kyle Pearce: Welcome to the Making Math Moments That Matter podcast. I’m Kyle Pearce from TapIntoTeenMinds.com.

Jon Orr: And I’m Jon Orr from MrOrr-isageek.com. We are two math teachers who, together-

Kyle Pearce: -With you, the community of educators worldwide who want to build and deliver lessons that spark engagement-

Jon Orr: Fuel learning, and ignite teacher action.

Kyle Pearce: Jon, this is an episode we’ve been looking forward to for quite some time. Are you ready to get in?

Jon Orr: Of course, Kyle, of course. We are super pumped to bring you this episode.

Kyle Pearce: Awesome. Before we do, we want to give a quick shout out to Aggieam95, who left us a five-star rating and review on iTunes. Thank you, Aggieam95. Here’s what she says.

Jon Orr: “Resources and ideas you can use tomorrow. I teach 7th grade math and I love to listen to other educators and researchers to help me improve my craft. Kyle and John do an amazing job of bringing in guests who share valuable insight and share great resources. Every episode leaves me with several “a-ha” moments. If you want to be more effective in the classroom, this podcast is definitely worth your time.”

Kyle Pearce: If you’ve been loving the podcast, leave us a review on iTunes just like Aggieam95 did by outlining your biggest takeaway. Reviews help more educators hear about the show, and in turn, we can help make more math moments matter for students everywhere.

Jon Orr: Also, the Making Math Moments That Matter podcast is excited to bring you the Math Moments with Corwin Mathematics Book Giveaway.

Jon Orr: That’s right. We’ll be giving away 10 books from Corwin Mathematics, including John’s book, Visible Learning for Mathematics. Plus, you’ll receive special Corwin discounts and digital downloads just for entering the draw. You can get in on the giveaway by visiting MakeMathMoments.com/giveaway by Wednesday, July 31st, 2019.

Kyle Pearce: Listening after July 31st, 2019? No sweat. We’re always running a giveaway, and you can access it through that same link. So head to MakeMathMoments.com/giveaway, and you’ll see the current giveaway that we’re offering.

Jon Orr: Don’t miss out. Dive in to MakeMathMoments.com/giveaway.

Kyle Pearce: That’s MakeMathMoments.com/giveaway. And that brings us to the main event, which is our chat with John. We hope you enjoy.

Kyle Pearce: Hey, there, John, welcome to the Making Math Moments That Matter podcast. We are so excited to have you on the show today. How are things over on the other side of the world in Australia?

John Hattie: Hi, Kyle and John. It’s the middle of winter over here, so it’s quite a cool, brisk morning. And I’m very envious of the fact that you have such really high heat up where you are, and I hope you’re really enjoying it.

Jon Orr: Thank you, thank you. Yeah, it’s like, at the time of this recording, we’re having record heat waves here in Southern Ontario.

Jon Orr: John, we know about you from your research and your books, but could you do us a favor and help our listeners understand a little bit about yourself and your background?

John Hattie: Sure. I started [inaudible 00:04:15] New Zealand. Then I went and did my PhD in your part of the world, at the University of Toronto.

Jon Orr: Nice.

Kyle Pearce: Nice.

John Hattie: Yeah. Wonderful place to be. Those were the 1970s. I know it’s changed a lot since then, but it was certainly a very quiet and wonderful place to be, particularly for a PhD student.

John Hattie: Then my background and my training, [inaudible 00:04:35] was in measurements and statistics, and that has been my career until this thing called Visible Learning came along. And I’ve worked in various universities in Australia and North America and New Zealand. I’m now a very proud grandfather of three beautiful granddaughters, and enjoying those wonders of life.

Kyle Pearce: Fantastic. Yeah, we are very excited to learn more and discuss some of that work around Visible Learning, but before we get there, for those who have listened to the podcast before, they know it is the Making Math Moments That Matter podcast because we always like to ask our guests about a memorable math moment from your learning experience. So this could be as a student. It could be as a teacher, an educator, or a researcher. But what comes to mind when we say “math class”? What memory pops into your mind?

John Hattie: Oh, that’s easy for me because, when I went through high school, it was compulsory to take math. We didn’t have the option of dropping it, and if we had that option, I probably would have dropped it.

John Hattie: I did okay, but I wasn’t that enthused about it until I got to my final year in high school. And we had a teacher, Mr. Tomlinson, and he was, oh, wow, he was a very strict disciplinarian. He made sure every single one of us understood maths.

John Hattie: And I remember the very first class. He gave us the end of the year exam, and we were all devastated to get zero. And he stood up and said, “My job is to demonstrate to you that I can help you get to the end of the year and pass this thing.”

John Hattie: And during the year, he gave us the odd item from the end of the year and showed us we could do this. And he just never gave up on us. He was unbelievably fair, and he really drummed the maths into us to the point that I realized, hey, I can do this. This is fun.

John Hattie: And, as you heard, [inaudible 00:06:13] went on, PhD kind of statistics area. I look back on that moment as when a great teacher made a dramatic difference to me, and it was a math teacher.

Kyle Pearce: Awesome.

Jon Orr: I always find it so fascinating when we think about moments from our experience in school. And Kyle and I have said here on the podcast many times that we rarely remember the actual math. We remember more social moments and also moments where people have impacted us. And I definitely resonate with your moment, having senior level math teachers that have impacted me and made impressions on me.

Jon Orr: For me, it’s not necessarily a math moment, but, say, my gym teacher, my volleyball coach, was a huge impact on me growing up, and it shaped the way I viewed the world and solved problems and tackled things. So I totally resonate with that.

John Hattie: And just on that, it’s kind of interesting that we did actually publish an article recently where we took close to a thousand adults and asked them about their memorable teacher. And it all came down to two things. One is that teacher turned them on to their passion and/or that teacher saw something in them that they didn’t see in themselves.

John Hattie: Not once, you’re right, not once, did they have a memorable moment about their teacher because of a particular subject. It was one of those two things.

Jon Orr: Yeah. I think back to my high school years, even sometimes my university years, and I only remember the adults and the people who led those discussions or led my classroom. So those are definitely huge, huge moments that we as educators have, and a responsibility to our students, because those are the things that they’re going to remember, not necessarily the math, and most likely not the lessons.

Jon Orr: Kyle and I’s kind of job lately is to try to change that to help them remember those lessons, and us as educators.

Kyle Pearce: John, we’d love to just now kind of dive into your book, Visible Learning. And something that’s always been very curious to me is where did the idea originate to write Visible Learning, which has since exploded into a huge series focusing on effective teaching practices in different subject areas and all over the world. Do you mind sharing with us kind of that origin story?

John Hattie: Yeah. It simply started right from my very first days as an academic. As a person who teaches all those courses on research and statistics that I’m sure you love, you were welcomed but kind of tolerated. And each of my colleagues took me aside and said, “If you’re going to make a difference in this world, you’re going to have to study…” And the first one said curriculum, and someone said communication. Of course, someone in 1976 said computers. And it was fascinating. They all knew the answer, and they were all different.

John Hattie: And then I got into teacher education. It was the same phenomenon. Every teacher that we met out in the schools kind of said nicely, “Ignore all that stuff at university and just watch me.” And then you read all the research articles, and what startled me is every article you read shows evidence that what they did kind of worked.

John Hattie: And so it started from that notion of how come we’re in a business where everybody knows truth, everything works. Guys, have you ever met a teacher yet who said they were below average?

John Hattie: So this was the phenomena that I looked at. And I thought, “Well, maybe the question we’re asking is the wrong question. What we typically ask is ‘What works?’ Maybe what we should be asking is ‘What works best?'”

John Hattie: And the irony for me was that, in 1976, Gene Glass introduced this concept called meta-analysis, and as a measurement person, I thought, “Oh, well the best way to learn about it is to do one.” And so I did one, and from there, I thought, “Well, maybe I can actually start synthesizing the meta-analyses.”

John Hattie: Now, in the early days, there weren’t enough. That was a very small area. But over the years, I’d systematically do this to try to answer, or change, that question from what works to what works best, and to see if I could unravel the problem of why it is that every teacher knew truth. Because we were all kids once; we know that’s not the case. Teachers vary.

John Hattie: And so, obviously, what I’ve discovered is that teachers are kind of right. If you set the benchmark at “Can I improve student learning,” there is good evidence that 95% to 97% of teachers can correctly say they can do that.

John Hattie: But it’s such a low barrier, and so changing that low barrier from what works, can we enhance learning, can we do it to a sufficient level, is kind of where it all started.

Kyle Pearce: Yes. Interesting. That’s really interesting. I think we all have this unconscious bias, right? I’m trying to remember which book I was reading, but recently it came up where there was some research about just this idea that most people believe they’re better than average at pretty much everything. Nobody thinks that they’re less than average. “I’m a better than average driver.” “I’m a better than average teacher.” “I’m a better than average friend.” “I’m a better than average parent.” Actually, parents probably would disagree. We always feel like we’re not doing a great job, but that might be the only exception.

Kyle Pearce: I don’t know if you can speak to that at all. Did some of that sort of come at you? It sounds like you obviously noticed something when you mentioned how come everyone knows truth, and obviously, students are learning every day, whether we’re there or not. So, like you said, most students are walking away with some new learning, but I guess the question would be: was it because of what we were doing versus doing something else? And that really fascinates me.

John Hattie: Yeah, and so the flaw of the average is certainly there, and it’s the same as if you ask in your class today which kid is average: it’s kind of an absurd question. We in education work on variability, the spread of abilities and the spread of what kids do, and that really is the essence of what we’re doing.

John Hattie: So when you go back to your questions when teachers say they’re above average, then the other side of our bias is we always can find five or six kids in every class that are learning, maybe despite us. It is always the case that teachers do have evidence that they are above average.

John Hattie: So part of it is not to deny that. Part of it is to say, “Well, let’s look at what average is.” Average is not enhancing learning. Average, to me, is every student deserves at least a year’s growth for a year’s input.

John Hattie: And understanding what that year’s growth is is obviously the key to the next question. And that’s when you start to get variability and that’s where you start to get traction with teachers because, certainly in your country, 60% to 70% of teachers are doing that now. They’re getting above a year’s growth for a year’s input. And recognizing that excellence is out there and growing it is what our business should be all about.

John Hattie: Now, unfortunately, 30% to 40% of teachers are not, and so that changes the nature of the equation dramatically. And so one of the major themes of all my work is have the courage to identify that excellence, form coalitions of impact around that excellence, and then invite those other teachers to join.

John Hattie: That’s the other thing that I observed from the very early days is that, in education, we have so much evidence about failure. And I remember Ross [inaudible 00:12:47], my supervisor, saying to me when I graduated from Toronto, “Why don’t you be one of those rare academics that go out there and study success?” We do have incredible success, and so let’s stop saying, “Why isn’t this working?” and ask the question “Why is it working so well?”

John Hattie: And that’s kind of what Visible Learning is. We know there’s so much working well, capturing that and spreading that message.

Kyle Pearce: Right.

Jon Orr: And that kind of leads us into our next question. When we’re asking not just what’s working, but what’s working best or what’s working well, you use the term [inaudible 00:13:17] “effect size” throughout your series. Do you mind helping the Math Moment Maker community here understand what you mean when you talk about effect size in the series and how that all works?

John Hattie: Effect sizes are a measure of size, measure of magnitude. Those in the math stats community know all about statistical significance, which has its place, but the other part of what we should look at for every study is what’s the size of the effect?

John Hattie: And there are two main ways of estimating an effect size. One is the difference between two means, like the mean when you introduce, say, inquiry learning, and the mean when you don’t do it. Divide it through by the pooled standard deviation. And the other is doing it pre/post. You do a pre-test, you do a post-test, you subtract the two, and you divide by, again, the appropriate estimate of the pooled standard deviation in that case.

John Hattie: And so the beauty of the effect size is that it is scale free. So it doesn’t matter what the test was, how many items, all those kind of things. It’s scale free, and that way, you can then compare different studies, different outcomes. And that’s been the big breakthrough.

John Hattie: Now, there’s a heck of a lot more to meta-analysis than just that, but that’s the essence of it, is coming up with a scale-free measure of effect size so you can say, “Impact of this variable is higher or lower than the impact of that variable.”

John Hattie: There’s a lot of details you’d worry about, but that’s the essence of an effect size. The beauty is teachers can calculate it in their classrooms using their own measures and they can then compare everything to what’s in the Visible Learning book. And that’s the beauty of an effect size.
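
The two calculations John describes can be sketched in a few lines of code. This is a minimal illustration of a scale-free effect size (Cohen’s d with a pooled standard deviation, one common way to compute what John describes); the class scores below are invented for the example:

```python
from statistics import mean, stdev

def pooled_sd(a, b):
    """Pooled standard deviation of two score lists (the usual Cohen's d denominator)."""
    na, nb = len(a), len(b)
    return (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
            / (na + nb - 2)) ** 0.5

def effect_size(treatment, control):
    """Difference between two means, divided through by the pooled SD."""
    return (mean(treatment) - mean(control)) / pooled_sd(treatment, control)

# Hypothetical scores for the same class: pre-test vs. post-test.
pre = [52, 60, 48, 65, 58, 70, 55, 62]
post = [60, 68, 55, 74, 63, 78, 61, 70]

d = effect_size(post, pre)
# Compare d against the roughly 0.4 average-year benchmark John describes later.
print(round(d, 2))
```

Because the result is divided by the spread of the scores rather than expressed in raw marks, the same comparison works no matter what the test was or how many items it had, which is the scale-free property John highlights.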

Kyle Pearce: Beautiful, beautiful. And to help people understand, I love this idea, this comparison to what is equivalent to a year’s learning. You had mentioned a year’s worth of input and getting that year’s worth of growth. So, for someone who’s sitting and listening, maybe running or in the car listening to this podcast, what’s reasonable for that year’s worth of growth in terms of the effect size?

John Hattie: Well, let me start with the notion of the average. When I analyze the NAEP, No Child Left Behind, [CEP 00:15:14] in England, NAPLAN in Australia, [ESOL 00:15:15] in New Zealand, and I take all that data from the last 10, 20 years and say “What’s the average effect size when kids go from one year to the other?” It’s exactly 0.4, which is the same as what I found in Visible Learning using a different dataset.

John Hattie: So that’s a first, very crude guideline. That’s crude because it’s the average. If you’re looking at a very narrow concept, vocabulary, that average will go up. If you look at something wide like creativity, that average will go down. If you look at five year olds doing reading, that average will go up. If you look at 15 year olds who are reading, that average will go down. So context matters.

John Hattie: So one of the things that we do in our work is we triangulate the evidence. We go into schools and we say to them, “We want to talk about this notion of a year’s growth. Let’s look at the schools that you have, all the tests you have. Let’s look at artifacts of kids’ work. Bring along two pieces of kids’ work six months apart, same kid. And let’s have a debate about whether we think that’s sufficient growth for six months. Let’s ask the students about their sense of progress, and triangulating that.”

John Hattie: And what you find is, again, unfortunately, so much variability. And one of the hard core realities, guys, is that if teachers think a year’s growth is about an effect size of 0.2, they’re remarkably successful at getting it. If teachers think it’s an effect size of 0.6, they’re very successful at getting it.

John Hattie: But it’s that triangulation. It’s that debate. And it’s also that debate about what you mean by impact, because when I say [inaudible 00:16:39] impact, it’s not just the test scores. It’s the sense of whether the school’s an inviting place for the kids to come to. It’s the sense of whether the kids are prepared to invest and have joy in the learning. It’s the sense of respect for self, respect for others.

John Hattie: And so what in the school is their basket of goods? What is their evidence from various sources that they’re getting that growth? And if you don’t have these discussions, then, unfortunately, you’re leaving it to the randomness of the teachers to decide.

John Hattie: And that’s the hard part. It comes back to how teachers think about these things. And we’ve had so much debate in our business about what teachers do. I bet on your podcast, as in oodles of evidence around the world, we talk about different teaching methods, we talk about all the context variables, and my work is saying, “No, it’s not about that. It’s about how we think.” I wish it was easier, but it’s not.

John Hattie: And one of the ways in which we think is our concept of what that year’s growth is. So I’m not giving you a simple answer. I’m not saying it’s just 0.4, because it’s not as simple as that. But it really is critical that we have debates in schools about what we mean, particularly in our maths community about what we think this looks like, and then the argument is that every kid, no matter where they start, deserves at least a year’s growth. But I say at least because some kids need more than that, and that is what drives all that work.

Jon Orr: I can only imagine how difficult it must have been when you tried to collect and analyze all this data, and especially when we’re thinking all these teachers know the answer or it’s anecdotal. I’m wondering if you can help us understand how you collected this data and did a little bit of the analysis. I guess not too technical, but I think a lot of teachers out there are probably wondering how did you kind of come up with the effect size?

John Hattie: It’s called squirrel behavior. A meta-analysis is where someone takes other people’s work, calculates the effect size, and then looks at the various moderators. What I do is I take their meta-analyses and synthesize those at the high level. It really isn’t difficult work to do. It’s just squirrel work.

John Hattie: Now, the good news is last week at the annual Visible Learning Conference in Las Vegas, we released all the data on a website called MetaX, and so if anyone wants to see all the data from the 1,600 meta-analyses, they’re all there.

John Hattie: My argument is no one since I started this has ever queried the underlying model and the explanation that I’ve had. People have queried and had hassles about all the little details of this, that, and the other stuff. And so I’m saying, “Well, let’s break through all those little details. Here’s all the data. You don’t have to spend the last 40 years of your life collecting it. It’s all available to you. Come up with a better theory or a better explanation.”

John Hattie: So it really isn’t very difficult stuff. It’s just tedious. It’s me collecting other people’s meta-analyses and synthesizing those.

Kyle Pearce: I can only imagine how difficult that could be to read. And even if an educator is trying to go to the research and you get one research paper over here with this data over here, and then there’s data over here, how do they all compare?

Kyle Pearce: So it sounds like the Visible Learning work has taken all of this and tried to put it almost like it’s on the same playing field, on the same scale, so that now we can actually do some comparison.

Kyle Pearce: So now I’m wondering, for those who are listening, and I know Jon and I know quite well many of the very highest effect size approaches, we’re wondering which ones tend to come out on top based on some of that analysis.

John Hattie: Well, there are two things that dominate the top of the charts. One is teacher expertise, the way they think. It’s not what they do. It’s been a big mistake to promote certain kinds of ways in which we teach, because you can have two teachers using the same method and getting different effects because of the moment-by-moment judgments.

John Hattie: We’re spending a lot of our research work looking at that thinking of teachers, and it comes back to this notion of evaluative thinking, how they make decisions about whether it’s worth it, whether it’s valuable, significant, on a moment by moment basis.

John Hattie: And the second thing that dominates is student thinking. And one of the things that’s a worry is that only some students have multiple ways of learning, so if the first doesn’t work, they can default to another one. And by multiple ways, I mean two or three, not 10 or 15. Other students start with one method, and when something doesn’t work and the teacher asks them to do another problem, they use the same method, and it doesn’t work.

John Hattie: Unfortunately, sometimes we classify those kids as bright kids and struggling kids, and I argue it is not. It’s not that at all. We need to teach kids that sort of metacognitive notion that, if something doesn’t work, try another strategy.

John Hattie: When I was reviewing recently, we have about 18,000 transcripts of teachers in actual classrooms as they teach. And when I was going through those to find examples of teachers teaching kids different strategies, after 4,000 hours, I gave up because I couldn’t find one.

John Hattie: And so it is a very serious problem there, and maths is a classic example where kids do need different perspectives, different strategies. And our argument is they can be taught, and that’s what we should be worried about. It’s not just getting the answer right or wrong. It’s what’s the strategy? When was the last time you walked into a classroom, Jon and Kyle, and heard your math kids thinking aloud? And that’s what we need to do to understand what their conceptions are, their misconceptions, how they’re making mistakes.

John Hattie: If you went to a Japanese classroom in maths, and a kid made a mistake, the teacher would say, “Well, let’s understand how that kid made that mistake, because you can guarantee there are others in the class that would make it too.”

John Hattie: Now, we don’t do that in the Western world because we’re fearful of affecting a kid’s self-esteem, as we should be, but unfortunately we then default and hide it, and the kids think it’s just about getting the right answer. Well, that process to get there and the different ways of thinking are what we need to attend to a heck of a lot more.

John Hattie: So beautifully, but ironically, the top of the chart comes down to aspects of teacher thinking and aspects of student thinking.

Kyle Pearce: I love that you’ve referenced this idea of kids thinking aloud. And these are things that I think we know in Ontario here where we’re from. We’ve been working on these things for quite some time, but it doesn’t necessarily mean it’s happening all the time.

Kyle Pearce: When you had mentioned actual mathematical discourse, and I actually have many different versions of the Visible Learning series, but the one for mathematics in particular, there’s quite a bit in there about mathematical discourse, and there’s also reference to the NCTM Effective Teaching Practices, which I thought was really phenomenal.

Kyle Pearce: So lots of key pieces. We actually had Dr. Peg Smith on the podcast last week. It was just released. And some of her work comes out in this particular book as well. So it’s great to hear you sharing some of the same philosophy that Jon and I try to share, with this idea of the math classroom shouldn’t be this quiet place. We want to make sure that kids are actually doing the thinking, they’re doing the talking, so that they can actually reflect on their learning and try to figure out where they need to go next.

Kyle Pearce: So I’m wondering here: we were going to ask this a little bit later, but I’d love to sidestep to it now, because we asked a number of people out there on Twitter, “What would be the questions that you would ask John Hattie if he was to come on the podcast?” And we got a lot of responses.

Kyle Pearce: I’m going to flip it to Jon. Jon, which one do you think we should head to? Because Jon was indicating to me that it would be great to flip to one of the responses from Twitter. So I’m going to let him set this up for you.

Jon Orr: Yeah, this is a question from Adrienne Burns, and it’s one that’s always been on my mind, too. She asks you, John: which aspect or piece of your research findings or your work do you feel could be, or is, misunderstood or misrepresented by educators or their districts? She put in brackets here, “I’m assuming some part has been distorted or misinterpreted because that’s the nature of the beast with stuff like this.”

Jon Orr: I think a few teachers are wondering about this, too, because so many districts do use your work as the focus and guidelines for their professional development. So I guess she’s wondering: do you know, or have you heard, which piece of your work has maybe been misunderstood or misrepresented?

John Hattie: I’m actually very soon releasing a white paper identifying all the criticisms of Visible Learning that I can find. And there’s about 30 or 40 of them. Two thirds of them relate to one thing, and that is that league table. And in many cases, it worked for me because it attracted attention to the book. On the other hand, it’s been a liability.

John Hattie: And the biggest mistake people make is they look at that league table and say, “I’m doing all the stuff at the top. I’m not doing any of the stuff at the bottom.” And that wasn’t supposed to be my message. My message was: what underlies and discriminates between the top half and the bottom half? And it’s interesting when you look at the criticisms, two thirds of them, my hunch is that the people criticizing the league table have never read the book. They say, “Well, you can’t possibly have those individual influences. They overlap.” Well, the whole book is about the overlap.

John Hattie: And they talk about how sometimes the effect sizes change, and say that clearly means the research is wrong. Well, of course it changes as new evidence and new meta-analyses come out. And so I could go on and on and on.

John Hattie: So I don’t use that league table anymore. We call it the matrix of influences. We try and press the notion of what that fundamental message is. And so that’s probably been the biggest misunderstood aspect. And so when you look at some of the critics out there, particularly in the academic journals, they query and they quibble about all those details. And as I said before, not one of them has really addressed what the underlying model is about the teacher thinking, the student thinking, [compared to 00:25:57] structural issues, and I just find that fascinating, that the biggest criticism has been about one page, which has been taken out of context and misinterpreted.

Kyle Pearce: Totally, totally. And you referenced it earlier as well. I think it’s so important. You had said that you could have two different teachers who are trying to do the same approach and it happens completely differently. And the results are going to be different as well.

Kyle Pearce: So when we’re taking this data, it’s like on average, and you kept mentioning this idea of the average, and we’re really looking at comparing all of that data that’s out there and this is sort of how it all stacked up. And I guess what we’re thinking is that, hopefully, the ones that landed higher up on the table, it’s likely because those teachers were doing those approaches in a really effective way.

John Hattie: High probability interventions. As mathematicians, we understand that. There’s a high probability, compared to other ones, that these will make a difference. I want you then to know your impact. I want you then to investigate and evaluate the impact you’re having when you introduce high probability interventions. And that’s why I’ve moved to that notion of “know thy impact.” I want you to know your impact.

Kyle Pearce: Right. And you know, something that’s really interesting as well is that, if I go to that table and I say, “Okay, so the research is telling me that these tend to be effective,” and then I am using it and it’s not working for me, then I should be reflective on my own practice and say, “Maybe I’m missing something here.”

Kyle Pearce: So, especially as you articulated, if I just go to the table and I just look and see, “Okay, I’m going to do that now,” but I haven’t actually read or learned about how to do those things well, that could be a really tough sell.

John Hattie: Even more than that, it may be some kids are benefiting from other kids as well, so you should look at that as well.

Kyle Pearce: Right. Right. Exactly. And does that mean that we don’t do it for all the kids because it’s not working for some? Or we do it for all the kids because it is working for some? Or is that where differentiation comes in?

John Hattie: You have to remember the concept of differentiation. There’s a differentiation of our teaching. We’re so often [taking 00:28:00] differentiation as differentiation of activities, different activities for different kids. And if you go back and read the research on differentiation, that’s exactly what you shouldn’t do. You should allow for the same success criteria, but different time and different ways to get there for kids. But that’s not what often happens.

Kyle Pearce: Absolutely. Absolutely. You know, I had gone to some differentiation workshops early in my career, and that was at least the message I interpreted. It doesn’t necessarily mean that was the intended message, but the message I received was I have to make five versions of the same problem.

Kyle Pearce: And in reality, at the end of the day, if the problems that I’m giving different students or the tasks that I’m having them do aren’t actually addressing the learning goal or the learning intention, then that’s not going to actually help those students, right? I mean, they might be able to accomplish the task, but it doesn’t help them accomplish the learning that we set out for all the students in the classroom.

Kyle Pearce: So something we noticed in the table is that a lot of the pieces near the top really had this metacognition piece, where students were taking ownership of their learning. And one in particular, self-reported grades and student expectations, comes out on top with that effect size of 1.44. That’s more than three times that hinge point of 0.4 that you referenced earlier.
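(Editor’s note: for listeners newer to the statistic, effect sizes like the 1.44 here and the 0.4 hinge point are standardized mean differences. Below is a minimal sketch of the underlying calculation, Cohen’s d, using made-up scores purely for illustration; Hattie’s meta-analyses aggregate many such values across thousands of studies, so this shows only the basic arithmetic, not his exact methodology.)

```python
# Editor's sketch: the standardized mean difference (Cohen's d) that
# underlies effect sizes like those discussed in this episode.
# The scores below are invented for illustration only.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Effect size: difference in group means divided by the pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)  # sample standard deviations
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical scores: a class using an intervention vs. a comparison class.
intervention = [72, 75, 78, 80, 83, 85]
comparison = [70, 72, 74, 76, 78, 80]
d = cohens_d(intervention, comparison)
```

On Hattie’s scale, a value like this would sit well above his 0.4 hinge point, while reducing class size, at roughly 0.1 to 0.2, sits below it.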

Kyle Pearce: Now, I’m wondering, just to get your own perspective on this, with state, provincial, even district policies in place around assessment and evaluation including grades and how we comment or create comments and other challenges, do you have any tips for educators who are eager to try applying some of these high yield approaches? How might they create the conditions to implement something like self-reported grades in their math classroom?

John Hattie: That’s a fascinating one, and over the years, I’ve certainly struggled to come up with the best ways of looking at that word. What we’ve been doing now is turning that more into assessment capable students, because that’s something you can actually action. And the argument there is, next time you give your class an assessment or a test, before they do the assessment, ask them to put at the top what grade they’re going to get. Fortunately or unfortunately, kids by the age of eight are pretty accurate at estimating it. So you have to seriously ask why you would ever bother giving the test.

John Hattie: But our argument is you give the test to find out what you taught well, who you taught well, and what the magnitude is. And that [changes 00:30:23] the notion, firstly, of assessment as feedback to you as the teacher, because it actually doesn’t give much feedback to the students; they already know what their grade is going to be.

John Hattie: But on the other hand, what we want to do is we want to teach them to interpret the results from their assessment. And so, again, when you give the assessment back to your class next week, wait a day so it’s not just short term memory, and ask the students what did they understand by the feedback that you gave them. What do you understand about what you’re going to do next?

John Hattie: And unfortunately, it’s a pretty barren discussion. Most kids [inaudible 00:30:53] the grade. Most kids say, “Yeah, I expected about that.” And that’s despite you guys spending all your Sunday afternoons writing screeds and screeds of comments.

John Hattie: How do you teach the kids to interpret the feedback? How do you teach them to know where to go next? Because what you’re trying to do here is you’re trying to change their expectations about what they can and can’t do, because if they know that they’re a C student and they perform at that level, in a sense, they’re satisfied. And that’s just not good enough. We want them to be a B student.

John Hattie: And so the separation of … It’s kind of like coming to the notion of feedback. I spent many years of my research career trying to understand how teachers can give more feedback. Well, the first thing you note in the classroom is teachers already give lots of feedback.

John Hattie: And it turns out it’s the wrong question. The right question is how do you increase the amount of feedback that a student receives and understands? And that, unfortunately, is, in an average classroom, about two or three seconds a day. Most kids know that when you give feedback to the whole class, it’s not about them. Most kids know when they get an assignment back, the grade is about as much feedback as they’re going to get.

John Hattie: And so the assessment capable learning, the student expectation notion, is how do we teach the kids to be party to understanding and interpreting where they need to go next? Because if you think of the definition of the perfect student, the educated person, it’s the person who knows what to do when they don’t know what to do.

John Hattie: And that’s what we want to do in our maths class. It’s “Well, I couldn’t do that. How do I get help? Where do I go to next?” Not, “Oh, I couldn’t do that. Therefore, I can’t do it.” So that’s why that notion of student thinking, student expectation is so powerful. And you say you have three or four times the average effect. It’s actually double the effect of the teacher expectation. It’s very, very powerful.

Jon Orr: I’m so glad you mentioned that about helping kids understand what to do next. That’s part of the reason, and exactly how, Kyle and I modified our assessment approaches a number of years ago to include full class days dedicated to exactly that. We were tired of handing back our quizzes or tests and kids looking at the mark, like you said, and just kind of going, “Okay, that’s exactly what I predicted,” and tossing it in the trash or filing it in the binder and never looking at it again.

Jon Orr: And one of the ways that we changed right away was that we only wrote comments on how to fix that work. We didn’t write grades on it anymore, because, like you said, the kid who knows that they’re a C level student, when they get the C, they’re like, “Yeah, I’m done.” Instead, when they don’t see the C, they’re like, “Okay, well, what did I get on it?” We’re like, “Well, you’re not at your level yet, or you’re not showing proficiency yet. You need to change it.” So they might have had the C and quit. But because you’ve not written that C on there, they’re now imagining, “I’ve got to do better.” And so then they fix it.

Jon Orr: We’re spending a full class day a week now helping our kids understand that just accepting some of those grades doesn’t mean you’re done. We always want to strive towards kind of making it perfect. And that was one of the messages I’d share in the first week of class: every quiz you’re going to write, every test you’re going to write, every assessment that you’re going to do in this class will become perfect. And that’s part of our goals during our time together, is that we’re not going to just accept that mark that you thought you might have got or that mark you got. Just because you got the 60% in September, it doesn’t mean that that’s where you’re going to stay on that particular learning goal by the time we get to January.

Jon Orr: So I’m super glad that you mentioned that, because I think that’s super important.

John Hattie: Yeah. But the one thing I’d probably want to talk to you about is I would not ignore the mark. There’s information in the mark. But I like what you’re doing there as well. And what we’re doing is we’re using the notion of personal bests, and that often needs an anchor. So, you got a C last time. Let’s go for a C+, and this is how you do it.

John Hattie: And that highlights, again, the importance of the where to next and what you understand and what you don’t understand. I think the discussion about whether it’s grades or marks is a bit of a folly because both can be powerful in maximizing kids’ learning. And when you talk about perfect, there is a concept of what perfect looks like that is an A+ grade.

John Hattie: And so working with the kids on this notion of personal best, and in a sense, personal best is kind of like mini success criteria, [inaudible 00:34:46] for all kids. They have a clear understanding on a personal basis, and they talk about striving to do better than what you did yesterday.

Jon Orr: And that makes complete sense that they need a benchmark to see where they were and where they want to go. So that’s a good tip for sure.

Jon Orr: We’re going to switch gears here just for a second. We love that you reference funneling and focusing questions; in fact, we love them so much that we have discussed them here on previous episodes with some of our former guests. Can you help the Math Moment Maker community understand the difference between funneling and focusing questions?

John Hattie: Especially when you look at a typical classroom, [inaudible 00:35:20] teacher scripts. You ask questions like, what’s the average time teachers talk? It’s 89% of the time. When you ask how many questions teachers ask, about 250 a day is the typical answer. When we ask what those questions were about, they’re always about facts. They’re very content based. And then when you ask how many questions a class asks a day about their work where they don’t know the answers (so I’m ruling out “what page am I on” and “can I go to the toilet”), the answer per class was about two.

John Hattie: And that’s what we want to change with the notion of the funneling and the focusing questions. It’s to get the students to be more involved in asking questions about things they don’t know. And that requires an incredible amount of trust.

John Hattie: And then it’s this notion of looking at the nature of the student questions, which is a real luxury when you consider that two is the average per class. It’s how you can get the students to ask questions about the strategies they’re using or not using; whether they can ask questions that help you understand where they start and where their understanding breaks down; whether the connections they’re making are right or wrong; how you can get them to focus specifically on the task they’re looking at at the moment; how they can ask questions that move them in other directions; and, probably the hardest thing in our business, how they can then transfer the understanding from what they’re doing now to the next problem.

Jon Orr: You know, something that’s interesting to me, and I think about this a lot, is trying to get my own mind wrapped around this idea of asking more focusing questions in my classroom. I’m wondering, have you bumped into any research or any data that would suggest whether a certain level of teacher content knowledge, that expertise they have under their belt, can allow them to ask and plan for better focusing questions, especially those that are in the moment?

Jon Orr: Because we talk about anticipating before our lesson using the five practices for orchestrating productive discussions. And in there, we talk about this anticipating stage. It’s so important to make sure that, when we go into a lesson, we know, or at least anticipate, that some students are going to do things in certain ways. But then there are still some surprises. And I can only imagine that teacher expertise must really, really impact the effectiveness of being able to ask a good question, like a focusing question, over those sort of fact focused funneling questions all the time.

Jon Orr: Do you have any thoughts on that or any research that you’ve sort of bumped into that you can share with us?

John Hattie: Oh, yeah. It’s one of the … I spent 15 years on this question, trying to understand why that teacher subject matter knowledge, however you define it, pedagogical content, you name it, has the effect size of about 0.09. It just doesn’t make sense. And so I spent 15 years trying to understand this, and I’m sure it bothers the heck out of you guys, that really if I brought an English teacher in to teach your math class tomorrow, it doesn’t make a difference. And if you went and taught the English class, it doesn’t make a difference.

John Hattie: And that bothers me. And it really does come back when you look at why that is the case. In many ways, it’s that difference between focus and funneling. And it turns out that the reason why subject matter knowledge doesn’t matter is because of how we teach.

John Hattie: Now go back to what I said before. If 90% of the questions you ask and the reactions you give to your students are about the facts, all you need to do is be one page ahead of the kids. And that’s why subject matter doesn’t matter: because we’re so focused on the factual [inaudible 00:38:45].

John Hattie: Now, when you come to the things like focusing questions, where you do have to understand what the misconceptions kids can make, you do need a deep understanding of the math to understand “Oh, that’s how they went there. That’s why they didn’t do this.”

John Hattie: You do need to have a lot more subject matter knowledge and expertise to understand the where to next in light of each kid’s dilemmas and the way to go about problem solving. And certainly what you find is that for teachers who have that expertise and who teach in a way where they ask more focusing questions than funneling questions, subject matter knowledge matters a lot.

John Hattie: But the reason it doesn’t matter is because, so often, we go straight on to the next thing: “Oh, you didn’t understand that. Try this problem. Here’s more factual knowledge.” Ask the kids questions. Put your hand up if you know the right answer, as opposed to put your hand up if you don’t have the right answer.

John Hattie: And so, absolutely, teacher expertise matters under the circumstance where a particular kind of teaching happens, such as focusing versus funneling.

Kyle Pearce: I got chills listening to you say that because I have had some questions in the past looking at the list where there are those surprises, where you sort of go, “Oh, wait a second. What does this mean then?” And even going back to the … We had asked about whether some of the research or some of the results can be misinterpreted. I can only imagine that some districts actually take that and they sort of make an assumption one way instead of what you’ve just articulated here.

Kyle Pearce: And I couldn’t help but also wonder, another big one was the idea of inquiry based learning. And I would love to hear your perspective on this, because, for me, I feel that inquiry based learning is so helpful, but only if it’s being done well, and that’s really difficult to do if you don’t have that expertise under your belt, that content knowledge, also that pedagogical knowledge. I think really both of those two things are so important in order to actually teach effectively with inquiry based learning so it doesn’t just become aimless.

Kyle Pearce: And I’m wondering if you can help fill us in. I know the effect size at least in the chart that I have, and that might have changed since, like you said, with new data coming out, but the effect size was 0.31 for inquiry based learning, which isn’t nearly as low as the 0.09 we just discussed, but it was lower than I guess I was hoping to see because I was thinking to myself, “Oh, my goodness, I want to provide the conditions in my classroom to guide students through that process so that they can own some of the learning along the way.”

Kyle Pearce: So I’m wondering, can you help us out with your perspective on that?

John Hattie: Yeah. If you look at anything that’s got a deep focus, like problem based learning, 0.1; discovery learning, right down there near the bottom. And it’s kind of like you’re saying: that doesn’t mean you don’t do them. It means you ask the question why. You accept the evidence that they’re typically not working and ask why they’re not working.

John Hattie: And I’ve certainly done that, particularly for problem based learning, because I got a lot of criticisms from people. “Of course, all your stuff must be wrong because I know it’s not true. And if you come into my class, you’ll see I do it.” And you think those people are not listening to the evidence. The evidence is they’re typically not working.

John Hattie: And so when you look at why, let me give you a hint: problem based learning is used at dramatically high levels in first year medicine at universities. We have done 14 meta-analyses. We don’t need to do another study to know that it has a zero to negative impact on those medical students.

John Hattie: But if you introduce problem based learning in fourth year medicine, the effect size goes up to 0.5. And so that was the hint that maybe we should go back to all the problem based meta-analyses and ask the question about the timing of the intervention. And that’s where it starts to become sensible. If you go into problem based learning before the students have the subject matter knowledge, it has a very large negative impact.

John Hattie: And that’s why I am very concerned when people say, “Oh, I’m a problem based learner. I use discovery learning.” And that kind of religious zeal is where the problem comes. And it goes back to the discussion we had at the start of this podcast. It’s about the notion of differentiation of teaching. There’s a right moment to do discovery based inquiry based. There is a wrong moment.

John Hattie: And what I find fascinating is, like, Bob Marzano came out with a book a couple of years ago with 480 different teaching methods. And we thought, “Oh, this is great.” We went through them and asked, “Which of them focus on the surface, the content, and which of them focus on the deep and the relationships?” And it turns out that there’s one, maybe two, out of the 480 that do both at the same time.

John Hattie: Problem based learning, discovery learning, is excellent for deep thinking, but it’s not so good for the content. And so that’s why it’s so low. It’s because of that lack of expertise, that lack of understanding by teachers. They’ll introduce it before the kids are ready for it. And so that helps resolve what the problem is. There’s a right time. That’s why we call our model the Kenny Rogers model. You gotta know when to hold ’em. You gotta know when to fold ’em.

Kyle Pearce: Absolutely. You know, that is so helpful. And I have a video; I’ll put the link in the show notes for those who are listening, because you did discuss a bit about this idea of inquiry learning. And it makes a lot of sense to me, because I fear that sometimes, when people try to do inquiry learning, what they end up doing is starting with something that is too high level.

Kyle Pearce: And something Jon and I advocate all the time on the show is having tasks that are low floor but high ceiling, tasks that we can actually vertically mathematize. We can actually help students get beyond where they’re at now and really go deeper, but without leaving a bunch of students behind, because you see students just falling right off the map when you come in with this really high level, rich task, and it’s rigorous, and it’s all of these things that make you feel great as an educator because you think you’re really going to help these students. But you’re not really helping all the students. You’re only helping some of the students. And that, I think, really helps me to understand this.

Kyle Pearce: I’m wondering, you mentioned surface versus deep. Do you mind going … ? I know we’re getting close here. We don’t want to hold you too much longer. But I did want to talk to you a little bit about surface versus deep versus even transfer knowledge. Can you explain, for someone who’s at home and for whom those are new terms, what the difference is between them and what roles they play in the learning process?

John Hattie: Yeah. The roles of all three parts are very critical. The surface is the content, the facts, the knowledge. The deep is the relationships between those ideas. And the transfer then is applying those facts and those relationships to different contexts. And all three of them are important.

John Hattie: Sometimes we denigrate the fact side of things and say, “Oh, they don’t need to know facts. They don’t need to overlearn. They don’t need to do memorization.” Well, actually, they do so they can move on to do the relationships.

John Hattie: In many ways, we don’t want to say that it’s that linear, that you do the facts, then you do the relationships, then you do the transfer. But it certainly is the case that you do need to attend to all three parts as you’re teaching your mathematics. There are things kids need to know. But then when you go on to your problems and look at the relationships, you do need to ask whether they have sufficient knowledge to get to the relationships. Sometimes we switch back and over focus on the knowledge side of it, or some teachers, as we were discussing before with problem based learning, over focus on the relationship side of things.

John Hattie: But it’s getting that balance, and in any lesson, you switch between the surface, the deep, and the transfer. And we find that a very useful distinction to not only look at how you go about your teaching and your lesson planning, but also to help the students understand that there are three different parts to the equation of learning mathematics.

John Hattie: But we also do that in our assessment. We separate out and we say to the students for any particular instance, “This is the surface level question. These are the things we need you to know, and these are the deep level questions.”

John Hattie: We’ve also, in our latest work, started to [inaudible 00:46:17] about success criteria and talk about the surface level success criteria and the deep. And again, when you go back to our students, what do they understand by doing mathematics? Too often, unfortunately, they think the kids who are good at mathematics are the kids who know lots. And I bet that’s not what you want for the kids. You want them to know lots and be able to use it.

John Hattie: And so this distinction between surface and deep and transfer, we find a very powerful one.

Jon Orr: Awesome. I love how you mentioned that really all three are important. When I think about mathematics specifically, a lot of times you’ll hear people talking about back to basics, or some say inquiry learning, discovery, all of these things. And you’ve done a great job.

Jon Orr: And I see a connection here, where it’s showing that these things are all important and we have to have them all. And when I picture surface, deep, and transfer learning, I kind of picture almost a continuous cycle, not one after the other, but this idea that, as you’re working on surface learning of this idea over here, you could be going into deep learning about something that you learned in the past. It’s really almost like an ecosystem for learning. You have to have them all. It’s not turning one on and turning one off.

John Hattie: And Kyle and Jon, you’re actually doing Kenny Rogers. That skill of when to do that and when not to do that is the skill of evaluative thinking.

Jon Orr: Awesome. We have one last question here, and we’re going to let it be one of the questions that some of our listeners wanted to ask you, and it’s about class size. Sean Sealy is asking whether you’ve figured out, or done more research on, why class size isn’t a larger factor in student success. He feels like his district is using your research to overload class sizes because that effect size isn’t as high as some of the other strategies.

John Hattie: Oh, yes, it’s a particularly hot topic in your province at the moment.

Jon Orr: I was about to say that.

Kyle Pearce: Sadly.

Jon Orr: I was going to say this is a hot topic for us right now, too, so we’re wondering your thoughts on this.

John Hattie: Well, firstly, let’s start with the evidence. The evidence says the average effect size of reducing class size is about 0.1 to 0.2. Now, what that means, guys, is that reducing class size enhances achievement. It’s a positive effect size, and anybody who argues that they should increase class size is ignoring the evidence. The only reason you could do that is if the effect size was negative, and it is not.

John Hattie: The significant part of it is accepting the evidence and asking the question, “Why is that effect of class size so relatively small compared to what many of us would expect?” And certainly I’ve done the research on that, as have others, and it turns out there is a very simple reason. If you take a teacher in a class of 25 to 30, and you put them in a class of 15 to 20, and they teach the same way, who’s surprised?

John Hattie: And certainly what we’ve done over the years is we’ve learned, in classes of 25 to 30, to use modifications of a tell and practice model. We tell, they practice. We talk a lot. We ask lots of questions. The absolute irony of this is that, when you research teachers who go into classes of 15 to 20, they do more tell and practice. They actually talk more. There is less feedback. There is less group work.

John Hattie: And so that’s why the effect size is so small. Imagine if we reduced the class size and we changed the nature of the teacher. Now, I can only find one study in the world that’s ever asked that question. So reducing class size has not enhanced learning to any major effect. Could it? Yes, it could. But we’re not asking the right question. We’re not asking about the different nature of teaching.

John Hattie: If I said to you guys tomorrow, “I’m going to put you in a class of 500,” which is the typical class I teach at university, the nature of your teaching would have to change very quickly. It’s the same notion going from 30 to 15 and we typically haven’t done it.

John Hattie: So accept the evidence is low, accept the evidence that it has not made a difference, and ask the question, “Can we come up with ways of enhancing it?” Because if you go to smaller classes and you change the teaching methods, it could work.

John Hattie: But let me leave you with one comment. Over the last 200 years, we’ve worked out pretty well how to teach in large classes. Most of the studies in Visible Learning are based on classes of 25 to 30. There are some unbelievable, stunning teachers out there who have learned how to have pretty important and major effects in classes of 25 to 30. I want to understand them, and I have to seriously ask, if those teachers are then put in classes of 15, 15 kids are deprived of great teachers.

Jon Orr: As you were mentioning this, I’m so happy that we had an opportunity to ask that question from Sean on Twitter, because, as I’m flipping to the chart, and I know you referenced earlier that it’s not just about the chart, I’m looking at even the top 30 approaches with high effect sizes, and so many of them are ones we could do more effectively with a smaller class size. And maybe that class size effect size would change.

Jon Orr: Like feedback. Feedback is number 10 on this particular list. Response to intervention, number 3: I’m picturing this idea of small group instruction. But if, instead of just continuing to teach to the whole audience, we actually did something different with the fewer students we have, building that teacher-student relationship and really interacting with them on a daily basis, I can only imagine that, over time, those numbers would change.

John Hattie: I’m sure you’re right.

Jon Orr: Fantastic. Fantastic. Well, John, listen, we know you are an incredibly busy person. We really want to thank you so much for joining us on the podcast, but before we go, is there anything coming up on the horizon that you want to share with the audience, as well as where they can learn more about John Hattie and the Visible Learning work that you are always doing and diving deeper into?

John Hattie: Well, yeah, as I mentioned earlier, we released all the data last weekend on a website called MetaX. I don’t want to be commercial here, guys, but the Corwin publishing company in America oversees all the implementation of my work. As you can imagine, I’m the researcher. I sit in the back room. I talk a lot, but I’ve got a team out there that implements Visible Learning around the world and in your country. And so if you go to the Corwin Visible Learning Plus site, you’ll find all the resources you’ll need and lots more, so I welcome you to use that.

Kyle Pearce: Awesome stuff. Awesome stuff.

Jon Orr: Awesome. Awesome.

Kyle Pearce: So, John, again, we want to thank you for joining us here and we hope you enjoy the rest of, I guess, your morning.

John Hattie: It is, and look, Jon and Kyle, thank you so much, and as you can imagine, having done my PhD in Canada, I owe your country a tremendous amount. Love it dearly, and I wish all of you the very best in your enjoyment of your [inaudible 00:52:33].

Jon Orr: Awesome. Thank you very much.

Kyle Pearce: We want to thank John again for spending some time with us to share his insights with us and you, the Math Moment Maker community.

Jon Orr: As always, how will you reflect on what you’ve heard from this episode? Have you written ideas down? Drawn a sketch note? Sent out a tweet? Called a colleague? Be sure to engage in some form of reflection to ensure that that learning sticks.

Kyle Pearce: Also, the Making Math Moments That Matter podcast is excited to bring you the Make Math Moments With Corwin Mathematics Book Giveaway. That’s right, we’re giving away 10 books from Corwin Mathematics, including John’s book, Visible Learning, plus you’ll receive special Corwin discounts and digital downloads just for entering the draw. You can get in on the giveaway by visiting MakeMathMoments.com/giveaway by Wednesday, July 31st, 2019.

Jon Orr: Listening after Wednesday, July 31st, 2019? Don’t sweat it. We are always actively running giveaways, so check out MakeMathMoments.com/giveaway to learn about our current giveaway that’s going on right now.

Kyle Pearce: Don’t miss out. Dive in at MakeMathMoments.com/giveaway.

Jon Orr: That’s MakeMathMoments.com/giveaway.

Kyle Pearce: In order to ensure you don’t miss out on any episodes as they come out each week, be sure to subscribe on iTunes or your favorite podcasting platform.

Kyle Pearce: Also, if you’re liking what you’re hearing, do us a favor. Share the podcast with a colleague and help us reach a wider audience by leaving us a review on iTunes, and tweet us your biggest takeaway by tagging @MakeMathMoments on Twitter.

Jon Orr: Shout outs and links to resources from this episode can be found at MakeMathMoments.com/episode35. Again, that’s MakeMathMoments.com/episode35.

Kyle Pearce: Well, until next time, I’m Kyle Pearce.

Jon Orr: And I’m Jon Orr.

Kyle Pearce: High fives for us …

Jon Orr: And high fives for you.



Why not join our year-round membership platform packed with courses, problem based tasks, past Virtual Summit Session replays, and a vibrant community forum to jump-start your journey to Make Math Moments during each and every lesson?

Try it FREE for 30 days!

LEARN MORE about our Online Workshop: Making Math Moments That Matter: Helping Teachers Build Resilient Problem Solvers. https://makemathmoments.com/onlineworkshop

Thanks For Listening



  1. Jason Rice

    Awesome conversation! I found this podcast giving me so many implementable ideas for this upcoming year. Thank you!

    • Kyle Pearce

      Thank YOU for listening and stopping by to say “HELLO!”



  1. Episode #160: How to Make Connections In Your Problem Based Lesson - A Math Mentoring Moment - […] Visible Learning: An Interview with John Hattie https://makemathmoments.com/episode35/  […]
