John Hattie is Wrong | School Matters Foundation
[Image: Robert Slavin's blog]

John Hattie is a professor at the University of Melbourne, Australia.  He is famous for a book, Visible Learning, which claims to review every area of research that relates to teaching and learning.  He uses a method called "meta-meta-analysis", averaging effect sizes from many meta-analyses.  The book ranks factors from one to 138 in terms of their effect sizes on achievement measures.  Hattie is a great speaker, and many educators love the clarity and simplicity of his approach.  How wonderful to have every known variable reviewed and ranked!
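To make the method concrete, here is a minimal sketch of the kind of averaging a "meta-meta-analysis" implies. The factor names and effect sizes below are invented for illustration only; they are not Hattie's actual data. The point the sketch makes is the one the critique turns on: a simple average of meta-analytic effect sizes inherits whatever bias each underlying meta-analysis carries.

```python
# Illustrative sketch (made-up numbers): a "meta-meta-analysis" collapses
# many meta-analyses into one ranked list by averaging their effect sizes.

# Hypothetical mean effect sizes, one value per meta-analysis of a factor.
meta_analyses = {
    "feedback": [0.95, 0.60, 0.81],
    "class_size": [0.20, 0.25, 0.18],
}

def average_effect(effects):
    """Unweighted mean -- inherits whatever bias each meta-analysis carries."""
    return sum(effects) / len(effects)

ranked = sorted(
    ((factor, round(average_effect(es), 2)) for factor, es in meta_analyses.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
print(ranked)  # [('feedback', 0.79), ('class_size', 0.21)]
```

Nothing in this procedure asks whether the individual meta-analyses were themselves sound, which is exactly the objection raised below.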


However, operating on the principle that anything that looks too good to be true probably is, I looked into Visible Learning to try to understand why it reports such large effect sizes.  My colleague, Marta Pellegrini from the University of Florence (Italy), helped me track down the evidence behind Hattie's claims. 

 

And sure enough, Hattie is profoundly wrong.  He is merely shoveling meta-analyses containing massive bias into meta-meta-analyses that reflect the same biases.


Part of Hattie's appeal to educators is that his conclusions are so easy to understand.  He even uses a system of dials with color-coded "zones", where effect sizes of 0.00 to +0.15 are designated "developmental effects", +0.15 to +0.40 "teacher effects" (i.e., what teachers can do without any special practices or programs), and +0.40 to +1.20 the "zone of desired effects".  Hattie makes a big deal of the magical effect size of +0.40, the "hinge point", recommending that educators essentially ignore factors or programs below that point, because they are often no better than what teachers produce each year, from fall to spring, on their own.


In Hattie's view, an effect size of +0.15 to +0.40 is just the effect that "any teacher" could produce, in comparison to students not being in school at all.  He says, "When teachers claim that they are having a positive effect on achievement or when a policy improved achievement, this is almost always a trivial claim:  Virtually everything works.  One only needs a pulse and we can improve achievement." [Hattie, 2009, p. 16].  An effect size of 0.00 to +0.15 is, he estimates, "what students could probably achieve if there were no schooling" [Hattie, 2009, p. 20].


Yet this characterization of dials and zones misses the essential meaning of effect sizes, which are rarely used to measure how much students gain from fall to spring.  Rather, they measure how much students receiving a given treatment gained in comparison to the gains made by similar students in a control group over the same period.

So an effect size of, say, +0.15 to +0.25 could be very important.
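For readers unfamiliar with the statistic: an effect size of this kind is typically computed as the difference between the treatment and control group means, divided by a pooled standard deviation. A minimal sketch with invented test-score gains (the numbers are purely illustrative, not from any study):

```python
import statistics

def effect_size(treatment, control):
    """Standardized mean difference: (mean_t - mean_c) / pooled SD."""
    mean_t, mean_c = statistics.mean(treatment), statistics.mean(control)
    var_t, var_c = statistics.variance(treatment), statistics.variance(control)
    n_t, n_c = len(treatment), len(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Invented score gains for two matched groups of students.
treatment_gains = [12, 15, 11, 14, 13]
control_gains   = [10, 12,  9, 11, 13]
print(round(effect_size(treatment_gains, control_gains), 2))  # 1.26
```

Because the denominator is the spread among students, even a modest-looking number like +0.20 means the treatment group pulled meaningfully ahead of an otherwise similar control group, which is why dismissing everything below +0.40 is misleading.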


Commentary from T.L. Zempel, author of CHAOS in our schools

In my book, I devote a chapter to the research and postulations behind Hattie's effect sizes and to what Visible Learning actually entails in practice.


In truth, rather than being a simple way to understand how students learn and what makes learning 'stick', Visible Learning turns teaching into a very complicated enterprise, emphasizing uniformity and supposed visibility for students without making a meaningful impact on achievement.  I saw this during the 2012 to 2014 school years, when we teachers in our Colorado public school were compelled to reform our classroom protocols to conform to Hattie's ideals and were held to account for it on our final evaluations.


In addition to assessing the methods teachers use and their effects on learning (as discussed in Robert Slavin's blog), Hattie stresses that students must buy into their learning so it will stick.  The best way to achieve this, he insists, is to explicitly communicate what we are teaching, and to do it with uniformity so all students get identical lessons.  To this end, we made posters to display the curriculum standards and charts to communicate our Learning Goals, Learning Targets, and Success Criteria, Hattie-style.  Administrators would walk through our rooms daily to inspect these charts, making sure they showed change.  Every poster and chart had to be identical, which meant we teachers had to meet to agree on the wording and then line up to use the laminator for the posters we had just printed.  Try to imagine that collaborative process!  Invariably, if the district's curriculum document listed "Analyzing Text Features Guides the Reader" as a standard, someone was sure to propose alternate wording, such as "Text Features Provide Important Clues", arguing that it was easier for students to understand.  Then we would have to "discuss" until agreement was reached.  Usually, the teacher with the most aggressive personality would win her point and the rest of us would acquiesce.  Officially, anyway.  This is the process we had to go through to create the uniform posters that adorned our walls throughout any unit of study.

 

I wondered why the district didn't mass produce these posters for all teachers in every school so true uniformity could occur.  When I raised this question at a staff meeting, the response was not satisfying:

"Teachers must also achieve buy-in to what they are doing.  If they don't parse the curriculum together, that investment is not likely to happen."


The chart that communicated our Learning Goals (the unit of study), Learning Targets (the daily lesson), and Success Criteria (the assignment for the day) was another story.  And yes, these charts also had to be uniform among the five 6th grade teachers.  Or nearly so.  The first thing an administrator would look for on his walk-through was this chart.  Teachers ended up spending a lot of time making their classrooms look as if powerful teaching and learning were happening, when in reality, what our administrators often saw was merely a facade.  That's because we knew what we were required to do was for show only.  It wasn't authentic for learning, and our students knew it.  There was a lot of eye-rolling among my students when they learned the rationale behind why we were doing it.  Some of them even found the language we were compelled to use on these charts amusing.

How would you react if you were 11 years old and saw a chart on your teacher's whiteboard that looked like this:

[Image: Visible Learning chart]

Theoretically, this type of chart fulfills a student's need to know why he is learning what he is learning.  Judging from the reactions of my students, they cared only about the Learning Targets and the Success Criteria, and even then they did not appreciate the convoluted wording used to share those ideas.  When one student asked me why I had used the term ThinkSheet for my science assignment instead of worksheet "like a normal person", I smiled and said, "Because I'm not allowed to anymore.  The bosses think that term is meaningless."  More eye rolls and snickers.


John Hattie's Visible Learning is a textbook case of ignoring the old adage 'If it ain't broke, don't fix it.'  I'm not saying education was perfect prior to his idiotic, improperly researched theses, but for the last two decades, at least, the education power structure has been looking for things to 'fix', even when they didn't appear to need fixing.  Hattie's postulations were deemed 'cutting edge', and anyone who wished to further his administrative career jumped on that bandwagon.  What no one bothered to assess, realistically, was whether making the process of preparing to teach much more laborious and time-consuming would positively affect the lessons we were delivering.  Our school had, for years, been an award-winning institution, with teachers who signed on because they knew quality was at the core of everything we did and parents who open-enrolled their children because they knew quality instruction would occur.


But because we teachers were now compelled to spend so much of our prep and meeting time creating ridiculous posters and charts, we had less time to study a lesson before teaching it.  Teachers often prepare the way actors do -- going through their lines, imagining the scene and how the students will respond, anticipating snags to the 'performance'.  That process was seriously compromised, at least for me.


Thankfully, the Visible Learning fad faded from our school psyche after two years, further enhancing the cynicism teachers feel toward professional development: "Yeah, we gotta do it this year and maybe even next, but eventually, it will go away, like every other 'great new fad' does."  And for us, it did.  I wonder if the "data" we were collecting from our classrooms helped that process along.  I know my data didn't show any improvement in achievement compared to the years before 2012.  In fact, it showed a decline.


Incidentally, prior to 2012, we teachers did post daily info about the lesson and the assignment.  We already knew such information aided focus for students in our classrooms.  Here's what a posting in my room looked like, circa 2010:

[Image: 2010 assignment chart]

Research Visible Learning yourself.  You can find the information Hattie and his sycophants want you to have at Visible-Learning.org.  Then put into your search engine "Visible Learning Critiques", and you will find a plethora of blogs and articles.  Here's a link to one of my favorites from a blog written by Ollie Orange 2.


 

The trouble with education researchers is two-fold, as I see it:

  1. The researcher often does not have the same goal that teachers have: to help students learn.  No, their goal usually has to do with something called 'a feather in one's cap'.  The prime motivation is the accolades they will receive for their research, with a secondary motivation being, of course, the money.

  2. The research is not conducted in authentic settings.  Check out the reality of this with the instructional videos on our webpage What's Wrong With Education?


You can find out more about the lunacy that is Visible Learning in my book CHAOS in our schools.
