Sunday, February 2, 2020

Interpreting graphs with "What's Going on in this Graph?"



It's been three years since the New York Times partnered with the American Statistical Association to bring "What's Going on in this Graph?" to teachers and students. The premise is simple: every week during the school year, they publish a graph and ask students to think about it and discuss their observations, either on the site itself or on Desmos. A week later comes the reveal, with more information and highlights from the moderated discussion.

I started using the activity with my 8th grade Science and Engineering classes last year, as I struggled to teach them to interpret motion graphs. What I was seeing at the time were students who jumped to conclusions without stopping to actually look at the axes or units, or who chose bar graphs over line graphs simply because they were more familiar. I was also invested at the time in having students create infographics for projects, but again, they were doing so without looking at or thinking about why one choice was better than another. Even worse, some were Googling for ready-made infographics and pasting them in without realizing that what they had included actually contradicted, or was completely irrelevant to, the message they were trying to send. The need was obvious: how do I help students acquire the skills they need to analyze graphs and charts?

During the first few months of using them, I presented the graph posted by the NYT that week and used their prompt:

  • What do you notice?
  • What do you wonder?
  • What's going on?
This was fine except for one little thing: students started looking at the posted comments for "correct" answers, or as a guide for what to write in their response to me. While it did force them to at least open and read the comments, they were still taking the shortcut of having someone else do the thinking and noticing. So I started adding other questions that require actually reading the graph in order to answer them. These questions were not necessarily of a very high DOK, but rather of the "actually look at the graph" type. Things like:
  • "What is the military spending worldwide presented on the graph?" for this installment
  • "Which destination seems to be the most popular for Thanksgiving Travel?" for this one
  • "Describe how the author represents data in the graphic."
These types of questions are always presented before the analysis questions, and they have served us well in helping students look at the graphs before moving on to the deeper analysis posed by:

  • What's going on in this graph? (i.e., what would be an accurate conclusion that can be supported by this graph?) To answer this question, use the CER (Claim, Evidence, Reasoning) framework.
  • Write a Tweet of 140 characters or fewer that could accompany the sharing of the graph. Your response must include a relevant hashtag.
By now, I have a bundle of 29 such activities in the GoFormative library. Most of them come from the weekly NYT posts, with a couple added from other sources because of my students' interest in a topic. The activity is weekly for me and takes up about 30 minutes of class time. Since many of the graphs are related to my content, the routine ends up supporting not only the graph-interpretation skills espoused in the CCSS and NGSS but also the ISTE standards for students, making it an absolute win.

Documenting peer reviews leads to better builds



As a project-based learning teacher, I know the importance of feedback during project runs. I constantly conference with students, formally and informally, trying to push them to think critically about the project as well as about what they are creating. Unfortunately, these conversations and documents don't always make it into their final projects, leaving both me and the students frustrated come unveiling day. You see, my students often become enamored with an idea, and it is really hard to sway them, even when they realize that they should have taken a different approach. They come up with "patch" solutions to "fix" the immediate problem, but they seldom take a step back and realize that they should start from scratch. While I know that the learning is in the process, I also see a lack of transfer of those lessons from project to project - i.e., this did not work last time, so why would it work this time?

Case in point - I run a project based on Teach Engineering's "Adding Helpful Carrier Devices to Crutches" and have a whole set-up that walks students through the engineering design process for it (Assistive Technology - Crutches). Last year we even had one of our students as an actual client who needed the device, and the teams were supposedly building to her specifications. What happened instead was that students basically attached whatever they had on hand without much consideration for usability - boxes and bags that made holding the crutches almost impossible, or that were so big they caused a severe imbalance, and so on. When challenged about this during the first round of testing, their solutions were always about fixing what was already there (cutting a hole for the hand, or adding dividers to keep things from swaying), but they never included "take the whole thing off and re-work it from scratch." After weeks of patches, students unveiled final products that did comply with the requirements but were not actually useful. The question posed to the student we were building for, "Would you buy this?", was always met with, "No, not really" - even for her own build! This led me to a bigger reflection on where I was dropping the ball and what I could do to promote better builds.

As I pondered this question, I came up with two key things that I've implemented and seem to be working:

The fast build 

As soon as the project is introduced, the students have one class period to create and test a quick prototype. This happens even before the brainstorm. The goal of the fast build is to help students identify where the problems may eventually arise.


The Peer-Review documentation

After the fast build, my students continued through the engineering design process as usual. However, when it came time to test the first full build, I introduced a "Prototype Evaluation" rubric.
 
The key portion for us was the requirement that reviewers provide specific ideas to help the team improve. I "sold" this to the students as, "Your team has already thought about different ways to address issues, but that is only 4 brains. You are getting the benefit of 28 other brains that are seeing other things you have to address." After the testing is done and the prototype evaluation rubrics have been filled out, each team is responsible for compiling the feedback, presenting a summary of the information from those rubrics, and creating a plan of action for the next prototype.

This peer-review documentation seems to be working, though it does add three days of work for each prototype. In this case, I am requiring at least three rounds, which extends the project by nine more class periods. While this may not be feasible in every situation, I believe the time will be well spent.

What do you think? What scaffolds do you have in place to ensure your students are successful during their project runs? I'd love to hear about them.