SXSWedu 2014: Thoughts about Pedagogy and Assessment

One of my takeaways from the 2014 SXSWedu conference was that two critically important topics in education and technology—pedagogy and assessment—weren't discussed in any real detail. I realize I was able to attend only a fraction of the sessions, and that many presentations are designed to provide an "airplane" view of a topic, but these subjects seemed noticeably absent. My sense is that this lack of emphasis on the specifics of pedagogy and assessment, both in classrooms and in education technology, is consistent with how rarely these topics come up in conversations about edtech and blended learning.

Fortunately, it appears others have the same concerns. In the SXSWedu panel discussion "Startups Should Talk with Researchers and Educators," researcher Dr. George Veletsianos (Canada Research Chair in Innovative Learning and Technology and a Professor at Royal Roads University) captured some of my concerns when he posed two questions:

  1. Do we really know how people learn, or are we just starting to find out? (Pedagogy)
  2. How do we measure learning? What does it mean to say that a student “learned something”—that she can replicate what was just demonstrated or that she can perform it six months later? (Assessment)

To be sure, these are critically important questions. Dr. Veletsianos raised them somewhat rhetorically because conversations about education and technology too often ignore them; in the panel's view, most edtech startups overlook these subjects entirely. They are certainly questions worth discussing.

As an example, there are many claims that analyzing "big data" gathered from online learning and edtech will reveal how students learn best. Such claims carry an underlying implication about the pedagogy question above—namely, that how people learn is still somewhat of a mystery. In reality, decades of educational and cognitive research have confirmed solid principles about how people learn. One of the most widely regarded books in education research is actually titled How People Learn, and its lead editor, John Bransford, was one of the original members of the DreamBox Advisory Board. Our team at DreamBox has drawn on this research and engaged in dialogue with educators and researchers since our earliest days as a startup. We believe that while big data can certainly reveal new insights about learning and engagement, it's the pedagogical approach used to collect those data and assess student learning that most significantly determines the quality of those insights.

This discussion of pedagogy leads directly to Dr. Veletsianos' second question: how do we assess learning? Whenever there are claims about how students "learn best," we must use the lens of assessment to clearly define both "learn" and "best." For example, if big data reflect only student responses to multiple-choice questions, then the information and insights from those data will be limited. Similarly, if assessments are administered only immediately after direct instruction that shows a student how to perform a skill—whether in person or via a recorded lecture—then those data again provide limited evidence of student learning, sense-making, and transfer. In that case, the test or quiz often assesses short-term memory rather than long-term understanding; indeed, the real assessment is whether the student can perform the skill with understanding six months later. Furthermore, this instructional approach doesn't align with the findings of educational research: "Providing students with opportunities to first grapple with specific information relevant to a topic has been shown to create a 'time for telling' that enables them to learn much more from an organizing lecture" (Bransford et al., p. 58). Notice how a discussion of assessment leads directly back to conversations about pedagogy. The two are necessarily interconnected in critical ways.

Everyone wants all students to learn more, understand deeply, and achieve success. To accomplish these goals, we need more emphasis and conversation around the specifics of pedagogy and assessment, both in classrooms and in edtech. Fortunately, we're not starting from scratch; there's a strong foundation of research to build on. But we need to understand that research better, discuss how it specifically impacts teaching and learning, and use the takeaways to design lessons and learning experiences that align with research-based principles.

Source:

Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school (2nd ed.). Washington, DC: National Academy Press.

Tim Hudson

VP of Learning for DreamBox Learning, Inc., Hudson is a learning innovator and education leader who frequently writes and speaks about learning, education, and technology. Prior to joining DreamBox, Hudson spent more than 10 years in public education, first as a high school mathematics teacher and then as the K–12 Math Curriculum Coordinator for the Parkway School District, a K–12 district of more than 17,000 students in suburban St. Louis. While at Parkway, Hudson helped facilitate the district's long-range strategic planning efforts and was responsible for new teacher induction, curriculum writing, and the evaluation of both print and digital educational resources. Hudson has spoken at national conferences including ASCD, SXSWedu, and iNACOL.