3 Questions That Still Boggle Educators, Technologists and Researchers
Originally posted by EdSurge, Sept 27, 2016
By Alex Sigillo
How can educators, technologists and academic researchers—three groups that don’t often convene—shape new ways to assess teachers and students? That was one of many questions posed at CRESST CON, an annual conference for those working at the intersection of education, technology and research. Held at the newly opened Meyer and Renee Luskin Conference Center at UCLA, the gathering drew more than 200 attendees and speakers—including big names such as Pedro Noguera, Alan Kay and Jim Shelton.
Throughout the two days, conversations covered a range of topics, from assessment in informal learning environments to the effectiveness of games in promoting learning outcomes. Some questions elicited thoughtful responses and follow-up action steps. Other problems left folks—even those with doctoral degrees—scratching their heads. Here’s what they talked about.
How do we invest in education?
To improve student learning and success, schools need innovative solutions that can scale. Shelton, who heads the Chan Zuckerberg Education Initiative, argued—rather forcefully—that both the public and private sectors must invest more dollars in education. Total spending on K-12 education amounted to $613.6 billion in 2014, but only $2.9 billion of that went to technology. Research and development spending in education is smaller still, at just $1.2 billion.
More investors are trying to fill the gap. Since 2010, venture capitalists have poured more than $2.3 billion into companies building education products, but that is a far cry from the money invested in other markets such as computers and software, which drew $115.7 billion and $8.7 billion, respectively, in the second quarter of 2016 alone.
So why are so few dollars spent on education research? Shelton reasoned that most people don’t see education as an investment. Government officials and investors, he argued, must push research agencies and edtech companies to develop well-designed, evidence-based products that can scale quickly to all students. These products should be validated to produce positive learning outcomes before coming to market.
Missing from his keynote, however, were the specific steps—the what and the how—to achieve those goals. Shelton’s team is drawing up plans, and will play a pivotal role in shaping that incentive infrastructure.
Following Shelton’s talk, conference sessions centered around two major themes: the importance of human interaction and the efficacy of tech tools in learning environments.
Will technology replace human engagement in student learning?
As instruction becomes more differentiated, many attendees wondered whether educators can keep pace with the needs of their students. Games, online instruction and other classroom technologies can support personalized learning and enhance student success. Among presenters, however, there was little fear that technology would replace teachers.
Many shared that using technology to improve academic skills is just a part of what students need to succeed. Participants recognized that tech tools by themselves cannot address students’ cognitive, emotional and social needs. These are areas where teachers and parents must still provide the majority of support.
From these interactions, educators can assess what’s working, what’s not and for whom, since they have a broader understanding of what each child needs beyond instructional supports. Clearly, technology is important, but it can’t drive instruction or define student success.
How effective are tech tools in student learning?
Few participants offered a direct answer to this question. Rather, they responded with other questions, such as: “What is the research question you want to answer?” and “How do you want to measure the impact of tech tools on student learning?”
The participants weren’t dodging the original question so much as recognizing that it’s problematic: it implies there are be-all, end-all tools proven to work for everyone.
Yet what works for one group of students may not work for another. Methodology, implementation, environment—all of these factor into whether something “works.” It’s a problem that John Hattie, the Director of the Melbourne Educational Research Institute, has written about: according to his meta-analysis of efficacy studies, 95 percent of everything educators do has a positive influence on learning!
Questions over the use and efficacy of technology in the classroom, and how quickly tools can be scaled, haven’t been fully resolved. But educators, technologists and researchers agree that technology supports instruction, and that more value should be placed on how students learn—rather than how they perform on academic tests.