Design-based research (DBR) is “a systematic but flexible methodology aimed to improve educational practices” (Wang & Hannafin, 2005, p. 6). That is, DBR is all about creating “better” learning designs, which raises the question: what do we mean by “better”? Reigeluth and Frick (1999) tell us that what we consider better is based upon our values, and they offer effectiveness, efficiency, and appeal as an example set. I like this idea, but I prefer to think of it as effectiveness, efficiency, and fun!
Effectiveness is what we usually measure when we evaluate training programs. More specifically: did the learners learn? However, Scanlon and Issroff warn us that “evaluations based solely on maximising learning outcomes are too limited to help us to fully appreciate the impact of technology on learning” (Scanlon & Issroff, 2005, p. 431). When using a design-based research approach, effectiveness is something you may choose to evaluate in an early iteration of the learning intervention. Once you have established effectiveness, you can turn to efficiency.
Efficiency refers to whether you are getting value for your effort. Fishman et al. highlight an issue with design-based research: it often “does not explicitly address systemic issues of usability, scalability and sustainability” (Fishman, Marx, Blumenfeld, Krajcik, & Soloway, 2004, p. 43). Research that values efficiency should address all three of these issues. Note that in this sense, sustainability refers to the ability of the intervention to be continued in the absence of the researcher, not the loaded sense of the word currently used by environmental researchers.
Fun refers to the emotional appeal of the learning intervention. Learners should walk away feeling that they enjoyed themselves and be willing to recommend the intervention to their peers. This is especially true when dealing with technology adoption, as “emotional value, a product’s potential to arouse emotions [74, 75], heavily influences the adoption and use of hyped technologies” (Hedman & Gimpel, 2010, p. 169).
So, today’s big idea is that the evaluation of technology adoption programs needs to address the values associated with effectiveness, efficiency, and fun.
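If you wanted to operationalise these three values in an evaluation instrument, it might look something like the sketch below. Everything here is hypothetical: the 1-5 rating scale, the field names, and the weights are invented for illustration, not drawn from any of the cited papers.

```python
from dataclasses import dataclass

@dataclass
class EvaluationScores:
    """Illustrative 1-5 ratings for one iteration of a learning intervention.

    The three dimensions mirror the values discussed above; the scale and
    weights are invented for demonstration, not a validated rubric.
    """
    effectiveness: float  # did the learners learn?
    efficiency: float     # usability, scalability, sustainability
    fun: float            # emotional appeal; would learners recommend it?

def overall(scores: EvaluationScores,
            weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> float:
    """Weighted average across the three values (weights sum to 1)."""
    w_effect, w_effic, w_fun = weights
    return (w_effect * scores.effectiveness
            + w_effic * scores.efficiency
            + w_fun * scores.fun)

# An early design-based research iteration might score well on
# effectiveness but lag on efficiency and fun:
iteration_1 = EvaluationScores(effectiveness=4.0, efficiency=2.5, fun=3.0)
print(f"Overall: {overall(iteration_1):.2f}")  # Overall: 3.35
```

In practice you would likely shift the weights across iterations, in line with the idea above that effectiveness is established early and efficiency and fun are addressed later.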
References
Fishman, B., Marx, R., Blumenfeld, P., Krajcik, J., & Soloway, E. (2004). Creating a framework for research on systemic technology innovations. The Journal of the Learning Sciences, 13(1), 43-76. doi:10.1207/s15327809jls1301_3
Hedman, J., & Gimpel, G. (2010). The adoption of hyped technologies: a qualitative study. Information Technology and Management, 11(4), 161-175. doi:10.1007/s10799-010-0075-0
Reigeluth, C., & Frick, T. W. (1999). Formative research: A methodology for creating and improving design theories. In C. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory (2nd ed., pp. 633-651). Mahwah, NJ: Lawrence Erlbaum Associates.
Scanlon, E., & Issroff, K. (2005). Activity theory and higher education: Evaluating learning technologies. Journal of Computer Assisted Learning, 21(6), 430-439. doi:10.1111/j.1365-2729.2005.00153.x
Wang, F., & Hannafin, M. J. (2005). Design-based research and technology-enhanced learning environments. Educational Technology Research and Development, 53(4), 5-23. doi:10.1007/BF02504682