Preprint
Review

When Video Improves Learning in Higher Education


A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 28 December 2023
Posted: 03 January 2024

Abstract
The use of video in education has become ubiquitous as technological developments have markedly improved the ability and facility to create, deliver and view videos. The concomitant pedagogical transformation has created a sense of urgency regarding how video may be used to advance learning. Initial reviews have suggested only limited potential for the use of video in higher education. More recently, a systematic review of studies on the effect of video use on learning in higher education, published in the prestigious Review of Educational Research, found effects to be positive overall. In the present paper we critique this study and provide some needed clarity on the state of research on learning via video in higher education. We reveal significant gaps in the study methodology and write-up, and use a cognitive processing lens to critically assess and reanalyse the study data. We found the results of this study to be applicable only to learning requiring lower-level cognitive processing and conclude, consistent with prior research, that learning benefits remain limited.
Subject: Social Sciences - Education

Introduction

The use of video in education has become ubiquitous as technological developments have markedly improved the ability and facility to create, deliver and view videos. As video has become an increasingly important pedagogical tool, the pace of its penetration into modern educational processes has outstripped the ability of researchers to evaluate its effectiveness (Woolfitt, 2015). In one recent and significant effort to evaluate the effect of video usage in education, Noetel et al. (2021) conducted a meta-analysis titled “Video improves learning in higher education: A systematic review”, published in the prestigious Review of Educational Research and directed at the use of video as either a replacement for or a supplement to classroom learning.
Given the ranking of the journal, it was reasonable to expect that greater clarity might be achieved on the effect of video, at least in higher education. Yet a close inspection of the study reveals significant gaps that call into question the overall study findings. As accurate research findings are urgently needed to inform policy decisions, this paper sets out to critically assess the aforementioned study and to conduct a reanalysis of the associated study data. In our critique and reanalysis we use a cognitive processing lens to build on Noetel et al.’s (2021) work and provide some needed clarity to better inform future educational research and policy development.

Initial Critical Assessment

In early media reports, Noetel et al. (2020) characterised their findings by stating that the use of video was “consistently good for learning”; later, in the published study, they described the effects of video as “unlikely to be detrimental and usually improve student learning” (Noetel et al., 2021, p. 204). Though such a disparity in characterisations may raise concern, of further interest was how their overall study conclusions appeared to contradict prior research suggesting the video medium has only limited potential for advancing student learning in higher education.
For example, prior related research regarding the use of video, claiming to be the first experimental evidence of its kind, yet not covered by Noetel et al., has shown detrimental learning effects for some minority and other student demographic groups when relying on videos for instruction (Figlio, Rush & Yin, 2013). This mirrors findings from a recent meta-analysis at the school level for a related medium: low-SES students were found to be disadvantaged by the use of screen-based as compared to paper-based books (Furenes, Kucirkova & Bus, 2021). As an important issue regarding generalisability, Noetel et al. do not account for any participant demographics in their meta-analysis.
Moreover, previous reviews on the use of video in education also suggest limited potential. For example, an early review by Hansch et al. (2015) concluded there was “little conclusive research to show that video is indeed an effective method for learning”, recommending consideration of a variety of pedagogical resources rather than a simple reliance on video (p. 10). Their conclusion and recommendations were later confirmed in a review by Poquet et al. (2018), which systematically analysed 178 papers published between 2007 and 2017 that met strict inclusion criteria. Their findings highlighted some of the complexities involved in this research, suggesting the effectiveness of video depends on such variables as students’ prior knowledge related to the learning objectives and the nature of the learning tasks or knowledge being learned (e.g. simple recall vs. conceptual understanding).
Indeed, an emerging body of research has found the efficacy of different instructional media varies depending on the nature of the associated learning task (ChanLin, 1998; Garrett, 2016; Hong, Pi & Yang, 2018). For example, in early research ChanLin (1998) investigated (n = 135 undergraduate students) the use of three visual treatments (no graphics, still graphics and animated graphics) using covariates of learner prior knowledge (high vs. low) and the nature of the knowledge being learned (procedural vs. descriptive facts), with results supporting early claims that visual treatment effects vary according to the nature of the knowledge being learned. Of particular note, the use of visuals did not always guarantee successful learning. In related research, Garrett (2016) used a novel data-mining approach to analyse PowerPoint files (n = 30,263) and understand differences in slide presentation relative to academic discipline. Though focused on teaching approach rather than learning effect, the nature of the discipline was found to significantly predict how text and graphics were used. Finally, Hong, Pi and Yang (2018), in a randomised controlled experiment, examined the learning effectiveness of video lectures (n = 60 undergraduate students) using covariates of knowledge type (declarative vs. procedural) and instructor presence (with vs. without). Results suggested “the learning effectiveness of video lectures varies depending on the type of knowledge being taught and the presence or absence of an instructor” (p. 74). Taken as a whole, prior research suggests the nature of learning in each study context is an important moderating variable needed to properly understand the effects of video on learning.
In contrast with these findings, Noetel et al. (2021) considered learning tasks via a relatively simplistic “skill” vs. “knowledge” dichotomy (later characterised in their review as “teaching skills” vs. “transmitting knowledge”; p. 222). Indeed, complicating the ability to interpret their findings, they provide very limited information about the educational contexts represented in their meta-analysis, which they refer to as “learning domains”, while providing no definition of what is meant by a learning domain. Furthermore, and perhaps most surprising, no descriptive summary of domains was included in the study. In sum, their review employed an analytical framework that was not grounded in theory and that appeared to reflect an overly simplistic view of the nature of learning, while providing little information about the learning contexts represented in the meta-analysis.
Given this assessment, we undertook a close investigation of the study data (available at bit.ly/betteronyoutube), asking two research questions:
  • What is the nature of the learning contexts covered in the study, as suggested by (i) a basic descriptive analysis of Noetel et al.’s (2021) data, and (ii) a descriptive analysis using a relevant established theoretical framework?
  • What does a reanalysis of the data tell us about how the use of video affects learning in higher education when the aforementioned theoretical framework is employed?
We first present the methodology, results and some discussion for each research question in turn. Following this, we present a summary discussion in which we conclude, consistent with prior research, that the use of video has limited potential for learning in higher education.

Research Question 1

As previously discussed, the original study write-up provides limited information about the learning contexts represented by the included studies. This led us to seek a clearer understanding of the contexts covered by the review.
We first made use of the original source data and learning domain categorisations (as categorised by Noetel et al.) to present a simple descriptive analysis of the contexts, the results of which may be seen in Table 1 below.
From this basic analysis it is clear, consistent with expectations (e.g. Czerniewicz & Brown, 2007), that approximately 80% of included studies were set in health science contexts (e.g. medicine, nursing, dentistry). As a broad domain, learning in the health sciences has been found to focus mostly on lower-level cognitive processes, such as learning facts and procedures, as typically revealed using the lens of Bloom’s (1956) taxonomy of cognitive learning objectives 1 (e.g. Medicine: Cooke, Irby, Sullivan & Ludmerer, 2006; Légaré et al., 2015; Callaghan-Koru & Aqil, 2020; Nursing: Laschinger & Boss, 1984; Dentistry: Albino et al., 2008; Gonzalez-Cabezas, Anderson, Wright & Fontana, 2015). For example, approximately half (49%) of included studies were classified in the learning domain of “medicine”, an area of study long known for its “persistent focus” on learning “factual minutiae” (Cooke et al., 2006, p. 1343). In other words, the vast majority of included studies focused on learning contexts targeting lower-level cognitive processing. Indeed, virtually all (102 of 106, or 96.2%) of the learning domains relate to what we would categorise as professional degree programs. This skewed representation raised concerns regarding generalisability across higher education, which prompted us to take a closer look at the nature of learning represented in the meta-analysis.
To undertake this investigation several potential theoretical frameworks were considered including the seminal works of Bloom (1956), Biggs (1979) and Biglan (1973). The latter, a taxonomy for classifying academic disciplines in higher education, was identified as a clear choice given the nature of the available data. Indeed, strengthening this selection, Biglan’s (1973) framework is perhaps the most well-known system for classifying academic disciplines in higher education (Simpson, 2017). Moreover, the taxonomy was originally developed to provide a “framework exploring the role of cognitive processes in academic fields” (Biglan, 1973, p.202) and has repeatedly demonstrated its validity in subsequent research (Smart & Elton, 1982; Stoecker, 1993; Simpson, 2017). Importantly, as it relates to our research questions, and notwithstanding further complexities (Entwistle, 2005), prior research using this framework has found generalities and differences regarding the nature of learning within and between disciplinary contexts (Donald, 1986; Neumann, Parry & Becher, 2002; Smith & Miller, 2005; Czerniewicz & Brown, 2007).
We first made use of this framework to categorise each of the 106 learning contexts included in the review, demonstrating that the included studies represent a relatively limited learning focus. Our results, displayed in Table 2 below, indicate that almost all (94 of 106, or 88.7%) learning domains were confined to teaching and learning contexts in the applied sciences, where, consistent with our previous findings, learning has been associated with lower-level cognitive processes (Paulsen & Wells, 1998; see also, for example, Swart, 2009). Moreover, a closer look at the 12 remaining studies, all in pure disciplines, similarly suggests a focus on lower-level learning 2. We conclude, based on the use of Noetel et al.’s categorisations and Biglan’s framework, that lower-level learning was targeted by the vast majority, if not all, of the learning contexts represented in this review.

Research Question 2

We next make use of Biglan’s framework by undertaking a meta-regression reanalysis of the data sets behind Figures 2 and 3 in the review, which investigate, respectively, the effect of using video as a replacement for and as a supplement to live instruction (see Noetel et al., 2021, p. 214 and p. 218, respectively) 3. However, we include the three dimensions from Biglan’s classification shown in Table 2, above, as additional moderator variables (i.e., hard vs. soft, pure vs. applied, and life vs. nonlife).
Our reanalysis was conducted via meta-regression (MR). MR is a regression model applied to data obtained from a meta-analytic study, in which the dependent variable is typically numeric and corresponds to effect sizes. The regression model is usually the ordinary least squares linear model, but alternatives exist for when parametric assumptions such as normality and homoscedasticity are not met (chiefly, when the distribution of the residuals is not normal).
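To make the MR approach concrete, the following is a minimal sketch in R (not our analysis code) using the metafor package and hypothetical group summaries: it computes standardised mean differences (SMDs) and fits a random-effects model, the building block to which moderators are added in meta-regression.

    library(metafor)

    # Hypothetical two-group summaries (video vs. comparison) for two studies
    dat <- data.frame(
      m1i = c(75, 68), sd1i = c(10, 12), n1i = c(40, 35),  # video condition
      m2i = c(70, 66), sd2i = c(11, 13), n2i = c(42, 33)   # comparison condition
    )

    # Compute standardised mean differences (yi) and their sampling variances (vi)
    dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
                  m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

    # Random-effects model; supplying mods = ~ x1 + x2 would turn this into MR
    res <- rma(yi, vi, data = dat)
    summary(res)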
The model to be investigated had the following structure:
DV ~ v1 + v2 + ... + (1 | rv)
That is, the model was additive (i.e. no interactions were included), the dependent variable (DV) is numeric, and there is a random intercept grouped by the variable rv. In order to investigate parsimonious models (i.e. models with few covariates), we included only those covariates that appeared essential according to Noetel et al.’s results (see their Table 1); that is, no variable selection method was pursued.
Thus, as applied to the original data sets, the model investigated was:
smd ~ hard_vs_soft + applied_vs_non.applied + life_vs_nonlife + Setting + Comparison + Outcome + Which_is_more_interactive + Topic_or_course + (1 | studynumber)
As may be seen, the effects of the available covariates on the response variable are adjusted for. Details of our analysis, including statistical decisions and R software outputs, may be found at UNBLINDED URL.
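For illustration only, a model of this structure could be fitted in R along the following lines. This is a sketch under our stated assumptions, not our verbatim analysis script: dat_swap and the variance column smd_var are hypothetical names standing in for the original “swap” data set (see Note 3) and its sampling variances.

    library(metafor)

    # dat_swap: one row per effect size from the "swap" data set, augmented with
    # the three Biglan moderators; smd_var holds the sampling variances of smd
    # (load from the repository at bit.ly/betteronyoutube before running)
    fit <- rma.mv(yi = smd, V = smd_var,
                  mods = ~ hard_vs_soft + applied_vs_non.applied + life_vs_nonlife +
                           Setting + Comparison + Outcome +
                           Which_is_more_interactive + Topic_or_course,
                  random = ~ 1 | studynumber,  # random intercept per study
                  data = dat_swap)
    summary(fit)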
Overall, effect sizes remained positive, but the results demonstrated much greater complexity associated with learning via video. For example, when video was swapped for any other learning opportunity, results indicate soft learning domains had significantly higher effect sizes than hard learning domains (Mdn soft domains = .37, 95% CI [.19, .55]; Mdn hard domains = .15, 95% CI [.07, .23]; t permutation = 2.69, p = .008). However, somewhat in contrast, when videos were provided in addition to existing content, hard learning domains tended to have higher effect sizes than soft learning domains (t permutation = -1.76, p = .08).
Other significant differences emerged when video was used as a replacement. First, differences in effect sizes were found between ‘educational settings’, particularly between ‘tutorial’ and ‘homework’, where the use of video was found more useful with homework than with tutorials (Mdn homework = .59, 95% CI [.52, .65]; Mdn tutorial = .40, 95% CI [.35, .46]; t permutation = -3.02, p = .018). Second, in contrast to the original study findings, where the use of video was found more effective when ‘skill acquisition’ was assessed (vs. knowledge), no difference in effect was found between the two types of outcome assessment (Mdn knowledge test = .16, 95% CI [.06, .26]; Mdn skills assessment = .27, 95% CI [.13, .42]; t permutation = 1.26, p = .238).
Additionally, other new findings emerged when investigating the use of video as a supplement to existing content. First, significant differences in effect sizes were found between educational settings: between mixed 4 and homework (Mdn mixed = .30, 95% CI [.22, .39]; Mdn homework = .56, 95% CI [.46, .65]; t permutation = -2.80, p = .046) and between mixed and tutorial (Mdn mixed as above; Mdn tutorial = .68, 95% CI [.60, .75]; t permutation = 4.85, p < .001). Second, there was an effect of comparison such that a difference emerged between ‘human’ (or teacher) and ‘static media’, with static media found more effective as a supplement than human input (Mdn human = .30, 95% CI [.08, .52]; Mdn static media = 1.07, 95% CI [.80, 1.34]; t permutation = 5.76, p < .001). Third, the difference between types of outcome was borderline at the .05 level, with video supplements found more helpful for skills assessments than knowledge tests (Mdn knowledge test = .54, 95% CI [.28, .80]; Mdn skills assessment = 1.05, 95% CI [.86, 1.23]; t permutation = 2.16, p = .048).
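As an aside on the permutation t-tests reported above, the sketch below (again purely illustrative, with made-up numbers) shows one standard route to a permutation p-value for a moderator, using metafor’s permutest() on a simple meta-regression fit.

    library(metafor)

    # Illustrative effect sizes, variances, and a soft-vs.-hard domain moderator
    dat <- data.frame(
      yi = c(0.37, 0.19, 0.55, 0.15, 0.07, 0.23),
      vi = c(0.02, 0.03, 0.04, 0.02, 0.01, 0.03),
      domain = c("soft", "soft", "soft", "hard", "hard", "hard")
    )

    # Meta-regression with t-tests, then a permutation test of the moderator
    res <- rma(yi, vi, mods = ~ domain, data = dat, test = "t")
    permutest(res, iter = 1000)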

Discussion

The original study concluded that the effect of video on learning in higher education was generally positive. Noetel et al. reached this conclusion via an analysis that employed no theoretical framework for categorising their data, while providing little contextual information concerning the source of that data. As a methodological issue, this approach was surprising given that the use of theory and the contextualising of findings are considered basic research practices. It was all the more surprising given the study appeared in the leading international education journal, where one would expect the nature of learning to be properly addressed according to current theory. Given such deficiencies, we set out to examine the study data more closely and conduct a reanalysis using a relevant theoretical framework.
The results of our analysis were at variance with Noetel et al.’s findings. First, in our descriptive analysis we found that almost all included studies were set in contexts where the associated learning may be characterised as involving lower-level cognitive processes, such as learning facts and procedures. Second, in our meta-regression using an established theoretical framework, though effect sizes remained positive, we found greater complexity around learning contexts and video usage. Taken together, from a cognitive processing perspective, the rediscovery of positive effect sizes may be expected given the relative homogeneity of the study data, suggesting that, overall, the use of video in higher education may benefit learning, but only learning requiring lower-level cognitive processing.
Indeed, we suggest significant negative effect sizes would emerge if learning requiring higher-level cognitive processing were adequately represented in the original review. Such learning occurs, for example, in disciplines typically associated with abstract reasoning, such as pure mathematics (Ferrari, 2003), where the associated cognitive demands are known to be high (Henningsen & Stein, 1997; McCabe et al., 2010). For example, related meta-analytic research comparing distance education to live classroom instruction has found mathematics instruction “best suited to the classroom” (Bernard et al., 2004, p. 400). Indeed, regarding the specific use of video, recent consecutive systematic reviews have found that, overall, student use of recorded lecture videos (RLVs) 5 in undergraduate mathematics is negatively correlated with academic performance 6 (Author, Year; Lindsay & Evans, 2021), with some early research supporting causality (Author, Year). In particular, RLVs appear to enable students to engage in surface learning, such as rote memorisation of course content, leading to poorer academic performance (Le, Joordens, Chrysostomou & Grinnell, 2010; Author, Year). Indeed, early research suggests mathematics students approach the use of RLVs in ways similar to how they approach viewing television (Author, Year), weakening their cognitive investment (Collins & Wiens, 1983; Kubey & Csikszentmihalyi, 1990; Klemm, 2012; Schwab, Hennighausen, Adler & Carolus, 2018). Though such approaches may be sufficient for tasks involving lower-level cognitive processing, such as learning facts or acquiring procedural knowledge, they may be detrimental when tasks require higher-level processes, such as acquiring richly connected conceptual knowledge (Baroody, Feil & Johnson, 2007).

Conclusion

In sum, understanding the effects of any pedagogical innovation involves unravelling a complex web of influences related to the learning process in varied contexts. In relation to this review, we have highlighted crucial yet overlooked complexities. We conducted simple descriptive analyses as well as a reanalysis, providing evidence demonstrating, consistent with prior reviews, that current findings do not support broad generalisations across higher education writ large. Moreover, while we have no doubt the use of video has some beneficial effects in higher education, as demonstrated by Noetel et al.’s review, we remain concerned about the potentially adverse effects a reliance on this innovation may have on students from differing demographic backgrounds studying in different disciplinary and learning contexts. More research is needed to reveal how video may be an optimal or suboptimal instructional medium for varying students and learning objectives.

Funding

No funding was sought or acquired for the research reported in this study.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data and material for the originating study may be found at https://bit.ly/betteronyoutube.

Conflicts of Interest

The authors declare they have no competing interests.

Notes

1
A later version categorizes Bloom’s objectives as remember, understand, apply, analyze, evaluate, and create, in order of cognitive complexity from those requiring lower to higher levels of cognitive processing (Krathwohl, 2002; Adams, 2015).
2
For example, learning facts about biology (for an introductory microbiology course) or learning how to use statistical software (for a psychology course).
3
The original study R code and files related to these two data sets are found in the repository linked to the original paper (bit.ly/betteronyoutube). Related data sets are termed “swap” (i.e. video as a replacement) and “sup” (video as a supplement).
4
“Mixed” is a term used in the original data set though not explained in the main manuscript.
5
RLVs are experiencing rapid growth (Research and Markets, 2021). For both systematic reviews, included studies permit individual students to use RLVs as a supplement to and/or replacement for attending live lectures.
6
Pure mathematics learning contexts were not represented in this review. For comparison, the review included only two studies in a pure discipline (i.e. both biology) where the comparison involved student performance when using recorded only vs live only lectures (Adams, Randall & Traustadóttir, 2015; Thai, De Wever & Valcke, 2015). Both reported negative effects.

References

  1. Adams, A. E. M., Randall, S., & Traustadóttir, T. (2015). A tale of two sections: an experiment to compare the effectiveness of a hybrid versus a traditional lecture format in introductory microbiology. CBE Life Sciences Education, 14(1). ar6. [CrossRef]
  2. Adams, N. E. (2015). Bloom’s taxonomy of cognitive learning objectives. Journal of the Medical Library Association: JMLA, 103(3), 152. [CrossRef]
  3. Albino, J. E., Young, S. K., Neumann, L. M., Kramer, G. A., Andrieu, S. C., Henson, L., ... & Hendricson, W. D. (2008). Assessing dental students’ competence: best practice recommendations in the performance assessment literature and investigation of current practices in predoctoral dental education. Journal of Dental Education, 72(12), 1405–1435. [CrossRef]
  4. Baroody, A. J., Feil, Y., & Johnson, A. R. (2007). Research commentary: An alternative reconceptualization of procedural and conceptual knowledge. Journal for Research in Mathematics Education, 38(2), 115–131. [CrossRef]
  5. Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., ... & Huang, B. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439. [CrossRef]
  6. Biggs, J. (1979). Individual differences in study processes and the quality of learning outcomes. Higher Education, 8(4), 381–394. [CrossRef]
  7. Biglan, A. (1973). The characteristics of subject matter in different academic areas. Journal of Applied Psychology, 57(3), 195–203. [CrossRef]
  8. Bloom, B. S. (1956). Taxonomy of Educational Objectives: Handbook 1: Cognitive Domain. London: Longman.
  9. Callaghan-Koru, J. A., & Aqil, A. R. (2020). Theory-informed course design: Applications of Bloom’s taxonomy in undergraduate public health courses. Pedagogy in Health Promotion: The Scholarship of Teaching and Learning, December 2020. [CrossRef]
  10. ChanLin, L. J. (1998). Animation to teach students of different knowledge levels. Journal of Instructional Psychology, 25, 166–175.
  11. Collins, W. A., & Wiens, M. (1983). Cognitive processes in television viewing: Description and strategic implications. In M. Pressley, & J. R. Levin (Eds), Cognitive Strategy Research (pp. 179–201). Springer. [CrossRef]
  12. Cooke, M., Irby, D. M., Sullivan, W., & Ludmerer, K. M. (2006). American medical education 100 years after the Flexner report. New England Journal of Medicine, 355(13), 1339–1344. [CrossRef]
  13. Czerniewicz, L., & Brown C. (2007) Disciplinary differences in the use of educational technology. In Proceedings of the Second International E-learning Conference, New York. 28–29 June 2007.
  14. Donald, J. G. (1986). Knowledge and the university curriculum. Higher Education, 15(3), 267–282. [CrossRef]
  15. Entwistle, N. (2005). Learning outcomes and ways of thinking across contrasting disciplines and settings in higher education. Curriculum Journal, 16(1), 67–82. [CrossRef]
  16. Ferrari, P. L. (2003). Abstraction in mathematics. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 358(1435), 1225–1230. [CrossRef]
  17. Figlio, D., Rush, M., & Yin, L. (2013). Is it live or is it internet? Experimental estimates of the effects of online instruction on student learning. Journal of Labor Economics, 31(4), 763–784. [CrossRef]
  18. Furenes, M. I., Kucirkova, N., & Bus, A. G. (2021). A comparison of children’s reading on paper versus screen: A meta-analysis. Review of Educational Research, 91(4), 483–517. [CrossRef]
  19. Garrett, N. (2016). How do academic disciplines use PowerPoint? Innovative Higher Education, 41(5), 365–380. [CrossRef]
  20. Gonzalez-Cabezas, C., Anderson, O. S., Wright, M. C., & Fontana, M. (2015). Association between dental student-developed exam questions and learning at higher cognitive levels. Journal of Dental Education, 79(11), 1295–1304. [CrossRef]
  21. Hansch, A., Hillers, L., McConachie, K., Newman, C., Schildhauer, T., & Schmidt, P., (2015). Video and Online Learning: Critical Reflections and Findings from the Field. HIIG Discussion Paper Series No. 2015-02. [CrossRef]
  22. Henningsen, M., & Stein, M. K. (1997). Mathematical tasks and student cognition: Classroom-based factors that support and inhibit high-level mathematical thinking and reasoning. Journal for Research in Mathematics Education, 28(5), 524–549. [CrossRef]
  23. Hong, J., Pi, Z., & Yang, J. (2018). Learning declarative and procedural knowledge via video lectures: Cognitive load and learning effectiveness. Innovations in Education and Teaching International, 55(1), 74–81. [CrossRef]
  24. Klemm, W. (2012). Television effects on education, Revisited. Psychology Today - Memory Medic. Retrieved December 29, 2021: https://www.psychologytoday.com/us/blog/memory-medic/201207/television-effects-education-revisited.
  25. Krathwohl, D. R. (2002). A revision of Bloom's taxonomy: An overview. Theory into practice, 41(4), 212–218. [CrossRef]
  26. Kubey, R. W., & Csikszentmihalyi, M. (1990). Television and the quality of life: How viewing shapes everyday experience. Hillsdale, NJ: Erlbaum.
  27. Laschinger, H. K., & Boss, M. W. (1984). Learning styles of nursing students and career choices. Journal of Advanced Nursing, 9(4), 375–380. [CrossRef]
  28. Le, A., Joordens, S., Chrysostomou, S., & Grinnell, R. (2010). Online lecture accessibility and its influence on performance in skills-based courses. Computers & Education, 55(1), 313–319. [CrossRef]
  29. Légaré, F., Freitas, A., Thompson-Leduc, P., Borduas, F., Luconi, F., Boucher, A., ... & Jacques, A. (2015). The majority of accredited continuing professional development activities do not target clinical behavior change. Academic Medicine, 90(2), 197–202. [CrossRef]
  30. Lindsay, E., & Evans, T. (2021). The use of lecture capture in university mathematics education: A systematic review of the research literature. Mathematics Education Research Journal, 1–21. [CrossRef]
  31. Lloyd, S. A., & Robertson, C. L. (2012). Screencast tutorials enhance student learning of statistics. Teaching of Psychology, 39(1), 67–71. [CrossRef]
  32. McCabe, D. P., Roediger III, H. L., McDaniel, M. A., Balota, D. A., & Hambrick, D. Z. (2010). The relationship between working memory capacity and executive functioning: Evidence for a common executive attention construct. Neuropsychology, 24(2), 222–243. [CrossRef]
  33. Neumann, R., Parry, S., & Becher, T. (2002). Teaching and learning in their disciplinary contexts: A conceptual analysis. Studies in Higher Education, 27(4), 405–417. [CrossRef]
  34. Noetel, M., del Pozo Cruz, B., Lonsdale, C., Parker, P. & Sanders, T. (August 11, 2020). Videos won’t kill the uni lecture, but they will improve student learning and their marks. The Conversation. Retrieved December 20, 2021: https://theconversation.com/videos-wont-kill-the-uni-lecture-but-they-will-improve-student-learning-and-their-marks-142282.
  35. Noetel, M., Griffith, S., Delaney, O., Sanders, T., Parker, P., del Pozo Cruz, B., & Lonsdale, C. (2021). Video improves learning in higher education: A systematic review. Review of Educational Research, 91(2), 204–236. [CrossRef]
  36. Paulsen, M. B., & Wells, C. T. (1998). Domain differences in the epistemological beliefs of college students. Research in Higher Education, 39(4), 365–384. [CrossRef]
  37. Poquet, O., Lim, L., Mirriahi, N., & Dawson, S. (2018, March). Video and learning: a systematic review (2007--2017). In Proceedings of the 8th International Conference on Learning Analytics and Knowledge (pp. 151–160). [CrossRef]
  38. Research and Markets. (2021). Lecture capture systems market - growth, trends, COVID-19 impact, and forecasts (2021–2026). Mordor Intelligence.
  39. Schwab, F., Hennighausen, C., Adler, D. C., & Carolus, A. (2018). Television is still “easy” and print is still “tough”? more than 30 years of research on the amount of invested mental effort. Frontiers in Psychology, 9, 1098. [CrossRef]
  40. Simpson, A. (2017). The surprising persistence of Biglan's classification scheme. Studies in Higher Education, 42(8), 1520–1531. [CrossRef]
  41. Smart, J. C., & Elton, C. F. (1982). Validation of the Biglan model. Research in Higher Education, 17(3), 213–229. [CrossRef]
  42. Smith, S. N., & Miller, R. J. (2005). Learning approaches: Examination type, discipline of study, and gender. Educational Psychology, 25(1), 43–53. [CrossRef]
  43. Stoecker, J. L. (1993). The Biglan classification revisited. Research in Higher Education, 34(4), 451–464. [CrossRef]
  44. Swart, A. J. (2009). Evaluation of final examination papers in engineering: A case study using Bloom's Taxonomy. IEEE Transactions on Education, 53(2), 257-264. [CrossRef]
  45. Thai, T., De Wever, B., & Valcke, M. (2015). Impact of Different Blends of Learning on Students Performance in Higher Education. In Proceedings of the 14th European Conference on E-Learning (ECEL).
  46. Woolfitt, Z. (2015). The effective use of video in higher education. Lectoraat Teaching, Learning and Technology Inholland University of Applied Sciences, 1(1), 1-49. Retrieved March 31, 2023: https://www.academia.edu/download/43014151/The_effective_use_of_video_in_higher_education_-_Woolfitt_October_2015.pdf.
Table 1. Learning domains as classified by Noetel et al. (2021) a.
Learning domain (as categorised by Noetel et al.)   Tally   Percent (1 d.p.)
biology 3 2.8
computer science 2 1.9
dentistry 8 7.5
engineering 1 0.9
english as a foreign language 5 4.7
medicine 52 49.1
nursing 13 12.3
nursing, paramedicine 1 0.9
nutrition 1 0.9
pharmacy 1 0.9
physical education 1 0.9
physical therapy 4 3.8
physics 1 0.9
physiotherapy 1 0.9
psychology 4 3.8
psychology, education 1 0.9
sign language 1 0.9
sport science 2 1.9
teaching 4 3.8
Total 106 100
a See bit.ly/betteronyoutube > Supplementary File 3 (Characteristics of Included Studies, Consensus Extraction and Risk of Bias spreadsheets) > column titled “learning_domain”.
Table 2. Learning domains as classified using Biglan’s (1973) taxonomy a.

                  Hard                 Soft
            Life    Nonlife      Life    Nonlife     Totals
Pure          3        2           2        5          12
Applied      63       11          20        0          94
Totals       66       13          22        5         106 b
            (Hard total: 79)     (Soft total: 27)
a Stoecker’s (1993) revision was used to classify previously unclassified domains of dentistry and nursing. b Number disparity due to Study Number 17, representing two study contexts, being counted twice (see bit.ly/betteronyoutube).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.