A study from McMaster University has shown that traditional ways of learning anatomy remain superior to those that rely on digital media.
The research suggests that virtual reality (VR) and mixed reality (MR) are inferior to traditional physical models for learning anatomy, and carry major disadvantages in cost and functionality.
The findings also support the pivotal role of stereoscopic vision – the ability to perceive depth using the slightly different view from each eye – in efficient anatomy learning.
The study results were published today in the journal Anatomical Sciences Education.
"These newer technologies promise to provide dynamic and vivid imagery that the user can interact with for an active and self-paced learning experience, without having to enter an anatomy laboratory," said Bruce Wainman, first author and director of the education program in anatomy at McMaster.
"Surprisingly, the evidence for this apparent advantage over traditional instructional materials is scarce."
The study of human anatomy has traditionally included cadaveric dissection and the viewing of prosections, illustrations, photographs and physical models of anatomy.
Rapid advancements in computer technology have led to many different forms of digital anatomic simulations designed to supplement, and even replace, traditional instructional materials, said Wainman.
The McMaster study compared an MR model (Microsoft HoloLens) and a VR model (HTC VIVE), both derived from a physical model, against that physical model itself. The researchers focused on overall learning performance and the effects of stereopsis by using a strategy where the non-dominant eye was covered in one test condition.
Groups of 20 undergraduate students at McMaster with no prior anatomic training learned pelvic anatomy under seven conditions: physical model with and without stereo vision; MR with and without stereo vision; VR with and without stereo vision; and key views on a computer monitor. All were tested with a real human pelvis and a 15-item, short-answer recognition test. Students were not allowed to touch any of the physical models.
The results showed that, compared to the key views on a computer monitor, the physical model produced a 70 per cent increase in accuracy; the VR model, a 25 per cent increase; and the MR model, a non-significant 2.5 per cent advantage.
"At the end of the day, there was little advantage to learning from virtual or mixed reality compared to a photo on a piece of paper, and they were much worse than a solid model," said Wainman.
"We found that when you took away the stereo vision from the virtual reality headset tested, it was even worse than learning from a piece of paper. Promoters of this technology often say it is a superior way to learn, but our research shows that isn't true."
Geoff Norman, co-author of the paper and professor emeritus of health research methods, evidence, and impact at McMaster, has spent the past 20 years focused on educational research, including the last decade working with Wainman on anatomical education best practices.
"There are claims about virtual reality being better, but then you find it is not just worse, but significantly worse, and a lot worse for segments of the population who have challenges already with their stereoscopic vision," said Norman.
"We encourage more quantitative research in this area to further assess mixed and virtual reality systems prior to implementation in anatomical education programs."
Prior to primary testing, 40 undergraduate students from McMaster were recruited to provide qualitative data on the optimal environment for the MR and VR models.
"When we surveyed people about how long they were willing to learn in that virtual environment, no one indicated they were able to learn for more than 30 minutes," said Wainman. "Meanwhile, we have students who study in the anatomy lab six or seven hours a day looking at human material.
"We're not thinking about the technology so much as what is the best way to learn. We want technology to be in the service of education, and not the other way around."
The study had no external funding.