Instructional design (ID) is a dynamic and increasingly complex field whose required competencies continue to shift as it keeps pace with diversifying online learning environments and rapid technological developments (Wang et al., 2021). As the landscape of multimedia learning evolves to include virtual reality and other immersive technologies, there is an urgent need to train and prepare instructional designers to develop effective learning materials within these 3D spaces (Petrina & Zhao, 2021). This preparation should encompass the many roles that instructional designers tend to hold (Ritzhaupt et al., 2021; Wang et al., 2021). For example, 3D learning environments can be used in various ways to enhance instructional design and technologies, including promoting experiential learning by allowing learners to actively explore and engage with content in a realistic, immersive setting, and encouraging collaboration and communication among learners and instructors by providing virtual spaces for interaction and discussion. By leveraging the unique affordances of 3D environments, instructional designers can create engaging, learner-centered experiences that align with real-world contexts and promote deep learning (Dalgarno & Lee, 2010; Fowler, 2015).

Various 3D learning environments have been developed and applied in higher education settings. Examples include creating virtual field trips for students to explore historical sites, natural environments, or cultural landmarks (Cecotti, 2022); designing immersive simulations for professional training, such as medical procedures or emergency response scenarios (Grady, 2017); and incorporating virtual collaboration spaces that allow learners to interact with their peers and instructors in real-time (Caprara & Caprara, 2022). Research has shown that integrating 3D learning environments in instructional design builds communication and collaboration skills (Khlaisang & Mingsiritham, 2016) and supports learning transfer and retention (Krajčovič et al., 2021). Despite their potential benefits, 3D learning environments also present usability and pedagogical challenges. Usability challenges include efficiency, effectiveness, and ease of use of the technology (Miller et al., 2018), while pedagogical challenges involve designing effective learning experiences that align with the unique features of 3D environments (Dalgarno & Lee, 2010; Fowler, 2015). Integrating 3D learning environments into instructional design also presents several challenges and barriers. For instructional designers in training, these challenges may include limited familiarity with the technologies, the need for specialized technical skills to develop and maintain 3D learning environments, the high costs associated with their development and maintenance, and ensuring accessibility for learners with disabilities (Glaser et al., 2021; M. Schmidt & Glaser, 2021).

However, before delving deeper into the challenges and barriers associated with integrating 3D learning environments into instructional design, it is important to acknowledge the existing body of literature that predominantly takes a techno-centric approach, with researchers primarily focused on proving the functionality of the technology (Dalgarno & Lee, 2010; Baceviciute et al., 2021). This type of investigation involves conducting research to substantiate claims by juxtaposing a well-established, conventional learning experience with an emerging and inventive learning experience that is still in its infancy. Consequently, this form of research deprives the field of instructional design of crucial and valuable data pertaining to the effectiveness, efficiency, and appeal of such learning experiences (Honebein & Reigeluth, 2021).

To address this research gap, we describe an examination of learner experiences and the usability of a 3D Virtual Learning Environment (3D VLE) called the Museum of Instructional Design (MID). The MID was created to offer a space for Instructional Design and Technology (IDT) students to participate in activities during a course concentrating on current and historically significant trends and issues in the field. This paper seeks to explore the potential of the MID in facilitating authentic learning experiences and to address the challenges and significance of this approach in the field of IDT. While a broad qualitative approach was used to gain insight into the phenomenon of using the MID in an online course, we situate the inquiry within two research questions:

  • RQ1: How do expert evaluators rate and describe the usability of the Museum of Instructional Design?

  • RQ2: What is the nature of learner experiences as they used the Museum of Instructional Design?

Literature Review

As higher education courses transition online (Palvia et al., 2018), many instructors have embraced the use of synchronous web conferencing tools to provide an experience that approximates traditional classrooms (Lowenthal et al., 2020). Synchronous tools offer a number of affordances for teaching and learning, such as identifying and clarifying problems in real time (Lowenthal et al., 2020) and decreasing the social isolation that is common in online learning contexts (Hammond et al., 2020; McInnerney & Roberts, 2004). However, research suggests that many students perceive synchronous online learning negatively, as instructors often struggle to motivate and engage students in online spaces (Kauffman, 2015; Lee et al., 2021). Synchronous class sessions mediated through these tools often turn into long lectures as instructors struggle to embed constructivist learning opportunities in web conferencing tools, or as they try to directly emulate the traditional classroom experience (Lowenthal et al., 2020).

In response to the challenges posed by synchronous online learning, educators and researchers have been investigating alternative methods to enhance student engagement and motivation. One such innovation that has gained significant attention is the implementation of 3D Virtual Learning Environments (3D VLEs) in education. These immersive and interactive platforms offer a more engaging and authentic learning environment, addressing some of the limitations and criticisms of traditional web conferencing tools (Kavanagh et al., 2017). Defined as computer-generated educational spaces that present information in a three-dimensional format, 3D VLEs allow learners to interact with and manipulate objects within the environment, regardless of location or time (Dalgarno & Lee, 2010; Nevelsteen, 2018). Designed to replicate real-life learning experiences, virtual learning spaces provide targeted instructional activities and tasks that can be performed in distributed settings (Scott & Campo, 2023). Avatars, or digital representations of user-learners, serve as the nexus for the user’s interactions and virtual presence (Denoyelles & Seo, 2012), enabling them to interact with one another, engage in collaborative learning, and receive feedback through various mechanisms. The utilization of 3D virtual environments in learning has been associated with numerous benefits, such as increased engagement, collaboration, motivation, joy, and idea generation (Kavanagh et al., 2017). Furthermore, learners can access virtual content simultaneously and receive multifaceted feedback, which enriches the overall learning experience (Cheng & Wang, 2011).

Importantly, 3D VLEs can be presented on various hardware platforms, including desktop computers, mobile devices, and head-mounted displays (HMDs) such as the Oculus Rift and VIVE. The integration of a 3D VLE with different hardware types can provide a range of immersive experiences (e.g., Glaser & Schmidt, 2022). When presented on a desktop computer, a 3D VLE offers a semi-immersive interactive 3D environment, requiring adequate processing power. In contrast, HMD-based systems deliver highly immersive and interactive experiences but necessitate high-end computers to run the software. Another option, the CAVE VR environment, provides a highly immersive experience with semi-level interactions, though it typically accommodates only one or two primary users and demands a spacious, dedicated room with multiple walls for displaying the virtual environment. Mobile 3D VLEs, which can be used on smartphones and tablets, are portable and user-friendly but tend to be less immersive or interactive compared to other hardware configurations. Lastly, web-based 3D VLEs are highly accessible, easy to use and install, highly collaborative, and cost-effective; they can be accessed through various devices, including smartphones, computers regardless of operating system, and HMDs.

Mozilla Hubs and Web-based VR

The advent of web-based VR platforms has significantly simplified the process of creating and deploying 3D experiences accessible through web browsers on various devices, such as mobile phones, tablets, desktop computers, and VR headsets. These platforms offer many of the same benefits as traditional VR environments but are usually more cost-effective and do not require extensive 3D modeling or programming expertise. A number of web-based VR platforms are available, including Frame, React VR, Vizor, Mozilla Hubs, and Babylon.js.
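
To illustrate why these platforms lower the barrier to entry, the sketch below uses Babylon.js, one of the platforms named above, to stand up a minimal browser-based 3D scene with optional WebXR entry. This is a hypothetical example for illustration only and is not how the MID was built (the MID was assembled with Mozilla Spoke and Hubs rather than custom code); the canvas id renderCanvas and the placeholder exhibit mesh are assumptions.

```typescript
import { Engine, Scene, FreeCamera, HemisphericLight, MeshBuilder, Vector3 } from "@babylonjs/core";

// Canvas element (assumed to exist in the hosting page) that will display the scene.
const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new Engine(canvas, true); // true = enable antialiasing

const scene = new Scene(engine);

// A simple first-person camera, controllable with keyboard and mouse on desktop.
const camera = new FreeCamera("camera", new Vector3(0, 1.6, -5), scene);
camera.setTarget(Vector3.Zero());
camera.attachControl(canvas, true);

// Basic lighting and a placeholder "exhibit panel" that a learner could walk up to.
new HemisphericLight("light", new Vector3(0, 1, 0), scene);
MeshBuilder.CreateBox("exhibitPanel", { width: 2, height: 1.2, depth: 0.05 }, scene);

// Opting into WebXR lets the same scene be entered with a compatible headset
// directly from the browser; without a headset it simply runs as a desktop scene.
scene.createDefaultXRExperienceAsync().then(() => {
  engine.runRenderLoop(() => scene.render());
});
```

Because a single deployment of this kind renders in desktop browsers, on mobile devices, and in WebXR-capable headsets, it can serve the full range of hardware configurations described earlier.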

Several studies have explored the use of web-based VR scenarios in various contexts, such as conferences (Le et al., 2020), classrooms (Eriksson, 2021; Yoshimura & Borst, 2020), and workshops (Bredikhina et al., 2020). For example, Mozilla Hubs has been employed to enhance social interactions and user experiences in remote conference settings (Le et al., 2020) and online classrooms (Yoshimura & Borst, 2020). In a more novel approach, a team of researchers used Mozilla Hubs to create a series of prototypes that focused on aligning the affordances of the technology to the strengths of autistic users and including them in the design process. This project, called Project Phoenix, included a VR training space that taught users how to navigate and use the various features of Mozilla Hubs, a virtual gallery space in which users learned the history of VR for autistic people, and a second gallery space in which users learned about and rated a range of different commercial, off-the-shelf VR software tools (M. M. Schmidt et al., 2023). Findings from these studies suggest that using web-based VR platforms can increase users' sense of presence, foster feelings of belonging, improve social connectivity, and provide an overall satisfying experience (Le et al., 2020). However, some research has also identified potential drawbacks, such as usability issues and technical problems with audio and performance (Eriksson, 2021), and instances of cybersickness for users wearing head-mounted displays (Yoshimura & Borst, 2020). Despite these challenges, web-based VR platforms continue to show promise as a flexible and accessible tool for learning and collaboration.

As web-based VR platforms continue to evolve rapidly, their potential for enhancing learning experiences necessitates further research, especially concerning the learner experience. Investigating the efficacy of these platforms across diverse educational contexts, identifying best practices for optimizing user experience, and exploring strategies to address potential drawbacks are essential steps for harnessing their potential to revolutionize education and improve learner outcomes (Kavanagh et al., 2017; Warburton, 2009). However, with this emerging technology comes a set of challenges, such as designing user interfaces, navigation, layout, feedback, and addressing other user experience (UX) and accessibility factors. Ensuring effective usability for all users requires careful consideration of these issues. Moreover, the implementation of instructional and learning strategies in 3D VLEs presents its own set of challenges, as it demands meticulous planning and execution to achieve the desired outcomes (Warburton, 2009). To tackle these challenges and optimize the learner experience in 3D VLEs, the application of Learning Experience Design (LXD) processes can be instrumental. LXD can help researchers and practitioners focus on the learner's perspective, thereby addressing usability and accessibility issues, as well as aligning instructional strategies with the unique affordances of 3D VLEs. In this way, a comprehensive understanding of the learner experience can be achieved, paving the way for the successful implementation and adoption of 3D VLEs in educational settings.

Project Description: the Museum of Instructional Design

The MID is a 3D VLE developed using Mozilla Spoke and Mozilla Hubs (Glaser et al., 2022) and created to offer a space for IDT students to participate in activities during a course concentrating on current and historically significant trends and issues in the field. The MID employs a museum theme as a core design component, drawing on critical museology (Shelton, 2013), a constructivist approach that emphasizes participatory design and critical dialogue in exhibit design and implementation (Lundgren et al., 2019). The MID is designed to replicate an in-person museum experience, featuring various gallery spaces (e.g., Influential Leaders from the Field, as seen in Fig. 1).

Fig. 1. A screenshot of the Influential Leader exhibit

Upon logging in, learners assume the role of a virtual avatar they have chosen and control through different input device configurations (e.g., keyboard and mouse). Within this 3D VLE, students can interact, converse, and create and share their exhibits representing the IDT field (see Fig. 2 for a screenshot of students opening their Learning Analytics Gallery). Both the instructor and students create exhibits with the aim of developing an evolving museum gallery throughout the semester (see Appendix 1 for an example of an exhibit assignment).

Fig. 2. A screenshot of the student-led learning analytics gallery being opened (Herman et al., 2022)

As most students in the class work full-time, providing an accessible learning environment that accommodates their schedules is essential. Unlike standard synchronous web tools (e.g., Zoom), the MID is accessible at any time, allowing students to meet and design exhibits at their convenience.

MID and Course Curriculum

The curriculum for the course encompassed a comprehensive exploration of various facets of IDT. The course began with an introductory module focusing on a course overview and familiarization with tools and platforms such as Mozilla Hubs, followed by a historical view of instructional design, where students created a timeline for the IDT field and explored influential leaders in the domain. Subsequent modules delved into specific areas of IDT. The audio-visual foundations module examined the role of technology and design in instructional materials, while the communication foundations module focused on the influence of media in learning, encouraging students to engage in group projects and create relevant exhibits. The systems foundations module emphasized the study of different instructional design models, notably ADDIE, and their application in creating museum exhibits. Psychological foundations were explored in three separate modules, each addressing different aspects such as instructional objectives, individualized instruction, and the structure of education. These modules included discussions on educational theories and practices, and the formulation and use of instructional objectives. The latter part of the course dealt with the professional aspects of IDT. This included examining the standards, competencies, and credentialing in the field, as well as management foundations focusing on the diffusion and adoption of innovations. The course concluded with a module on the past, present, and future of instructional design, capped by a final group project where students created exhibits on instructional issues, thereby synthesizing their learning and insights gained throughout the course.

Student Activities in the MID

In the course, students engaged in hands-on assignments primarily focused on the creation and development of museum exhibits reflecting key concepts in the field of instructional design and technology. A central assignment was the Influential Leader Exhibit, in which students created a multimedia museum exhibit about a prominent researcher in instructional technology, synthesizing their background, interests, and contributions to the field. This exhibit was complemented by a detailed paper expanding upon the research and insights presented in the exhibit. Additionally, the course included a significant group project titled 'Instructional Issues Exhibit.' For this, students worked in teams to identify and research a critical issue in instructional design, culminating in the creation of a museum exhibit that presented their findings. This project emphasized collaborative learning and the application of research and design skills, showcasing students' ability to translate theoretical knowledge into practical, engaging, and educational displays (see Fig. 3 for an example of the final project and the output of an exhibit on culturally relevant pedagogy).

Fig. 3. A screenshot of a final group project exhibit on culturally relevant pedagogy

Alongside the main assignments, each week was marked by activities within the MID. These activities included lectures, group discussions, and the creation of various artifacts for the museum exhibit. For instance, students engaged in making exhibits for different instructional design models, constructing an exhibit for the Great Media Debate, mapping out a timeline of key moments in the field of IDT, and learning how to use rapid prototyping methods. These weekly activities not only enriched the learning experience but also provided students with practical skills in curating and presenting information in an interactive and immersive virtual environment. This hands-on approach, coupled with the major assignments like the Influential Leader Exhibit and the group project on Instructional Issues, underscored the focus on applying theoretical knowledge in practical, collaborative, and innovative ways.

Methods

In this study, we aimed to examine the nature of learner experiences as they used the MID through a multi-method, multi-phase approach (see Fig. 4). The research design comprised two distinct phases: expert evaluation and student interviews. This combination of methods and phases allowed for a more comprehensive understanding of the learner experiences and usability of the MID. Research activities were approved by the local IRB, and informed consent was obtained from all participants.

Fig. 4. Outline of research design and focus

Participants

In the expert review phase, participants were selected using a purposive sampling strategy (Suri, 2011). The study focused on individuals with experience and expertise in virtual environments and/or educational technologies in their professional settings. A total of three experts (n = 3) were remotely recruited for the study. Pseudonyms were assigned. Comprehensive information about these participants can be found in Table 1.

Table 1 Demographic information for participants in expert review and evaluation

During the interview phase, participants were drawn from the pool of students who had completed the MID course. These students were recruited via email by a researcher not involved in the class activities. In total, five students participated in individual interviews, all of whom were doctoral students enrolled in instructional design and technology programs. Pseudonyms were assigned. Detailed information about these participants can be found in Table 2.

Table 2 Demographic information for participants in individual interviews

Research Procedures

Expert Review

During the expert review session (Vermeeren et al., 2010), researchers began by welcoming the participants and providing an overview of the study's purpose. The researchers then outlined their responsibilities, which included recording the session, observing, taking notes, and ensuring that no suggestions or hints would be provided during the session. The participants' roles were also clarified, including engaging with the usability scenario, completing usability tasks, thinking aloud during tasks, and answering follow-up questions after the session. To help situate the participants in a specific context, a usability scenario was presented, along with a few reminders to create a comfortable environment for participation (e.g., emphasizing that there were no right or wrong answers and encouraging questions or comments on areas of confusion). Expert participants were given usability tasks to interact with the MID and were instructed to complete these tasks sequentially: logging into the MID with personal credentials, completing the training environment to learn navigation and basic operations in the MID (e.g., keyboard and mouse navigation, photo taking, chat functionality, dragging and dropping objects), providing feedback on the training experience, and transitioning to the MID under the facilitator's guidance. Once in the MID, participants were encouraged to explore the space while verbalizing their initial thoughts, impressions, likes, and dislikes, as well as commenting on what worked and what did not. They were also invited to try various features within the MID environment (e.g., media panels, instructional content, interactive map) to assess whether these features or the visual design met their expectations. The researchers then gathered participants in the lobby area to discuss potential improvements to the environment. Subsequently, participants were invited to the classroom space in the MID to complete a series of instructional tasks and activities, replicating the experience of actual learners in the class. The average time spent interacting with the experts was approximately one hour. Upon completion of the session, participants were asked to complete the Computer System Usability Questionnaire (CSUQ).

Individual Interview

The interview process commenced with a brief orientation to outline the study's purpose, followed by the semi-structured interview itself. Each interview session lasted approximately 60 min.

Data Sources

A variety of data sources were collected during the expert review, while only audio data were collected during the individual interviews (see Table 3).

Table 3 Data sources

Data Analysis

Expert Review

Quantitative measures were analyzed using the scoring procedures outlined for the instrument (Lewis, 2018), and descriptive statistics were calculated. The recorded videos were coded using a combination of Nielsen's heuristics (Joyce, 2021; Nielsen, 1994) and Kushniruk and colleagues' (2008) coding structure, both of which are widely recognized in usability inspection. Researchers compared both coding structures, eliminating repetitive codes and combining similar ones to create a mutually exclusive, comprehensive coding scheme consisting of 25 codes across four major categories (see Table 3 below). After developing the coding scheme, one researcher initiated the coding process, with iterative reviews by another researcher to ensure validity. Researchers held weekly meetings to discuss discrepancies and make revisions, going through a total of four iterations. The final coding scheme was reviewed by all co-authors. Timestamps, issue descriptions, and participant quotes were also documented.
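
For readers unfamiliar with CSUQ scoring, the sketch below illustrates one way such subscale descriptives can be computed. It assumes the 16-item CSUQ (Version 3) item groupings reported by Lewis (2018), items 1-6 for system usefulness (reported here as system usability), 7-12 for information quality, 13-15 for interface quality, and 1-16 for the overall score, together with a linear rescaling of the 1-7 item mean onto a 0-100 range; the exact transformation used in this study may differ, and the example responses are hypothetical.

```typescript
// Illustrative CSUQ scoring sketch (not the authors' analysis script).
type Responses = number[]; // 16 ratings on a 1 (low) to 7 (high) agreement scale

const mean = (xs: number[]): number => xs.reduce((a, b) => a + b, 0) / xs.length;

// One plausible convention for mapping a 1-7 mean onto a 0-100 scale.
const toHundred = (m: number): number => ((m - 1) / 6) * 100;

function scoreCSUQ(r: Responses) {
  // Assumed item-to-subscale mapping (1-indexed, inclusive), per Lewis (2018).
  const items = (from: number, to: number) => r.slice(from - 1, to);
  return {
    overall: toHundred(mean(items(1, 16))),
    systemUsability: toHundred(mean(items(1, 6))),
    informationQuality: toHundred(mean(items(7, 12))),
    interfaceQuality: toHundred(mean(items(13, 15))),
  };
}

// Hypothetical evaluator responses, for illustration only (not study data).
console.log(scoreCSUQ([6, 7, 6, 6, 7, 6, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6]));
```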

Student Interview

Researchers utilized a grounded theory approach, first open-coding the data, then conducting axial coding and selective coding, and finally drawing conclusions about the emerging patterns (Strauss & Corbin, 1990). Two coders independently coded one transcript and developed a codebook. For the remaining transcripts, one coder conducted the initial coding using the codebook, which was further expanded as new codes emerged. All codes were reviewed by the other coder. Next, the codes were categorized and axial codes were generated, from which the themes were developed and reviewed by all the researchers. Differences were discussed and resolved until 100% consensus was reached at each phase.

Results

RQ1 Results: How do Expert Evaluators Rate and Describe the Usability of the Museum of Instructional Design?

The quantitative results for the CSUQ indicate an above-average overall usability score of 81.3. Looking at the individual assessments, Deborah gave the highest overall usability score of 85.4, with her system usability score reaching an impressive 93.7. She rated information quality at 82.6 and interface quality slightly lower at 74.2. Jack's evaluation showed an overall usability score of 79.8, with interface quality and information quality both rated at 85.3 and system usability at 74.2. Daniel's overall usability score came in at 78.7, with notably high marks for system usability and interface quality at 96.4 and 90.9, respectively. However, his information quality score was considerably lower at 52.0. Averaged across the three evaluators, the scores for system usability, information quality, and interface quality were 88.1, 73.3, and 83.5, respectively (see Table 4).

Table 4 CSUQ results from expert evaluation
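
As a simple arithmetic check, the category means reported above are the unweighted averages of the three evaluators' subscale scores:

```latex
\begin{align*}
\text{System usability}    &= \tfrac{1}{3}(93.7 + 74.2 + 96.4) = 88.1\\
\text{Information quality} &= \tfrac{1}{3}(82.6 + 85.3 + 52.0) = 73.3\\
\text{Interface quality}   &= \tfrac{1}{3}(74.2 + 85.3 + 90.9) \approx 83.5\\
\text{Overall}             &= \tfrac{1}{3}(85.4 + 79.8 + 78.7) = 81.3
\end{align*}
```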

The qualitative analysis of the expert reviews revealed a range of usability issues with the system. Deductive analysis based on Nielsen's usability heuristics for user interface design (see Fig. 5) and Kushniruk and colleagues' coding structure (see Fig. 6) revealed that the primary concerns involved graphic quality (such as issues with small font size), system loading speed, controls/navigation challenges, and inconsistencies with system standards, specifically the lack of support for universal hotkeys in Mozilla Hubs.

Fig. 5. Count of Nielsen's heuristic issues encountered by expert evaluators

Fig. 6. Other usability issues encountered by expert evaluators

Each participant brought unique insights to these common issues, highlighting the importance of individual experiences in usability evaluations. Deborah, for example, focused predominantly on graphics, navigation, consistency and standards, as well as flexibility and efficiency of use. She experienced confusion due to graphic issues affecting her navigation, remarking, “what am I bumping into?… maybe it is you (facilitator)… I feel like there are some invisible collisions or something right there. It is weird, and I run into it, and it makes me a little confused.” She also noted inconsistencies with conventional standards within the Mozilla Hubs environment, commenting on her expectations formed from other gaming experiences: “I am used to reading modeling packages, I was able to rotate and have a little bit more control. Honestly, a lot of games have the photography camera things that also have rotations and other things to it.” Deborah also expressed concern over the flexibility and efficiency of the platform, citing limited accessibility, particularly for those without a mouse: “if you do not have a mouse, you are kinda limited to click on stuff. Those turning buttons are great, but just not for me.”

While each participant—Deborah, Jack, and Daniel—encountered a variety of challenges, such as broken links, visual problems, and occasional lags or asset loading issues, these did not significantly overshadow their overall experience. Jack, for instance, noted the MID's need for improvements in response time, navigation, and instruction clarity, yet simultaneously acknowledged the system's immersive appeal and intuitive controls. His comparison of the experience to popular gaming interfaces like Minecraft and Doom further highlights the potential of the platform, despite the current technical hurdles.

The effectiveness of the training session also emerged as a salient point, underscoring its role in familiarizing users with the system's navigation and features. While Deborah and Jack conveyed a preference for self-directed exploration, they acknowledged the training's value in equipping them with the requisite knowledge. "I'm not sure I would have found everything without the training session, so I guess it's useful," Deborah reflected. This, along with their observations on the intuitiveness of the controls, hints at the user-friendly nature of the system. Both Deborah and Jack lauded the ease of navigation, with Jack particularly appreciating the familiar gaming-like feel of the 3D environment, again stating "It feels like a game. I like it."

Regarding the usability of the system, both Deborah and Daniel found that their experiences were hindered more by the exhibits themselves than by the system and its controls. The exhibits that students created were hard to read, poorly designed, confusing, and featured conflicting message design. For example, Daniel commented that many of the 3D models provided users with no context for the exhibits and felt like they were being used just for the sake of including a 3D model: "The 3D models don't really provide any context. It's like they were just thrown in here." Reviewers also suggested improvements in font size (Deborah, Jack), consistency (Deborah), and interaction for a better user experience (Deborah, Jack). Jack pointed out that many of the exhibits had issues where, "the font size is so small, it's hard to read some of the text." This finding is in line with the lower CSUQ score for the information quality of the MID.

Despite the challenges mentioned earlier, it is noteworthy that all three participants—Deborah, Jack, and Daniel—found the system easy to use and enjoyable. They were able to complete all activities without major issues, which highlights the system's overall effectiveness and potential for a positive user experience. These positive remarks indicate that the system's usability and enjoyment aspects are well-received by the users. However, the platform should not overlook the issues raised during the expert evaluation.

RQ2 Results: What is the Nature of Learner Experiences as They Used the Museum of Instructional Design?

The thematic analysis generated five themes: 1) the 3D immersive environment provided multiple affordances to enhance learning; 2) students showed attitudinal change as they navigated the space; 3) multiple challenges emerged as students navigated the 3D immersive environment; 4) there is potential for applying the 3D immersive environment in teaching and learning; and 5) students indicated multiple considerations for course design when using 3D immersive environments.

Theme 1: The 3D Immersive Environment Provided Multiple Affordances to Enhance Learning

The integration of the 3D immersive environment greatly supported participants’ positive learning experiences. The affordances provided by this tool can be demonstrated through seven different perspectives, according to the axial codes as shown in Table 5.

Table 5 Theme 1 axial codes and quotes

Authentic Learning Experience

Four out of five participants highlighted the authenticity of their learning experiences. For example, Chase referred to it as a “novel” approach that enabled an “authentic feel of the environment” that he would not get from learning management systems or conference tools. Sophia liked the “authentic” and “natural” conversations with peers within that space. Like a real-world project, John thought what they developed was not just “an assignment” since they were designing with “consequences.” Going further, Chase and Dianna thought the “unstructured approach” and the chance to “take risks” made it even “more authentic.”

Engagement

The immersive environment fostered engagement both emotionally and behaviorally, as it created opportunities for everyone to be involved in the design and collaborative process. Participants reported being part of the conversations, even the quiet ones, as it was “more conversational” in nature, and they engaged emotionally. For example, Sophia stated that she “enjoyed that problem solving aspect” and the teamwork in Hubs, as she had “more genuine feeling in that space.” Behaviorally, they had to walk around and “switch from room to room” to be engaged rather than “just stood in one room the whole entire night and listened.” Dianna thought that “taking a selfie inside of a learning project is pretty cool.” This “multimodal” engagement ensured that no one was “hiding off to the side,” according to Sophia.

Peer Interaction

In a space where they felt physically present, considering the “similarities between the virtual world and the real world,” multiple participants thought their interactions became “pretty authentic and real.” Conversations occurred naturally and organically, even outside class time, as people ran into each other in the space. In addition, the funny avatars were “oddly comforting” for Sophia as she interacted with peers. With such an interactive atmosphere, collaboration became more intense, with long working and meeting hours each week.

Equity and Privacy

After experiencing online teaching in high school for the past several years during the pandemic, Dianna recognized the value of the virtual space for promoting equity, as “there is an anonymity within hubs” where students did not have to worry about expressing themselves unnecessarily. On the other hand, Sophia highlighted her preference for not having a camera on after a long Zoom day, so that she could have a more “broad and regular conversation” in class.

Social Presence

Participants perceived a strong awareness of the environment and the people within it. Entering the space was like “leaving the regular world and just going to be immersed in this place.” Both Chase and Dianna referred to their perceptions of others as “real” even though they were looking at avatars. Most of the participants thought the instructor stood out more and was more present in the space compared with Zoom. To Kirby, it was the developed sense of social dynamics within the environment that determined her perception of social presence and willingness to participate. When it came to their perception of themselves, however, Chase and John pointed out that since they did not see themselves as they did in Zoom, they were less aware of themselves.

Conceptualize Abstract Knowledge through Visualization

The opportunity to see the “physical representations of all things that we are discussing” allowed John to see the potential of this tool within a few weeks. Having to represent abstract concepts with concrete designs was challenging but helped his team understand the concepts themselves.

Course Space

The 3D immersive environment allowed students to be “in a physical space in the sense” to conduct different activities. Unlike Zoom, this space stayed open after class. Participants loved to “hop into” the “live space” and explore. The work they posted in the space was perceived to be long-lasting, as it could remain there indefinitely.

Comparison with Conference Tools

Beyond what has already been mentioned, participants tended to compare this tool with Zoom, the conferencing tool they normally used for other courses. While they thought navigation in Zoom was much easier in terms of “visual cues” and “sharing screens,” they did not experience fatigue in Hubs because they were constantly doing something. They were more motivated to be part of the activities. What’s more, the added humor of the avatars made them more relaxed.

Theme 2: Students Showed Attitudinal Change as They Navigated Through the Space

Students showed different attitudes and experienced attitudinal changes as the class progressed (see Table 6). These attitudes were coded into three dimensions: affective, cognitive, and behavioral. The affective dimension included their positive and negative feelings about the tool and the class. There were some negatives in the beginning. Chase was “totally confused” and did not like the occasional “glitch.” Dianna got frustrated about figuring out logistics in the space and “panicked.” John felt “overloaded” trying to make it work. Kirby expressed her “fear” as “the learning environment had changed.” Sophia was “hesitant” and thought it was “intimidating.” However, even those who got really uncomfortable or upset in the beginning found enjoyment in the process. Dianna liked the atmosphere in there and was “exhilarated” to learn the neat tool. Sophia “enjoyed the problem solving aspects of it” and the teamwork in the space as they “huddled together and potentially put together a quick design together.” John liked it after he saw the placeholders around the space and realized what they would design to fill it up.

Table 6 Theme 2 axial codes and quotes

Cognitively, multiple participants expressed low self-efficacy in designing in the space but also perceived the value of using this tool. Chase and John thought it was challenging, if not frustrating, to design projects in this environment. Kirby did not have the “confidence” in developing things and would be hesitant to recommend it to someone else because of this. Despite the perceived difficulties, participants recognized the usefulness of integrating this space into their course to prepare them for the comprehensive exams, or simply its value as a useful tool for instruction. For example, Dianna thought it was a “great educational tool.” She was more mindful when in the space and referred to it as a “very visual” experience, as everything was laid out in front of her.

When it came to behavioral responses, participants varied in how they reacted. Dianna acted as an advocate and showed it to students and colleagues. She also took the initiative in figuring things out on her own and even obtained help from her students. Sophia “took upon” herself and practiced a lot. Kirby, on the other hand, did not revisit the tutorials as she did not feel comfortable doing so. The time spent in the space also varied: some students spent only a couple of hours outside class time, while others might spend 5–6 h each week.

It was evident from participants’ own words that they experienced attitudinal changes across time. All five participants became comfortable eventually, though at different paces, ranging from a few class sessions to three quarters of the semester. What was confusing or frustrating started to “make more sense” as they progressed.

Theme 3: Challenges in Navigating the 3D Immersive Environments

Participants revealed multiple challenges from different perspectives (see Table 7). Some of the challenges stemmed from how the course was designed, such as ambiguous or missing guidance on assignments, a lack of tutorials, or the disconnected spaces, which reduced their sense of immersion. Other challenges related to the usability of the tool itself, such as sound issues for communication (e.g., not being able to “send direct messages to one person”), difficulties in navigation (e.g., being unable to “tell exactly where you were, how close you are to people” or a “glitch in the program”), space capacity (e.g., the environment slowed down tremendously when loaded with 3D objects), low readability, and less intuitive features (e.g., screen sharing, volume control, reading documents, program compatibility, and breakout room activities). Teamwork could be challenging in this space due to a lack of interest in learning the tool or disagreement from partners about leveling up the project. Despite the fact that the students were doctoral students in a technology-focused program, there was a lack of experience and competence in using immersive technologies. Some students might have had 3D gaming experience, but this was their first time navigating an immersive space like this one. They demonstrated a low level of comfort in the beginning and sometimes did not have the skills needed to make things work smoothly.

Table 7 Theme 3 axial codes and quotes

Additionally, graduate students in this class, and even in this program, were working professionals who had to balance work, personal, and school lives. The class normally started right after they got off a long day’s work in the middle of their working week. On top of that, COVID added another layer of pressure to their work. Dianna said, “this is the most difficult semester I have ever taught in my entire life, ever!” Chase thought raising four kids along with his full-time work and doctoral program was “the most challenging thing in the world.” Kirby had to find time to work on her schoolwork after her kids went to bed. School closures during COVID made it harder, as the kids stayed at home. John was also “overloaded,” especially when he wanted to produce a good design in this course while balancing all his other responsibilities.

Theme 4: The Potential of Applying the 3D Immersive Environment

The conversations with participants showed the potential but also the challenges of applying the 3D immersive environment in education due to technology infrastructure and learners’/instructors’ competencies (see Table 8). As Sophia said, integrating such tools would be hard, as learners had “very low digital literacy.” All participants believed that instructors needed a certain level of skill before trying to utilize it in their contexts, which was a gap in their existing competencies. This gap is especially wide in K-12, where young teachers are not prepared with these skills and sustainability is lacking. Research conducted in higher education was also not bridged with K-12 practice. In addition, there is a lack of VR readiness to support the implementation of such technologies in higher education, as pointed out by Kirby. One feature of the tool, especially important for K-12 applications, is that “it’s free.” This highlighted the importance of providing Open Educational Resources (OER) to support the adoption of new technologies.

Table 8 Theme 4 axial codes and quotes

There are still possibilities for promoting the immersive environment more widely, according to Dianna, as future learners will most likely be “comfortable with 3D atmosphere.” The impacts of COVID also made it even more necessary, as everything went virtual while students were often disengaged in online learning environments. Chase also pointed out the potential of applying such a tool to teach autistic students, “who have difficulty in social communication and nonverbal skills.” The virtual environment, according to him, might “remove that pressure from communication and making eye contact.”

When it comes to the application of immersive technologies in teaching, Kirby and Sophia pointed out the importance of evaluating the content before considering the tool, which was echoed by John, who cautioned about “not jumping after every technology” one might come across, so that the most effective tools for the content can be chosen. Considering the limitations of reading text in the 3D space, Sophia reminded instructors to be mindful of learners’ cognitive load by not making exhibits so text heavy. Designers also have to be aware that learners have autonomy in engaging with the content, according to Kirby, implying the necessity of considering learner diversity.

Theme 5: Considerations for Course Design in Using 3D Immersive Environments

Reflecting on their own experiences in this course, participants offered several recommendations for course improvement and things they thought instructors should be aware of when designing with 3D immersive environments (see Table 9). Participants expressed a preference for having more autonomy in the design process, such as having access to the tool used to create the space for those who have the design competencies. Considering the diversity of learners, Dianna suggested a pre-survey to better understand learners' levels and needs so that instruction could be tailored accordingly. John and Kirby also noted the need for tiered challenges so that they could have individualized experiences. Pairing people with different technology skills might also be a good idea, as Sophia suggested.

Table 9 Theme 5 axial codes and quotes

John mentioned the need for “a primer” on 3D design multiple times during the conversation, even though he showed a higher comfort level with this tool than others. Since she occasionally got lost in the space, Dianna wanted to have “a map” so that she could tell her exact location and navigate to different places more easily. Though not everyone might use it, a “repository” of resources was recommended for those who needed extra help or wanted to level up their game.

Students appreciated the instructor’s flexibility and responsiveness, which was mentioned repeatedly by multiple participants. When glitches happened, the instructor jumped into the space and fixed the issues quickly so that students were not frustrated by them. As mentioned earlier, students in this class might need further accommodation considering their backgrounds and technology skills. Being flexible with assignment deadlines and different learning paces was considered important by participants.

They also stated the need for clear expectations on design projects from the instructor. John thought having the end goal in mind would secure buy-in from students, as he said, “it was a good way to invest people in the class from the beginning, knowing that this is going to be the final product.” Regarding the lack of design rules for team projects in the class, Sophia, drawing on her own instructional design experience, cautioned that putting “stipulations” in place was needed to “fit the needs for this room and classmates.”

Discussion

RQ1 Discussion

Our study investigated the usability of MID through the lens of expert evaluators, focusing on its overall usability, system usability, information quality, and interface quality. The quantitative results from the CSUQ revealed an above-average overall usability score of 81.3, suggesting that the system was generally well-received despite the identified challenges (Lewis, 2018). Our qualitative analysis identified a range of usability issues related to graphic quality, system loading speed, controls/navigation, and system standards. The presence of these issues across multiple usability heuristics emphasized the importance of individual experiences in usability evaluations (Pinelle et al., 2009), as each participant brought unique insights into these common challenges (Ashtari et al., 2020). For example, Deborah's observations on the graphic quality and navigation issues significantly affected her experience and underscored the necessity for improvements in these areas.

The expert evaluators also noted the important role of the training session in preparing users to navigate and interact with the system. Their feedback highlighted the balance between self-guided exploration and the guidance provided during these sessions (Tam, 2000). Their remarks about the intuitiveness of the controls suggest the potential of the system to provide a user-friendly experience. Interestingly, the evaluators found the design and content of the exhibits to be more problematic than the system and its controls. The evaluators critiqued these exhibits as being hard to read, poorly designed, confusing, and inconsistent, indicating that improvements in these areas could significantly enhance the overall user experience. This finding is consistent with the lower information quality score on the CSUQ.

Despite the identified challenges, the evaluators generally found the system enjoyable and easy to use. They completed all activities without significant issues, underscoring the potential of the MID, and the Mozilla Hubs system itself to provide positive user experiences. Nevertheless, the platform should consider addressing the usability concerns raised during this expert evaluation to further enhance its appeal and effectiveness—a finding shared by others who have deployed Mozilla Hubs in educational settings (e.g., Brown et al., 2021; Chessa & Solari, 2021).

RQ2 Discussion

Our investigation into the educational implications of 3D immersive environments draws a picture of a rich, promising, yet challenging landscape. These environments provide learners with a deep sense of immersion and interactivity, transforming traditional education into a vibrant, immersive, and exciting journey of discovery. Immersion, presence, motivation, and emotion are major factors, examined across multiple studies, that have profound effects on learning in immersive learning environments (Dengel & Mägdefrau, 2018). Our study, through students’ learning experiences, confirmed those findings, as students stressed the importance of feeling immersed and present in facilitating authentic learning experiences. Students showed mixed attitudes due to some challenges in the beginning but experienced attitudinal changes once they became comfortable with the environment. The positive attitudes and motivation helped them stay engaged throughout the learning process. The virtual spaces created by these environments serve as a stage where learning is not just a process but an adventure, bringing educational concepts to life in new and innovative ways.

However, the journey through this new learning landscape is not without its hurdles. Earlier studies have identified multiple challenges with immersive virtual environments, such as a lack of VR-specific pedagogy, cognitive demand, immersion breaking, the time needed to familiarize learners with the technology, image quality, and technological difficulty (Lege & Bonner, 2020; Taylor et al., 2013; Thompson et al., 2020). Participants revealed that technical issues, such as glitches or a lack of familiarity with the tools, could cause frustration and impede the smooth progression of learning. It became evident that technological literacy plays a significant role in these environments, as comfort levels varied widely among the participants. This disparity underscores the necessity of adequate technical support and the importance of creating user-friendly platforms that are accessible to all learners.

Despite the challenges, our participants demonstrated resilience and adaptability, painting a picture of learners eager to engage with technology and willing to overcome obstacles for the rewards it offers. This readiness for new learning modalities is echoed in the diverse educational possibilities these 3D environments provide. From collaborative projects to individual studies, these virtual spaces serve as multi-purpose classrooms, offering a platform for a variety of instructional strategies (Fowler, 2015). Yet the potential of these 3D environments extends beyond the classroom. Participants envisaged broader applications of the technology, pointing to its relevance for real-world scenarios and potential for global reach. Here, the idea of accessibility comes into play once again (Cook et al., 2019), especially when considering how usability and acceptability interplay with one another (Chong et al., 2021). Discrepancies in access and technical skills could contribute to a digital divide, especially in K-12 education and among older adults (Seifert & Schlomann, 2021). Thus, the conversation turns to the importance of reducing these barriers to create equitable learning experiences for all.

Limitations

The interpretation of the findings presented in this study is limited by several factors. Owing to the nature of user-centered design, our results, derived from a small, homogenous sample of participants, are context-specific and not designed for generalization. Instead, our goal was to gain insights into the user experience within this particular 3D virtual learning environment (VLE), leading to its continued refinement for the specific target population. The potential impact of participant demographics on our findings was not the focus of the current study, suggesting that future research should include a more diverse sample. Also, the subjectivity inherent in qualitative analysis, coupled with other unexplored factors such as the quality of exhibits in the Mozilla Hubs system and key areas like security, privacy, and scalability, may have limited our insights. Further, the technological literacy of the participants may have heavily influenced their experiences and perceptions, possibly skewing the results towards those who are more comfortable with technology. Despite these constraints, our study paves the way for further research to augment our understanding of 3D VLEs and their possible applications. Future research should consider these limitations and make efforts to address them by diversifying the sample, using varied data collection methods, and exploring different course contexts over a more extended period.