
Effects of visual and auditory instructions on space station procedural tasks

July 4, 2024

Firstly, the authors provided a detailed explanation of the experimental methods and procedures. The study recruited 30 healthy subjects (15 males and 15 females), aged 20 to 50 years, with an average age of 42 ± 6.58 years. All participants were right-handed, had no severe visual or auditory impairments, met the biometric standards for astronaut candidates, and were rigorously screened. The experiment used Unity 3D to model the space station and simulate its internal scenes. Subjects started from the core node module and navigated to laboratory module I, where they operated the Space Raman Spectrometer (SRS), performing steps including retrieval, adjustment, installation, assembly, calibration, and testing. Subjects wore a Vive Pro Eye VR headset with built-in headphones, and Tobii Pro VR Analytics software was used for data analysis. Visual instructions were displayed in white bold font (32 dmm) on a black background; the schematic diagram of the visual instructions in the space station simulation model is shown in Fig. 1. Auditory instructions were played at a volume of 60 decibels, with intonation conforming to normal conversational habits.

Fig. 1. Schematic diagram of visual instructions in the space station simulation model (please locate in front of the human systematic science cabinet, which is in column IV of Quadrant IV, inside the laboratory module).

Credit: Space: Science & Technology


This experiment used a 2 × 2 Latin square design. Thirty subjects were randomly divided into AB and BA sequence groups and received training on VR device operation before the experiment. Completion time, operation error rate, and eye movement data were recorded, and the NASA-TLX scale was administered after the experiment to assess mental workload. The Shapiro-Wilk method was used to test the normality of the data: paired t-tests were used for normally distributed data, and Wilcoxon paired signed-rank tests for non-normally distributed data. Data were presented as mean ± standard deviation (x̄ ± SD), with a significance level of P < 0.05 and a highly significant level of P < 0.01.

Additionally, to analyze the eye movement data, the researchers defined the control panel and markers in the space station model as Areas of Interest (AOIs). In this experiment, AOIs were the areas marked with blue boxes on the screen (Fig. 2 shows the AOI of the SRS), and Tobii Pro VR Analytics software was used to analyze the frequency and duration of views within these areas. To gain deeper insight into the participants' subjective experiences, unstructured interviews were conducted with the 30 subjects after the experiment to gather their impressions of, and suggestions for improving, the AR visual and auditory guidance.
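
The statistical procedure described above, a Shapiro-Wilk normality check that decides between a paired t-test and a Wilcoxon signed-rank test, can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and the completion-time values are hypothetical.

```python
# Sketch of the study's test-selection procedure: run Shapiro-Wilk on the
# paired differences; if they look normal, use a paired t-test, otherwise
# fall back to a Wilcoxon signed-rank test. Not the authors' actual code.
import numpy as np
from scipy import stats

ALPHA = 0.05  # significance level used in the study


def compare_conditions(visual, auditory, alpha=ALPHA):
    """Compare paired measurements from the two guidance conditions."""
    visual, auditory = np.asarray(visual), np.asarray(auditory)
    diffs = visual - auditory
    # Shapiro-Wilk on the paired differences decides which test applies.
    _, p_norm = stats.shapiro(diffs)
    if p_norm > alpha:
        test_name = "paired t-test"
        stat, p = stats.ttest_rel(visual, auditory)
    else:
        test_name = "Wilcoxon signed-rank"
        stat, p = stats.wilcoxon(visual, auditory)
    return test_name, stat, p


# Hypothetical completion times (seconds) for 10 paired subjects.
rng = np.random.default_rng(0)
auditory = rng.normal(300, 30, size=10)
visual = auditory - rng.normal(40, 10, size=10)  # visual guidance faster
name, stat, p = compare_conditions(visual, auditory)
print(name, float(p))
```

Testing normality on the paired differences (rather than on each condition separately) is the standard choice for paired designs, since the t-test's assumption concerns the differences.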

Then, the authors filtered the data for outliers; after excluding subjects with abnormal data, 13 male and 12 female subjects remained in the analysis. The NASA-TLX scale was used to subjectively score the participants' mental workload: in psychological demand, time demand, frustration, and total score, AR visual guidance was rated significantly lower than auditory guidance. AR visual guidance also showed advantages in task completion time and task operation error frequency. As shown in Fig. 3, eye movement analysis revealed that on measures such as fixation points, scan distance, total fixation duration, and average pupil diameter, the AR visual guidance group performed significantly better than the auditory guidance group.
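
The workload comparison above rests on NASA-TLX subscale scores. As a rough illustration of how a raw (unweighted) NASA-TLX total is commonly computed, here is a generic sketch with hypothetical ratings; it is not the authors' scoring procedure or data.

```python
# Generic raw NASA-TLX (RTLX) aggregation: each of the six subscales is
# rated 0-100 and the overall workload is their unweighted mean. The
# ratings below are hypothetical, not data from the study.
SUBSCALES = ("mental", "physical", "temporal",
             "performance", "effort", "frustration")


def rtlx_total(ratings):
    """Unweighted mean of the six NASA-TLX subscale ratings (0-100 each)."""
    missing = set(SUBSCALES) - set(ratings)
    if missing:
        raise ValueError(f"missing subscales: {sorted(missing)}")
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)


visual_ratings = {"mental": 30, "physical": 20, "temporal": 25,
                  "performance": 15, "effort": 35, "frustration": 10}
print(rtlx_total(visual_ratings))  # → 22.5
```

The original NASA-TLX also supports a weighted variant in which subscale weights come from pairwise-importance comparisons; the unweighted mean shown here is the simpler, widely used form.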

Finally, the authors summarized the study. This research used a Latin square experimental design to compare the effects of AR visual and auditory instructions on space station astronauts completing procedural tasks. The results showed that AR visual instructions were superior to auditory instructions in task completion time, number of operation errors, and eye movement measures, supporting the view that AR instruction modes have advantages in task performance and cognitive load. However, there were no significant differences between AR visual and auditory guidance in the "physical demand" and "self-performance" aspects of the NASA-TLX scale, or in regression time; this may be because the tasks involved neither high-intensity physical activity nor aspects requiring special attention to self-performance. The study fills a gap in previous research, and its results provide new evidence for the design of human-computer interaction instruction modes, helping to reduce astronauts' cognitive load and improve task performance.




Journal: Space: Science & Technology

Article Title: Effects of Visual and Auditory Instructions on Space Station Procedural Tasks

Article Publication Date: 9-May-2024
