Much like the cellphone, which evolved from bulky to sleek, virtual and augmented reality hardware is undergoing dramatic changes.
Cumbersome VR and AR headsets are now being replaced by lighter, wireless headgear that allows users to more freely experience their real and virtual surroundings. But this new generation of immersive hardware also brings fresh challenges.
Because the hardware is wireless, it runs on batteries, making advances in battery capacity and power efficiency essential. Today’s headsets are also hindered by the longstanding issue of rendering performance: how quickly and faithfully a virtual scene is displayed in response to the user’s movements.
Gaps in visual quality, or “flickering,” can detract from the immersive experience. Some users may also experience motion sickness if what they’re seeing doesn’t correspond quickly and accurately enough to their movements.
At the University of Maryland, researchers are addressing these issues with a concept known as foveated rendering, a computational technique that uses innovative eye-tracking software to replicate our natural eye function.
As you read this article and focus on your screen, for example, the objects around it appear blurry. That’s your fovea at work: a small part of the retina that sharpens whatever you are actively focusing on while leaving your peripheral vision blurred.
The UMD researchers are developing systems that simulate the fovea within VR. Not only does this make imagery appear more realistic by mimicking our natural eye function, it also greatly improves the quality and speed of graphics, and reduces power consumption, by fully rendering only the user’s point of focus.
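To make the idea concrete, here is a minimal sketch in Python with NumPy of how a renderer might turn a tracked gaze point into a per-pixel level-of-detail map. The function name, radii and linear falloff are illustrative assumptions, not the UMD team’s implementation.

```python
import numpy as np

def foveation_level(width, height, gaze_x, gaze_y,
                    inner_radius=0.1, outer_radius=0.5):
    """Per-pixel level-of-detail map: 0 means full detail at the fovea,
    1 means coarsest detail in the far periphery. Radii are fractions
    of the image width (illustrative values)."""
    ys, xs = np.mgrid[0:height, 0:width]
    # Eccentricity: normalized distance of each pixel from the gaze point.
    r = np.hypot(xs - gaze_x, ys - gaze_y) / width
    # Ramp from full detail inside inner_radius to coarse detail
    # beyond outer_radius.
    return np.clip((r - inner_radius) / (outer_radius - inner_radius), 0.0, 1.0)

# A renderer could use this map to choose mip levels or shading rates,
# spending full effort only where the user is actually looking.
lod_map = foveation_level(1920, 1080, gaze_x=960, gaze_y=540)
```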
“We are still in early stages of foveated rendering. A promising direction is the concept of visual metamers—how can dissimilar scene elements that humans perceive to be similar be used to enhance foveated rendering,” says Amitabh Varshney, professor of computer science and dean of the College of Computer, Mathematical, and Natural Sciences. “We believe the best way to explore these new directions will be through a close collaboration between perceptual psychologists and computer scientists.”
Varshney has long been a leader in the development of new immersive tools and technologies. In 2017, he launched the Maryland Blended Reality Center, a multidisciplinary partnership that joins computing experts at the University of Maryland with medical professionals at the University of Maryland, Baltimore.
He was also a driving force behind the development of a new undergraduate major in Immersive Media Design, bringing together faculty and students interested in exploring the crossover between computer science and digital art.
Varshney’s work, along with that of colleagues including Matthias Zwicker, professor and interim chair of computer science at UMD, has led to national recognition of the university’s research and innovation in immersive technologies.
The university’s computer science faculty are ranked number one in the nation for virtual and augmented reality visualization research according to CSRankings, a metrics-based ranking of the top computer science institutions.
In addition to their tenured appointments in computer science, Varshney and Zwicker are both active in the University of Maryland Institute for Advanced Computer Studies.
Zwicker’s research currently focuses on kernel foveated rendering, a GPU-driven technique for parameterized foveation that mimics the distribution of photoreceptors in the human retina.
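At the core of this approach is a mapping from full-resolution screen space into a smaller log-polar buffer, so that pixels near the gaze point receive proportionally more samples. The sketch below illustrates that style of mapping; the power-law kernel and the parameter alpha are illustrative assumptions, not the published formulation.

```python
import numpy as np

def to_log_polar(x, y, gaze, max_radius, buf_w, buf_h, alpha=4.0):
    """Map a screen pixel to coordinates in a reduced-resolution
    log-polar buffer. Larger alpha devotes more of the buffer to the
    region near the gaze point (alpha=4.0 is an illustrative value)."""
    dx, dy = x - gaze[0], y - gaze[1]
    r = max(np.hypot(dx, dy), 1.0)            # clamp so log(r) is defined
    theta = np.arctan2(dy, dx) % (2 * np.pi)
    # Normalized log-eccentricity in [0, 1], reshaped by the kernel:
    # the exponent 1/alpha stretches the foveal region across more
    # buffer pixels than the periphery.
    t = np.clip(np.log(r) / np.log(max_radius), 0.0, 1.0) ** (1.0 / alpha)
    return t * (buf_w - 1), (theta / (2 * np.pi)) * (buf_h - 1)

# Rendering into the small buffer and inverting the mapping for display
# yields full detail at the fovea and a progressively coarser periphery.
u, v = to_log_polar(1200, 500, gaze=(960, 540),
                    max_radius=1100, buf_w=480, buf_h=270)
```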
Ultimately, this type of research will have across-the-board benefits for VR users, says Zwicker. “By improving the motion-to-photon latency—the time between the user’s head motion and when the corresponding image is displayed on a VR headset—we’re able to greatly accelerate the quality of 3D graphics, decrease power consumption, and help alleviate the onset of cybersickness caused by choppy visuals,” he says.
Zwicker attributes some of his team’s early breakthroughs to Xiaoxu Meng, a former doctoral student he helped advise. Meng, who graduated in 2020, now works as a research scientist at Tencent America.
She was the lead author of three papers on the topic: “Kernel Foveated Rendering,” “3D-Kernel Foveated Rendering for Light Fields,” and “Eye-Dominance-Guided Foveated Rendering,” all written with input from Zwicker, Varshney and others.
Another peer-reviewed paper on this topic recently received accolades at IEEE VR 2021, considered the most prestigious international conference for immersive technologies and 3D user interfaces and environments.
UMD researchers were awarded honorable mention for “A Log-Rectilinear Transformation for Foveated 360-degree Video Streaming.” The paper was authored by graduate student David Li, undergraduate Adharsh Babu, former graduate student Ruofei Du, former undergraduate Camelia Brumar and Varshney.
In the paper, the team described how they designed a log-rectilinear transformation that combines foveation, summed-area tables and standard video codecs to enable foveated 360-degree video streaming.
To validate their approach, they built a client-server prototype for streaming 360-degree videos that leverages parallel algorithms for real-time video transcoding.
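Summed-area tables are the classical building block that makes this kind of variable-rate downsampling cheap: once the table is built, the average over any axis-aligned box costs only four lookups, so peripheral regions can be filtered with progressively larger boxes at no extra cost. The NumPy sketch below illustrates that general technique; it is not the team’s code.

```python
import numpy as np

def summed_area_table(img):
    """2D prefix sums: sat[y, x] holds the sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_mean(sat, x0, y0, x1, y1):
    """Mean over the inclusive box [x0, x1] x [y0, y1] in four lookups."""
    total = sat[y1, x1]
    if x0 > 0:
        total -= sat[y1, x0 - 1]
    if y0 > 0:
        total -= sat[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += sat[y0 - 1, x0 - 1]
    return total / ((x1 - x0 + 1) * (y1 - y0 + 1))

img = np.arange(16, dtype=float).reshape(4, 4)
sat = summed_area_table(img)
print(box_mean(sat, 1, 1, 3, 3))  # 10.0: mean of the lower-right 3x3 block
```

In a log-rectilinear layout, the sampling rate falls off logarithmically along each image axis away from the gaze point, which keeps the downsampled frames compatible with standard rectilinear video codecs.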
Driving all of these advances in foveated rendering are the interdisciplinary strengths found at the University of Maryland, says Varshney.
“To have a real impact in improving the quality and efficiency of immersive technologies will require close collaboration between psychologists, human-computer interaction experts, photonics physicists, nanomaterial scientists, and computer engineers working with computer scientists,” he says. “A modern research university like Maryland is the perfect environment to nurture such intellectual curiosities and enable them to germinate into transformative ideas.”
Original news story written by Maria Herd
The Department welcomes comments, suggestions and corrections. Send email to editor@cs.umd.edu.