Eating Metaverse Food in the Real World? New Research Says Yes

Ukemochi is a new VR overlay developed by the Nara Institute of Science and Technology and the University of Tokyo that synchronizes eating metaverse food with eating in the real world.

One of the current limitations of virtual reality (VR) headsets is the lack of sensory experiences. While users can see and hear things within the metaverse, they currently have no ability to touch, taste, or smell anything. This can make the metaverse seem distant or alien, especially in situations where food is involved, such as virtual restaurants. But things seem to be changing, as researchers are working on updating VR headsets with technology that simulates touch. One team is going even further: scientists from the Nara Institute of Science and Technology and the University of Tokyo have developed a video overlay for VR headsets that allows users to eat in the real world and the virtual one at the same time.

Previous Metaverse Food Experiences

Currently, most VR users have to lift their headsets to eat food in the real world, or peek through the gap under the headset to watch themselves eat. This creates a disconnect between the virtual world and the real one, as an avatar may eat at different times than the user, or eat different foods entirely. Previous studies have tried to use augmented reality (AR) to make real-world food look virtual, with mixed results. When the food's appearance was altered, users reported that it tasted different, and this change in taste became more pronounced when the food's surroundings were not also made to look virtual. In many of these AR studies, the food's image did not update as the user ate it, making the experience stranger still.

Designing the VR Food Overlay

Building on these previous studies, researchers from the Nara Institute of Science and Technology and the University of Tokyo collaborated to develop something new. The team decided to use VR video shown inside a headset rather than augmented reality, as it was easier to manipulate. To develop their overlay, the researchers employed a process called semantic segmentation, which classifies each pixel of an image into a category (e.g., food, table, etc.). This allows for better resolution and helps the overlay change in real time as the user eats both real and metaverse food. The researchers also employed a tracking program to follow the food regions in the video and update them accordingly.
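The article does not include the team's implementation, but per-pixel food classification can be illustrated with a short sketch. The model choice, the two-class setup, and the food class index below are placeholders for illustration, not the researchers' actual pipeline, which would require a network trained on food imagery.

```python
# Illustrative sketch of per-frame semantic segmentation producing a food mask.
# Assumes a segmentation model fine-tuned with a "food" class; the model and
# class index are placeholders, not Ukemochi's actual network.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

FOOD_CLASS = 1  # hypothetical index of the "food" class after fine-tuning

model = deeplabv3_resnet50(num_classes=2)  # background vs. food (assumed)
model.eval()

def food_mask(frame: torch.Tensor) -> torch.Tensor:
    """Return a boolean (H, W) mask of pixels classified as food.

    frame: float tensor of shape (3, H, W) with values in [0, 1].
    """
    with torch.no_grad():
        logits = model(frame.unsqueeze(0))["out"]   # (1, num_classes, H, W)
        labels = logits.argmax(dim=1).squeeze(0)    # (H, W) per-pixel class
    return labels == FOOD_CLASS

# Example: segment one 480x640 camera frame (random data stands in for video)
mask = food_mask(torch.rand(3, 480, 640))
print("food pixels:", int(mask.sum()))
```

A per-pixel mask like this is what lets the overlay track only the food region rather than replacing the whole camera view.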

The researchers called their video overlay system Ukemochi, after the ancient Japanese goddess of food. Ukemochi runs on its own server and uses a front-facing camera to track the eating process. By communicating with the headset, the Ukemochi server overlays a virtual image of the food and updates it as the user eats. To test their system, the scientists had participants eat first with their hands and then with a fork and spoon, once using the overlay and once using the gap under the headset. From their results, the team found that the overlay provided the most satisfying experience, with no reported changes in taste. Participants did report that it was difficult to eat with a fork and spoon while using the Ukemochi overlay, as it was hard to concentrate on what their hands were doing.
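Based on that description, the per-frame flow (capture a frame from the front camera, obtain a food mask, composite the virtual food over the masked pixels, display the result) might look roughly like the sketch below. The camera index, the flat-colored stand-in texture, the dummy mask, and the window display are assumptions for illustration, not Ukemochi's actual server code.

```python
# Minimal sketch of an overlay loop as described in the article: grab a frame
# from the headset's front camera, segment the food region, and composite a
# virtual food texture over those pixels before display. Names and assets here
# are illustrative placeholders.
import numpy as np
import cv2

def composite_virtual_food(frame: np.ndarray,
                           mask: np.ndarray,
                           virtual_texture: np.ndarray) -> np.ndarray:
    """Replace food pixels in the camera frame with the virtual food image."""
    out = frame.copy()
    out[mask] = virtual_texture[mask]
    return out

camera = cv2.VideoCapture(0)  # front camera; device index is an assumption

while True:
    ok, frame = camera.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Stand-in for the semantic-segmentation + tracking step described above;
    # this dummy mask marks no pixels, so the raw frame passes through unchanged.
    mask = np.zeros((h, w), dtype=bool)
    # Flat-colored placeholder for a pre-rendered metaverse food texture.
    virtual_food = np.full((h, w, 3), (40, 180, 220), dtype=np.uint8)
    display_frame = composite_virtual_food(frame, mask, virtual_food)
    cv2.imshow("Ukemochi-style overlay (sketch)", display_frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```

In a real system the mask would come from the segmentation and tracking step, and the composited frame would be streamed into the headset's view rather than an on-screen window.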

A Long Way to Go for Metaverse Food

While the researchers found a successful way to make metaverse food more appealing, their technology is still a long way from commercial adoption. For instance, the low-resolution images in the Ukemochi overlay gave some users pause. Even so, the ability to eat metaverse food in the real world has significant implications for the industry, especially for virtual restaurants, as restaurant companies look to extend their businesses into virtual spaces. With this research, the metaverse industry is one step closer to making virtual reality more lifelike.

For more market insights, check out our latest Digital Twin news here.

Kenna Hughes-Castleberry

Kenna Castleberry is the Science Communicator at JILA (formerly known as the Joint Institute for Laboratory Astrophysics) at the University of Colorado, Boulder.
