The metaverse is coming, and sooner than you think. Gartner predicts that by 2026 a quarter of people will spend at least an hour a day in the metaverse. For enterprises, it will likely unlock new business models and ways of working that we can only dream of today; for cybersecurity, however, the metaverse is a formidable challenge.
According to Accenture, the metaverse “will change how companies interact with customers, how work gets done, what products and services companies offer, how they make and distribute them, and how they run their businesses.” From a security perspective, however, it is daunting: most companies already struggle to secure their existing data and infrastructure, and in the multidimensional world of the metaverse this will become exponentially more difficult.
The metaverse is still a moving target. Its current stage of development resembles that of the Internet in the early 1990s. But unlike then, we now have a much better idea of the kinds of threats that arise in powerful digital ecosystems, so we can prepare far better for what comes next. It is critical that the security industry discusses the challenges of the metaverse now and mitigates them before they become entrenched problems.
- What are the risks of the Metaverse?
- Question 1: How can personally identifiable information (and other sensitive data) be protected in the metaverse?
- Question 2: How can users be authenticated?
- Question 3: How can users be protected from bullying, harassment and exploitation?
- Question 4: How can this type of rapidly growing attack surface be addressed?
- The security of the metaverse starts now
What are the risks of the Metaverse?
Many people are familiar with the current security challenges facing digital organizations. The Metaverse will bring similar challenges, only adapted to the different forms of engagement, interaction, and access that come with immersive, virtual environments. With this in mind, there are four key questions that all CISOs and technology teams should be asking today about the Metaverse.
Question 1: How can personally identifiable information (and other sensitive data) be protected in the metaverse?
Personally identifiable information (PII) must be protected, if only because of legal requirements such as the EU’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and China’s Personal Information Protection Law (PIPL).
These, of course, also apply to the metaverse. However, the amount of personal and other sensitive data that companies will collect, store, and manage to deliver metaverse experiences will increase exponentially. Much of this data will come from the technologies that enable the digital/physical blurring that defines the metaverse, such as biometric devices, smart speakers and microphones, and virtual reality headsets.
Data governance, endpoint security, and network security will become significantly more important as the amount of personal data grows. At the same time, these functions must be deployed in a way that does not degrade the performance of the underlying network: a laggy, stuttering metaverse would quickly lose its users.
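One common data-governance measure that scales to this volume of data is pseudonymizing direct identifiers before records are stored. The sketch below is a minimal, hypothetical illustration: the field names, the key, and the record format are assumptions for this example, not a real metaverse platform's schema.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: direct identifiers from metaverse devices are
# replaced with stable keyed hashes before storage, so analytics can
# still correlate records without exposing raw PII. Field names and the
# key below are illustrative assumptions.
PSEUDONYM_KEY = b"rotate-me-regularly"  # would be managed in a KMS in practice
PII_FIELDS = {"user_id", "voice_sample_id", "headset_serial"}

def pseudonymize(record: dict) -> dict:
    """Replace known PII fields with keyed hashes; pass other fields through."""
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # truncated for readability
        else:
            out[field] = value
    return out

event = {"user_id": "alice", "headset_serial": "HS-123", "session_minutes": 42}
print(json.dumps(pseudonymize(event), indent=2))
```

Because the hash is keyed and deterministic, the same user maps to the same pseudonym across records, while rotating the key severs that link when required.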
Question 2: How can users be authenticated?
Another challenge we’re familiar with from current enterprise technologies is verifying the identity of individuals when they access sensitive digital services such as banking applications or corporate networks. Today, this is often achieved through multi-factor authentication, but that approach won’t translate directly to the metaverse. We are entering a world of avatars that populate 3D environments in real time.
It is hard to imagine a person leaving their virtual session and taking off their headset to perform an authentication process in the real world. Businesses and public sector organizations will therefore need reliable methods to be sure that a person’s avatar is really controlled by that person and that the avatar has not been faked or “deepfaked.” How will users be able to tell if it really is the other party they want to interact with? Especially if it looks and acts exactly like that person? And how will we be able to trust the flow of these identities between different Metaverse platforms?
There are several ways this could be achieved. For example, numerous approaches use biometric data to establish a baseline of “normal behavior”: user behavior and idiosyncrasies are as individual as fingerprints, so security teams can be alerted automatically when a user’s avatar behaves unusually. Other possible approaches include iris-pattern recognition to link a specific avatar to an individual VR headset, or embedding unique, cryptographically protected identifiers into avatars to guard against counterfeiting. As the technology evolves, further mechanisms and approaches will emerge.
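The idea of a counterfeit-resistant avatar identifier can be sketched with a keyed signature: the platform signs each avatar ID, and any presented identifier can be checked for tampering. Everything here — the key, the token format, the function names — is an illustrative assumption, not an existing metaverse API.

```python
import hmac
import hashlib

# Hypothetical sketch: a platform-held secret signs each avatar ID so a
# presented identifier can be verified. In practice the key would live
# in an HSM and tokens would carry expiry and platform metadata.
PLATFORM_KEY = b"platform-secret-key"

def issue_avatar_token(avatar_id: str) -> str:
    """Return 'avatar_id.signature', binding the ID to the platform key."""
    sig = hmac.new(PLATFORM_KEY, avatar_id.encode(), hashlib.sha256).hexdigest()
    return f"{avatar_id}.{sig}"

def verify_avatar_token(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    avatar_id, _, sig = token.rpartition(".")
    expected = hmac.new(PLATFORM_KEY, avatar_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_avatar_token("alice-avatar-42")
print(verify_avatar_token(token))                           # True: genuine token
print(verify_avatar_token("mallory-avatar-7." + "0" * 64))  # False: forged token
```

For identities to flow between platforms, a shared-secret scheme like this would have to give way to public-key signatures, where any platform can verify without holding the signing key.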
Question 3: How can users be protected from bullying, harassment and exploitation?
We all know the dark side of social media platforms: aggression, bullying, harassment, and exploitation. There is no reason to believe that these abuses will not reach the metaverse. Because it is an immersive 3D experience, however, the psychological impact of such behavior is likely to be even more severe for victims. Avatars are extensions of the user and are closely tied to the user’s identity. For many people, a metaverse experience will feel as real as daily life, and this will be even more true once innovations such as haptic gloves and tactile feedback mechanisms bring the sense of touch to the metaverse.
Even at this early stage of the metaverse, there are significant problems. For example, after complaints from female users of its Horizon Worlds platform about assault, Meta Platforms introduced a “personal boundary” that surrounds each avatar with a protective bubble that other avatars cannot enter.
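At its core, such a boundary is a distance check between avatars, with a push-back correction when it is violated. The sketch below assumes a flat 2D floor plane and an illustrative radius; real platforms use full 3D physics with configurable distances.

```python
import math

# Minimal sketch of "personal boundary" enforcement. The radius is an
# illustrative assumption; real platforms let users configure it.
BOUNDARY_RADIUS = 0.6  # metres around each avatar

def violates_boundary(pos_a, pos_b, radius=BOUNDARY_RADIUS):
    """True if the two avatars' boundary bubbles overlap."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    return math.hypot(dx, dy) < 2 * radius

def push_back(pos_a, pos_b, radius=BOUNDARY_RADIUS):
    """Return a corrected position for avatar B just outside A's boundary."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    dist = math.hypot(dx, dy) or 1e-9  # avoid division by zero on exact overlap
    scale = (2 * radius) / dist
    return (pos_a[0] + dx * scale, pos_a[1] + dy * scale)

print(violates_boundary((0, 0), (0.5, 0)))  # True: inside the combined radius
print(violates_boundary((0, 0), (2.0, 0)))  # False: well outside
```

The security question in the following paragraph then becomes concrete: if a bug lets a client skip or spoof this check, the boundary fails silently — which is why such enforcement belongs on the server, not in client code.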
Every company needs to think about where the boundaries are between the physical and virtual worlds, what duty of care it has to users, and how best to balance user safety with the usability of the metaverse. But what if there are vulnerabilities in the code and the boundaries can be compromised? What liability does the company assume if this security mechanism fails?
Ultimately, solving this problem requires clear legislation on what is and is not allowed in digital domains, along with the ability to enforce those new laws. But how will this be regulated and monitored? Who will act as the central authority when these environments span jurisdictions, let alone multiple metaverse platforms? In the meantime, companies can help with their own moderation teams, much as they do today with abusive content on social media platforms. As outlined in a World Economic Forum statement, one solution will also be to “incentivize better behavior and reward positive interactions.”
Question 4: How can this type of rapidly growing attack surface be addressed?
The proliferation of devices, the growth of data, and the expanding attack surface already pose a significant challenge. The metaverse will compound it: as mentioned earlier, it will bring with it a wide range of associated hardware connected to enterprise networks.
Each of these devices is vulnerable in its own way and will require security monitoring and management. Beyond that, enterprises and their security teams need to think about protecting the human mind, which itself becomes part of the attack surface in the metaverse.
It’s a truism that people are often the weakest link in an organization; it’s not for nothing that social engineering accounts for a majority of successful attacks. In immersive virtual worlds, it will be easier to manipulate people psychologically and to spread misinformation, which criminals could exploit in a variety of ways. For example, Sensorium Corp.’s metaverse has already been misused to spread misinformation about vaccines.
Accordingly, educating employees about security threats and the traps that await them is just as important as establishing robust cyber protection. In the metaverse future, such training will likely need to include psychological resilience techniques and programs for recognizing manipulative or coercive behavior. People should feel supported at all times and be able to report anything that makes them uncomfortable.
The security of the metaverse starts now
These four questions represent just some of the challenges the metaverse will bring. It is important, and encouraging, that people are already discussing them and thinking through the problem areas, especially since there are many more challenges ahead, such as preventing terrorists from abusing virtual worlds (the metaverse would make an extremely effective training ground for potential attacks) and combating fraud targeting virtual assets (NFT fraud already exists, after all). None of this, however, should stop companies from exploring the metaverse.
The metaverse is likely to have as big an impact on the world as the Internet did before it. Companies that don’t participate will likely struggle to compete in the coming years. But there is an urgent need for the security industry to come together now and work out solutions to the many challenges ahead. Cybersecurity and risk experts are best equipped to guide organizations through these challenges and to support the broader digital agenda.
They understand the difficulties and complexities and have a wealth of experience to bring to the table. There is still time to prepare for the metaverse. The more we do now and the more questions we ask, the greater the chance that the Metaverse will deliver maximum benefit with minimum risk.