The Online Journal of Technology and Ethics

Spring 2021:

Virtual Reality and the Ethics of Future Technology

The Second Edition of The Online Journal of Technology and Ethics is hosted by the Experimental College at Tufts University. This edition, titled Virtual Reality and the Ethics of Future Technology, focuses on the ethical and philosophical issues presented by novel VR, MR, AR, and XR technologies. The journal is divided into six subcategories: Virtual Realities and the Technology of VR; Simulation Theory; VR, Phenomenology, Embodiment, and Immersion; Virtual Reality, Artificial Intelligence, and Consciousness; VR, Communication, Language, and Art; and Social Implications and the Ethics of Virtual Reality.

Virtual Realities and the Technology of VR

The technology of Virtual Reality is what makes the immersive, revolutionary experiences of traveling to another world possible. From head-mounted displays (HMDs) to haptic gloves and suits, there is a multitude of technologies that attempt to produce a realistic, immersive experience.

In this section, Molly Clawson and Theseus Lim present their research findings on various aspects of the technology of VR. Molly Clawson focused her research primarily on HMD technology, attempting to find a balance between a headset's physical load and its realism. Theseus Lim researched the security and privacy implications of the growth of VR technology, hoping to strike a balance between accessibility and security.

Virtual reality has given users the opportunity to travel to worlds other than their own. This transportation is propelled by immersion: the degree to which a user believes and is convinced that they are in a virtual environment, and therefore forgets about their presence in the physical world. The more a user is immersed in their virtual space, the more they are convinced that they have been transported somewhere new. If immersion is the goal of virtual reality engineers, VR development should move away from the heavy, cumbersome technology associated with hyper-realism and instead shift toward wearable, comfortable technology with minimal weight, truly allowing the user to be absorbed into their virtual experience.

As our world becomes increasingly digitized, cybersecurity is not keeping pace with the speed of software and hardware advancements. While the COVID-19 pandemic has highlighted our capacity to carry out many activities virtually, it has also exposed the weaknesses of online security. When considering the future of virtual reality, we must analyze and weigh the factors of privacy, security, and the health of users. To create a system that balances all of these needs, it must be isolated from the central internet, air-gapped on its own network. The system must also be non-addictive, so that it does not create the desire to spend all of one's time in virtual reality. By placing limitations on VR while still preserving the exploration that people crave in an alternate reality, we can create a system that successfully coexists with base reality by enhancing what our minds are already capable of.
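As a loose illustration of what "placing limitations" on such a system could mean in practice, the sketch below enforces a daily time budget on VR sessions. The 120-minute budget, the local usage log, and the function names are hypothetical assumptions for this sketch, not features of any existing headset platform.

```python
# Hypothetical sketch: enforcing a daily time budget on VR sessions.
# The 120-minute budget and the local usage log are illustrative assumptions,
# not features of any real headset platform.
import json
from datetime import date
from pathlib import Path

USAGE_LOG = Path("vr_usage.json")   # stored locally; the system itself stays air-gapped
DAILY_BUDGET_MIN = 120              # example limit chosen purely for illustration


def minutes_used_today() -> float:
    """Return how many minutes of VR have been logged for today's date."""
    if not USAGE_LOG.exists():
        return 0.0
    log = json.loads(USAGE_LOG.read_text())
    return log.get(date.today().isoformat(), 0.0)


def record_session(duration_min: float) -> None:
    """Add a finished session's duration to today's total."""
    log = json.loads(USAGE_LOG.read_text()) if USAGE_LOG.exists() else {}
    today = date.today().isoformat()
    log[today] = log.get(today, 0.0) + duration_min
    USAGE_LOG.write_text(json.dumps(log))


def start_session_if_allowed() -> bool:
    """Refuse to launch a new session once the daily budget is spent."""
    remaining = DAILY_BUDGET_MIN - minutes_used_today()
    if remaining <= 0:
        print("Daily VR budget reached; try again tomorrow.")
        return False
    print(f"Session allowed: {remaining:.0f} minutes remaining today.")
    return True
```

Keeping the log on the device itself, rather than on a remote server, is one way such a limit could be reconciled with the air-gapped design described above.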

Simulation Theory

Simulation theory is the idea that we could be living in a computer simulation run by a far more technologically advanced civilization, possibly so it could study how its society came to be. Though it entered pop culture with the release of The Matrix in 1999, simulation theory in its current form was proposed by Nick Bostrom in 2003. There are many possible variations of simulations, differing in the physical level at which the simulation operates and in how similar the simulation is to the "true" world.

I argue that simulation theory does not significantly affect us, either in terms of how we live our lives or in terms of epistemology and metaphysics. It does not affect our day-to-day lives because our world is real to us regardless of whether there is ultimately a computer running it. It does not significantly affect us philosophically because we can never know whether we are living in a simulation and therefore cannot conclude anything metaphysical from simulation theory. Simulation theory is just one more possible uncertainty about our world, not the life-changing theory many people intuitively believe it to be.

In this project I discuss the arguments behind simulation theory, the probability that our reality is simulated, and the philosophical implications of a simulation. Essentially, simulation theory holds that we may be living in an enormous computer-generated virtual reality, and that our present reality is not true reality. The leading proponent of the simulation hypothesis is philosopher Nick Bostrom, whose observation of constant growth in virtual reality technology suggested that a computer-generated simulation could be feasible. The essay presents a full explanation of Bostrom's famous three proposals about the future, then describes other philosophers' responses to Bostrom's argument, notably that of David Chalmers, who incorporated Bostrom's ideas in framing his own simulation theory. The key component of both Bostrom's and Chalmers' simulation theories is that we, the ones living within the simulated reality, would have absolutely no way of knowing we are within a simulation: according to various scientists and mathematicians, a computer-generated simulation with unlimited computing power would be completely undetectable. The essay then considers whether a simulated reality generated from such computing power would be possible. In the eyes of Bostrom and Chalmers, this computing power is feasible and can be achieved; other scientists, however, have deemed the necessary computing power unreachable and impossible to develop. Finally, the essay examines the philosophical implications of what life within a simulation would look like and whether one should change their outlook on life.

I present an argument that the human experience would not be profoundly affected if Nick Bostrom's (2003) simulation theory were proven true. Instead of attempting to prove or disprove Bostrom's theory, I consider a scenario in which it has already been proven true and examine the implications and effects this would have on the human population and the human experience. I operationalize the human experience as one in which we are conscious beings, in which we experience shared ideas and agree upon our version of reality, and in which we believe we have (or have the illusion of having) free will, asserting that each aspect of the human experience would remain intact and unaffected if simulation theory were proven.

VR, Phenomenology, Embodiment, and Immersion

By studying the illusory perception of existing physically and mentally in a virtual world, Theo Forget and Joel Lima have focused their research on philosophical questions regarding the phenomenology of virtual reality. These generally relate to the philosophical aspects of an immersed user's perception, thought, emotion, and body awareness.

As of today, emotion still does not have a concrete definition. After attempting to define immersion and presence, this study focuses on the nature of emotions in virtual versus real experiences. Using Michel Cabanac's four-dimensional plane for defining emotions, the first element in distinguishing emotions is whether one can recognize that they are immersed in a virtual experience. The replication of harm or damage taken by one's avatar onto the user's physical, real body is also a clear marker that their emotion is not unique to the virtual experience. The time frame of an experience plays a crucial role in recognizing the nature of an emotion: if an emotion is limited to the time frame of the virtual experience, it is unique to that experience and is not a real-experience emotion; if the emotion continues outside of the virtual experience, its nature does not change in the real world.

Virtual Reality, Artificial Intelligence, and Consciousness

This section of the class has been thinking about artificial intelligence (AI), with a focus on its ethics. We have researched applications of this exciting technology in the contemporary world and looked toward the future with respect to how it all affects us humans.

John and Clark offer nearly opposite views of this technology and its impact on humans: Clark argues for greater regulation and more thoughtful use of AI, warning that it will make humans obsolete, while John is more positive and argues that AI replacing humans in some tasks improves the lives of those same humans. Jimmy ponders the ethics of mind uploading and AI, postulating that cyborg beings and mind-uploaded entities must be treated equally to their natural cousins.

This paper explores the true essence and usefulness of artificial intelligence (AI) in relation to humans. It aims to answer why humans have developed this technology, how AI supports or detracts from human goals, and how it benefits humans. The paper's central claim is that a primary function of AI is to replace humans in certain decision-making tasks and to perform others impossible for them, and that this is ultimately beneficial to humans. The paper surveys early and contemporary applications of AI, analyzes the technology in relation to humans, rebuts counterarguments, and closes with a more philosophical look toward the future.

Deep learning is a nascent technology with great expectations. However, excessive hype and optimism obscure a darker reality: deep learning is a technology of great peril to the future of the human experience.

Today, deep learning already threatens human privacy and autonomy through commercial and despotic applications. For example, media platforms like Facebook and YouTube harvest mountains of user information to train their deep learning systems. Consequently, advertisements become more personalized and intrusive while content becomes more addictive. In China, deep learning is a tool of political oppression; the technology simultaneously identifies and tracks millions of citizens to determine whether they are granted the right to travel or buy property. In both cases, deep learning is not a righteous technology but an instrument that belittles our most sacred human values.

The menace of deep learning extends far beyond current applications. Indeed, as deep learning is cultivated with more data and better computers, it poses an existential reckoning for human intelligence and purpose. Without physical limitations on synaptic connections, memory, and sensory input, deep learning has the potential to utterly dwarf human intelligence. In such a future, it is highly unlikely that humans will be competitive with faster, smarter, and cheaper machines. In short, deep learning may make humans utterly obsolete.

The consistent rise of available computing power, together with growing knowledge of the human brain, continues to expand the scope and possibilities of artificial intelligence. Toward the goal of creating human-level artificial intelligence, one possible approach is mind uploading, also known as whole brain emulation. In the process of mind uploading, a mind is first scanned in some way in the physical world; the data from the scan is then used to emulate the mind on a computer. In this paper, I explore mind uploading and specifically the ethical concern of how mind-uploaded entities should be treated. I argue that mind-uploaded entities should be treated the same as their equivalent biological forms.

VR, Communication, Language, and Art

Yelling fiercely during a heated debate. Grumbling after looking at yet another billboard in the city. The crash of a plate as it is dropped to the floor. In every single one of these instances, we communicate, or share information with, the world around us. As the old saying goes, communication is a two-way street: we simultaneously receive information from our surroundings and produce our own information for others.

In physical reality, we use our five senses to perceive physical phenomena. While there are more than enough sensations to fill multiple lifetimes and then some, the laws of physics unfortunately limit the types of data we can receive. That is not the case in virtual reality, which gives both designers of virtual worlds and their users far more control over the types of communication they engage in.

In this section, Ed Bielawa, Andrew Chen, and Daniel Schwartz discuss virtual reality as a communications platform. They analyze different types of information exchanges in virtual reality and their implications.

In everyday language, virtual reality has become almost synonymous with video games and other interactive forms of media. This paper aims to analyze the interactive aspect of virtual reality, what many of us see as its distinguishing feature. In comparison to physical reality, virtual environments enable users to present a wider array of information regarding themselves.

I conclude that interactions within immersive virtual environments, both with other users and the virtual space itself, rely on misrepresentation. Ultimately, this misrepresentation between physical and virtual actually enables us to further explore our self-perception and selves as a concept.

The malleable nature of virtual presentations, compared to physical ones, gives a user full control over how they are manifested in virtual worlds. Because physical reality lacks this underpinning sense of misrepresentation, communication and interaction in virtual environments differ markedly.

I then explore some potential implications of misrepresentation in virtual environments, such as freedom in self-perception, new tools and abilities for presenting oneself differently, and how the definition of 'genuine' communication changes in the virtual landscape.

Human communication, built around a mix of social, cultural, and physical complexities, has been one of the biggest issues that VR technology developers have been working to address in order to improve the quality of VR social interaction. One major aspect lacking in VR avatar communication is the presentation of emotion, since emotion involves specific speech patterns, facial expressions, and gestures that are currently impossible to fully replicate in the virtual world. While VR technology is currently unable to fully mimic real-life communication, emotion is so ingrained in our biology that even imperfect VR-facilitated social interaction can foster positive emotions and reduce negative ones. VR's effect on emotion can carry over to the real world and can therefore lead to new forms of psychotherapy, especially for those suffering from socially based psychological disorders.

Virtual reality musical instruments (VRMIs) have great potential as a new medium of music-making, specifically when it comes to providing musical experiences beyond those afforded by the real world. As VRMIs are developed, haptic feedback (relating to touch sensations) should be a primary focus, because physical touch is critical to playing real instruments and haptic feedback can generally enhance VR experiences. When it comes to the different types and applications of haptic feedback, I believe that abstract (non-realistic) haptic feedback should be pursued in VRMIs rather than concrete (realistic) haptic feedback. Abstract haptic feedback contributes more to providing musical experiences beyond those afforded by the real world, since it can be applied to new, abstract VRMIs and it can provide performance-related feedback.
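As a loose illustration of the abstract/concrete distinction, the sketch below encodes a performance-related quantity (how far a note lands from the beat) as a vibration cue, rather than trying to reproduce an instrument's physical feel. The function name, error threshold, and mapping are hypothetical choices for this sketch, not part of any existing VRMI.

```python
# Hypothetical sketch contrasting abstract vs. concrete haptic feedback in a VRMI.
# "Concrete" feedback would try to reproduce an instrument's physical feel;
# "abstract" feedback instead encodes performance information (here, timing
# accuracy) as a vibration cue. All thresholds are illustrative assumptions.

def abstract_haptic_cue(timing_error_ms: float, max_error_ms: float = 100.0) -> float:
    """Map how far a note was from the beat to a vibration amplitude in [0, 1].

    A perfectly timed note produces no vibration; larger errors produce
    proportionally stronger pulses, giving the player feedback with no
    real-world acoustic counterpart.
    """
    error = min(abs(timing_error_ms), max_error_ms)
    return error / max_error_ms


if __name__ == "__main__":
    for err in (0.0, 25.0, 80.0, 150.0):
        print(f"timing error {err:6.1f} ms -> vibration amplitude {abstract_haptic_cue(err):.2f}")
```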

Social Implications and the Ethics of Virtual Reality

With the emergence and rapid growth of virtual reality technology and machine intelligence, the questions surrounding the ethics and morality of these devices are more prominent than ever. Thus, it is imperative that we seek to understand how ethical frameworks can be applied to their future development and regulation.

In this section, both Brendon Bellevue and Dylan Maloy research the positive and negative externalities that virtual reality and artificial intelligence potentially present. Specifically, Dylan researches the effects of implicit bias on the accuracy of facial classifiers, presenting open-sourcing in conjunction with "de-biasing" algorithms as a means of mitigation. Meanwhile, Brendon researches the benefits of virtual reality for improving ethical and moral decisions in the online realm.

The reciprocal standard of "Treat others the way you want to be treated" is a common rule that is vital to maintaining human morality. Although this golden rule is typically justified and upheld when making decisions in the real world, this changes when it comes to moral standards in the online realm. I have always been interested in how humans interact in the digital world, and I have noticed the increasing amount of toxicity on certain online platforms. This toxicity raises concerns for the virtual reality space. How will social interactions change in virtual reality? Will moral standards be upheld, or will toxicity become even more of an issue? With virtual reality added to our digital-age discussion, questions about the bounds of moral behavior are being contested anew. In this paper, I make the case that virtual reality can improve social standards, considering how it could be a solution for remembering the golden rule. Virtual reality can serve as a solution to toxic behavior in the online realm because it offers a greater sense of embodiment of real-world characteristics, leading to different moral and social standards when entering a virtual space, though there are also potential negative externalities.

Artificial intelligence began in the 1950s with Alan Turing's question "Can Machines Think?" Seven decades later, its impressive growth reflects the hope that it will enhance the human experience. Skeptics raise concerns that it will invoke chaos and take away from the essence of humanity. At the root of this ethical concern is the reality that these algorithms are being developed and implemented by imperfect humans, whose flaws can then be amplified and perpetuated by the technology. In response, many believe that the solution to these dangers must come with a proper regulatory foundation. The article examines the history and ethical problems that may arise when we allow machines to make moral decisions with ethical consequences, specifically exploring the problem of implicit bias in AI through the lens of modern image classifiers. It traces the root of bias to two sources: biases that are a product of a poorly modeled dataset and those that are a product of the environment in which the algorithm is developed. The work draws evidence from the 2018 Gender Shades study and commentary from experts Daphne Koller and Timnit Gebru, concluding that prominent firms in the facial recognition industry display a lack of knowledge and care regarding such issues, often forgoing sound, qualitative results in favor of higher overall accuracy. As a result of these practices, such algorithms often perform poorly on underrepresented demographics, further reinforcing systemic and social disparities. Oversight and regulation of the industry have been offered as a solution to the problem of implicit bias. The study offers open-sourcing and algorithm "de-biasing" systems as a means to this end, presenting the Linux kernel and MIT's "de-biasing" algorithm as successful examples. To achieve the best result possible, it is imperative that AI technologies have a solid moral and ethical foundation; thus, we must do everything we can to eliminate current ethical issues such as bias from that foundation, creating an equitable and dependable blueprint for future AI systems.
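To make the auditing idea concrete, the sketch below shows a Gender Shades-style check in miniature: computing a classifier's accuracy separately for each demographic subgroup, so that disparities hidden by a single overall accuracy number become visible. The tiny set of prediction records and group labels is invented purely for illustration; it is not data from the 2018 study.

```python
# Illustrative sketch of a per-subgroup accuracy audit, in the spirit of the
# Gender Shades study: one overall accuracy figure can hide large disparities
# between demographic groups. The dataset below is invented for demonstration.
from collections import defaultdict

# (true_label, predicted_label, demographic_group) -- hypothetical audit records
records = [
    ("female", "female", "darker-skinned female"),
    ("female", "male",   "darker-skinned female"),
    ("female", "male",   "darker-skinned female"),
    ("male",   "male",   "darker-skinned male"),
    ("male",   "male",   "darker-skinned male"),
    ("female", "female", "lighter-skinned female"),
    ("female", "female", "lighter-skinned female"),
    ("male",   "male",   "lighter-skinned male"),
    ("male",   "male",   "lighter-skinned male"),
    ("male",   "male",   "lighter-skinned male"),
]

totals = defaultdict(int)
correct = defaultdict(int)
for true_label, predicted, group in records:
    totals[group] += 1
    correct[group] += int(true_label == predicted)

overall = sum(correct.values()) / len(records)
print(f"Overall accuracy: {overall:.0%}")
for group in sorted(totals):
    print(f"  {group:<25} {correct[group] / totals[group]:.0%} ({totals[group]} samples)")
```

An audit in this spirit reports subgroup accuracies alongside the overall figure; a "de-biasing" step, whatever form it takes, would then be judged by whether those subgroup numbers converge rather than by overall accuracy alone.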