Learnings from the Virtually Human Working Group
The question of how technology shapes us, and is shaped by us, is complex. For some, this includes issues related to the interplay between technology and our individual mental health and psychological states. For others, the concerns center on access, digital and technical literacy, algorithmic bias, and trust. Coupled with a global pandemic and a social reckoning over the country’s racist history, these elements are now front and center as researchers and the general public alike critique our digital experiences. Our culture’s dependency on technology to work, eat, learn, govern, be entertained, and stay in touch underscores that digitally mediated experiences already profoundly shape our understanding of what it means to be human.
The scenarios are vast. Whether it’s via comment sections, video chat, AI agents, or newsfeed content, how the mediums through which we interact shape human-to-human interactions remains an open research question. What we communicate and how we do so is a performative expression of how we see ourselves and others around us. It is becoming clearer that how we talk about, engage with, and design technology must better align with our needs as humans, not just as end users. Yet, as a collective (from industry to academia to the public), we lack the necessary language to fully describe our modern relationship with technology. As a result, we often fall back on binary differentiation between online and offline experiences.
To explore this deeply, Aspen Digital, with support from Pivotal Ventures, sought to reimagine how technology can shape the human experience and our sense of place. The Virtually Human Working Group, composed of 24 thought-leaders from industry, civil society, and academia, came together to examine how we can better understand the cumulative effects on our human connections of continuously and ubiquitously interacting with technology. The group represents a diverse set of thinkers from various disciplines, such as developmental psychology, sociology, and computer science, and includes tech industry representatives from trust and safety teams and research labs. Each brought to the discussion decades of experience exploring topics like social connection, loneliness, youth and tech, and disability justice.
The group met virtually every quarter and was charged with the following:
- Establish a baseline understanding of technology’s impact on a person’s ability to connect with themself and others.
- Examine and critique the various methodologies and metrics used to understand these interactions.
- Formulate a common understanding and language among researchers in both industry and academia.
- Develop a repository of best practices in research methods and frameworks for evaluation.
Over the course of the year, members exchanged ideas and research. External speakers were invited to address and engage with the group on various topics, from the neuroplasticity of well-being to social connection, to adult friendships, to youth and disability. The following offers key insights from these discussions.
The challenge of well-being in technology
From the beginning, members of the group struggled with the concept of well-being as a constructive framework for evaluating the impact of technology on both the individual and the collective. Well-being encompasses multiple layers and various inputs. According to one member, its “ambiguous definition is harmful”—often being misinterpreted as a static state as opposed to a process or journey that reflects multiple inputs, such as mental health, one’s environment, and socio-political occurrences.
Richard Davidson, Founder & Director of the Center for Healthy Minds at the University of Wisconsin-Madison, shared an example: the average adult spends 47% of their time distracted from the task at hand, interrupting both declarative (learned facts and information) and procedural (skills acquired through practice) knowledge. According to Davidson, there is neuroscientific evidence that well-being is rooted in specific brain circuits that are pliable and can be modified through training. Davidson urged members to consider four components that may impact well-being and resilience: awareness (e.g., mindfulness); connection (kindness); insight (curiosity); and purpose (meaningfulness).
Building on Davidson’s comments, members debated the need to shift focus from the well-being of an individual to relationships between people. This, as one member noted, is how the industry and research institutions need to evaluate social connection. To learn more, Jinjoo Han and Andrew Nalani of NYU’s Listening Project shared their research in trying to understand and unpack relational well-being. In their findings, Han and Nalani point to factors such as interpersonal curiosity as a condition of social connectedness. In other words, the more an individual practices curiosity, the more likely they are to exhibit behaviors such as empathy and perceived common humanity.
For this group, the exercise of defining well-being, particularly as it relates to technology, felt unresolved and circular in logic. As a framework, well-being is powerful in illuminating the many different layers that constitute the human condition. Yet, some would argue that well-being is simultaneously ineffective at isolating causal effects. We see this lack of a shared definition of well-being ripple across discussions on the design, development, and accountability of technologies. This is further complicated by current systems, political or technological, that evolve more quickly than researchers are able to study them.
The narrative of measurement
A definition begets measurement—at least that is the hope. But when a shared set of definitions is absent, how can we confidently know what we’re measuring and whether it is accurate? Members wrestled with a variety of questions on metrics, measurement, and methodologies. As with the discussions on definitions, there is no definitive approach. Some voiced concerns about whether the narrative of measurement in well-being flattens its multi-dimensionality into oversimplified quantitative metrics. To mitigate this concern, several members suggested starting with the following questions: what are we trying to do with these measurements, and who will use them and for which purposes? “Without this clarification,” commented one member, “then they [the metrics] are meaningless.”
The group also spent time exploring different methodologies that may be useful for measuring well-being in technology specifically. For the most part, a mixed-methods approach is the gold standard. One member suggested finding intersections between quantitative longitudinal studies (e.g., time-series analysis) and qualitative longitudinal studies (e.g., diary studies). The group also discussed the need to shorten timelines for research and development. This means potentially finding creative ways to conduct real-time assessment and impact studies using qualitative methods, which traditionally take much longer to conduct.
Not surprisingly, a discussion on measurement and metrics often opens up more difficult questions on corporate motives and incentives. The working group had a robust discussion on the current tech market, which prioritizes profit over empathy, social connection, and other well-being indicators. As one member noted,
“Perhaps the most responsible principle for all tech developers is to explore the extent to which capitalism, as defined as pure pursuit of profit, continues to be the most effective path, or if we need new incentives that push for tech development in the context of delivering an improvement to the human condition.”
The group suggested beginning with the exercise of articulating current constraints on companies and figuring out how to reshape the industry within these constraints, focusing on processes. Another member posed,
“It’s not solely a capitalistic future. It’s more complex. It’s also a failure in creativity. How can tech companies unblock their imaginations in product development to say ‘no, we shouldn’t ship this’?”
Future design implications
Despite ongoing disagreement on definitions and metrics, the group produced several ideas for how to approach the design, build, and assessment of future technologies. First and foremost, the question of, “What is the purpose of technology?” must be reimagined. There is a limitation in focusing solely on technological determinism (i.e., the idea that technology drives society). Instead, several members discussed the need to understand technology as it relates to the systems through which it is built, including structural racism and the perpetuation of white supremacy. One recommended starting any re-thinking in the design or development of technology from an intersectional place in order to uncover challenges such as racism and misogyny while avoiding the binary that technology is either the problem or the solution.
Second, and relatedly, members brought to light a legacy of technology that is highly biased, where it determines who succeeds and to what degree. This is illustrated in research shared by Meryl Alper, Associate Professor of Communication Studies at Northeastern University, whose work centers on youth, disability, and digital media. As Alper shared, current technologies rarely take into account the lived experiences of youth with disabilities, their parents, and their caregivers. Examples from her research demonstrate how certain interventions are leveraged by autistic youth to engage in social spaces online, despite not being designed explicitly with their needs in mind. To address this, the group identified practices such as participatory design or community-led design, which bring the intended end user into the design process, as the gold standard. However, engaging at this level requires alignment across many different teams within a company, once again raising concerns about business incentives. Another strategy discussed is to create and deploy product features that empower users to determine their own experiences with the technology. For example, privacy settings on newsfeeds or mobile devices can allow users to personalize what data is shared publicly or with third parties.
Specifically focusing on social connection and engagement, the group explored a variety of different strategies. For instance, they suggested focusing on the trade-offs between low and high engagement. In some instances, digital platforms are designed for low engagement and little friction, otherwise known as mindless feed scrolling. Yet one member noted that it can also be exhausting to engage at a higher capacity with these technologies, thus decreasing well-being. The challenge then is to figure out how to reduce the friction of quality engagement. Miriam Kirmayer, Clinical Psychologist and Friendship Expert, reiterated the importance of quality over quantity. In her contribution to the group, Kirmayer emphasized the need for quality over frequency in adult friendships and explained that certain forms of communication can be considered cheap (e.g., the use of emojis by certain age groups). Kirmayer also added that power dynamics inherent in a platform (such as followers versus friends*) complicate reciprocity in connections, a key factor in defining the value of a friendship.
Concerns over data collection and use were also discussed in-depth. Group members voiced concerns over how to prevent and/or mitigate the unintended consequences of the use of data to customize and optimize user experiences. One member noted,
“If we are going to ask for consumer data, it needs to benefit them and it should be in their control.”
On the other hand, some members highlighted that completely disregarding the use of large data sets limits the potential for a positive outcome, such as more personalized medical diagnoses and/or education strategies. Of course, this cannot operate outside of the challenges outlined above related to economic structures and business motives.
From the beginning, the vision for this endeavor has been to examine our digitally mediated experiences and to cross-pollinate the many different perspectives on how exactly we can better engage with tech as humans. The hope was to develop a shared knowledge set and to surface linguistic connectors that will help galvanize powerful work already happening in this space, ultimately spurring further innovation and collaboration. In some ways, this was a success. New relationships were forged and an awareness of a community of like-minded, values-oriented stakeholders emerged. In addition, a shared lexicon began to percolate of what a human-oriented future with technology could and should look like. Despite differences around well-being, key words such as “intentionality,” “agency,” “control,” “reciprocity,” and “responsibility” echoed throughout the year.
In some ways, however, the group’s work merely scratched the surface. Aspen Digital recognizes the need for targeted, goal-oriented next steps. It hears the call for more directive, and potentially prescriptive, recommendations on how industry, academia, and the public can move beyond critique and dialogue. This may include developing industry guidelines, new approaches to product design, and/or creating public policy that holds bad actors accountable. Regardless of what form it takes, the next step is to engage and energize more people in this pursuit. Aspen looks forward to shepherding the next iteration of this effort, however it may evolve.
*According to Kirmayer, the term “follower” may not reflect a fully reciprocated relationship between two people–meaning that in some cases, only one person in the relationship chooses to acknowledge and engage in the relationship.