On March 14, 2020, the Los Angeles Times published an article titled “The Week that Changed Everything – for Now” (Lopez 2020). Our personal lives were disrupted, and so were our work lives. What changed everything for our work? Social Distancing, Sheltering in Place, Sheltering at Home, Quarantine, Stay the F*ck at Home. Whatever you call it, for most knowledge workers it means virtual work. While many people have worked virtually for years, suddenly almost all collaborative work came to be conducted using technology over a network. Since you’re reading this, you likely know the technology well: Microsoft Teams, Slack, FlowDock, Zoom, Webex, Google Meet, and many more.
For my day job, I manage 150 designers, researchers, developers, and other specialists who work in software product development. In one week, we went from optimizing our work practices for co-location and face-to-face collaboration to optimizing for virtual collaboration. Our BCE (“Before COVID-19 was Everywhere”) strategy for working emphasized collaborative design studios, team workshops, field research, and face-to-face training. In one week, everything changed.
One of the other hats I wear is as a university instructor, teaching human-computer interaction (HCI). Ironically, the week that everything changed, I was teaching a module on “Social Computing and Distributed Cognition.” Social Computing is simply group collaboration of any kind through computers. Distributed Cognition proposes that in social computing environments, cognition and knowledge are “distributed across objects, individuals, artefacts, and tools” rather than confined to an individual (learning-theories.com 2020). The theories, models, and frameworks related to these HCI concepts have direct and even profound application to understanding and designing better computing experiences for a suddenly, completely virtual world.
What’s the Problem?
The problem is that in a virtual world, cognition (i.e., information, including data and decisions) is more heavily distributed than ever before. Work is distributed across time and space, and cognition is embodied not just in workers far apart from one another interacting asynchronously or synchronously, but in the collaborative technology itself. Work isn’t done strictly in verbal conversations or together at whiteboards. Rather, it is intermediated by technology across literal networks. This intermediation creates potential breakdowns and inefficiencies in all kinds of work. That alone is bad enough, but we typically solve for these breakdowns and inefficiencies by creating “Frankenstein” solutions: incremental, fragmented, piecemeal fixes. The way to avoid Frankenstein solutions is systems thinking, the philosophical approach of designing a system by considering it holistically. Applying systems thinking together with HCI knowledge can improve our systems, and that is something we all benefit from.
Understanding Group Collaboration and Computing
What are the core HCI concepts related to group collaboration and computing? They are:
- Activity Theory
- Situated Action Theory
- Social Cognition
- Distributed Cognition
- Accessibility
- Affective Computing
I’d encourage you to study each concept in detail, but here they are in brief.
Activity Theory: Activity theory is a foundational concept in psychology, dating back to Russian psychological theorists of the 1920s. It laid the groundwork for subsequent theories around social and distributed cognition in HCI. Activity Theory posits that “human use of technology can only be understood in the context of purposeful, mediated, and developing interaction between active subjects and the world (that is, ‘objects’)” (Kaptelinin and Nardi 2012). The general idea is that actors in a system (i.e., subjects) have relationships with things that exist in the world (i.e., objects), and that the relationship is based on activities (i.e., interaction). Furthermore, the activities themselves transform both subjects and objects. So, for example, cows (subject) eat (activity) grass (object), and each is transformed in a meaningful way. In the context of HCI, users of a system are the actors and computers are the objects. Rather than a merely “transactional” interaction, subjects build a relationship with objects in a manner akin to human-to-human relationships. This is particularly true with technology. As Byron Reeves and Clifford Nass argue in their book The Media Equation, “Individuals’ interactions with computers, television, and new media are fundamentally social and natural, just like interactions in real life” (Reeves and Nass 1996). While that may seem improbable and abstract, think about your own psychological attachment to your mobile phone, which can actually be measured through a Mobile Attachment Scale (MAS) (Krauss Whitbourne 2016). And if you don’t believe you have an attachment, try an experiment and put your phone in a drawer for an entire weekend! Beyond attachment, working with objects changes our actual cognitive processes. I write differently in a word processor than I do with pen and paper. We change the tools, and the tools change us.
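The subject-activity-object triad can be sketched in code. The sketch below is purely illustrative and not from the activity theory literature; every class and function name is invented. What it captures is the claim that an activity transforms both the subject and the object:

```python
# Illustrative sketch of activity theory's subject-activity-object triad.
# All names here are invented for illustration; the theory itself is not code.

from dataclasses import dataclass, field


@dataclass
class Subject:
    """An actor in the system (e.g., a user)."""
    name: str
    skills: set = field(default_factory=set)


@dataclass
class Object:
    """A thing in the world the actor interacts with (e.g., a computer)."""
    name: str
    state: str = "untouched"


def activity(subject: Subject, verb: str, obj: Object) -> None:
    """A mediated interaction that transforms BOTH subject and object."""
    obj.state = f"{verb}ed by {subject.name}"  # the object is changed...
    subject.skills.add(verb)                   # ...and so is the subject.


writer = Subject("author")
draft = Object("draft")
activity(writer, "edit", draft)
print(draft.state)    # → edited by author (the object transformed)
print(writer.skills)  # → {'edit'}         (the subject transformed)
```

The point of the toy is the mutual transformation: after the activity, neither the subject nor the object is what it was before.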
Situated Action Theory: Situated action theory was posited by Lucy Suchman in the book Plans and Situated Actions (Suchman 1987). Suchman was a researcher at Xerox PARC, which has a special place in HCI history as the birthplace of the Graphical User Interface (GUI), a revolution in computing that led us away from character-based user interfaces like DOS. Situated action theory claims that interaction (i.e., activity) can only be fully understood by knowing the context of the interaction between subject and object. Here, context is a term of art, often called the “context of use,” and refers primarily to the physical environment in which the activity takes place. As an example, if you were redesigning a call center application, you might form one understanding of it considered outside its context of use and quite another considered within it. The difference becomes apparent during “contextual inquiry” (i.e., field studies), when you observe users not only interacting with the software, but also using sticky notes and reference books to do their work, all while interacting with customers and fielding interruptions from other software, telephones, and co-workers. This is why it becomes very important to understand cognition “in the wild,” a phrase coined by cognitive scientist and anthropologist Edwin Hutchins (Hutchins 1995). The practical application of this idea was popularized by Hugh Beyer and Karen Holtzblatt in their body of work around contextual design, documented thoroughly in their book Contextual Design and subsequent works (Beyer and Holtzblatt 1998).
Social Cognition: Social cognition is the idea that most activity or work is conducted not just by an individual, but by individuals as part of a group. Social interaction thus shapes human behavior and the use of a system, and collaboration and teamwork are vital to the outcome of the work. This applies to any system used by more than one user, in settings ranging from software development to social media to online gaming. There are important “interdependencies between the social and technical aspects of any system that is being developed or modified” (Ritter et al. 2014).
Working as a team requires coordination and communication. In addition, the learnability of a system, and learning in a general sense, apply to the analysis of social cognition. Each individual engages with the system at their own level of expertise and learns as they continue to work in it. Furthermore, the group as an entity learns and matures as it works in the system. Time, space, attitudes, and culture all shape the interaction and the work. All of these considerations affect individual and group decision-making and performance. So we must design systems not just for how a person works, but for how people work together.
Distributed Cognition: Distributed cognition (sometimes referred to in HCI as “DCog”) is closely related to social cognition and originated with Edwin Hutchins’ work. It builds on social cognition, stating that information, knowledge, and decisions in a socio-technical system are embodied within the technical artifacts (i.e., objects) in the system as well as within the minds of all users (i.e., subjects). It provides a theoretical framework for understanding and optimizing the “extended mind” that develops in such systems, introducing the notion that computation transforms information not just literally in a computer, but in the minds of both individuals and groups (Dillenbourg and Self 1992). It is a complex theory that in many ways ties the other theories together. While it may seem esoteric, it is very useful for understanding human-computer interaction in practical ways. Consider human memory. When we fail to remember the phone number of someone close to us because we have it saved as a “favorite” in our mobile phone, we are distributing cognition and effectively “outsourcing” our memory to a computer. The same happens when we use a GPS to drive somewhere we go frequently. In decades past we would have remembered that information ourselves, because we had to. Now we have the luxury of freeing up our human computational resources by delegating to computers.
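As a toy illustration of the phone-favorites example (all names invented, not an established model from the DCog literature), the “person plus artifact” memory system might be sketched like this:

```python
# Illustrative sketch: memory "distributed" between a person and an artifact.
# Class and function names are invented for illustration; DCog is a theory,
# not an API.

class Person:
    def __init__(self):
        self.memory = {}     # what the individual actually retains


class Phone:
    def __init__(self):
        self.favorites = {}  # cognition embodied in the artifact


def recall(person: Person, phone: Phone, name: str):
    """The functional 'system' that remembers is person + artifact combined."""
    return person.memory.get(name) or phone.favorites.get(name)


me, my_phone = Person(), Phone()
my_phone.favorites["mom"] = "555-0100"  # "outsourced" to the artifact

# The combined system recalls what the individual alone cannot:
print(recall(me, my_phone, "mom"))  # → 555-0100
print(me.memory)                    # → {} (the individual retains nothing)
```

The unit of analysis in distributed cognition is `recall`, the joint system, rather than either `Person` or `Phone` on its own.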
Accessibility: Accessibility is an approach to ensuring that “technologies are designed and developed so that people with disabilities can use them” (The World Wide Web Consortium (W3C) 2020). This includes cognitive or learning, neurological, hearing, speech, visual, motor, and other physical disabilities. Designing accessible digital products means acknowledging that many users rely on assistive technologies, such as screen readers and screen magnifiers. In addition, consider the notion of “universal design,” in which accessibility best practices also benefit all users who “access content in a situation where their eyes, ears, or hands are busy (e.g. driving to work, working in a noisy environment),” who are using technology in atypical situations, or who are using alternative or older hardware and software.
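As one small, concrete example of building accessibility awareness into a workflow, the sketch below uses Python’s standard-library HTML parser to flag `<img>` tags that lack alt text, which is what a screen reader depends on. It is a toy lint, not a substitute for real audit tooling or the W3C’s guidance:

```python
# Illustrative sketch: a tiny accessibility lint that flags <img> tags
# missing alt text, using only Python's standard library.

from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # An absent or empty alt attribute leaves screen-reader users
            # with nothing (note: empty alt IS valid for purely decorative
            # images, which this toy does not distinguish).
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "<no src>"))


checker = AltTextChecker()
checker.feed('<img src="chart.png"><img src="logo.png" alt="Company logo">')
print(checker.missing)  # → ['chart.png']
```

Checks like this catch only a narrow slice of accessibility problems; most (keyboard navigation, contrast, focus order) require testing with actual assistive technologies and users.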
Affective Computing: Affective computing is the study of emotions and affect as motivating and behavioral concerns in the use of computing systems. Psychologists accept that “it is impossible for people to have a thought or perform an action without engaging their emotional system” (Nass et al. 2005). We are not purely cognitive beings; emotion and cognition cannot be separated. Humans have emotional relationships with systems and with one another that need to be understood and respected in designing social computing systems. One key concept in affective computing is reactivity, which recognizes that there are physiological and psychological responses to stimuli, including computer use, and that these responses affect human attention, information processing, and decision-making. These effects can be positive or negative and must be understood in order to optimize user engagement, productivity, and satisfaction with a system.
So What? And Six Questions to Answer…
So what? Is this going to be on the test? Is this all just academic fluff? What’s the point? The point of understanding all these theories, and reflecting on how they converge, is that we can better design systems for a virtual world.
In a social computing context, designers and engineers need to think about the whole system, or system of systems, that they are designing for and working within, not just a piece of it and not just a single user persona or archetype. This applies to system design of all kinds, but especially to the creation of digital products. It also applies to the immediate challenge of living in lockdown, where we must refactor our collaborative systems for virtual teamwork: selecting the tools we use, defining the processes for our work practices, and evolving our social protocols and norms.
A system is made of many different actors, or individuals, working together in one or more subsystems. Their cognitive effort and work products are distributed among one another and collected both in the group of actors and in the inanimate elements of the system itself. Each actor learns how to work within the system at their own pace, and the group learns teamwork collectively. Each actor also has their own abilities and emotional experiences that affect their behavior and collaboration. So, to create better systems, ask yourself questions that will lead you to better design decisions.
- What does the whole system look like? What are the elements of the system? Who are all the actors in the system? What are all the relationships between elements and actors, including all user types? Where does the information live? How is it transformed?
- What are the goals and tasks of all user types? All work is a process and can be broken down into tasks. People have goals and do work not for the sake of doing tasks, but for accomplishing goals. Understanding these things and the context of the user’s work is fundamental to human centered design.
- What are the breakdowns within collaboration across the system? What are the potential breakdowns in the system? What constraints exist? What are the implications of critical “hand offs” of work and communication?
- How will users learn within the system, both individually and collectively? What do the users know and how will they learn, both individually and as a group? How can you design elements of the system to aid learnability and mature collaboration?
- What is the range of user abilities and disabilities that needs to be considered? Not all actors in a system have the same abilities. What are the accessibility needs and potential pitfalls? How can universal design be employed to benefit all users of the system?
- How will the likely emotions of users impact behavior within the system? How and why will users engage in the system? How can you motivate them? What are the likely emotions and behaviors expected due to the emotional impact of the system? Are those the emotions we want them to have?
Answering these questions gives us a better understanding of the problem space. Having a better understanding of the problem space allows us to create better systems by addressing impediments, respecting constraints, and optimizing for human behaviors that we can reasonably predict based on cognitive science.
This is never more important than in a purely virtual environment. The week that changed everything didn’t change it just for now. The situation in the world is fluid but we’re not going back to normal. The genie is out of the bottle. This will be a new era of computing and we will continue to be more virtual than ever before. We need to use what we know about how humans use computers to better design the complex technology systems needed to thrive in the new normal.
References
Beyer, H., & Holtzblatt, K. (1998). Contextual Design: Defining Customer-Centered Systems. San Francisco: Morgan Kaufmann.
Dillenbourg, P., Self, J.A. A computational approach to socially distributed cognition. Eur J Psychol Educ 7, 353 (1992). https://doi.org/10.1007/BF03172899
Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.
Kaptelinin, Victor and Nardi, Bonnie A. (2012): Activity Theory in HCI: Fundamentals and Reflections. Morgan and Claypool.
Krauss Whitbourne, Susan. “This Is Why We Can't Put Down Our Phones.” Psychology Today. Posted Sep 17, 2016. https://www.psychologytoday.com/us/blog/fulfillment-any-age/201609/is-why-we-cant-put-down-our-phones
Learning-theories.com. “Distributed Cognition (DCog).” Accessed April 16, 2020. https://www.learning-theories.com/distributed-cognition-dcog.html
Lopez, Steve. “Coronavirus and the week that changed everything — for now.” Latimes.com. Los Angeles Times. Published March 14, 2020. https://www.latimes.com/california/story/2020-03-14/lopez-covid19-coronavirus-week-changes
Nass, C., Jonsson, I.-M., Harris, H., Reaves, B., Endo, J., Brave, S., & Takayama, L. (2005). Improving automotive safety by pairing driver emotion and car voice emotion. In Proceedings of the ACM CHI 2005 Conference on Human Factors in Computing Systems (pp. 1973–1976). http://doi.acm.org/10.1145/1056808.1057070
Reeves, B., & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.
Ritter, F. E., Baxter, G. D., & Churchill, E. F. (2014). Foundations for Designing User-Centered Systems: What System Designers Need to Know about People. London: Springer-Verlag.
Suchman, L.A. Plans and situated actions: The problem of human-machine communication. Cambridge, U.K.: Cambridge University Press. 1987.
World Wide Web Consortium (W3C). “Introduction to Web Accessibility.” w3.org. Accessed April 16, 2020. https://www.w3.org/WAI/fundamentals/accessibility-intro/#what