As a designer, I’ve never been totally comfortable with referring to people as “users”. I find the term unethical, since it minimizes people’s individuality and sense of agency, and I believe it is obsolete: it is rooted in a past when the connection between a person using a computer and the computer itself was clear, which is no longer the case in the modern age.
Labelling people as “users” is inherently dehumanizing and reductive. It denies people their complexity and reduces them to a group of quasi-automatons whose only purpose is to “use” the product in front of them, as if utilization of the product were the ultimate goal. It makes us lazy as designers: we fall into the habit of seeing people only as consumers of a product, as endpoints of interaction, when we must force ourselves to see the context and circumstances of people’s lives as well.
We have already seen the social costs of this widespread depersonalization and deindividualization: most notably in the trust violations and manipulation of people by companies like Facebook, and in the widespread Internet-based tracking through which Google and others amass vast troves of personal data to serve advertising. But this applies at the small scale as well.
Stripping away people’s humanity can enable unethical behaviour, both in you as the creator and in those who use the product itself. Referring to people in vague terms blurs the line between what is good or permissible and what is bad or off-limits in actions that affect them, and can lead to outright objectification.
How people are framed changes how we treat them, and to have the return to humanness in technology that I feel we need, we have to ask ourselves: if the consequences of what we make don’t elicit any sort of compassion or moral response, what good does it do?
If people are seen as just data values or endpoints of interaction, it’s doubtful you’ll ask “what’s the harm?” when manipulating them. It’s the difference between “data mining users and their input” and “surveilling people and their behaviour”. We must frame the people who comprise a user base in human terms to see clearly what something really is.
There has been a titanic shift in how we use computers; it is no longer a simple back-and-forth communication: as we use software, the software also uses us. The relationship that used to be just you and your computer has ballooned to include countless other providers of services and software, many of which have become integral to our lives, at a scale well beyond anything once imagined.
With huge scale came a minimizing of the human aspect and the value of individual people: “What are 100 users out of millions, or 50 thousand out of 2 billion?” But it’s here that the cost of “users” is most apparent. The responsibility one has to people is now astronomical, which means one must not fall into the habit of deindividualization and must remain aware that these are still people.
Though many of the major software and service providers present a facade of openness, any actual knowledge of how their products are designed or how they affect us is deeply obfuscated, buried in lengthy “user agreements” (there’s some more dehumanizing language) or executed in secret. The reality is that many of the things we use daily are engineered to maximize their own financial survival, not necessarily the social good or people’s privacy, rights, or well-being. This means many products are designed to keep you engaged, regardless of any personal detriment or ethical breach.
To change how the things we design impact people, and to avoid the potential for misuse, we have to shift our perspective away from the product level and toward the human and societal level, and be completely open with people.
Often in design we claim to “put the user first”, going so far as to define whole fields around “user experience” (fields that are deeply personal yet defined in impersonal terms), but in that I still see the crucial flaw: a lack of humanness. So I think we have to shift our language to make us better designers.
Now, I can’t wave a magic wand and move the industry toward a human-focused design approach; I can only advocate for more humanistic, more ethical design thinking. We have to build technology that respects the rights, dignity, and experiences of human beings, and that begins with calling them people.