DisAssemble

Archive

This newsletter is your memory

You’re reading DisAssemble, a biweekly philosophy of tech newsletter aimed at those interested in creating better digital products.


Web Summit photo by Sportsfile (via Wikimedia Commons)
#41
January 3, 2021

We design our tools which design our jobs which design us. Or: why Excel sucks

You’re reading DisAssemble, a biweekly philosophy of tech newsletter aimed at those interested in creating better digital products.


Excel For Windows 3.0 Ad

I am repelled by Microsoft products.

#40
December 15, 2020

Why are affordances important? More questions with Jenny L. Davis

This is part of DisAssemble, a biweekly philosophy of tech newsletter aimed at those interested in creating better digital products.

In the last issue of DisAssemble, Jenny L. Davis, a social psychologist and technology theorist, answered questions about what affordances are.

In this second part of my interview with her, we focus on why affordances are important to those involved in designing and building tech.

She recently published the excellent How Artifacts Afford: The Power and Politics of Everyday Things. I strongly recommend you read it - her answers below offer a taste of what it spells out: a powerful and unique framework for why and how affordances matter.

#39
November 24, 2020

What are affordances? An interview with Jenny L. Davis

This is part of DisAssemble, a philosophy of tech newsletter aimed at those interested in digital products.


“When a man is tired of London, he is tired of life; for there is in London all that life can afford.”

— Samuel Johnson. 

#38
November 11, 2020

How to design yourself: a primer

This is part of DisAssemble, a philosophy of tech newsletter.

Strandbeest full walking animation

#37
October 25, 2020

Google Maps sucks | you out of thought

This is part of DisAssemble, a philosophy of tech newsletter.

#36
October 7, 2020

You're part of it, my friend

This is part of DisAssemble, a philosophy of tech newsletter.


The Fall of the Magician by Pieter van der Heyden
The Fall of the Magician
#35
September 11, 2020

Design, Phenomenology, Capitalism & sucking at newsletter titles

This is part of DisAssemble, a philosophy of tech newsletter.

In Defence of Marxism

In Capitalist Realism, cultural theorist and philosopher Mark Fisher noted that capitalism is:

#34
August 26, 2020

The Essentialism Pandemic

And I am whatever you say I am

If I wasn't, then why would I say I am?

In the paper, the news, every day I am

Radio won't even play my jam

#33
August 2, 2020

A little bit about this newsletter

We should be happy at how much we have to read. Reading should make one better. Yet writing on the web tends to take the form of surface-level explorations of the world, in that it concerns itself with the immediately factual. A play-by-play description; a trip report; a compare and contrast - these are the most common forms of writing. Writing doesn't even go by 'writing' on the web. It goes by 'content', an ugly word. It suggests a meaningless delivery packaged to fit within a pre-existing frame.

Writing about technology suffers the most. It is design advice based on simplistic laws or principles. It is assessments of technology based on practical application. It is snippets of quotes from blindsided experts. 

None of this is good enough. The bulk of writing - be it news, reviews, or advice - doesn't attempt to apply concepts from elsewhere to help us understand technologies and being as historical, as situated, as designed, as part of a framework that can be better understood with any number of conceptual lenses. Most of all, it doesn't explain how we can leverage these lenses to design and build technologies that change what we want to change.

Here's an article on how Google Docs is used by activists. It's an article with good 'content'. You should read it. Honestly. Its primary point is that Google Docs is easy to use; therefore, it's used by activists. That's interesting and informative.

But it doesn't discuss meaning: meaning for designers, meaning for the users, meaning for the reader. To do this you need conceptual lenses. 

Johan Redstrom and Heather Wiltse would call Google Docs a fluid assemblage - a symbolic material that is assembled on an as-needed basis. What does it mean that these dynamic assemblages are becoming a go-to resource for activists? How do the structure and meaning change based on the unique qualities (e.g. openness) of the medium? How is it coded or afforded for certain activities but not others?

The philosopher Peter-Paul Verbeek talks about multistability - the way that humans co-opt technology for our own use. Is that what is happening with Google Docs, and if so how does the medium disempower and empower this? What does it mean that this technology is not being used as intended? 

These are the types of questions this newsletter will wrestle with. It aims to flip the discussion table over to uncover the concepts beneath: all the stuff academics are talking about, all the stuff people have been thinking about for years but aren't applying to our lived technological world.

Ben Kraal recently started a newsletter called '1992'. Its aim is to examine academic papers from 1992 with the intention of applying them to UX practices today, because there is so much we can learn from that which has already been written. This newsletter will do something similar, but will take a wider stance — wider in terms of sources and wider in terms of application.

Key topics this newsletter will deal with include:

  • Materialism

  • Systems thinking

  • The 4Es of cognition (Embedded, Extended, Enacted, and Embodied)

  • Design thinking

  • Ethics

  • Quantification and qualification

  • Design and User Research

  • Futurism

  • Semiotics

  • Ecological perception

  • (Post) Phenomenology

  • Modernism, post-modernism

If you don't know what these concepts are, I will define them as part of my efforts to display how they have enormous impacts on how we design and use technology, and indeed how we assemble into and through technology. 

See you soon.

#32
July 27, 2020

Untangling the technological human

Mechanics bank arm-and-hammer emblem

Welcome to DisAssemble by me, Vikram Singh. I am a UX Designer, User Researcher, and writer based in London.

This is a weekly newsletter that untangles the technological human. I use the philosophy of technology and a variety of theoretical lenses to untangle this system so that this clarity can help us build a better world.

Sign up below if you’d like. I promise I won’t spam you.

Subscribe now

And, tell your friends!

#31
June 15, 2020

The Post-Covid March to Remote Worker Surveillance

via Claudio Shwarz

I run a Philosophy and Ethics in Technology salon in London. Its members are individuals who are involved in many different fields, but all have a special interest in technology. Each month we tackle issues and questions relating to technology. This month we discussed the topic:

“Watching Your Workers: How Surveillance Technology Can Change Remote Working”

Some insightful themes and solutions manifested themselves, which are worth sharing here.

A New Capacity for Spying

One of the things that is striking about changing the paradigm of work is that new ‘capacities’ occur. Managers can now easily spy (I won’t use quotes for that word!) on employees using a variety of methods — by tracking their typing, seeing their screens, or a plethora of other methods. This relates to an idea the philosopher Peter-Paul Verbeek discusses — that our relationship to our world changes not just through technological extension of existing abilities, but also because technology and society allow whole new behaviours and behaviour choices to appear. In this case, the opportunity to monitor employees.

In our salon we discussed how new capacities, engendered by the mixture of new social dynamics and technology, allow digital surveillance to happen. Social dynamics of trust, transparency, and habit have changed. And technology allows for surveillance. The dynamics of how someone is monitored, shaped by the type of work they do (work that seems easily quantified digitally) and the medium they use (i.e. a computer), are fundamentally different from how work was before digital technology.

In this new dynamic, questions around “can we?” become less relevant. The question becomes “should we?” — or, often, it is not questioned at all, it is just done.

An unpalatable frontier

Even within the umbrella of “should we” comes the question of palatability. Is worker surveillance palatable to the employer and employee? It shouldn’t be surprising that, for a myriad of reasons, this is unpalatable to the employee, but the effect on the employer can be a questionable one as well.

In an article we discussed, a NYT employee installed time- and screen-tracking software and asked his manager to use it to monitor him. The manager did indeed do that, but began to feel ‘icky’. This is a major issue — new capacities for technological action on the part of managers appear, and managers themselves have to confront their own ethical boundaries.

A depersonalised human

It’s not just that it’s icky, however; it’s that it is far easier to depersonalise the human at the other end of the computer. Our salon discussed how it is far easier to compress employees into a quantitative outlay of metrics with these technologies. Things like mouse movements and keyboard activity can be tracked. This is a dangerous precedent, as we noted that these aren’t reflective of work output. Designing, for example, may involve sketching on paper or just thinking, perhaps away from the computer. Coding may involve a lot of reading, which may be perceived as inactivity. Moreover, because data is intrinsically reductive, it is easy to fool, and would also likely be subject to abuse and corruption. We noted how easy it would be to create incentives to compete on these small metrics rather than on other, likely more qualitative outputs (team building, learning, etc.).

Indeed, one of the articles we read was about Taylorism, a managerial strategy created a century ago, which worked by:

“breaking down tasks into inputs, outputs, processes and procedures that can be mathematically analysed and transformed into recipes for efficient production.”

Needless to say, this resulted in people being treated like machines, with employers carefully timing each action and squeezing efficiency out of people by making them complete mindless tasks.

We felt that this ‘depersonalised human’ is a distinct danger with surveillance tech. Interestingly, some of us mentioned that we already feel like we are being depersonalised. Some of us mentioned calendars and standups being used for purposes they weren’t meant for — that is, as ‘evidence’ of productivity.

The new abnormal

And these existing, creeping forms of depersonalisation point to the problem of normalisation — the worry that if this behaviour becomes normalised, the ‘ickiness’ will dissipate. If these technologies are treated as something everyone uses, then people won’t feel as ‘icky’ using them, because this type of spying will come to feel natural. Indeed, treating people as digital objects or resources could begin to feel normal, as economic theory and Taylorism did in the non-digital world (e.g. ‘human resources’).

It follows that it’s only unnatural because it’s not currently how workers are treated online (at least, mostly), but there’s no reason why it couldn’t happen, especially with the precedent set by Taylorism.

We also discussed Jeremy Bentham’s panopticon, a prison in which prisoners’ cells were situated around a central hub of guards. The prisoners never knew if they were being watched. In a digital, globalised society we have become used to being watched. The philosopher Foucault used the panopticon as a metaphor for society. It wasn’t always normal to be observed, he noted; it’s only through the nation state and institutions supporting this behaviour that it became an expected state to be in.

Presidio Modelo, a panopticon

If surveillance tech is supported by institutions as remote work marches forward, then the abnormality of digital surveillance could become the norm.

A panopticon for the untrustworthy

A lack of trust from managers toward employees is certainly one way it could become normal. We discussed how employers are being pushed by some of these ‘time-tracking’ companies (such as Time Doctor), and even by the capitalist system at large, not to trust employees. We discussed the idea that this made little sense — people should be trusted, as, for one, they likely would not be at the company if they had no interest in contributing (at least for employees with employment mobility). Additionally, if someone isn’t working or contributing, it may be difficult to understand why this is the case, and surveillance may become the ‘go-to’ solution for deeper issues. This is a psychological, organisational and societal issue, which can be challenging to parse.

The underclass always loses

But of course, poorer people and those deemed ‘unskilled’ are almost always trusted less. We discussed how individuals who have more to lose, or who are treated as more disposable, will be less likely to protest against these surveillance methods. They can also often be viewed by those in power as grifters. Indeed, with remote working, as in many areas of society, they disproportionately lose under new systems of power.

So, how can we ensure that these themes don’t come about? We discussed some solutions, below.

Aggregate, don’t individualise

We felt any surveillance tech, if it must be used, shouldn’t be individualised in a way where individuals could be spied upon or surveilled in ways that do not account for their qualitative output. Methods such as screenshots or keystroke tracking are not only rife with ethical issues, but also ineffectual. Instead, tracking workers in aggregate to find key patterns is a useful way to understand how people behave, and what tools, methods or contexts may be useful for improving workers’ lives and how they can contribute to any organisation.

Champion workers’ rights

We thought ‘knowing your rights’ was a vital step in defending against this surveillance, yet workers’ rights over their work computers are sadly limited. In the UK, it’s perfectly fine for employers to monitor employees’ emails and web history. The company just needs to tell employees (and in fact it doesn’t always need to — EPA guidance allows for covert surveillance of employees). Current laws in the Data Protection Act are pathetically limited, with only ‘guidance’ suggested:

“If e-mails and/or internet access are, or are likely to be, monitored, consider, preferably using an impact assessment, whether the benefits justify the adverse impact.”

Given the issues discussed, and the onset of remote work, this is something that needs addressing.

Champion change

The only way these laws can change is through championing change. We discussed how just talking about it — whether in person or on the internet — is vital. Monitoring may come to seem normal, something that goes hand in hand with remote work, and challenging this narrative will require a great deal of discussion at all levels. Even challenging a surveillance paradigm through metaphors can help. Noting that conversations in offices are not monitored, even though the company owns the building, is a useful analogous framing.

Perhaps more than that: work computers are certainly company property, but what occurs on them on platforms that aren’t related to work should not be — computers are now far too intertwined in our lives, like infrastructure. In Changing Things: The Future of Objects in a Digital World, the authors liken digital content to a sort of ‘fluid assemblage’, assembled from a wide variety of technologies, systems and data into what the user sees on screen. How digital content is owned may need a conceptual change, and it’s likely that only by actively and loudly championing change can we make this happen.

P.S. If you’re interested in joining our Philosophy and Ethics in Tech Salon, email me at vikramsinghbc at gmail dot com! All are welcome!

#30
June 6, 2020

Facebook is already an arbiter of truth — it even creates truth

We get a warm and fuzzy feeling when we dwell on concepts like truth, love, and freedom. They seem so immutably transcendental — these concepts have no single physical correlate. Instead, we feel like we can point to them high above us as vague yet unchanging figures. Still, we strive to reach them — perfect forms for our capture (or our dismissal, if they are negative concepts). Plato, in his Theory of Forms, would argue that love has a perfect, unachievable form; all love in our world is a mere shadow.

Plato and Aristotle discussing something *more perfect*

#29
June 2, 2020

Perceiving and acting are forms of thought — product design needs to recognise this

Umberto Boccioni, 1913, Dynamism of a Cyclist

I spend my working days at a company that builds a social media management platform for charities. We recently conducted user testing on landing pages that advertised our product and kicked off our onboarding process. The idea was to explicitly ‘get across’ what our platform was like prior to having users sign up and actually use the platform. We wanted users to ‘get it’, and understand the advantages of our platform without actually having to use it first (as signing up can be a barrier for some people).

But in testing our advertising and landing pages, we received a lot of comments like:

“I wanted to know what the tool feels like”

“I just want to get to grips with it”

“I want to just have a bit of a play around”

People seemed to want a visceral experience with the tool. We tried clearly explaining what our platform was like in videos, descriptions, and images. We represented the platform and what it does in explicit detail. But it wasn’t enough. The participants had an almost indescribable urge for tangible experience, to know what each step of our tool felt like. They couldn’t put their finger on it, they just needed to use the platform.

Why do people feel like this? Why do people need to try out tools to ‘know’ them, even if they’ve seen them represented in explicit detail?

We like to think that we are in essence just brains floating ‘outside’ the world as impartial observers, with sensory apparatus like our eyes inputting data that we can process and act upon. We consider our cognition — our ability to understand — to be akin to computer processing.

So, when we talk about our cognition, we say things like “I need to process that”. We analogise the brain as hardware and thoughts as software, as though we are in essence an electronic machine. Importantly, we also consider thinking a linear sequence of perceiving, planning, doing, and interpreting, much like a computer program. We input data into our mind, process it, make a plan, then enact it, and interpret the results. You might call this a ‘computationalist’ theory of mind. Of course, it’s more than a theory, it’s a sociological metaphor. Metaphors are extremely powerful; the philosophers Lakoff and Johnson argue that we understand our world through metaphors.

Accordingly, a great many of the tools we use have been designed in such a way as to reflect this metaphor of our cognition.

But we are not computers.

The way we go about knowing the world is fundamentally different.

We have bodies. We evolved with bodies. We evolved with our environment.

As our brains are parts of our bodies, they evolved with the rest of our bodies, and alongside our environment as well. Our ability to think wasn’t ‘created’ and it certainly wasn’t ‘created’ with an end goal in mind, such as processing information.

Think of our cognition, then, as being embodied — as part of our bodies, as a thing that has a context, a materiality, and a history of development. This means our cognition isn’t just thinking with the brain, it’s a systematic whole that involves perceiving and acting in and on the world.

Our perception is linked to interpretation — seeing faces in clouds, not noticing changes (‘inattentional blindness’). Even basic things like recognising shapes, shadows, edges, movement — these are constructed as a perceptive act. We see the world not just subjectively, not just from a different angle than other people, but as a unique, on-the-fly construction. Our perception is attuned to interpret sensory input in a way that constructs meaning, based on past experience and on our biological evolution (we are attuned to recognise faces, for example). But we do not consciously think any of this out — rather, it is anticipatory, immediate, and implicit. Yet it is sensible to say that perception is part of cognition, in that it is part of how we enact our individualised sense of the world.

We use our actions to alter the world to help us think. We organise our world to help us remember where things are, or that we have to do something: a note by the door; all forks in the drawer by the fridge; clean clothes in that basket, not that one. Action reveals, organises and groups — it interacts with how we think about our world. Acting on the world can take the burden off our brain — and in doing so it becomes a cognitive act (the academic David Kirsch referred to these as ‘epistemic actions’ — actions intended to facilitate information processing rather than to achieve a pragmatic result).

These two elements — action and perception — are tied very closely to one another as well. The philosopher Merleau-Ponty gave the example of a blind man using a long stick to help him navigate his world through touch. The stick becomes ‘transparent’ to the man — he stops being aware of the stick as a separate object in space; instead his focus is on how the stick interacts with objects in space. Perception and action are intertwined in an act of cognition. The same is true of all objects we interact with when we use them as tools, as well as our bodies.

This man is not focusing on the stick, but on the feel of his surroundings via an embodied stick

So, let’s start a sentence that builds on this point:

Perceiving and acting are part of cognition.

Great. But it isn’t just that action and perception are a part of cognition; they are creative acts that feed back into themselves.

It’s perhaps easiest to understand this by comparing our cognition to a computer’s processing. You don’t plan your actions then enact them robotically the way a computer would. You just act, you just perceive — your actions aren’t analogous to explicitly thinking: “Now I’m going to look to my left; next, I’ll reach over with my left hand to grasp a magazine”. While we are aware to varying degrees of how our body is engaged with the world, we are to a greater degree reflecting on wants, desires, feelings, etc., and that output manifests as actions and perception. Ours is a generalised intent rather than a specific plan.

What’s more, as you act/perceive, the feedback from you doing it informs the next action/perception activity you undertake. Think how you explore what you are saying as you are saying it; when you are drawing, the act of drawing helps you to understand the shape and detail of the drawing as you are drawing it. Each action is an expression of cognition, of what you are thinking. Each act is a feedback loop that is inseparable from the next act. We do something and in that doing we learn more about what we are doing.

The anthropologist/philosopher Lambros Malafouris has argued that, in this way, cognition cannot be divided from our world: “material culture is potentially co-extensive and consubstantial with the mind”.

So, normally, our immediate actions aren’t explicit. They are responsive, instinctual, implicit activity — more of a vague intention than a plan. Much like Daniel Kahneman’s System 1 thinking, we act and perceive without carefully modelling each activity we are going to do, and then planning how each activity is going to ‘run’ on the world. We just perceive and act to help us create an understanding.

Kahneman and Tversky’s System 1 and System 2. Via Eva-Lotta Lamm

This gets very abstract in certain actions which seemingly have no relation to what we are thinking about. Think about gesturing — people think it’s a way of communicating, but that’s very often not the case. Blind people gesture, for example.

This is why my research participants earlier couldn’t specify exactly what they meant — it’s very challenging to express how the combination of action and perception can help you understand things. It’s an intuitive understanding that isn’t just about impartially observing how things work, but implicitly understanding a process or tool by conducting a sort of acting/perceiving loop upon it. And it’s worth noticing that this is different from ‘practice’ — practice is about improving on an existing knowledge base, not creating an initial experience of embodied knowledge.

Let’s update that sentence:

Perceiving and acting are an embodied part of our cognition that helps us intuitively create an implicit understanding of our world.

But of course, we can’t just create an understanding of the world out of nothing. Our world only allows for the explorations it affords. This idea was pioneered by JJ Gibson, who also coined the term ‘affordances’. An affordance, in his reckoning, simply meant a situation that enabled a possibility for action. A stick can be used to hit someone with, or to point with, or as a sensory tool for our previously mentioned blind friend. But different objects afford different actions better than others. Stairs afford stepping given their shape — you would be hard pressed to do something like lie down on them; a bed would afford that much more effectively. Affordances don’t even require our awareness: a hole can be used to hide in, but it can also be fallen into by the unaware.

The handle affords a specific type of grasping

Again, let’s update that sentence:

Perceiving and acting are an embodied part of our cognition that helps us intuitively create an implicit understanding of our world through affordances.

So, taking us back to our original question: we need to act and perceive to help us create an understanding of our world, and when we do, it’s often implicit action formed through generalised intentions rather than plans — intentions that can only play out through affordances. This is what my research participants wanted to do.

There’s a problem in all this however.

The problem is that computers and the software on them are designed for people who act like computers. Obviously this was worse in the past, but it still remains.

We still ask users to create mental models of information and interaction structures that they can’t possibly grasp without significant experience with our products. And people find it difficult, or at best laborious, to understand a situation that doesn’t reveal itself through the kind of embodied cognition discussed above. We force users to build representations and then make them navigate those representations in their mind to understand how an interaction would work. We force them to model it rather than generate implicit understanding through embodied cognition.

It’s much easier to define a structure that expects a person to linearly process concepts rationally into a whole than to apply concepts of intuitive understanding through perception/feedback loops, as I’ve discussed.

But the divide of the world into perceiving, thinking and doing is a false one, or at least false enough that it has harmed the efficacy of digital products. This division between perceiving, thinking and doing is an artefact of the society and culture we find ourselves in. There’s no reason it has to be this way. It’s just the computer metaphor.

To be fair, it can be very difficult to create embodied learning within the realm of digital products. HCI academic Paul Dourish touched on this in his book, Where the Action Is. He notes that we implicitly ‘couple’ with things in our world (like a hammer) to get things done through affordances, but it’s very difficult to parse how we ‘couple’ with digital technologies because of the many layers of abstraction. In this way, it can be difficult to parse where the embodied action ‘lies’.

Still, there is a lot we can do to allow for embodied understanding — so let’s remember our sentence and look at some examples of how to implement it:

Perceiving and acting are an embodied part of our cognition that helps us intuitively create an implicit understanding of our world through affordances.

Allow for guided doing

Computers and touchscreens are notoriously poor at providing clear affordances of action, given that screens are not tangible in any real sense, and are buried under layers of abstraction and interface. What I call ‘guided doing’ is the act of helping someone create an intuitive understanding. By gently guiding someone through an action we allow them to understand the situation and how they are embodied in it.

You can see this in product tours — ours here as an example:

We at Lightful created gentle, stepwise product tours that got users to take the steps to connect their social media accounts and create draft posts. While some users closed the tour, a good portion of our users continued through it. New users who went through the tour posted more using our platform by quite a large margin.

Product tours are not perfect because it’s not just an implicit action-response the user undertakes. Instead users are required to read and ascribe an embodied meaning of the action through words, rather than just through action. However, product tours help by normally blocking out parts of the screen, and focussing on a single step in a way where perceiving and acting are the key activities, rather than explicit thinking. The objective of product tours is not just ‘showing rather than telling’, it’s requiring users to practice actions, integrating intuitive, visceral understanding of the rhythms, affordances and feedback of the product.

At Lightful, we tried explaining our product, as though that would be sufficient — ‘if they can read about it then they understand it’, we thought. But this wasn’t nearly as effective as just getting someone to use the product in a way that embodied their understanding.

Words can be interpreted very differently. Semantics can’t communicate the implicit, embodied knowledge that embodied cognition brings. And this is vital for someone knowing and liking a product. When we got people to use our product with product tours the knowledge they received was unambiguous — there was an intuitive understanding framed by semantics.
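To make this concrete, here is a minimal sketch of the ‘guided doing’ mechanic described above. The element IDs, steps, and events are hypothetical stand-ins, not Lightful’s actual implementation; the essential move is that each step advances only when the user performs the action themselves:

```typescript
// A minimal sketch of a "guided doing" product tour (hypothetical markup).
// Each step names a target element, a short framing instruction, and the
// user action that completes it: the user must *do*, not just read.
interface TourStep {
  target: string;      // CSS selector for the element to spotlight
  instruction: string; // brief text, kept secondary to the action itself
  completeOn: string;  // DOM event that marks the step as done
}

const tour: TourStep[] = [
  { target: "#connect-account", instruction: "Connect a social media account", completeOn: "click" },
  { target: "#draft-post", instruction: "Write your first draft post", completeOn: "input" },
  { target: "#schedule", instruction: "Schedule it", completeOn: "click" },
];

function runTour(steps: TourStep[], onFinish: () => void): void {
  let i = 0;
  const next = () => {
    if (i >= steps.length) { onFinish(); return; }
    const step = steps[i++];
    const el = document.querySelector<HTMLElement>(step.target);
    if (!el) { next(); return; } // skip steps whose UI isn't present
    el.classList.add("tour-spotlight"); // CSS dims everything else
    el.setAttribute("data-instruction", step.instruction);
    // Advance only when the user performs the action themselves.
    el.addEventListener(step.completeOn, function handler() {
      el.removeEventListener(step.completeOn, handler);
      el.classList.remove("tour-spotlight");
      next();
    });
  };
  next();
}

runTour(tour, () => console.log("Tour complete"));
```

The design choice worth noting: the instruction frames the step, but only the user’s own click or keystroke completes it, so the knowledge is built by doing rather than by reading.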

Abstracted play

Abstracted play is the divorcing of the UI layer — the ‘noise’ — from the page to get the user to focus on what is relevant in a simplified, abstracted way.

You can see how Trello does this by creating a simple wireframe of their site and describing in simple words how to use their product. This is part of their onboarding process, in which people are still understanding the affordances.

Trello’s efforts bring affordances into clear view. The perceiving and acting become very simple. Our perception-action cycle isn’t overwhelmed, trying to make meaning and find affordances in a busy UI — it’s stripped back so the perception-action is straightforward.

What’s more, the user can see the result of their action in a highly visible manner. As they type, they see the names appear on the Trello columns to the right.

You might call this making the ‘system image’ clearer in Don Norman’s mental model structure.

However, we aren’t asking the user to understand the ‘system image’ explicitly. The perception/action loop is doing the work. Much like the blind man with the stick, the more ‘transparent’ you can make the correlation between the instrument and the effect, the better the embodied understanding will be.
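Stripped to its mechanics, this kind of abstracted play is little more than a tight input-to-preview loop. Here is a minimal sketch, with hypothetical element IDs rather than Trello’s actual markup:

```typescript
// Abstracted play as a bare perception-action loop (hypothetical markup):
// every keystroke in the simplified form is mirrored instantly into a
// preview, so the interface teaches itself through use, not explanation.
const input = document.querySelector<HTMLInputElement>("#list-name")!;
const preview = document.querySelector<HTMLElement>("#board-preview-column")!;

input.addEventListener("input", () => {
  // The user's action is reflected the instant it happens; no mental
  // model of "how boards work" has to be built in the head first.
  preview.textContent = input.value || "Your first list";
});
```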

Microinteractions

There are so many microinteractions that do nothing to give the user an indication of what is happening. Rather, they look flashy and flatter a visual designer’s ego. Sure, some of them add aesthetic flair, but many actually get in the way of an embodied understanding. Take a look over at Dribbble for some over-engineered animated microinteractions (I won’t place any here so as not to insult anyone).

Microinteractions should work as signifiers, affordances or feedback. Google’s Material Design shows how microinteractions can work as aspects of a larger system.

As the Material Design guidelines state:

“Motion focuses attention and maintains continuity, through subtle feedback and coherent transitions. As elements appear on screen, they transform and reorganize the environment, with interactions generating new transformations.”

Of course, Material Design isn’t a microinteraction; it’s more of a design system, but it contains a number of useful microinteractions. These include panels and drawers ‘swiping’ in and out. The user can interact and get immediate feedback, which then feeds into future actions.

The problem with Material Design is that it is not always clear what affords what. Can you swipe everything? How do things that slide offscreen reappear? Affordances, we remember, are possibilities for action.

The best microinteractions are those that are visible, have a clear affordance, and give clear feedback when interacted with. Scroll bars are so successful because they require only perceiving and acting to understand. If you didn’t know how scroll bars worked, you could intuit it through action and perception: the scroll bar moves as you go up and down the screen.
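As a sketch of a microinteraction in the same spirit, consider a reading-progress bar whose sole job is immediate, legible feedback on scrolling. The markup is hypothetical, and this is just one possible form:

```typescript
// A scroll-bar-like microinteraction (hypothetical #progress-bar element):
// visible, clearly tied to the page itself, and intuitable through
// action and perception alone; the bar moves exactly as you scroll.
const bar = document.querySelector<HTMLElement>("#progress-bar")!;

window.addEventListener("scroll", () => {
  const scrollable = document.documentElement.scrollHeight - window.innerHeight;
  const fraction = scrollable > 0 ? window.scrollY / scrollable : 0;
  // Feedback tracks the action one-to-one, so its meaning can be
  // intuited through use rather than explained in words.
  bar.style.width = `${(fraction * 100).toFixed(1)}%`;
});
```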

Don’t require people to build a model of how things work

In the past 10 years or so, new digital creative tools have displaced existing legacy tools. Adobe’s and Microsoft’s tools, and many other older legacy software packages, have been pushed from the spotlight. Sketch and Figma have replaced Illustrator and Photoshop in many areas. Keynote and Google Slides have shown PowerPoint the door. And so on.

Why?

Legacy tools have an underlying structure that belies how they see the user: as a computer, as a non-embodied cognitive agent.

These tools have many modes, invisible to the user. They don’t clearly reveal a user’s action. They overwhelm with unclear affordance in their UIs. They require that a user be taught how the symbolic creates an action (rather than just affording action), and how the model of all of the actions work with one another. It’s a significant cognitive overhead for the user that, in the past, engineers would claim is necessary.

You may argue, “But I get Illustrator, it’s so simple”. Well, that’s likely because you have been trained, or watched videos about it, or Googled a great deal to understand the interplay of its modes, settings, tools, symbols, etc. You cannot pick it up and start using it effectively like you would a hammer, Sketch, or Figma.

This symbolic knowledge is predicated on a lot of pre-existing learning

It’s increasingly clear that good design must incorporate a sense of embodied cognition to make tools more immediately useful and usable.

But this principle is far, far from the ‘less UI is better’ canard. Indeed, less UI can often hide affordances, making it very difficult for a user to get an embodied understanding of a tool — everything becomes invisible and hidden.

Remember how we were talking about how distinguishing between thought and action was a fool’s errand? Well, this should be reflected in tools. If I want to do something, it should just happen in a way where the goal is what is relevant, not the tool used to achieve the goal (‘ready-to-hand’ in Heideggerian terminology).

Context sensitivity, awareness of skill level, feedback, and consistent, predictable patterns can all help. When I act, there should be a clear reaction to my actions because I will attempt to both implicitly and explicitly make meaning of my actions regardless — and we should use that to help a user to understand. We shouldn’t ask them to build an enormous, complicated mental model of our tool, then shove them out into it. We should let them poke at it, and show what happens when they do. In that way, the tool can reveal itself to them through an embodied understanding.

One of the most basic features in Sketch, for example: by pressing CTRL, you can visually see how elements interact, their spacing, their alignment to one another:

There’s no question as to what’s happening — spaces are shown and by moving objects we can see line length and space change. A user does not have to imbibe an entire mental model to understand this interaction.
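As an illustration only, here is a toy version of that hold-a-key measurement feedback, with hypothetical markup and a simplified horizontal-gap calculation. It is not how Sketch actually implements it:

```typescript
// Toy "hold a key to see spacing" interaction (hypothetical markup):
// while Alt is held, the horizontal gap between two elements is shown,
// so spacing is revealed by an act rather than computed in the head.
const selected = document.querySelector<HTMLElement>("#selected")!;
const hovered = document.querySelector<HTMLElement>("#hovered")!;
const label = document.querySelector<HTMLElement>("#gap-label")!;

function showGap(show: boolean): void {
  if (!show) { label.textContent = ""; return; }
  const a = selected.getBoundingClientRect();
  const b = hovered.getBoundingClientRect();
  // Gap between the right edge of one element and the left edge of the other.
  label.textContent = `${Math.round(b.left - a.right)}px`;
}

window.addEventListener("keydown", (e) => { if (e.key === "Alt") showGap(true); });
window.addEventListener("keyup", (e) => { if (e.key === "Alt") showGap(false); });
```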

There are certainly some highly technical tools where embodied interaction is difficult. Obviously, an aircraft controller won’t be able to poke and prod her way around tools in an embodied way — the entire mental model needs to be understood prior to using the tool. That, however, does not mean that the learning methods for the tool cannot be embodied.

The fallacy of separating the mind from the body has a lot of pernicious effects. Crappy digital products are probably the least of the problems associated with it. Still, starting from the ground up can change cultural practices on deeper levels. So, when designing something interactive, ask yourself these questions:

How can I embody the user’s actions?

How can I ensure that users don’t need to fill in the gaps of an interaction model in their mind, and instead represent it all onscreen?

How can I make feedback as reactive as possible to action?

How can I ensure each action leads to a better understanding of the next action?

How could I build my tool in such a way that a user who couldn’t read would understand it?

And we’ll all be well on our way to a more embodied world.

#28
February 22, 2020

The 2020s will be a reckoning with our past: lessons from Disco Elysium

The world is endable. It may be ending now.

No, seriously. What I mean is that the potential of our world: democratic, open, progressive, free — it can end.

Of course, the natural hubris of looking from within a time period counteracts this narrative: the state of the world appears inevitable and immovable. There’s no way that basic things like democracy can end!

Of course it can. All societies can end — we just don’t believe it. We think of ‘ending’ as a dystopian society decimated by an apocalypse. But that’s not what will happen — it will be the death of a thousand cuts, all in the name of ‘good’.

If we look closely (actually, not really — it’s blindingly obvious), the progressive apparatus of society is ending everywhere: in India, Turkey, America, the Philippines, Poland, Hungary, Brazil, etc. In some places it never had a chance, like Russia and China (and there’s no point in listing theocracies or totalitarian regimes here). And although some academics like Steven Pinker think that society is much better, capitalism and selfishness still create drastic inequality and murder trillions of animals a year.

Democratic society, liberal views, cosmopolitanism, press freedom, independent judiciaries — they’ve all been battered by populism, nationalism and religious zealotry to such a degree that if people from the bright, revolutionary ’60s saw us now, they would think it a joke. The idea of the holistic whole, the idea that we’re all in this together, ‘global citizenship’ — they are all being rounded up in the streets and ousted in performative farces of national victimhood and anthropocentric chauvinism.

The main horror on the horizon, of course, is a fully existential one: climate change.

So this future decade could, in many ways, be the end of things.

In this past decade, one of the most important pieces of art I experienced was, surprisingly, a video game. Called Disco Elysium, it tells the story of a cop solving a murder case in a fictional world somewhat like ours. But this is akin to saying the Bible is the story of a carpenter.

The video game is about history, prejudice, nationalism, meaning, longing, and existentialism. Its scope far exceeds what you might think possible in a video game. The world you find yourself in, in Disco Elysium, is a decaying city, held up by an international entente and global capitalism. Factional struggles between unionists, fascists and capitalists form a backdrop to the individual struggles of people trying to flee torrid pasts.

Yet all of this is a tapestry that has — quite literally — holes in it. The world, you see, has actual holes in it that are growing, swallowing up the world in something ambiguous called the Pale, which is perhaps a void empty of meaning, or perhaps an aggregate of human memory, subsuming the present into the past.

But the residents of this world have put blinders on. Though they are aware of it, they choose to ignore the ending of the world. Indeed a key question of this game is what it means to make meaning in a world that is indifferent, in a world that is dying.

The sharp contrast of this imagined world with ours can’t help but push the player to reflect on how we, as humans and as a society, are so utterly unmoored from the substrate of mattering that undercuts all that we are, and all that we do.

The desire to be something, to create a space of happiness for oneself, even just the preoccupation with oneself as the centre of the universe, divorces us from thinking about what actually matters on the deepest levels. The meaning at large: where are we headed? Why are we so concerned with others like us? Why do we go to work each day? As with the minds of the people in Disco Elysium, our minds aren’t concerned with the bigger questions; we too are unaware of the potentiality of our world to end, or indeed its slow ending.

One lesson, however, is clear from Disco Elysium: the past cannot be escaped. Without spoiling too much, even the murder that you are trying to solve ends up being due to the past clawing its way forward through time to pull the figurative trigger.

All of our actions, all of the movements of people, places and things, lead us to right now. The built world, your person, technology, culture — everything — it’s all due to the past. More than pushing itself into the present, the past directs and constrains our future.

In 1992, Francis Fukuyama declared the “End of History” — meaning that the present would only be conceived by its own logic, not the past’s. The Cold War had ended, and democracy, liberalism, and capitalism reigned supreme. Of course, this wasn’t true — the past has only become more difficult to parse. And of course, democracy and liberalism are crumbling; it seems that only capitalism remains on the upswing (inasmuch as it is compatible with populism and nationalism).

It’s clear that, in the wave of populism that indulges in historical grievance and ethnic superiority, history is claiming its territory within the expanse of ‘the Now’. And expansive ‘the Now’ is. The digital landscape is ripe territory for vast, fertile fields of minds afraid of the future, clinging to the past.

For every new wiki, there’s a thousand trolls. For every social enterprise startup, there’s a propaganda bot army.

And climate change? Well the fuse was lit long ago, long before ‘the Digital’ became a thing, so it’s just a matter of how much we can contain the explosion.

What’s more, our indulgence in the past, doing what we have always done and are supposed to do, means we cause suffering and death to humans and trillions of animals, and contribute to the ecological destruction of the world.

There isn’t a ‘satisfying’ resolution in Disco Elysium, at least insofar as you expect perfect closure in your narratives. Again, without spoiling too much, the case somewhat solves itself, and you continue on your way as a cop, or you don’t. The world doesn’t care, but it will slowly decay.

The characters mostly don’t come to terms with their pasts, and as such, it dominates them.

Facing our past, too, is a lesson. We have to face our past — we have to honestly come to terms with our grievances, habits, cultures, rituals — and conceptualise how not to be dominated by them so we can imagine a better future. Perhaps it’s futile. But being aware of the past, mulling over how it interweaves into our present, and calling it out can help us. And perhaps, sadly, the only way we can do this is by being honest about how the world can end, how the world is ending. Not in an explosion, not like in a movie, but like in Disco Elysium: pulled down in the spiralled embrace of a thousand tentacles lured from the past that we choose to ignore because we think: that’s just how it is.

Happy 2020.

#27
December 31, 2019

The False Tech Gods Will Not Offer Us Transcendence

In the final scene of El Camino, the new Breaking Bad movie (spoiler upcoming), Jesse drives off on a long, curving highway towards the snowy peaks of Alaska. Via this scene we know his fate: he has succeeded, won, and will now live a happy life.

Stories often end with this notion. In the closing scenes of a movie, characters quite literally drive or walk into an imagined Utopian state. They transcend their story and now live in a permanent stasis, an enshrined bliss. Their problems are fully resolved so off they go into their final, perfect state.

In books, movies, comics, plays, operas, and everything in between, the story almost invariably ends with an unambiguous finality. “…Happily ever after”, the cliché goes.

This narrative of ‘transcendence’ is written into our lives. If only we could solve the problems in our lives, like in those stories we read, we would transcend into a blissful state. Transcendence in terms of a narrative indicates a final state that both surpasses human foibles and demands that no new problems shall arise.

We view technology as offering transcendence as well

This desire for a narrative finality embeds itself in all aspects of life. We read religion, politics and the media as narratives that involve a sense of transcendent finality. Religion speaks of a transcendence to the afterlife, political parties speak of creating utopias just so long as you vote for them, and the media sculpts stories with endings that offer a cathartic transcendence. The criminal is caught, the hero rewarded, and that is the end of the story: the Just is praised perpetually, the Offender is punished eternally. If there’s more, we don’t hear of it — it isn’t interesting! The nuance of the post-narrative dulls the catharsis.

Our attitude toward technology is much the same.

People like Ray Kurzweil speak of a literal transcendence by moving our squishy brains into silicon chips. He speaks for much of Silicon Valley, who collectively seem to believe in the singularity, wherein all human problems will vanish with an exponential technological cascade — true, theological transcendence.

Though these techno-utopians may not always explicitly refer to some form of transcendence, they implicitly suggest it by pointing to the unequivocal bliss that their product or service will engender:

“I would much prefer having my first screening with an algorithm that treats me fairly rather than one that depends on how tired the recruiter is that day.”

This is a quote from Loren Larsen, Chief Technology Officer of a company called Hirevue. The company’s offering is a technology that uses facial recognition to weed out candidates that are deemed not suitable based on their facial cues. The quote points to a common idea of believers in technological transcendence — that the human is the problem that tech will fix.

Most technological entrepreneurs, especially those with massive platforms or new technological mediums, tout the transcendent. Mark Zuckerberg claims that Facebook could have prevented the Iraq War:

I remember feeling that if more people had a voice to share their experiences, then maybe it coulda gone differently.

In other words, war is something that a technological artifact can overcome without consequence. Our violent nature is something we can transcend (through Facebook). Despite the fact that Facebook was originally used as a means to rate women’s attractiveness.

Even economists felt the same. Keynes believed that we would largely transcend ‘work’ through technology and science, and that boredom would be our greatest enemy.

These statements are substantively different from the transcendence proffered by religion. Yet in so many ways these carry similar weighty implications: humans are a problem that can be solved in perpetuity by something. Our violent nature, our bias, even our death of old age — these aren’t problems to be worked out through humans, but by an “other”: technology.

But there is no transcendence, least of all through technology

There is no perfect state.

It’s romantic and often poetic to think like this. It is also, of course, false. There is no perfect state. We solve some problems, but new ones emerge (or we intentionally create them). The story doesn’t end. We don’t transcend the idea of problems or ourselves.

Ours is an existence of perpetual striving, whether we are aware of it or not. Nietzsche, Schopenhauer and Sartre understood this, as did many other philosophers.

And technology, of course, has never allowed us any type of transcendence; it just reshapes our relationship to the world. There is no end point; there is no point where a piece of technology solves all our problems.

Certainly qualities of life change through technology — often for the better — but technology brings with it new problems. A spear made it easier to kill our prey, but also one another. The printing press allowed for the dissemination of knowledge, but also falsehoods. Social media allowed people to keep in touch with one another to a wide and instantaneous degree but…surely I don’t need to list the litany of problems with social media.

The key isn’t just “with good comes bad”; it’s that we don’t transcend, we cannot transcend our humanity through technology. Visions often paint humans not as having problems but as being the problem, with technology as the fix. Humans, as a species, are riven with problems and chaotic impulse, but these won’t be solved through technology (nor anything else for that matter). We, as finite beings moulded by evolution and our world, will always suffer in one way or another.

The Philosopher John Gray summed this up well in his book Straw Dogs:

Technological progress only leaves one problem unsolved: the frailty of human nature. Unfortunately, that problem is insoluble.

So if we do not transcend ourselves through technology, what happens?

What appears with new technologies is new states of being-in-the-world: the key word is new, not better. In these new states of being, new capacities, concepts, and relationships occur. The philosopher of technology Peter-Paul Verbeek presents us with the example of the ultrasound. Yes, it allows parents to ‘see’ their unborn baby, but it also presents new responsibilities and ways of thinking. It forms a wedge between ideas of the body of the mother and the baby. It screens for symptoms of Down syndrome. This, then, forces a difficult choice that parents otherwise wouldn’t have: should they continue with the pregnancy or not?

In this way, tech involves the creation of new capacities that bring new challenges and problems, all while solving old ones.

There is an argument that markings on ancient pots represented what was inside, and later were abstracted away from any physical correlate — the beginning of numbers.

In addition to generating capacities, relationships and concepts, technology also extends them. Many argue that technology has helped us think and consider new concepts through immediate access to a variety of information. We can compare and contrast information in the physical world in ways that appear closer to thinking — but outside the boundaries of our brain. Our mind, extended into the world. Is this transcendence?

No — our minds have always been linked to the material world. New capacities slowly emerged as we used tools to help us count, create language and form societies. But even as we change — even as we developed language, tools, and society — we still deal with stress, pain and anxiety.

Of course, it’s not as though we live in an unending hellscape — new capacities bring both the good and bad. For instance, some argue that the extending of our minds into technology is replacing our ability to remember, which I’ve argued may be happening, but in the course of this change new capacities for making and understanding relationships emerge.

But what about technologies that ‘think for us’ — won’t they help us transcend to new heights?

Some argue that displacing our will to algorithms and AI is good (see our CTO friend above), as they form purer, more effective and objective arbiters of ‘what things are’ (e.g. image recognition), justice (e.g. matching faces in surveillance videos to a database), and user behaviour (e.g. YouTube algorithms). While the displacing of our will is troubling for a variety of reasons, it’s also an illusion that human will is fully displaced; it’s actually just pushed further down the line. It’s the will of the designers of AI and algorithms, and any ignorance or bias they embody, that is conveyed digitally. We see how this repeatedly causes problems.

New issues are created by our humanity, however difficult this is to perceive. This is why it’s vital to carefully evaluate even AI and algorithmic technologies in ways that reveal the human interaction in their creation and usage.

But it’s good to have a vision, isn’t it?

Of course we need visions. We need idealism. They hold us to a goal and stop our efforts from becoming an anodyne, ‘design-by-committee’ mish-mash.

But we have to be honest about whether that vision is anything more than a fairy tale. Is it like WeWork’s vision, which crashed down in a comical IPO of empty promises and subsequent layoffs?

Technology often offers more than just a simple vision because it is very difficult to perceive the impacts it will have, so the more idealistic among us will cling to the positive vision, the utopian, the transcendent.

But there’s a particular line of pessimism that I think is important to consider as we design, build and create. And in all honesty, this pessimism can be beautiful. Much art is devoted to the fallibility and difficulty in our being, in our finiteness.

Pessimism can help us be pragmatic. In his book Future Ethics, Cennydd Bowles emphasises spelling out how we will achieve our corporate visions. How will we achieve the lovely glowing words? He also mentions sci-fi and other forms of complex narrative building as actual research. What’s important is that we don’t look at the bright shiny utopia, or the gloomy dystopia, but rather something that has bits of the bad and the good.

Visions give us something to look forward to. But it’s important to parse the difference between a vision and an idea of transcendence. A vision defines a new way of being and transcendence implies a new way of not being. In other words: a way that will solve the problem that is ourselves, perpetually. It is certainly good to think about how we can solve very human problems, but the idea that a single piece of technology will do that is, to put it mildly, delusional.

What else can we do?

Focus on the human, the nuance, the complex.

It’s understandable why we don’t do this, though. The advantage of clear, uncomplicated visions is that they can be sold. Investors, employers, purchasers, clients, governments — they all need to be sold on something positive, not something nuanced with potential problems. But as more and more corporations are directed toward triple bottom lines, and values other than capital generation, the picturesque sales vision may become a thing of the past.

Having an opposition to an idea of transcendence isn’t cynicism, it’s intelligence. It allows you to project your ideas realistically into the future through toolkits like the consequence scanning toolkit from Doteveryone. I wrote about a number of other ways to think about the future in this article.

Challenging unambiguous, transcendent final states isn’t cynical, it’s beautiful

I mentioned earlier that most movies end with an unambiguous finality. Of course, the better (in my opinion) movies end with ambiguity and thematic nuance. In Blade Runner, Deckard’s fate is left unknown (at least until the sequel) — is he safe? Is he human?

In Ozu’s classic Japanese film Tokyo Story, 4 adult siblings are mostly uninterested in their parents, and seem rather devoid of emotion when their mother dies. One character notices this and asks, “Isn’t life disappointing?” “Yes, it is,” wistfully responds one of the only characters who cares about the parents — the widow of one of their sons. Despite her altruism, she leaves to an uncertain future, pondering a watch that ticks away her life.

There’s no transcendence in this movie, nor, importantly, any implied after it ends. People change and adjust to their new circumstances. Life is disappointing, yes, but also beautiful in its nuance and change — ‘mono no aware’. This is what makes Tokyo Story such a beautiful movie.

So why can’t we think the same way about technology?

#26
October 30, 2019
Read more

UX can no longer keep up with our world: what comes next?

I make my living through the practice of UX. I enjoy doing it. I think it has meaning. It gives me meaning.

Yet UX is beginning to show its age. Its bones are creaking as it struggles to keep up with a technologically saturated, inflamed modernity.

UX trumpets its maxim of “putting users first” as the solution to all ills. “Users first!” it demands at a minimum, and indeed, at a maximum. This foundational ethos is effective and important, but also too narrow, too shallow, too limited.

This is how UX has summoned its own limitations. Whereas once it was seen as ground-breaking and eventually essential, the bar has now been raised, the world has changed, and our outlook has shifted. We can now see where UX is ineffectual and are able to imagine practices and theories of design that allow us to transcend UX’s limitations.

The limitations of UX stem from 4 aspects inherent in the practice:

Solipsism

Anthropocentrism

De-mediation

Internalism

If we break these issues down, we can address them — and find out what practice might fill the gaps.

Issue 1: UX is solipsistic

Only a single user ever exists.

One single persona, one person, one user facing a computer, one mobile, one experience. When a UXer designs, it’s only for one person — the user. The user is the centre around which all UX design orbits. ‘The user’ is the reason behind and for UX. This is solipsism — the idea that oneself is all that exists. UX is just this — the perception of a product or service from one person’s view. And only that one person.

Unfortunately, the implications of a designed object go far beyond the person using it.

This fact only becomes more true as designed objects increasingly exist in a multi-touchpoint, omnichannel universe.

An example here is Ofo, in which a bike and app are a designed infrastructure in service to one user. The system is user-centred, with a clever app that locates the nearest bike for the user and provides convenient access through a barcode scanner. Yet the larger community — the people who see their streets cluttered with bikes — is not taken into consideration.

Even mobile phones and digital devices aren’t designed to acknowledge the needs of people around the primary user. People yell into their phones, disrupting passersby, or stare into their devices, ignoring how they physically interact with the people around them. This could be solved through a better design, emanating from a more effective design practice.

But UX lacks the scope to think about this in a significant way, especially from the perspective of a digital-first UX designer. Dan Hill suggests “Strategic Design” could remedy these issues — “externalities” — of tech. Strategic Design, he argues, is a framework for holistically designing at the scale of both the city and the individual.

He claims that individual fields within design fail to address design challenges:

Judged from a pure interaction design practice point-of-view, Uber is clearly an exemplary user experience. Yet judged from a wider urban design point-of-view, its impact appears to be hugely damaging, with vast numbers of vehicles incentivised to drive into the middle of cities, apparently leading to increased congestion and reduced public transport use.

He sees UX, urban planning, architecture and other fields orchestrated under a “Strategic Design” conductor, harmonising to address socio-technical challenges.

This is one of many forms of wider service-design-oriented practices that push back on a bottom-line ethos and instead toe an ethical line, seeking to improve on the now tellingly parochial UX practice.

But with this broadening of the scope of consequence to community, it’s difficult to parse what is UX and what is a different field entirely (though perhaps that matters little). A similar UX successor, Transformation Design, for example, employs participatory design techniques which are ostensibly included in UX (but, in my experience, are rarely used).

Yet regardless of the title, this widening of the horizon of consequence in design is inevitable. But it can’t stop there.

Issue 2: UX is anthropocentric

In the era of the anthropocene, the primary force behind ecological effects is humans. But anthropocentrism has been at play since humans had language. This belief entails that humans are fundamentally different from other animals — transcendent — and able to transcend to even greater heights through religion or science. We’ve embedded this way of thinking in our societies, in our language, and in our designed objects.

The notion of transcendence is one in which everything is viewed in a subject-object dichotomy, with humans being the subject and everything else being part of a series of objects. This isn’t true of all cultures of course, with many indigenous peoples viewing humans as a part of a larger system, or animals as subjects in their own right.

Yet these anthropocentric attitudes prevailed, peaking in the modernism of the late 20th century with the high-water mark of corporate, urban, industrial and environmental planning.

The world is tamable, humanity claimed.

It’s now, within the anthropocene, that we see the effects of our anthropocentrism: climate change, astronomical deaths and suffering of human and non-human animals, and a failing ecology. Our perception of all things as objects which we might control, extract and destroy in order to construct different objects has led us here.

When we design, we have little regard for the subjectivity of a natural ecology involving the lifecycles of countless organisms, weather cycles, and geological forces. It’s all just objects in a system that points to us.

Of course, if you’re designing the navigation menu of an app that sells kitchen utensils, you may wonder how your practice involves preserving a severely melting glacier. These problems are bigger than anything we can design our way out of, let alone affect through the granular design of interactions in digital objects.

This is why the successor fields to UX see the practice not only changing its area of focus, but also its scope; the role of the UX designer should escape the silo of individuated user interactions and instead focus on frameworks that incorporate larger, systems-based questions.

Will someone still need to design navigation menus? Yes. But we need to expand, to look beyond humans as the primary subject of affect, and instead examine the wider ecology as subjects in and of themselves. In this way, UX could become a mindset that investigates every decision in a product lifecycle.

Cassie Robinson surveys a range of practices addressing this, from ecosystem design to consequence design (many of these also address the solipsism within UX). She offers thought-provoking questions, such as:

  • What could you displace?

  • What are you accelerating?

  • What are you encouraging or incentivising over time?

  • Are you adding health into this system?

  • How can you give prominence to care in your interactions?

  • How can you repair or maintain this system?

Anab Jain, too, looks to extend the frontiers of design beyond the human, as noted in her excellent talk:

Anab Jain’s fantastic presentation: a call to consider a post-HCI, ecological-first approach

Similarly, UX pioneers IDEO propose a “circular design” method to look at deeper ecological consequences.

Yet IDEO frame this more as a profit-driven exercise:

A new mind-set for business is emerging. It’s worth around a trillion dollars, will drive innovation in tomorrow’s companies, and reshape every part of our lives.

This doesn’t bode well for the long-term sustainability of their idea. This is the issue with some successors to UX — they remain anthropocentric in their outlook, seeing financial gain as their motivator, without leveraging legal, political, and economic ways to find value systems other than the financial.

There’s no way around it — looking at the bigger picture won’t always be monetarily beneficial. But approaches that disentangle value from capital are necessary for our very literal survival and well-being, as well as for the survival and well-being of the animals and ecology we are enmeshed within.

Accordingly, the successor responsibilities of a user experience designer involve collective action in driving change. And not just surface-level changes of an anthropocentric, and accordingly, destructive system — but deeper structural changes altering how we go about deciding what and how to design, and what a ‘good’ design entails.

By some arguments, the application of superficial rather than structural changes is what happened with sustainable development (sometimes referred to as “greenwashing”).

Greenwashing via elkhiki

Even the IEEE Standards Association is waking up to structural changes. It makes bold claims about a form of responsible participant design, which aims to prioritise people and the planet over profit and productivity.

But hasn’t UX always been ultimately antithetical to capitalism anyway? It was always what’s best for people, not capital, at its core. Ultimately, we need to expand on that idea in our collective visions — beyond just the human to the living ecology we happen to be a part of.

Issue 3: UX de-mediates

Traditional UX frameworks inherently view technology as a medium which the user can control and affect: the product is ultimately neutral with respect to the user. That this could be more than a one-way relationship was never considered, let alone cared about.

We see designs that are highly usable, but ignorant of or indifferent to (or both) the effect they have on the user and the world. We see this in how designers lacked the foresight or skill to reflect on what it meant that Facebook was addictive, created filter bubbles, or was able to generate political agency in its users. Yet Facebook is indeed capable of all of these, as recent history has shown.

Technological determinism — the idea that technology dictates how we behave — is not the argument here. Instead it’s what the academics McLuhan, Latour, and most recently Ihde have discussed: technology mediates. Mediation in this sense means technology creating and shaping conceptual attitudes toward how we think about our world, and accordingly, how we behave in it.

Mediation occurs through new human-technology relations. It’s not the technology or the human by themselves, but the new relations that exist between them that create new actions and ways of thinking.

For example, “at work” means different things when we have constant access to Slack and work email. Ideas about what it means to plan and think about “going shopping” have changed with how we engage with ecommerce. “Being online” wasn’t a thing 30 years ago, and it meant something different one or even two decades ago compared to what it means now. Indeed every technological artifact — whether we want it to or not — mediates, affording some behaviours and not others, changing how we think and what we think about.

The UX process has no space to scope out how a technology mediates. In this way it is actively de-mediating.

The UX framework wants to think of the product it helps to create as invisible or at least as transparent within a “Jobs to be done,” primary-task type of approach. But it’s not just your tasks that change with a new product: you, now mediated, have a differently structured life, which cascades to your thoughts, which cascades to your actions, which cascades to society.

At most, UX has a mild sense of how a user’s behaviour changes in relation to the product — i.e. “What will make them come back to our product?”. We see amoral, shortsighted academics like Nir Eyal and BJ Fogg cultivate this line of thought in their Machiavellian works (“A Guide to Building Habit-Forming Products” — shudder). Of course, there’s no investigation into how behaviours and indeed thoughts change outside of the envisioned product-use relationship.

Is a product incentivising unforeseen activities? Is users’ understanding of the world changing based on how the product has affected them? Are old terms taking on new meanings? Are their roles in the world changing? We don’t know.

Design frameworks other than UX fare better in seeking answers to these questions.

Speculative Design is one of the approaches that seeks to understand, among other things, future technological and societal paradigms, and the effects that these may have on people. “Design fictions” are very literally physical, embodied “future objects” that foster debate of possible futures. Participants in design fictions are intended to experience and interrogate how a potential future may impact us, our societies, and our environments. Inherently political, speculative design is a powerful tool for policy makers.

Design fiction playing cards via Garnet

In the study of Human-Computer Interaction, post-phenomenological research investigates the mediating influences that technologies have on people and their relations with the world. Post-phenomenological HCI sees people as interwoven in their environment, investigating the multi-dimensional uses of technology and how that affords different behaviours and thoughts. Peter-Paul Verbeek, a leading proponent of this approach, has an interesting course on Future Learn that I recommend.

Both of these practices are quite a ways from making an impact on design in the private sector. Once again, it’s likely because these practices don’t fit nicely into a process diagram next to accounts and engineering; they are inherently unbounded and political.

Issue 4: UX is internalist

Do you remember everything you need to know?

No, you do not.

Instead, you often remember where the things you need to know are. Important information is in Slack, or your email inbox, or on a note you scribbled and left near your door. These aren’t just reminders, they are your memory, externalised. You implicitly realise this, so you don’t put effort into remembering.

But your environment functions as more than just your memory. When you are writing, designing, or doing some other creative or informational task, what does your environment look like? If you are doing your taxes, you likely have different bills scattered around; if you are designing, you likely have design inspiration littered around you. This is an active cognitive process — you are using your eyes to call up information as you need it and integrate it into your thought processes. This is called epistemic action.

This theory that your mind extends into the world is known as extended cognition.

With the interweaving of our lives with digital technology, the plausibility and explanatory capacity of this theory has only increased.

You offload your directions to your map app. You store your memories in photos. You have browser tabs open that you cross-reference with each other. This ecology floats next to you, interweaving with your life, accessible from different touchpoints.

But UX doesn’t care to examine how people think and remember using objects. It assumes that a person thinks toward an object, in the format of

person → object

Yet as extended cognition theorists have been saying for years, we must consider the coupling of person plus environment as a single bilateral unit, in the format of

person ⟷ environment

— a single unit of thought.

This reframing shatters how we think about the manipulability, transparency and personalisation of tech. Just as we don’t consider the subjectivity of the world around us, we don’t consider how we integrate into this greater subjectivity.

Entirely new affordances can appear by shifting our horizons to consider epistemic action. Think of a set of Scrabble pieces in front of you. You may physically shuffle your chance-determined pieces around to investigate prospective words. This physical act of thinking creates connections, in the form of words, that you may not have seen otherwise.
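To make this concrete, here’s a minimal sketch of the word-finding that tile-shuffling performs physically. The tiles and tiny dictionary are hypothetical stand-ins, and the code is illustrative only:

// Hypothetical tiles and a toy dictionary, standing in for the real things.
const TILES = ["t", "a", "e", "r"];
const DICTIONARY = new Set(["rate", "tear", "tare", "eat", "art", "ear"]);

// Enumerate every ordering of every non-empty subset of tiles —
// the space a player explores by physically rearranging pieces.
function* arrangements(tiles: string[], prefix = ""): Generator<string> {
  if (prefix.length > 0) yield prefix;
  for (let i = 0; i < tiles.length; i++) {
    const rest = [...tiles.slice(0, i), ...tiles.slice(i + 1)];
    yield* arrangements(rest, prefix + tiles[i]);
  }
}

// Each rearrangement is an epistemic action: the manipulation itself
// surfaces candidate words that staring at a fixed rack would miss.
const found = [...new Set(arrangements(TILES))].filter((w) => DICTIONARY.has(w));
console.log(found); // e.g. ["tear", "tare", "rate", "eat", "art", "ear"]

The point isn’t the algorithm; it’s that the exploration happens in the manipulation, not in the head.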

Now consider a much more complex information environment a user may create in a tightly coupled human-device relationship — what connections might they be able to generate? Everything they’ve read in the past week, each song, their browser history, structured, sorted and fungible in ways that flex and fold and bend together. Users offload, manipulate, contrast, reference, theme and associate within this ecology via their mobile, laptop, or any number of other devices. In doing so a user is able to shift their focus from being directed towards individual content to the relations between content: patterns, associations, themes, etc.

How can we possibly design for this? What are the frameworks that help structure taxonomies? How do we even begin to conceptualise this?

I’ve yet to see any UX framework/process that even begins to address this thought, but philosophers and cognitive scientists have begun putting together conceptual categories for examination.

Andy Clark and, later, Richard Heersmink have suggested we ask questions about the nature of human-environment cognitive couplings, such as the following (a rough code sketch after the list shows one way to work with these dimensions):

  • How reliable is the connection in terms of what is required to maintain it (e.g. electricity, distance, etc.)?

  • How durable is the connection in the face of stresses such as coupling and uncoupling?

  • How can information gathered through the coupling be trusted?

  • How transparent is the process for transmission of information?

  • How easy is it to interpret or understand the information that is transferred?

  • How easily, and to what extent, can we personalise the cognitive coupling environment?

  • How does the cognitive coupling transform our brains?
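As a hypothetical illustration only — the artifacts and scores are invented, and this is not Clark’s or Heersmink’s formalism — those questions could be folded into a rough profile for comparing the things we think with:

// A hypothetical sketch: a profile for comparing cognitive couplings
// along (most of) the dimensions listed above. Scores are invented.
interface CouplingProfile {
  artifact: string;
  reliability: number;      // what the connection needs to stay up
  durability: number;       // resilience to coupling/uncoupling stress
  trust: number;            // whether its information can be trusted
  transparency: number;     // how invisible transmission is in use
  interpretability: number; // how easy its information is to understand
  personalisation: number;  // how far it can be tailored to its user
}

// Two everyday memory extensions, scored 0-1 (invented numbers).
const paperNotebook: CouplingProfile = {
  artifact: "paper notebook",
  reliability: 0.9, durability: 0.8, trust: 0.9,
  transparency: 0.9, interpretability: 0.8, personalisation: 0.7,
};
const cloudNotesApp: CouplingProfile = {
  artifact: "cloud notes app",
  reliability: 0.5, // needs battery, signal, and a solvent vendor
  durability: 0.4, trust: 0.7, transparency: 0.6,
  interpretability: 0.8, personalisation: 0.9,
};

Even a crude profile like this makes it easier to ask which of our couplings deserve design attention.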

How this applies to digital environments is likely highly complicated. Yet I haven’t seen any conceptual framework even try to make sense of our personal digital ecologies.

But it’s clear that as we become more tightly coupled with our technologies we must at least attempt to understand how we think with our environments. Because this is already happening. And we have to be able to conceptualise it in order to design for it.

All of these issues are related. They all funnel and intertwine and challenge the foundations of how we think about design.

“UX is dead” was a trite canard that some years ago floated around Twitter and the more mediocre design blogs. Are we here again?

Yes and no. There’s no way UX is going anywhere.

But with the multiplication of factors to consider from both a theoretical and practical perspective, UX has been sent spinning down a path from which it won’t rebound without deep structural change. The role, the scope, even the theoretical underpinnings have to shift in a way that may leave it totally unrecognisable.

But that’s a good thing. Don’t hold on to those post-its too tightly.

#25
August 11, 2019
Read more

We aren’t becoming “dumber” because of Google, but we are becoming cognitively different…

When new technologies come around, people worry. Teeth gnash, hands wring. Not that worrying about the effects of new tech is unwarranted, but the worry normally results from changes in a small set of variables. People nervously monitor these variables in lab-based studies, with any change being reason to raise the alarm: a technologically-driven dystopia is at hand!

For instance, lately, there has been a great deal of concern about how the web and digital technology generally is affecting our memory and our thinking habits.

#24
December 3, 2018
Read more

UX must push to see beyond quantification, beyond capitalism

image via: Curtis MacNewton

A number of things are happening now around the theme of data and its application to humanity:

  • Social media’s algorithms are under fire for manipulating elections and polarising political discourse

  • An unregulated, data-driven gig economy is increasingly seen as inhumane and anti-labour

  • Some feel that we are offloading our social and personal lives to ‘black boxes’ that make decisions for us

Data, nominally an invisible entity, is beginning to be felt by all. There have always been dystopian, Kafkaesque concerns about the reduction of humans to data points, but it’s only now that we are beginning to see these concerns truly and harmfully reflected in nearly every action we take. This isn’t a dystopia, or a totalitarian regime bent on societal control; it’s simply the order of the day for capitalism.

Capitalism seeks capital. And capital can only understand itself through the quantified: it can only be represented by numbers, not by quality. Flattening ‘things-in-the-world’ such as qualities, knowledge, concepts or people into numbers is hugely advantageous for capitalism because it allows for their processing.

In tech, this is doubly true. Value is quantified, but so are all problems and solutions. The ability to measure, optimise and solutionise is unparalleled. Any social ill can be ‘solved’ by a clever enough application of 1s and 0s, tech claims.

User Experience Specialists, the venerable advocates of the user, are forced to play by the rules of the quantified, the bottom line, data.

But the foundation of UX is not numerical data.

It is people. It is people being-in-the-world. It is their experience.

It is user-centred, human-centred design. It is quality. An experience is not a quantity, it’s a quality. An experience is phenomenological, not mathematical.

Yes, we can try to put metrics next to experiences like happiness or frustration, but you don’t feel a “3” on a scale of frustration, you feel what you feel. Given an opening, you might talk about qualities of your experience which may or may not include happiness or frustration but may involve other emotions, themes or observations. How you construe and reflect on meaning from an experience is severely constricted by the quantified researcher-defined parameters.

All you are allowed to be. (via https://pixabay.com/en/emotion-scale-emoji-icon-feedback-3404484/)

In the academic world, this is exemplified in the replication crisis, which sees psychology and its various sub-fields harmed by the difficulty and slipperiness of measuring people’s experiences (not to mention the ability to manipulate such ‘objective’ standards through means like p-hacking). There are good-faith efforts to address this, but the problems are glaring and deep-rooted.

In business this is extraordinarily apparent as well. Again and again, when I do UX research and analyse themes or concepts, I’m asked for the “data” that supports my analysis. Of course, the person asking me this means “Show me numbers!”

But being immersed in a contextual inquiry, or conducting qualitative user testing, allows you to notice trends and themes by carefully noting the meaning behind people’s actions, words and understandings. Analysis such as this doesn’t result in numbers — numbers may play a part — but the overall analysis looks to understand the depth, breadth, and relations of concepts. And these concepts might move between levels of granularity or rely on a number of variables (facial expressions, tone, distractedness, etc.). All of this means that there is no single number — or there shouldn’t be — in most forms of qualitative UX research.

Yet the quantified underpinning of capitalism forms our frame of reference, as the realm of the quantified defines what we can and cannot do. In other words, our creativity and its resultant output are restricted. James Bridle refers to this as ‘computational thinking’. We think in terms of optimising local areas of systems. We think of increasing conversion. We think we can solve social ills with enough 0s and 1s.

It’s not, I would argue, a UXer’s remit to inhabit this ontology, this way of understanding the world.

UXers are ostensibly advocates for the user, not the business. Indeed, that’s where they are most effective. Yes, UXers are paid by the business to make the best possible experience for that product, but the best possible experience for a user and the best possible experience for a product are not the same thing.

Take an example: a user’s goal might not be to purchase an airline ticket; it might be to learn how often planes leave for a particular location (good luck trying to find this info!). Yet enormous UX resources are devoted to making that airline ticket more attractive — even to the detriment of other user needs (or feelings or states). In this way business needs and user needs often conflict. The UXer is the advocate for the user — that’s why they’re there.

But so what? You might say that this is all just definitional, that a UXer’s job is to bring business and user needs together. That’s fine, but don’t mistake the ability to make money for a good user experience. A good user experience doesn’t require a company to make money.

But of course capitalism does. It demands money, and it demands metrics to show how a user’s experience is improving their money-making. Money is a quantity, and a quantity only understands other measurable quantities. And a quantity is only measurable when it becomes a variable, made (seemingly) objective and generalisable by defining a set of parameters which determine an instance of that variable. Yet experience is personal, subjective and continuous. It is unbounded.

This is why it can be difficult or inappropriate to think about a single product’s UX. A UXer must erect artificial boundaries around the context of investigation — conversion becomes the ultimate arbiter of an experience, not the actual quality of an experience. And to understand conversion, we have to measure. Our ability to examine someone’s experience of the world degrades: the demands of quantification vis-à-vis capitalism keep us from engaging with the full range of experience.

For example, the web as a whole, and applications of it, are far more concerned with retrieving information than with helping people manage information and build personal ecologies of relevant information. Bookmarks have barely changed in two and a half decades. Ideas such as transclusion and Stretchtext that would aid in building personal and global semantic relationships died before they were started — and they were imagined half a century ago.

Ted Nelson’s Transclusion
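To give a flavour of what was lost, here’s a minimal sketch of transclusion — quoting a span of another document by reference, so it stays live, rather than by copy. The URL, offsets and function here are hypothetical, and this is a simplification, not Nelson’s actual design:

// A transclusion embeds a *reference* to a span of another document;
// the fragment is pulled in live rather than pasted in. Simplified sketch.
interface Transclusion {
  sourceUrl: string; // the document being quoted by reference
  start: number;     // character offsets delimiting the quoted span
  end: number;
}

// Resolve a transclusion by fetching the source and slicing out the span,
// so the quoted text remains linked to its origin.
async function resolve(t: Transclusion): Promise<string> {
  const response = await fetch(t.sourceUrl);
  const fullText = await response.text();
  return fullText.slice(t.start, t.end);
}

// Hypothetical usage: a note quotes a passage from another page by address.
const quote: Transclusion = { sourceUrl: "https://example.com/essay.txt", start: 120, end: 340 };
resolve(quote).then((fragment) => console.log(fragment));

Edit the source and every document that transcludes it reflects the change — exactly the kind of cross-document relationship the web never built.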

Ideas like these died because it’s simply far more profitable to facilitate the finding of content than to help create frameworks that support personalised ecologies of information. But any UXer worth her salt will tell you of the importance of personalisation, wayfinding and sensemaking — all qualities that could be supported far more effectively if we focused on personal, curated informational ecologies. These concepts don’t exist in isolation amid the artificial boundaries of URLs. They cross channels, cross into our brains, and into our lives.

But we can’t look at experiences in this way, because in digital, improvement (read: optimisation) only takes place at a hyper-local level. Even from a quantitative perspective, this isn’t efficient. Geoffrey West has noted that when we look at the biological world, we sometimes see inefficiencies at the local level, but the picture starts to make a great deal more sense at the global level. Here we see how local inefficiencies often serve global efficiencies: things become more optimised at a global level at the expense of local optima. Of course, we don’t — we can’t — think like that in the quantity-capital paradigm.
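A toy illustration, with entirely invented numbers, of how winning a local metric can lose the global one:

// Two variants of a signup prompt in a two-step funnel (invented numbers).
type Variant = { name: string; signupRate: number; retentionRate: number };

const variants: Variant[] = [
  { name: "aggressive-prompt", signupRate: 0.3, retentionRate: 0.2 },
  { name: "gentle-prompt", signupRate: 0.15, retentionRate: 0.6 },
];

// Local optimisation: maximise the one step you were asked to improve.
const locallyBest = variants.reduce((a, b) => (a.signupRate > b.signupRate ? a : b));

// Global view: what fraction of visitors become retained users?
const globallyBest = variants.reduce((a, b) =>
  a.signupRate * a.retentionRate > b.signupRate * b.retentionRate ? a : b
);

console.log(locallyBest.name);  // "aggressive-prompt" — wins signups (0.3 vs 0.15)
console.log(globallyBest.name); // "gentle-prompt" — wins overall (0.09 vs 0.06 retained)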

Yet even this ‘global’ optimisation thinking is about quantity rather than quality. Understanding the human quotient isn’t about optimising globally either — optimising seeks quantification.

This isn’t to say that understanding quantity is useless. Such an assertion would be absurd. In tech, quantification can tell us, in dead tones, how much there is of things like interactions, downloads or hits. It can tell us about routes taken and objects clicked. It cannot help us with vital issues of experience that exceed the parameters of measurable quantities, such as:

  • How can we help you build your life in the way you want it to be built?

  • In what ways are we failing to support you in something you need support with?

  • How can your doing a particular activity in the world make life better for all?

  • What meaning do you make out of your interactions and experiences with an activity you do?

  • What do you understand from your interactions with a particular area of your life?

  • What is the context of your experiences and interactions?

And simply, how can your life be better?

The answers to these questions can’t be bounded by the variability of a single — or even multiple — measurable quantities, measured within the use of a single product. Indeed, qualitative answers to these questions may point to the fact that you shouldn’t use the product in question, or might even show that we should scrap certain digital products (given how damaging they can be to our mental well-being).

How can we focus more on these quality-based questions, on the totality of experiences?

How we consume, how we prioritise incentive-based structures over all others, and how we build our economies all need to change, for one. I don’t need to spell out the millions of other reasons this needs to change as well.

I’m not particularly in favour of any other socio-economic framework, but we have to be able to imagine alternatives. It has to start somewhere, and imagining a quality-based world rather than a quantity-based world is a start. It’s a place that UXers know well and are predisposed to.

When we begin to uncouple from the quantified, from capitalism, our horizons shift and our gaze follows, enabling us to see patterns, themes and causal structures that were otherwise invisible.

When we see qualities, we begin to see how things are connected, and how we form meaning in relation to other things, not just through individualised subject-object dualities.

Husserl founded the study of phenomenology, which I allude to a lot in this article. People have been thinking about this for a long time; I’m hardly the first to discuss it.

We see that experiences aren’t bounded to individual minds; they’re the result of a series of subjective events in an undulating temporal, physical and socio-cultural environment. The artificial boundaries that quantification inserts tend to be reductive, removing meaning.

The content and meaning of relationships that you have with human, environmental and technological systems around you reveals the very qualities of your existence.

For example, on an individual level, we use the world to help us remember, think and be creative. Browser tabs are memories embodied. Emails are externalised lists of activities we have to do. How we formulate intent and use our world helps to define us, and this can only be explored qualitatively. We can’t think of software and the web as individualised elements with defined parameters, but rather as parts of systems that are us, systems that contribute to forming and creating further needs, emotions and states of being.

On a global level, qualities and relationships are unbounded as well, defined through and between systems. Global warming, political polarisation, fake news — these are all issues that require qualitative and systems-based thinking to understand how best to solve.

This isn’t woolly thinking. It’s well researched, involving fields such as philosophy, cognitive science, archaeology, human-computer interaction and systems theory.

Imagine if UXers, and indeed workers of all stripes, could work across digital and physical ecosystems to create qualitatively impactful experiences, rather than increasing the quantifiable measurement of a small part of a single one.

What could we create?

#23
September 10, 2018
Read more

FutureFest 2018 told us to fear the future, rather than be hopeful for it

At FutureFest 2018, water dispensers were operated by fob.

People could move a small fob on a string to a highlighted area on a dispenser to fill up their water bottle. Four people could get water from each dispenser at once, as the bulky cubic dispensers had a fob on each of their 4 sides. There were at least 4 of these squat dispensers placed throughout the event, clearly intended to show off some fancy future technology, albeit in a rather silly way.

By the end of the event, all 4 of the dispensers were out of order — only 1 fob on 1 side of a single dispenser worked.

I couldn’t help but wonder if this was intentional, given the pessimistic tone towards technology that pervaded FutureFest.

Movers and shakers, futurists and artists of renown were all present at this established London Festival. The aim of the event was to “put control back into the hands of the people” and to build “bold solutions to this era’s biggest challenges.”

But the theme at FutureFest was one of trepidation and cynicism. Indeed, the cynicism was about the now as much as the future. Data, more often than not, was seen as the enemy. The Big 5 were the invisible villain. They were utterly invisible in that they were seen to be all-powerful and everywhere, even in areas of your life that you would neither expect nor sanction. But they were also invisible in that they had no representation whatsoever at the festival, which had the effect of making many of the debates somewhat dull. This also meant that finger-pointing tended to be the order of the day, rather than collaboration.

The enemy…

Writer and speaker Douglas Rushkoff repeatedly slammed everything from artificial intelligence to quantification as a very inhuman enemy. In his polemics, however, there was a noticeable absence of concrete examples of how this was the case, except in his own anecdotes, which seemed less interesting than he perhaps imagined. “Do we really own our phones?!” he exclaimed, implying that we are bound by a bevy of privacy contracts. While this is true, it undercuts far more interesting questions of how the concept of ownership changes, and indeed why the concept of ‘ownership’ has meaning at all in this day and age.

Evgeny Morozov attacked big data and AI as well, but in a more nuanced way, claiming that we should collectivise and pool our data, choosing who may access it and under what terms. Still, he offered little as to how this change could come about — no actionable examples were given. There was little to discuss, either, given that no one present could explain what the difficulties with his solution might be.

Academics, too, seemed put off by the spectre of AI and big data, especially as instantiated by Google & Facebook. Much hand-wringing was expressed by Professors Noel Sharkey and Rebecca Allen. Yet their arguments were often fairly poorly articulated; concerns ranging from not wanting physical augmentation to broad worries about AI were present, but little in the way of thought-provoking solutions was posited. Brilliant people both, but their rather ambiguous, hand-wavy concerns did little to advance conversations or provoke thought.

Surprisingly, Nick Clegg offered a perspective that seemed to mirror my own: he claimed this ubiquitous doomsaying, present from both the left and right, prevented long-term solutions to potential threats from technology and tech companies. A positive attitude towards technology, he claimed, could help embed legislation and political programs to develop and harness technology. A sensibility of fear, he argued, meant that successive governments would be much more likely to overturn programs aimed at embracing technological development.

This dearth of solutions and lack of representation of the invisible ‘other’ set the mood, which meant that most talks were fairly predictable in tone and content.

One solution I did see came from Anab Jain, though hers was perhaps more a way of discovering solutions than a solution itself. She and her agency, Superflux, promoted speculative design: the process by which ‘design fictions’ are articulated through provocative futuristic artefacts which elicit useful feedback from participants in the research. She nicely explored this with Mantis, her AI global-risk startup, which she revealed to be fake (a speculative design) after her presentation (much to the chagrin and interest of the audience).

The fake ‘Mantis Systems’ provoked thought and interest from participants, as was the goal

But I think there is much to what Nick Clegg said about the political fear that now seems embedded in our discussion of technology. This fear is especially well articulated in a (good) book I am currently reading: New Dark Age by James Bridle. In it, he claims that it is nearly impossible to understand the vast and invisible computation that governs our society. He claims that new metaphors are needed to grasp, if not understand, these forces. While he makes many good points, his gloomy outlook predisposes us against agents and organisations that may have a positive outlook towards technology — even if he claims that he is not anti-technology.

But this attitude reflects the sharp divide in the discourse around technology. There are the critics — sharp-edged commentators on the dystopian possibilities of tech: Zeynep Tufekci, Adam Greenfield, Douglas Rushkoff and many others. On the other side are your Silicon Valley technologists — Mark Zuckerberg, Peter Thiel and any number of startup founders, as well as journalists such as Kevin Kelly.

This antagonistic divide does little to help us. Both sides have cogent arguments, but few people encompass both. The critics tend to recognise technological advantages only begrudgingly, with an ever-present subsequent “but…”, and the technologists tend to be tone-deaf, responding to humanistic problems with technology rather than anticipating them.

This is only exacerbated when, in places like FutureFest, the angle is slanted far more toward one side than the other. Pointing fingers at vague threats tends not to be a useful enterprise.

Ultimately, defined collaboration of technologists and critics is the only way we can smooth the bumpy present out into a comfortable future.

#22
July 8, 2018
Read more