DisAssemble

Archive

Oh cool! Another post about AI!

AI is an exciting and rapidly evolving technology that has the potential to transform our world in so many ways. But let's be real, AI is not perfect. In fact, there are some limitations to AI that are worth discussing.

First of all, AI is only as good as the data it's trained on. If the training data contains biases or inaccuracies, the AI system will reflect these same biases and inaccuracies in its outputs. This can lead to unfair or incorrect results and decisions, which is definitely a problem.

Another thing about AI is that it just doesn't have that human touch. AI systems can only make decisions based on the data and algorithms provided to them, and can't account for unique or complex situations that often require a human's intuition and creativity. This can sometimes lead to suboptimal outcomes, and let's be honest, we don't want that.

And then there's the issue of transparency. Sometimes, AI algorithms can be difficult to understand or explain, which can be a problem when it comes to making important decisions based on AI outputs. This can be especially concerning when it comes to sensitive issues like employment, criminal justice, or healthcare.

#61
April 2, 2023
Read more

what's the point

a stupid AI image trying to write 'what's th epoint' on random ai images

As you reach the threshold of middle age, you begin to question purpose. Your purpose. Our purpose. Or, at least I do, and I am.

David Graeber, in his book Bullshit Jobs, condemns the vast majority of jobs as being useless, as "a form of paid employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence even though, as part of the conditions of employment, the employee feels obliged to pretend that this is not the case.”

It’s an incisive analysis of our jobs, and how they contribute to our own self-worth.

#60
January 23, 2023
Read more

An *object* lesson

In my last post I waged an unforgiving, brutal war on the subject-object divide. I apologise for the carnage. I do hope you have recovered.

This divide, for those who don’t know or perhaps couldn’t get through my last post (😭), is a conceptual framework for approaching the world. It is similar to, and oft-synonymous with, the mind-body divide, also known as Cartesianism. As I noted, this conception structures our thought such that we conceive of our consciousness as untouched by the physical world; we are viewed as abstract entities who are able to rationally evaluate the world of objects (but these objects in turn do not affect us). Objects are conceptualised as inert - they can be dislocated from their context without effect.

So, in contrast to the subject-object divide, I suggested that objects exist with us and our cognition in complicated ways, such that they:

  • change our goals

  • shape what we do

  • can literally be our thoughts

#59
November 15, 2022
Read more

There is no subject

You’re reading DisAssemble, a philosophy of tech newsletter aimed at those interested in creating better digital products.


We are taught that there is a subject and there is an object.

#58
August 25, 2022
Read more

The billionaires will save us

More money = better than. But better at what? Better at everything.

See, if you have that special brand of entrepreneurial get-up-and-go, maybe have the right business connections, or just know how to exploit workers to make your business succeed, you just might be great at everything.

#57
June 24, 2022
Read more

We need to un-flatten computation

As always, you are reading DisAssemble, a philosophy of tech newsletter aimed at those interested in creating better digital products and human-digital interactions.

I want to apologise for the delay in the latest newsletter, which is only now appearing more than 3 months after my last. With the war in Ukraine, I asked myself “does writing about the philosophy of technology with regards to designing new and better technologies matter?” It didn’t feel as though it did, at the time. It felt vacuous, self-indulgent, ineffectual.

I realise it is true, however, that relative to the horror that regularly occurs in much of the world, all philosophising, all musing, can be argued as self-indulgent and inconsequential. Does that mean we shouldn’t do it? When contrasted with direct action, philosophising can feel pointless. But I think that abstract thought, paired with context, can enable action (as I discuss in this newsletter). And action stemming from deep thought about human-technology interactions is perhaps more important than ever, given the extent to which the Ukraine conflict, and indeed many other conflicts, are mediated by - and indeed often driven by - digital technology.

So I know I will continue to write and speak about this topic. I hope you’ll continue to join me. And please let me know your thoughts and feedback (you can reply to this email, or email me: vikram at lightful dot com).

#56
April 18, 2022
Read more

We need to un-flatten computation

You’re reading DisAssemble, a triweekly philosophy of tech newsletter aimed at those interested in creating better digital products.


Bureaucracies curse me with a special kind of anxiety. A cold well of dread presses down inside my stomach whenever I have to deal with some byzantine telecom, or the government. Filling out paperwork gives me heart palpitations.

It’s not just me, I think. For most, bureaucracies are frustrating, impenetrable and most of all, forcefully categorical. You are in tax category A, or B, you are married, or unmarried, you are a citizen, you are an immigrant. Kafkaesque? Yes, intrinsically. But not merely because bureaucracies are byzantine, but also because they flatten.

#55
January 22, 2022
Read more

The best books about tech that *aren't* about tech

You’re reading DisAssemble, a triweeklyish philosophy of tech newsletter aimed at those interested in creating better digital products.


Books and articles about how to design and build technology are always about just that. They’re always geared toward ever-more effective ways to get the best business outcomes; that is, to make more money.

These texts never question why we build technologies, what technologies mean and do to people, or how we have come to build them the way that we do. This parochialism is immensely troubling to me, and part of the reason why I started this newsletter.

#54
December 22, 2021
Read more

Can businesses be both human-centred and profit-centred?

You’re reading DisAssemble, a triweeklyish philosophy of tech newsletter aimed at those interested in creating better digital products.


A screenshot of the CX wikipedia article that shows it has multiple issues

I hate the term "CX". It stands for "Customer Experience", in case you don’t know. Even Wikipedia ⬆️ thinks it’s bullshit.

#53
November 24, 2021
Read more

Can design & tech question modernity?

You’re reading DisAssemble, a triweekly philosophy of tech newsletter aimed at those interested in creating better digital products.


There are these two young fish swimming along, and they happen to meet an older fish swimming the other way, who nods at them and says, “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, “What the hell is water?”

#52
October 29, 2021
Read more

Is it possible to design emancipatory technology?

You’re reading DisAssemble, a biweeklyish philosophy of tech newsletter aimed at those interested in creating better digital products.


“If you could do anything, what would it be?”

The well-meaning but naive words of the school guidance counsellor. In my secondary school’s ‘Career and Personal Planning’ class these words were presented to me with a certain reverence; they were a talisman that would guide me to success and fulfilment.

#51
September 27, 2021
Read more

A discussion with Heather Wiltse about things that aren't things

You’re reading DisAssemble, a biweeklyish philosophy of tech newsletter aimed at those interested in creating better digital products.


A few times in this newsletter, I’ve brought up a term - ‘fluid assemblages’. Sadly, this fascinating concept is not my own, but rather Heather Wiltse and Johan Redström’s, who coined it in their fantastic book Changing Things: The Future of Objects in a Digital World.

#50
September 4, 2021
Read more

how to design plz?

You’re reading DisAssemble, a biweeklyish philosophy of tech newsletter aimed at those interested in creating better digital products.


I start writing my newsletter. I am certain what I am going to write about. Certain.

#49
August 13, 2021
Read more

Menopause is a problem. Let's fix it. (they said)

You’re reading DisAssemble, a biweeklyish philosophy of tech newsletter aimed at those interested in creating better digital products.


A moon-shaped device around a woman's wrist and around her neck

Oh this?

#48
July 25, 2021
Read more

On finding problems for solutions

You’re reading DisAssemble, a biweeklyish philosophy of tech newsletter aimed at those interested in creating better digital products.


woman in blue long sleeve shirt using silver macbook
It was actually a man but I couldn’t find a royalty-free pic of a man on the phone. Via magnet.me

"As a UX designer, it will be your job to explore use cases for the technology that the engineers create. The engineers create the AI solutions - you'll help in figuring out how to apply these solutions," the in-house recruiter said to me.

#47
July 3, 2021
Read more

A diatribe against a mindset

You’re reading DisAssemble, a biweeklyish philosophy of tech newsletter aimed at those interested in creating better digital products.


They are apolitical.

Or they think they are. They say they only believe in progress. Progress, for them, is conceived by innovation and birthed by capital.

#46
June 10, 2021
Read more

I will not apologise for starting this post with a poem

You’re reading DisAssemble, a biweekly philosophy of tech newsletter aimed at those interested in creating better digital products.


I am so afraid of people's words.

Everything they pronounce is so clear

#45
May 16, 2021
Read more

oh no another client project what do i do

You’re reading DisAssemble, a biweekly philosophy of tech newsletter aimed at those interested in creating better digital products.


Whenever I start a project with a client I feel a creeping anxiety.

How can I possibly understand the domain that the client works in? Why would my presence, that of an ignorant interloper, be of any value?

#44
April 24, 2021
Read more

The story so far!

You’re reading DisAssemble, a biweekly philosophy of tech newsletter aimed at those interested in creating better digital products.


I recently lapsed in keeping DisAssemble updated (I aim to make it biweekly) because I moved house, and an interview I am conducting for the newsletter is taking longer than usual (but it'll be great when it comes).

#43
March 27, 2021
Read more

More than tools: making meaning from digital stuff

You’re reading DisAssemble, a biweekly philosophy of tech newsletter aimed at those interested in creating better digital products.


More than thirty years ago French theorist Jean Baudrillard said that photography was a:

#42
January 26, 2021
Read more

This newsletter is your memory

You’re reading DisAssemble, a biweekly philosophy of tech newsletter aimed at those interested in creating better digital products.


Image: Sportsfile, Web Summit (via Wikimedia Commons)
#41
January 3, 2021
Read more

We design our tools which design our jobs which design us. Or: why Excel sucks

You’re reading DisAssemble, a biweekly philosophy of tech newsletter aimed at those interested in creating better digital products.


Excel For Windows 3.0 Ad

I am repelled by Microsoft products.

#40
December 15, 2020
Read more

Why are affordances important? More questions with Jenny L. Davis

This is part of DisAssemble, a biweekly philosophy of tech newsletter aimed at those interested in creating better digital products.

In the last issue of DisAssemble, Jenny L. Davis, a social psychologist and technology theorist, answered questions about what affordances are.

In this second part of my interview with her, we focus on why affordances are important to those involved in designing and building tech.

She recently published the excellent How Artifacts Afford: The Power and Politics of Everyday Things. I strongly recommend you read it - her answers below offer a taste of what it spells out, namely a powerful and unique framework on why and how affordances matter.

#39
November 24, 2020
Read more

What are affordances? An interview with Jenny L. Davis

This is part of Disassemble, a philosophy of tech newsletter aimed at those interested in digital products.


“When a man is tired of London, he is tired of life; for there is in London all that life can afford."

— Samuel Johnson. 

#38
November 11, 2020
Read more

How to design yourself: a primer

This is part of Disassemble, a philosophy of tech newsletter.

Image: a Strandbeest in full walking animation (via Wikimedia Commons)

#37
October 25, 2020
Read more

Google Maps sucks | you out of thought

This is part of Disassemble, a philosophy of tech newsletter.

#36
October 7, 2020
Read more

You're part of it, my friend

This is part of Disassemble, a philosophy of tech newsletter.


The Fall of the Magician by Pieter van der Heyden
#35
September 11, 2020
Read more

Design, Phenomenology, Capitalism & sucking at newsletter titles

This is part of Disassemble, a philosophy of tech newsletter.

In Defence of Marxism

In Capitalist Realism, cultural theorist and philosopher Mark Fisher noted that capitalism is:

#34
August 26, 2020
Read more

The Essentialism Pandemic

And I am whatever you say I am

If I wasn't, then why would I say I am?

In the paper, the news, every day I am

Radio won't even play my jam

#33
August 2, 2020
Read more

A little bit about this newsletter

We should be happy at how much we have to read. Reading should make one better. Yet writing on the web tends to take the form of surface-level explorations of the world, in that it concerns itself with the immediately factual. A play-by-play description; a trip report; a contrast and compare - these are the most common forms of writing. Writing doesn't even go by 'writing' on the web. It goes by 'content', an ugly word. It suggests a meaningless delivery packaged to fit within a pre-existing frame.

Writing about technology suffers the most. It is design advice based on simplistic laws or principles. It is assessments of technology based on practical application. It is snippets of quotes from blindsided experts. 

None of this is good enough. The bulk of writing - be it news, reviews, or advice - doesn't attempt to apply concepts from elsewhere to help us understand technologies and being as historical, as situated, as designed, as part of a framework that can be better understood with any number of conceptual lenses. Most of all, it doesn't explain how we can leverage these lenses to design and build technologies that change what we want to change.

Here's an article on how Google Docs is used by activists. It's an article with good 'content'. You should read it. Honestly. Its primary point is that Google Docs is easy to use; therefore, it's used by activists. That's interesting and informative.

But it doesn't discuss meaning: meaning for designers, meaning for the users, meaning for the reader. To do this you need conceptual lenses. 

Johan Redström and Heather Wiltse would call Google Docs a fluid assemblage - a symbolic material that is assembled on an as-needed basis. What does it mean that these dynamic assemblages are becoming a go-to resource for activists? How do the structure and meaning change based on the unique qualities (e.g. openness) of the medium? How is it coded or afforded for certain activities but not others?

The philosopher Peter-Paul Verbeek talks about multistability - the way that humans co-opt technology for our own use. Is that what is happening with Google Docs, and if so how does the medium disempower and empower this? What does it mean that this technology is not being used as intended? 

These are the types of questions this newsletter will wrestle with. It aims to flip the discussion table over to uncover the concepts beneath. All the stuff academics are talking about; all the stuff people have been thinking about for years, but aren't applying to our lived technological world.

Ben Kraal recently started a newsletter called '1992'. Its aim is to examine academic papers from 1992 with the intention of applying them to UX practices today because there is so much we can learn from that which has already been written. This newsletter will do something similar, but will take a wider stance -- wider in terms of sources and wider in terms of application. 

Key topics this newsletter will deal with include:

  • Materialism

  • Systems thinking

  • The 4Es of cognition (Embedded, Extended, Enacted, and Embodied)

  • Design thinking

  • Ethics

  • Quantification and qualification

  • Design and User Research

  • Futurism

  • Semiotics

  • Ecological perception

  • (Post) Phenomenology

  • Modernism, post-modernism

If you don't know what these concepts are, I will define them as part of my efforts to display how they have enormous impacts on how we design and use technology, and indeed how we assemble into and through technology. 

See you soon.

#32
July 27, 2020
Read more

Untangling the technological human

Image: mechanics bank, arm and hammer (via Wikimedia Commons)

Welcome to DisAssemble by me, Vikram Singh. I am a UX Designer, User Researcher, and writer based in London.

This is a weekly newsletter that untangles the technological human. I use the philosophy of technology and a variety of theoretical lenses to untangle this system so that this clarity can help us build a better world.

Sign up below if you’d like. I promise I won’t spam you.

Subscribe now

And, tell your friends!

#31
June 15, 2020
Read more

The Post-Covid March to Remote Worker Surveillance

via Claudio Shwarz

I run a Philosophy and Ethics in Technology salon in London. Its members are individuals who are involved in many different fields, but all have a special interest in technology. Each month we tackle issues and questions relating to technology. This month we discussed the topic:

“Watching Your Workers: How Surveillance Technology Can Change Remote Working”

Some insightful themes and solutions manifested themselves, which are worth sharing here.

A New Capacity for Spying

One of the things that is striking about changing the paradigm of work is that new ‘capacities’ occur. Managers can now easily spy (I won’t use quotes for that word!) on employees in a variety of ways — by tracking their typing, seeing their screens, or through a plethora of other methods. This is related to an idea that the philosopher Peter-Paul Verbeek discusses — that our relationship to our world changes, not just through technological extension of existing abilities, but also because the technology and society allow for whole new behaviours and behaviour choices to appear. In this case, the opportunity to monitor employees.

In our salon we discussed how new capacities, engendered by the mixture of new social dynamics and technology, allow digital surveillance to happen. Social dynamics of trust, transparency, and habits have changed. And technology allows for surveillance. The dynamics of how someone is monitored, as facilitated by the type of work they do (seemingly easily digitally quantified work) and the medium they use (i.e. a computer), are simply fundamentally different from how work was prior to digital technology.

In this new dynamic, questions around “can we?” become less relevant. The question becomes “should we?” — or, often, it is not questioned at all, it is just done.

An unpalatable frontier

Even within the umbrella of “should we” comes the question of palatability. Is worker surveillance palatable to the employer and employee? It shouldn’t be surprising that for a myriad of reasons this is unpalatable to the employee, but the effect on the employer can be a questionable one as well.

In an article we discussed, a NYT employee installed time- and screen-tracking software and asked his manager to use it to monitor him. The manager did indeed do that, but began to feel ‘icky’. This is a major issue — new capacities for technological action on the part of managers appear, and managers themselves have to overcome their own ethical boundaries.

A depersonalised human

It’s not just that it’s icky, however; it’s that it is far easier to depersonalise the human at the other end of the computer. Our salon discussed how it is far easier to compress employees into a quantitative outlay of metrics with these technologies. Things like mouse movements and keyboard activity can be tracked. This is a dangerous precedent, as we noted that these aren’t reflective of work output. Designing, for example, may involve sketching on paper or just thinking, perhaps away from the computer. Coding may involve a lot of reading, which may be perceived as inactivity. Moreover, because data is intrinsically reductive, it is easy to fool, and it would also likely be subject to abuse and corruption. We noted how it would be easy to create incentives to compete on these small metrics rather than on other, likely more qualitative outputs (team building, learning, etc.).

Indeed, one of the articles we read was about Taylorism, a managerial strategy created a century ago, which worked by:

“breaking down tasks into inputs, outputs, processes and procedures that can be mathematically analysed and transformed into recipes for efficient production.”

Needless to say, this resulted in people being treated like machines, with employers carefully timing each action and squeezing efficiency out of people by making them complete mindless tasks.

We felt that this ‘depersonalised human’ is a distinct danger with surveillance tech. Interestingly, some of us mentioned that we already feel like we are being depersonalised. Some of us mentioned calendars and standups as perhaps being used for purposes they weren’t meant for, that is, as ‘evidence’ of productivity.

The new abnormal

And these existing, creeping forms of depersonalisation point to the problem of normalisation — the worry that if this behaviour becomes normalised, the ‘ickiness’ will dissipate. If these technologies are treated as something everyone uses, then people won’t feel as ‘icky’ doing it, given that this type of spying feels natural. Indeed, treating people as digital objects or resources could begin to feel normal, as economic theory and Taylorism did in the non-digital world (e.g. ‘human resources’).

It follows that it’s only unnatural because it’s not currently how workers are treated online (at least, mostly), but there’s no reason why it couldn’t happen, especially with the precedent set by Taylorism.

We also discussed Jeremy Bentham’s panopticon, a prison design in which prisoners’ cells were situated around a central hub of guards. The prisoners never knew if they were being watched. In a digital, globalised society we have become used to being watched. The philosopher Foucault used the panopticon as a metaphor for society. It wasn’t always normal to be observed, he noted; it’s only through the nation state and institutions supporting this behaviour that it became an expected state to be in.

Presidio Modelo, a panopticon

If surveillance tech is supported by institutions as remote work marches forward, then the abnormality of digital surveillance could become the norm.

A panopticon for the untrustworthy

A lack of trust from managers toward employees is certainly one way it could become normal. We discussed how employers are being pushed by some of these ‘time-tracking’ companies (such as Time Doctor), and even by the capitalist system at large, to not trust employees. We discussed the idea that this made little sense — people should be trusted; for one, they likely would not be at the company at all if they had no interest in contributing (at least for employees with employment mobility). Additionally, if someone isn’t working or contributing, it may be difficult to understand why this is the case, and surveillance may become the ‘go-to’ solution for deeper issues. This is a psychological, organisational and societal issue, which can be challenging to parse.

The underclass always loses

But of course, poorer people and those deemed ‘unskilled’ are almost always trusted less. We discussed how individuals who have more to lose, or who are treated as more disposable, will be less likely to protest against these surveillance methods. They are also often viewed by those in power as grifters. Indeed, with remote working, as in many other areas of society, they disproportionately lose out under new systems of power.

So, how can we ensure that these themes don’t come about? We discussed some solutions, below.

Aggregate, don’t individualise

We felt that any surveillance tech, if it must be used, shouldn’t be individualised in a way where people are spied upon in ways that do not account for their qualitative output. Methods such as screenshots or keystroke tracking are not only rife with ethical issues, they are also ineffectual. Instead, tracking workers in aggregate to find key patterns is a useful way to understand how people behave, and what tools, methods or contexts may be useful for improving workers’ lives and how they can contribute to an organisation.

Champion workers’ rights

We thought ‘knowing your rights’ was a vital step to defend against this surveillance, yet workers’ rights with respect to their work computers are sadly limited. In the UK, it’s perfectly fine for employers to monitor employees’ emails and web history. The company just needs to tell employees (and in fact it doesn’t always — EPA guidance allows for covert surveillance of employees). Current laws in the Data Protection Act are pathetically limited, with ‘guidance’ merely suggesting:

“If e-mails and/or internet access are, or are likely to be, monitored, consider, preferably using an impact assessment, whether the benefits justify the adverse impact.”

Given the issues discussed, and the onset of remote work, this is something that needs addressing.

Champion change

The only way that these laws can change is through championing change. We discussed how just talking about it — whether in person or on the internet — is vital. Monitoring may seem normal as something that goes hand in hand with remote work, and challenging this narrative will require a great deal of discussion at all levels. Even challenging a surveillance paradigm through metaphors can be of help. Noting that conversations in offices are not monitored, even though the company owns the building, is a useful analogous framing.

Perhaps more than that, work computers are certainly company property, but what occurs on them on platforms that aren’t related to work should not be — computers are now far too intertwined in our lives, like infrastructure. In Changing Things: The Future of Objects in a Digital World, the authors liken digital content to a sort of ‘fluid assemblage’, assembled from a wide variety of technologies, systems and data into what the user sees on screen. How digital content is owned may need a conceptual change, and it’s likely that only through actively and loudly championing change can we make this happen.

P.S. If you’re interested in joining our Philosophy and Ethics in Tech Salon, email me at vikramsinghbc at gmail dot com! All are welcome!

#30
June 6, 2020
Read more

Facebook is already an arbiter of truth — it even creates truth

We get a warm and fuzzy feeling when we dwell on concepts like truth, love, and freedom. They seem so immutably transcendental — these concepts have no single physical correlate. Instead, we feel like we can point to them high above us as vague yet unchanging figures. Still, we strive to reach them — perfect forms for our capture (or our dismissal, if they are negative concepts). Plato, in his Theory of Forms, would argue that love has a perfect, unachievable form; all love in our world is a mere shadow.

Plato and Aristotle discussing something *more perfect*

#29
June 2, 2020
Read more

Perceiving and acting are forms of thought — product design needs to recognise this

Umberto Boccioni, 1913, Dynamism of a Cyclist

I spend my working days at a company that builds a social media management platform for charities. We recently conducted user testing on landing pages that advertised our product and kicked off our onboarding process. The idea was to explicitly ‘get across’ what our platform was like prior to having users sign up and actually use the platform. We wanted users to ‘get it’, and understand the advantages of our platform without actually having to use it first (as signing up can be a barrier for some people).

But in testing our advertising and landing pages, we received a lot of comments like:

“I wanted to know what the tool feels like”

“I just want to get to grips with it”

“I want to just have a bit of a play around”

People seemed to want a visceral experience with the tool. We tried clearly explaining what our platform was like in videos, descriptions, and images. We represented the platform and what it does in explicit detail. But it wasn’t enough. The participants had an almost indescribable urge for tangible experience, to know what each step of our tool felt like. They couldn’t put their finger on it, they just needed to use the platform.

Why do people feel like this? Why do people need to try out tools to ‘know’ them, even if they’ve seen them represented in explicit detail?

We like to think that we are in essence just brains floating ‘outside’ the world as impartial observers, with sensory apparatus like our eyes inputting data that we can process and act upon. We consider our cognition — our ability to understand — to be akin to computer processing.

So, when we talk about our cognition, we say things like “I need to process that”. We analogise the brain as hardware and thoughts as software, as though we are in essence an electronic machine. Importantly, we also consider thinking a linear sequence of perceiving, planning, doing, and interpreting, much like a computer program. We input data into our mind, process it, make a plan, then enact it, and interpret the results. You might call this a ‘computationalist’ theory of mind. Of course, it’s more than a theory, it’s a sociological metaphor. Metaphors are extremely powerful; the philosophers Lakoff and Johnson argue that we understand our world through metaphors.

Accordingly, a great many of the tools we use have been designed in a way that reflects this metaphor of our cognition.

But we are not computers.

The way we go about knowing the world is fundamentally different.

We have bodies. We evolved with bodies. We evolved with our environment.

As our brains are parts of our bodies, they evolved with the rest of our bodies, and alongside our environment as well. Our ability to think wasn’t ‘created’ and it certainly wasn’t ‘created’ with an end goal in mind, such as processing information.

Think of our cognition, then, as being embodied — as part of our bodies, as a thing that has a context, a materiality, and a history of development. This means our cognition isn’t just thinking with the brain, it’s a systematic whole that involves perceiving and acting in and on the world.

Our perception is linked to interpretation — seeing faces in clouds, not noticing changes (‘inattentional blindness’). Even basic things like recognising shapes, shadows, edges, movement — these are constructed as a perceptive act. We see the world not just subjectively, not just from a different angle than other people, but as a unique, on-the-fly construction. Our perception is attuned to interpret sensory input in a way that constructs meaning, based on past experience and on our biological evolution (we are attuned to recognise faces, for example). But we do not consciously think any of this out — rather, it is anticipatory, immediate, and implicit. Yet it is sensible to say that perception is part of cognition, in that it is a part of how we enact our individualised sense of the world.

We use our actions to alter the world to help us think. We organise our world to help us remember where things are, or that we have to do something: a note by the door; all forks in the drawer by the fridge; clean clothes in that basket not that one. Action reveals, organises and groups — it interacts with how we think about our world. Acting on the world can take the burden off our brain — and in doing so it becomes a cognitive act (the academic David Kirsch referred to these as ‘epistemic actions’ — actions intended to facilitate information processing rather than achieve a pragmatic result).

These two elements — action and perception — are tied very closely to one another as well. The philosopher Merleau-Ponty gave the example of a blind man using a long stick to help him navigate his world through touch. The stick becomes ‘transparent’ to the man — he stops being aware of the stick as a separate object in space, but instead his focus is on how the stick interacts with objects in space. Perception and action are intertwined in an act of cognition. The same is true of all objects we interact with when we use them as tools, as well as our bodies.

This man is not focusing on the stick, but the feel of his surroundings via an embodied stick

So, let’s start a sentence that builds on this point:

Perceiving and acting are part of cognition.

Great. But it isn’t just that action and perception are a part of cognition; they are creative acts that feed back into themselves.

It’s perhaps easiest to understand this by comparing our cognition to a computer’s processing. You don’t plan your actions then enact them robotically the way a computer would. You just act, you just perceive — your actions aren’t analogous to you explicitly thinking: “Now I’m going to look to my left; next, I’ll reach over with my left hand to grasp a magazine”. While we are aware to varying degrees of how our body is engaged with the world, we are to a greater degree reflecting on wants, desires, feelings, etc., and that output manifests as actions and perception. Ours is a generalised intent rather than a specific plan.

What’s more, as you act/perceive, the feedback from you doing it informs the next action/perception activity you undertake. Think how you explore what you are saying as you are saying it; when you are drawing, the act of drawing helps you to understand the shape and detail of the drawing as you are drawing it. Each action is an expression of cognition, of what you are thinking. Each act is a feedback loop that is inseparable from the next act. We do something and in that doing we learn more about what we are doing.

The anthropologist/philosopher Lambros Malafouris has argued that, in this way, cognition cannot be divided from our world: “material culture is potentially co-extensive and consubstantial with the mind”.

So, normally, our immediate actions aren’t explicit. They are responsive, instinctual, implicit activity — more of a vague intention than a plan. Much like Daniel Kahneman’s System 1 thinking, we act and perceive without carefully modelling each activity we are going to do, and then planning how each activity is going to ‘run’ on the world. We just perceive and act to help us create an understanding.

Kahneman and Tversky’s System 1 and System 2. Via Eva-Lotta Lamm

This gets very abstract in certain actions which seemingly have no relation to what we are thinking about. Think about gesturing — people think it’s a way of communicating, but that’s very often not the case. Blind people gesture, for example.

This is why my research participants earlier couldn’t specify exactly what they meant — it’s very challenging to express how the combination of action and perception can help you understand things. It’s an intuitive understanding that isn’t just about impartially observing how things work, but implicitly understanding a process or tool by conducting a sort of acting/perceiving loop upon it. And it’s worth noticing that this is different from ‘practice’ — practice is about improving on an existing knowledge base, not creating an initial experience of embodied knowledge.

Let’s update that sentence:

Perceiving and acting are an embodied part of our cognition that helps us intuitively create an implicit understanding of our world.

But of course, we can’t just create a world to understand out of nothing. Our world only allows for the explorations it ‘affords’. This idea was pioneered by JJ Gibson, who coined the term ‘affordances’. An affordance, in his reckoning, simply meant a situation that enabled a possibility for action. A stick can be used to hit someone with, or to point with, or as a sensory tool for our previously mentioned blind friend. But different objects afford different actions better than others. Stairs afford stepping given their shape — you would be hard pressed to do something like lie down on them; a bed would afford that much more effectively. Affordances don’t even require our awareness: a hole can be used to hide in, but it can also be fallen into by the unaware.

The handle affords a specific type of grasping

Again, let’s update that sentence:

Perceiving and acting are an embodied part of our cognition that helps us intuitively create an implicit understanding of our world through affordances.

So taking us back to our original question: we need to act and perceive to help us create an understanding of our world through affordances. And when we do, it’s often implicit action formed through generalised intentions rather than plans. And of course, these can only happen through affordances. This is what my research participants wanted to do.

There’s a problem in all this however.

The problem is that computers and the software on them are designed for people who act like computers. Obviously this was worse in the past, but it still remains.

We still ask users to create mental models of information and interaction structures that they can’t possibly grasp without significant experience with our products. And people find it difficult, or at best laborious, to understand the situation that doesn’t reveal itself through the kind of embodied cognition discussed. We force users to build representations and then make them navigate those representations in their mind to understand how an interaction would work. We force them to model it rather than generate implicit understanding through embodied cognition.

It’s much easier to define a structure that expects a person to linearly process concepts rationally into a whole than to apply concepts of intuitive understanding through perception/feedback loops, as I’ve discussed.

But the divide of the world into perceiving, thinking and doing is a false one, or at least false enough that it has harmed the efficacy of digital products. This division between perceiving, thinking and doing is an artefact of the society and culture we find ourselves in. There’s no reason it has to be this way. It’s just the computer metaphor.

To be fair, it can be very difficult to create an embodied learning within the realm of digital products. HCI academic Paul Dourish touched on this in his book, Where the Action Is. He notes that we implicitly ‘couple’ with things in our world (like a hammer) to get things done through affordances, but it’s very difficult to parse how we ‘couple’ with digital technologies because of the many layers of abstraction. In this way, it can be difficult to parse where the embodied action ‘lies’.

Still, there is a lot we can do to allow for it — so let’s remember our sentence and look at some examples of how to implement it:

Perceiving and acting are an embodied part of our cognition that helps us intuitively create an implicit understanding of our world through affordances.

Allow for guided doing

Computers and touchscreens are notoriously poor at providing clear affordance of action, given that screens are not tangible in any real sense, and are buried under layers of abstraction and interface. What I call ‘guided doing’ is the act of helping to create an intuitive understanding. By gently guiding someone through an action we allow them to understand the situation and how they are embodied in it.

You can see this in product tours — ours here as an example:

We at Lightful created gentle, stepwise product tours that got users to take the steps to connect their social media accounts and create draft posts. While some users closed the tour, a good portion of our users continued through it. New users who went through the tour posted more using our platform by quite a large margin.

Product tours are not perfect because it’s not just an implicit action-response the user undertakes. Instead users are required to read and ascribe an embodied meaning of the action through words, rather than just through action. However, product tours help by normally blocking out parts of the screen, and focussing on a single step in a way where perceiving and acting are the key activities, rather than explicit thinking. The objective of product tours is not just ‘showing rather than telling’, it’s requiring users to practice actions, integrating intuitive, visceral understanding of the rhythms, affordances and feedback of the product.

At Lightful, we tried explaining our product, as though that would be sufficient — ‘if they can read about it then they understand it’, we thought. But this wasn’t nearly as effective as just getting someone to use the product in a way that embodied their understanding.

Words can be interpreted very differently. Semantics can’t communicate the implicit, embodied knowledge that embodied cognition brings. And this is vital for someone knowing and liking a product. When we got people to use our product with product tours the knowledge they received was unambiguous — there was an intuitive understanding framed by semantics.

Abstracted play

Abstracted play is the divorcing of the UI layer — the ‘noise’ — from the page to get the user to focus on what is relevant in a simplified, abstracted way.

You can see how Trello does this by creating a simple wireframe of their site and describing in simple words how to use their product. This is part of their onboarding process, in which people are still understanding the affordances.

Trello’s efforts bring affordances into clear view. The perceiving and acting become very simple. Our perception-action cycle isn’t overwhelmed, trying to make meaning and finding affordances in a busy UI — it’s stripped back so the perception-action is straightforward.

What’s more, the user can see the result of their action in a highly visible manner. As they type, they see the names appear on the Trello columns to the right.

You might call this making the ‘system image’ clearer in Don Norman’s mental model structure.

However, we aren’t asking the user to understand the ‘system image’ explicitly. The perception/action loop is doing the work. Much like the blind man with the stick, the more ‘transparent’ you can make the correlation between the instrument and the effect, the better the embodied understanding will be.

Microinteractions

There are so many microinteractions that do nothing to give the user an indication of what is happening. Rather, they look flashy, and pat a visual designer’s ego. Sure, some of them add an aesthetic flair, but many actually get in the way of an embodied understanding. Take a look over at Dribbble for some over-engineered animated microinteractions (I won’t place any here so as not to insult anyone).

Microinteractions should work as signifiers, affordances or feedback. Material design offers a good example of microinteractions working as part of a larger system.

As the material design guidelines state:

“Motion focuses attention and maintains continuity, through subtle feedback and coherent transitions. As elements appear on screen, they transform and reorganize the environment, with interactions generating new transformations.”

Of course, material design isn’t a microinteraction, it’s more of a design system, but it contains a number of useful microinteractions. These include panels and drawers ‘swiping’ ‘in’ and ‘out’. The user can interact and get immediate feedback which then feeds into future actions.

The problem with material design is that it is not always clear what affords what. Can you swipe everything? How do things that slide offscreen re-appear? Affordances, we remember, are possibilities for action.

The best microinteractions are those that are visible, have a clear affordance, and clear feedback when interacted with. Scroll bars are so successful because they require only perceiving and acting to understand. If you didn’t know how scroll bars worked, you could intuit it through action and perception: the scroll bar moves as you go up and down the screen.

Don’t require people to build a model of how things work

In the past 10 years or so, new digital creative tools have overwhelmed existing legacy tools. Adobe and Microsoft’s tools and many other older legacy software tools have been pushed from the spotlight. Sketch and Figma have replaced Illustrator and Photoshop in many areas. Keynote and Google Slides have shown PowerPoint the door. And so on.

Why?

Legacy tools have an underlying structure that belies how they see the user: as a computer, as a non-embodied cognitive agent.

These tools have many modes, invisible to the user. They don’t clearly reveal a user’s action. They overwhelm with unclear affordance in their UIs. They require that a user be taught how the symbolic creates an action (rather than just affording action), and how the model of all of the actions work with one another. It’s a significant cognitive overhead for the user that, in the past, engineers would claim is necessary.

You may argue “But I get Illustrator, it’s so simple”. Well, it’s likely because you have been trained, or watched videos about it, or Googled a great deal to understand the interplay of the modes, settings, tools, symbols, etc. You cannot pick it up and start using it effectively like you would a hammer, Sketch, or Figma.

This symbolic knowledge is predicated on a lot of pre-existing learning

It’s increasingly clear that good design must incorporate a sense of embodied cognition to make tools more immediately useful and usable.

But this principle is far, far from the ‘less UI is better’ canard. Indeed, less UI can often hide affordances, making it very difficult for a user to get an embodied understanding of a tool — everything becomes invisible and hidden.

Remember how we were talking about how distinguishing between thought and action was a fool’s errand? Well, this should be reflected in tools. If I want to do something, it should just happen in a way where the goal is what is relevant, not the tool used to achieve the goal (ready-to-hand in Heideggerian terminology).

Context sensitivity, awareness of skill level, feedback, and consistent, predictable patterns can all help. When I act, there should be a clear reaction to my actions because I will attempt to both implicitly and explicitly make meaning of my actions regardless — and we should use that to help a user to understand. We shouldn’t ask them to build an enormous, complicated mental model of our tool, then shove them out into it. We should let them poke at it, and show what happens when they do. In that way, the tool can reveal itself to them through an embodied understanding.

One of the most basic features in Sketch, for example, is that by pressing CTRL you can visually see how elements interact, their spacing, and their alignment to one another:

There’s no question as to what’s happening — spaces are shown and by moving objects we can see line length and space change. A user does not have to imbibe an entire mental model to understand this interaction.

There are certainly some highly technical tools where embodied interaction is difficult. Obviously, an aircraft controller won’t be able to poke and prod her way around tools in an embodied way — the entire mental model needs to be understood prior to using the tool. That, however, does not mean that the learning methods for the tool cannot be embodied.

The fallacy of separating the mind from the body has a lot of pernicious effects. Crappy digital products are probably the least of the problems associated with it. Still, starting from the ground up can change cultural practices on deeper levels. So, when designing something interactive, ask yourself these questions:

How can I embody the user’s actions?

How can I ensure that users don’t need to fill in the gaps of an interaction model in their mind, and instead represent it all onscreen?

How can I make feedback as reactive as possible to action?

How can I ensure each action leads to a better understanding of the next action?

How could I build my tool in such a way that a user who couldn’t read would understand it?

And we’ll all be well on our way to a more embodied world.

#28
February 22, 2020
Read more

The 2020s will be a reckoning with our past: lessons from Disco Elysium

The world is endable. It may be ending now.

No, seriously. What I mean is that the potential of our world: democratic, open, progressive, free — it can end.

Of course, the natural hubris of looking from within a time period counteracts this narrative; the state of the world appears inevitable and immovable: there’s no way that basic things like democracy can end!

Of course it can. All societies can end — we just don’t believe it. We think of ‘ending’ as a dystopian society decimated by an apocalypse. But that’s not what will happen — it will be a death by a thousand cuts, all in the name of ‘good’.

If we look closely (actually not really — it’s blindingly obvious), the progressive apparatus of society is ending everywhere. In India, Turkey, America, the Philippines, Poland, Hungary, Brazil, etc. In some places it never had a chance, like Russia and China (and there’s no point in listing theocracies or totalitarian regimes here). And although some academics like Steven Pinker think that society is much better, capitalism and selfishness still create drastic inequality and murder trillions of animals a year.

Democratic society, liberal views, cosmopolitanism, press freedom, independent judiciaries, they’ve all been battered by populism, nationalism and religious zealotry to such a degree that if people from the bright, revolutionary ’60s saw us now, they would think it a joke. The idea of the holistic whole, the idea that we’re all in this together, ‘global citizenship’ — they are all being rounded up in the streets and ousted in performative farces of national victimhood and anthropocentric chauvinism.

The main horror on the horizon, of course, is a fully existential one: climate change.

So this future decade could, in many ways, be the end of things.

In this past decade, one of the most important pieces of art I experienced was a video game, surprisingly. Called Disco Elysium, it tells the story of a cop solving a murder case in a fictional world, somewhat like ours. But this is akin to saying the Bible is the story of a carpenter.

The video game is about history, prejudice, nationalism, meaning, longing, and existentialism. Its scope far exceeds what you think may be possible in a video game. The world you find yourself in, in Disco Elysium, is one of a decaying city, held up by an international entente and global capitalism. Factional struggles between unionists, fascists and capitalists form a backdrop to the individual struggles of people trying to flee torrid pasts.

Yet all of this is a tapestry that has — quite literally — holes in it. The world, you see, has actual holes in it that are growing, swallowing up the world in something ambiguous called the Pale, which is perhaps a void empty of meaning, or perhaps an aggregate of human memory, subsuming the present into the past.

But the residents of this world have put blinders on. Though they are aware of it, they choose to ignore the ending of the world. Indeed a key question of this game is what it means to make meaning in a world that is indifferent, in a world that is dying.

The sharp contrast of this imagined world next to ours can’t help but push the player to reflect on how we, as humans, and as a society, are so utterly unmoored from the substrate of mattering that undercuts all that we are, and all that we do.

The desire to be something, to create a space of happiness for oneself, even just the preoccupation with oneself as the centre of the universe, divorces us from thinking about what actually matters on the deepest levels. The meaning at large: where are we headed? Why are we so concerned with others like us? Why do we go to work each day? As in the minds of the people in Disco Elysium, our minds aren’t concerned with the bigger questions; we too are unaware of the potentiality of our world to end, or indeed the slow ending of our world.

One lesson, however, is clear from Disco Elysium: the past cannot be escaped. Without spoiling too much, even the murder that you are trying to solve ends up being due to the past clawing its way forward through time to pull the figurative trigger.

All of our actions, all of the movements of people, places and things, lead us to right now. The built world, your person, technology, culture — everything — it’s all due to the past. More than the past pushing itself into the present, it directs and constrains our future.

In 1992, Francis Fukuyama declared the “End of History” — meaning that the present would only be conceived by its own logic, not the past’s. The Cold War had ended, and democracy, liberalism, and capitalism reigned supreme. Of course, this wasn’t true — the past has only become more difficult to parse. And of course, democracy and liberalism are crumbling; it seems that only capitalism remains on the upswing (inasmuch as it is compatible with populism and nationalism).

It’s clear that, in the wave of populism that indulges in historical grievance and ethnic superiority, history is claiming its territory within the expanse of ‘the Now’. And expansive ‘the Now’ is. The digital landscape is ripe territory for vast, fertile fields of minds afraid of the future, clinging to the past.

For every new wiki, there’s a thousand trolls. For every social enterprise startup, there’s a propaganda bot army.

And climate change? Well the fuse was lit long ago, long before ‘the Digital’ became a thing, so it’s just a matter of how much we can contain the explosion.

What’s more, our indulgence in the past, to do what we have always done and are supposed to do, means we cause suffering and death to trillions of humans & animals, and contribute to the ecological destruction of the world.

There isn’t a ‘satisfying’ resolution in Disco Elysium, at least insofar as you expect perfect closure in your narratives. Again, without spoiling too much, the case somewhat solves itself, and you continue on your way as a cop, or you don’t. The world doesn’t care, but it will slowly decay.

The characters mostly don’t come to terms with their pasts, and as such, it dominates them.

Facing our past, too, is a lesson. We have to face our past — we have to honestly come to terms with our grievances, habits, cultures, rituals — and conceptualise how we can avoid being dominated by them, to imagine a better future. Perhaps it’s futile. But being aware of the past, by mulling on how it interweaves into our present, and by calling it out, can help us. And perhaps, sadly, the only way we can do this is by being honest about how the world can end, how the world is ending. Not in an explosion, not like in a movie, but like in Disco Elysium: pulled down in the spiralled embrace of a thousand tentacles lured from the past that we choose to ignore because we think: that’s just how it is.

Happy 2020.

#27
December 31, 2019
Read more

The False Tech Gods Will Not Offer Us Transcendence

In the final scene of El Camino, the new Breaking Bad movie (spoiler upcoming), Jesse drives off on a long, curving highway towards the snowy peaks of Alaska. Via this scene we know his fate: he has succeeded, won, and will now live a happy life.

Stories often end with this notion. In the closing scenes of a movie, characters quite literally drive or walk into an imagined Utopian state. They transcend their story and now live in a permanent stasis, an enshrined bliss. Their problems are fully resolved so off they go into their final, perfect state.

In books, movies, comics, plays, operas, and everything in between, the story almost invariably ends with an unambiguous finality. “…Happily ever after,” the cliché goes.

This narrative of 'transcendence' is written into our lives. If only we could solve the problems in our life, like in those stories we read, we would transcend into a blissful state. Transcendence in terms of a narrative indicates a final state that both surpasses human foibles and demands that no new problems shall arise.

We view technology as offering transcendence as well

This desire for a narrative finality embeds itself in all aspects of life. We read religion, politics and the media as narratives that involve a sense of transcendent finality. Religion speaks of a transcendence to the afterlife, political parties speak of creating utopias just so long as you vote for them, and the media sculpts stories with endings that offer a cathartic transcendence. The criminal is caught, the hero rewarded, and that is the end of the story: the Just is praised perpetually, the Offender is punished eternally. If there's more, we don't hear of it — it isn't interesting! The nuance of the post-narrative dulls the catharsis.

Our attitude toward technology is much the same.

People like Ray Kurzweil speak of a literal transcendence by moving our squishy brains into silicon chips. He speaks for much of Silicon Valley, who collectively seem to believe in the singularity, wherein all human problems will vanish with an exponential technological cascade — true, theological transcendence.

Though these techno-utopians may not always explicitly refer to some form of transcendence, they implicitly suggest it by pointing to the unequivocal bliss that their product or service will engender:

“I would much prefer having my first screening with an algorithm that treats me fairly rather than one that depends on how tired the recruiter is that day”.

This is a quote from Loren Larsen, Chief Technology Officer of a company called Hirevue. The company's offering is a technology that uses facial recognition to weed out candidates deemed unsuitable based on their facial cues. The quote points to a common idea among believers in technological transcendence — that the human is the problem that tech will fix.

Most technological entrepreneurs, especially those with massive platforms or new technological mediums, tout the transcendent. Mark Zuckerberg claims that Facebook could have prevented the Iraq War:

I remember feeling that if more people had a voice to share their experiences, then maybe it coulda gone differently.

In other words, war is something that a technological artifact can overcome without consequence. Our violent nature is something we can transcend (through Facebook). This, despite the fact that Facebook was originally used as a means to rate women's attractiveness.

Even economists thought the same. Keynes believed that we would largely transcend 'work' through technology and science, and that boredom would be our greatest enemy.

These statements are substantively different from the transcendence proffered by religion. Yet in so many ways these carry similar weighty implications: humans are a problem that can be solved in perpetuity by something. Our violent nature, our bias, even our death of old age — these aren’t problems to be worked out through humans, but by an “other”: technology.

But there is no transcendence, least of all through technology

There is no perfect state.

It’s romantic and often poetic to think like this. It is also, of course, false. There is no perfect state. We solve some problems, but new ones emerge (or we intentionally create them). The story doesn’t end. We don’t transcend the idea of problems or ourselves.

Ours is an existence of perpetual striving, whether we are aware of it or not. Philosophers like Nietzsche, Schopenhauer and Sartre understood this, as did many others.

And technology, of course, has never allowed us any type of transcendence; it just reshapes our relationship to the world. There is no end point; there is no point at which a piece of technology solves all our problems.

Certainly qualities of life change through technology — often for the better — but technology brings with it new problems. A spear made it easier to kill our prey, but also one another. The printing press allowed for the dissemination of knowledge, but also falsehoods. Social media allowed people to keep in touch with one another to a wide and instantaneous degree but…surely I don’t need to list the litany of problems with social media.

The key isn't just “with good comes bad”; it's that we don't transcend, we cannot transcend our humanity through technology. Visions often paint humans not as having problems but as being the problem, one that technology will fix. Humans, as a species, are riven with problems and chaotic impulses, but these won't be solved through technology (nor anything else for that matter). We, as finite beings moulded by evolution and our world, will always suffer in one way or another.

The philosopher John Gray summed this up well in his book Straw Dogs:

Technological progress only leaves one problem unsolved: the frailty of human nature. Unfortunately, that problem is insoluble.

So if we do not transcend ourselves through technology, what happens?

What appears with new technologies is new states of being-in-the-world: the key word is new, not better. In these new states of being, new capacities, concepts, and relationships occur. The philosopher of technology Peter-Paul Verbeek presents us with the example of the ultrasound. Yes, it allows parents to 'see' their unborn baby, but it also presents new responsibilities and ways of thinking. It forms a wedge between ideas of the mother's body and the baby's. It screens for signs of Down Syndrome. This, then, forces a difficult choice that parents otherwise wouldn't have: should they continue with the pregnancy or not?

In this way tech creates new capacities, which bring new challenges and problems, all while solving old ones

There is an argument that markings on ancient pots represented what was inside, and later were abstracted away from any physical correlate — the beginning of numbers.

In addition to generating new capacities, relationships and concepts, technology also extends them. Many argue that technology has helped us think and consider new concepts through immediate access to a variety of information. We can compare and contrast information in the physical world in ways that appear closer to thinking — but outside the boundaries of our brain. Our mind, extended into the world. Is this transcendence?

No, our minds have always been linked to the material world. New capacities slowly emerge as we use tools to help us count, create language and form societies. But even as we change — even as we develop language, tools, and societies — we still deal with stress, pain and anxiety.

Of course, it’s not as though we live in an unending hellscape — new capacities bring both the good and bad. For instance, some argue that the extending of our minds into technology is replacing our ability to remember, which I’ve argued may be happening, but in the course of this change new capacities for making and understanding relationships emerge.

But what about technologies that ‘think for us’ — won’t they help us transcend to new heights?

Some argue that displacing our will to algorithms and AI is good (see our CTO friend above), as they form purer, more effective and objective arbiters of 'what things are' (e.g., image recognition), justice (e.g., matching faces in surveillance videos to a database), and user behaviour (e.g., YouTube algorithms). While the displacing of our will is troubling for a variety of reasons, it's also an illusion that human will is fully displaced; it's actually just pushed further down the line. It's the will of the designers of AI and algorithms, and any ignorance or bias they embody, that is conveyed digitally. We see how this repeatedly causes problems.

New issues are created by our humanity, however difficult this is to perceive. This is why it’s vital to carefully evaluate even AI and algorithmic technologies in ways that reveal the human interaction in their creation and usage.

But it’s good to have a vision, isn’t it?

Of course we need visions. We need idealism. They hold us to a goal, stop our efforts from becoming an anodyne, ‘design-by-committee’ mish-mash.

But we have to be honest about whether that vision is anything more than a fairy tale. Is it like WeWork's vision, which crashed down in a comical IPO of empty promises and subsequent layoffs?

Technology is often sold as more than just a simple vision: because it is very difficult to perceive the impacts a technology will have, the more idealistic among us cling to the positive vision, the utopian, the transcendent.

But there’s a particular line of pessimism that I think is important to consider as we design, build and create. And in all honesty, this pessimism can be beautiful. Much art is devoted to the fallibility and difficulty in our being, in our finiteness.

Pessimism can help us be pragmatic. In his book Future Ethics, Cennydd Bowles emphasises spelling out how we will achieve our corporate visions. How will we achieve the lovely glowing words? He also mentions sci-fi and other forms of complex narrative-building as actual research. What's important is that we don't look at the bright shiny utopia, or the gloomy dystopia, but rather something that has bits of the bad and good.

Visions give us something to look forward to. But it’s important to parse the difference between a vision and an idea of transcendence. A vision defines a new way of being and transcendence implies a new way of not being. In other words: a way that will solve the problem that is ourselves, perpetually. It is certainly good to think about how we can solve very human problems, but the idea that a single piece of technology will do that is, to put it mildly, delusional.

What else can we do?

Focus on the human, the nuance, the complex.

It's understandable why we don't do this, though. The advantage of clear, uncomplicated visions is that they can be sold. Investors, employers, purchasers, clients, governments — they all need to be sold on something positive, not something nuanced with potential problems. But as more and more corporations are directed toward triple bottom lines, and values other than capital generation, the picturesque sales vision may become a thing of the past.

Opposing the idea of transcendence isn't cynicism, it's intelligence. It allows you to project your ideas realistically into the future through toolkits like the consequence scanning toolkit from Doteveryone. I wrote about a number of other ways to think about the future in this article.

Challenging unambiguous, transcendent final states isn’t cynical, it’s beautiful

I mentioned earlier that most movies end with an unambiguous finality. Of course, the better (in my opinion) movies end with ambiguity and thematic nuance. In Blade Runner, Deckard’s fate is left unknown (at least until the sequel) — is he safe? Is he human?

In the classic Japanese Ozu movie Tokyo Story, four adult siblings are mostly uninterested in their parents, and seem rather devoid of emotion when their mother dies. One character notices this and asks, “Isn't life disappointing?” “Yes, it is,” wistfully responds one of the only characters who cares about the parents, the widow of one of their sons. Despite her altruism, she leaves to an uncertain future, pondering a watch that ticks away her life.

There's no transcendence in this movie, nor, importantly, any implied after the movie ends. People change, and adjust to their new circumstances. Life is disappointing, yes, but also beautiful in its nuance and change — 'mono no aware'. This is what makes Tokyo Story such a beautiful movie.

So why can’t we think the same way about technology?

#26
October 30, 2019
Read more

UX can no longer keep up with our world: what comes next?

I make my living through the practice of UX. I enjoy doing it. I think it has meaning. It gives me meaning.

Yet UX is beginning to show its age. Its bones are creaking as it struggles to keep up with a technologically saturated, inflamed modernity.

UX trumpets its maxim of “putting users first” as the solution to all ills. “Users first!” it demands at a minimum, and indeed, at a maximum. This foundational ethos is effective and important, but also too narrow, too shallow, too limited.

This is how UX has summoned its own limitations. Whereas once it was seen as ground-breaking and eventually essential, the bar has now been raised, the world has changed, and our outlook has shifted. We can now see where UX is ineffectual and are able to imagine practices and theories of design that allow us to transcend UX's limitations.

The limitations of UX stem from 4 aspects inherent in the practice:

Solipsism

Anthropocentricism

De-mediation

Internalism

If we break these issues down, we can address them — and find out what practice might fill the gaps.

Issue 1: UX is solipsistic

Only a single user ever exists.

One single persona, one person, one user facing a computer, one mobile, one experience. When a UXer designs, it’s only for one person — the user. The user is the centre around which all UX design orbits. ‘The user” is the reason behind and for UX. This is solipsism — the idea that oneself is all that exists. UX is just this — the perception of a product or service from one person’s view. And only that one person.

Unfortunately, the implications of a designed object go far beyond the person using it.

This fact only becomes more true as designed objects increasingly exist in a multi-touchpoint, omnichannel universe.

An example here is Ofo, in which a bike and app are a designed infrastructure in service to one user. The system is user-centred, with a clever app that locates the nearest bike for the user and provides convenient access through a barcode scanner. Yet the larger community, whose streets end up cluttered with bikes, is not taken into consideration.

Even mobile phones and digital devices aren’t designed to acknowledge the needs of people around the primary user. People yell into their phones, disrupting passersby, or stare into their devices, ignoring how they physically interact with the people around them. This could be solved through a better design, emanating from a more effective design practice.

But UX lacks the scope to think about this in a significant way, especially from the perspective of a digital-first UX designer. Dan Hill suggests “Strategic Design” could remedy these issues — “externalities” — of tech. Strategic Design, he argues, is a framework for holistically designing at the scale of both the city and the individual.

He claims that individual fields within design fail to address design challenges:

Judged from a pure interaction design practice point-of-view, Uber is clearly an exemplary user experience. Yet judged from a wider urban design point-of-view, its impact appears to be hugely damaging, with vast numbers of vehicles incentivised to drive into the middle of cities, apparently leading to increased congestion and reduced public transport use.

He sees UX, urban planning, architecture and other fields orchestrated under a “Strategic Design” conductor, harmonising to address socio-technical challenges.

This is one of many forms of wider service-design oriented practices that push back on a bottom-line ethos and instead toe an ethical line, seeking to improve on the now tellingly parochial UX practice.

But with this broadening of scope of consequence to community it’s difficult to parse what is UX, and what is a different field entirely (though perhaps that matters little). A similar UX successor, Transformation Design, for example, employs participatory design techniques which are ostensibly included in UX (but, in my experience, are rarely used).

Yet regardless of the title, this widening of the horizon of consequence in design is inevitable. But it can't stop there.

Issue 2: UX is anthropocentric

In the era of the anthropocene, the primary force behind ecological effects is humans. But anthropocentrism has been at play since humans had language. This belief entails that humans are fundamentally different from other animals — transcendent — and able to rise to even greater heights through religion or science. We've embedded this way of thinking in our societies, in our language, and in our designed objects.

The notion of transcendence is one in which everything is viewed in a subject-object dichotomy, with humans being the subject and everything else being part of a series of objects. This isn’t true of all cultures of course, with many indigenous peoples viewing humans as a part of a larger system, or animals as subjects in their own right.

Yet these anthropocentric attitudes prevailed, reaching their peak in the modernism of the late 20th century, with its corporate, city, industrial and environmental planning.

The world is tamable, humanity claimed.

It's now, within the anthropocene, that we see the effects of our anthropocentrism: climate change, astronomical death and suffering among human and non-human animals, and a failing ecology. Our perception of all things as objects which we might control, extract and destroy in order to construct different objects has led us here.

When we design, we have little regard for the subjectivity of a natural ecology involving lifecycles of countless organisms, weather cycles, and geological forces. It's all just objects in a system that points back to us.

Of course, if you're designing the navigation menu of an app that sells kitchen utensils, you may wonder how your practice relates to preserving a severely melting glacier. These problems are bigger than anything we can design our way out of, let alone affect through the granular design of interactions in digital objects.

This is why the successor fields to UX see the practice not only changing its area of focus, but also its scope; the role of the UX designer should become one that escapes the silo of individuated user interactions to a focus on frameworks that incorporate larger, systems-based questions.

Will someone still need to design navigation menus? Yes. But we need to expand, to look beyond humans as the primary subject of affect, and instead examine the wider ecology as subjects in and of themselves. In this way, UX could become a mindset that investigates every decision in a product lifecycle.

Cassie Robinson offers a range of practices addressing this, from ecosystem design to consequence design (many of these also address the solipsism within UX). She offers provoking questions, such as:

  • What could you displace?

  • What are you accelerating?

  • What are you encouraging or incentivising over time?

  • Are you adding health in to this system?

  • How can you give prominence to care in your interactions?

  • How can you repair or maintain this system?

Anab Jain, too, looks to extend the frontiers of design beyond the human, as noted in her excellent talk:

Anab Jain’s fantastic presentation: a call to consider a Post HCI, ecological-first approach

Similarly, UX pioneers IDEO propose a “circular design” method to look at deeper ecological consequences.

Yet IDEO frame this more as a profit-driven exercise:

A new mind-set for business is emerging. It’s worth around a trillion dollars, will drive innovation in tomorrow’s companies, and reshape every part of our lives.

This doesn't bode well for the long-term sustainability of their idea. This is the issue with some successors to UX — they remain anthropocentric in outlook, treating financial gain as the motivator, without leveraging legal, political, and economic means to find value systems beyond the financial.

There's no way around it — looking at the bigger picture won't always be monetarily beneficial. But approaches that disentangle value from capital are necessary for our very literal survival and well-being, as well as for the survival and well-being of the animals and ecology we are enmeshed within.

Accordingly, the successor responsibilities of a user experience designer involve collective action in driving change. And not just surface-level changes of an anthropocentric, and accordingly, destructive system — but deeper structural changes altering how we go about deciding what and how to design, and what a ‘good’ design entails.

By some arguments, the application of superficial rather than structural changes is what happened with sustainable development (sometimes referred to as “greenwashing”).

Greenwashing via elkhiki

Even the IEEE Standards Association is waking up to structural changes. It makes bold claims about a form of responsible participant design, which aims to prioritise people and the planet over profit and productivity.

But hasn't UX always been somewhat antithetical to capitalism anyway? At its core, it was always about what's best for people, not capital. Ultimately, we need to expand on that idea in our collective visions — beyond just the human to the living ecology we happen to be a part of.

Issue 3: UX de-mediates

Traditional UX frameworks inherently view technology as a medium which the user can control and affect: the product is ultimately neutral with respect to the user. That this could be more than a one-way relationship was never cared for, or otherwise considered.

We see designs that are highly usable, but are actively ignorant or uncaring (or both) of the effect they have on the user and the world. We see this in how designers didn’t have the foresight or skill to reflect on what it meant that Facebook was addictive, created filter bubbles, or was able to generate political agency in its users. Yet Facebook is indeed capable of all of these, as recent history has shown.

Technological determinism — the idea that technology dictates how we behave — is not the argument here. Instead it's what the academics McLuhan, Latour, and most recently Ihde have discussed: technology mediates. Mediation in this sense means technology creating and shaping conceptual attitudes toward how we think about our world, and accordingly, how we behave in it.

Mediation occurs through new human-technology relations. It’s not the technology or the human by themselves, but the new relations that exist between them that create new actions and ways of thinking.

For example, “at work” means different things when we have constant access to Slack and work email. Ideas about what it means to plan and think about “going shopping” have changed with how we engage with ecommerce. “Being online” wasn't a thing 30 years ago, and it meant something different one or even two decades ago compared to what it means now. Indeed every technological artifact — whether we want it to or not — mediates, affording some behaviours and not others, changing how we think and what we think about.

The UX process has no space to scope out how a technology mediates. In this way it is actively de-mediating.

The UX framework wants to think of the product it helps to create as invisible or at least as transparent within a “Jobs to be done,” primary-task type of approach. But it’s not just your tasks that change with a new product: you, now mediated, have a differently structured life, which cascades to your thoughts, which cascades to your actions, which cascades to society.

At most UX has a mild sense of how a user's behaviour changes in relation to the product — i.e. “What will make them come back to our product?”. We see amoral, shortsighted academics like Nir Eyal and BJ Fogg cultivate this line of thought in their Machiavellian works (“A Guide to Building Habit-Forming Products” — shudder). Of course, there's no investigation into how behaviours and indeed thoughts change outside of the envisioned product use relationship.

Is a product incentivising unforeseen activities? Is users' understanding of the world changing based on how the product has affected them? Are old terms taking on new meanings? Are their roles in the world changing? We don't know.

Design frameworks other than UX fare better in seeking answers to these questions.

Speculative Design is one of the approaches that seeks to understand, among other things, future technological and societal paradigms, and the effects that these may have on people. “Design fictions” are very literally physical, embodied “future objects” that foster debate of possible futures. Participants in design fictions are intended to experience and interrogate how a potential future may impact us, our societies, and our environments. Inherently political, speculative design is a powerful tool for policy makers.

Design fiction playing cards via Garnet

In the study of Human-Computer Interaction, post-phenomenological research investigates the mediating influences that technologies have on people and their relations with the world. Post-phenomenological HCI sees people as interwoven in their environment, investigating the multi-dimensional uses of technology and how that affords different behaviours and thoughts. Peter-Paul Verbeek, a leading proponent of this approach, has an interesting course on Future Learn that I recommend.

Both of these practices are a long way from making an impact on design in the private sector. Once again, it's likely because these practices don't fit nicely into a process diagram next to accounts and engineering; they are inherently unbounded and political.

Issue 4: UX is internalist

Do you remember everything you need to know?

No, you do not.

Instead, you often remember where the things you need to know are. Important information is in Slack, or your email inbox, or on a note you scribbled and left near your door. These aren't just reminders; they are your memory, externalised. You implicitly realise this, so you don't put effort into remembering.

But your environment functions as more than just your memory. When you are writing, designing, or doing some other creative or informational task, what does your environment look like? If you are doing your taxes, you likely have various bills scattered around; if you are designing, you likely have design inspiration littered around you. This is an active cognitive process — you are using your eyes to call up information as you need it and integrate it into your thought processes. This is called epistemic action.

This theory that your mind extends into the world is known as extended cognition.

With the interweaving of our lives with digital technology, the plausibility and explanatory capacity of this theory has only increased.

You offload your directions to your map app. You store your memories in photos. You have browser tabs open that you cross-reference with each other. This ecology floats next to you, interweaving with your life, accessible from different touchpoints.

But UX doesn’t care to examine how people think and remember using objects. It states that a person thinks toward an object in the format of

person → object

Yet as extended cognition theorists have been saying for years, we must consider the coupling of person plus environment as a single bilateral unit in the format of

person ⟷ environment

- a single unit of thought.

This reframing shatters how we consider the manipulability, transparency, and personalisation of tech. Just as we don't consider the subjectivity of the world around us, we don't consider how we integrate into this greater subjectivity.

Entirely new affordances can appear by shifting our horizons to consider epistemic action. Think of a set of Scrabble pieces in front of you. You physically rearrange your chance-determined pieces to investigate prospective words. This physical act of thinking creates connections, in the form of words, that you may not have seen otherwise.

Now consider a much more complex information environment a user may create in a tightly coupled human-device relationship — what connections might they be able to generate? Everything they've read in the past week, each song, their browser history, structured, sorted and fungible in ways that flex and fold and bend together. Users offload, manipulate, contrast, reference, theme and associate within this ecology via their mobile, laptop, or any number of other devices. In doing so, a user is able to shift their focus from being directed towards individual content to the relations between content: patterns, associations, themes, etc.

How can we possibly design for this? What are the frameworks that help structure taxonomies? How do we even begin to conceptualise this?

I’ve yet to see any UX framework/process that even begins to address this thought, but philosophers and cognitive scientists have begun putting together conceptual categories for examination.

Andy Clark and later Richard Heersmink have suggested we ask questions about the nature of the human-environment cognitive couplings such as:

  • How reliable is the connection in terms of what is required to maintain it (e.g. electricity, distance etc)?

  • How durable is the connection in the face of stress such as uncoupling or coupling?

  • How can information gathered through the coupling be trusted?

  • How transparent is the process for transmission of information?

  • How easy is it to interpret or understand the information that is transferred?

  • How easily, and to what extent, can we personalise the cognitive coupling environment?

  • How does the cognitive coupling transform our brains?

How this applies to digital environments is likely highly complicated. Yet I haven't seen any conceptual frameworks even try to make sense of our personal digital ecologies.

But it’s clear that as we become more tightly coupled with our technologies we must at least attempt to understand how we think with our environments. Because this is already happening. And we have to be able to conceptualise it in order to design for it.

All of these issues are related. They all funnel and intertwine and challenge the foundations of how we think about design.

“UX is dead” was a trite canard that some years ago floated around Twitter and the more mediocre design blogs. Are we here again?

Yes and no. There’s no way UX is going anywhere.

But with the multiplication of factors to consider from both a theoretical and practical perspective, UX has been sent spinning down a path from which it won’t rebound without deep structural change. The role, the scope, even the theoretical underpinnings have to shift in a way that may leave it totally unrecognisable.

But that’s a good thing. Don’t hold on to those post-its too tightly.

#25
August 11, 2019
Read more

We aren’t becoming “dumber” because of Google, but we are becoming cognitively different…

When new technologies come around people worry. Teeth gnash, hands wring. Not that worrying about the effects of new tech is unwarranted, but the worry normally results from changes in a small set of variables. People nervously monitor these variables in lab-based studies, with any change being reason to raise the alarm: a technologically-driven dystopia is at hand!

For instance, lately, there has been a great deal of concern about how the web and digital technology generally is affecting our memory and our thinking habits.

#24
December 3, 2018
Read more

UX must push to see beyond quantification, beyond capitalism

image via: Curtis MacNewton

A number of things are happening now around the theme of data and its application to humanity:

  • Social media’s algorithms are under fire for manipulating elections and polarising political discourse

  • An unregulated, data-driven gig economy is increasingly seen as inhumane and anti-labour

  • Some feel that we are offloading our social and personal lives to ‘black boxes’ that make decisions for us

Data, nominally an invisible entity, is beginning to be felt by all. There have always been dystopian, Kafkaesque concerns about the reduction of humans to data points, but it's only now that we are beginning to see these concerns truly and harmfully reflected in nearly every action we take. This isn't a dystopia, or a totalitarian regime bent on societal control; it's simply the order of the day for capitalism.

Capitalism seeks capital. And capital can only understand itself through the quantified: it can only be represented by numbers, not by quality. Flattening 'things-in-the-world' such as qualities, knowledge, concepts or people into numbers is hugely advantageous for capitalism because it allows for their processing.

In tech, this is doubly true. Value is quantified, but so are all problems and solutions. The ability to measure, optimise and solutionise is unparalleled. Any social ill can be 'solved' by a clever enough application of 1s and 0s, tech claims.

User Experience Specialists, the venerable advocates of the user, are forced to play by the rules of the quantified, the bottom line, data.

But the foundation of UX is not numerical data.

It is people. It is people being-in-the-world. It is their experience.

It is user-centred, human-centred design. It is quality. An experience is not a quantity; it's a quality. An experience is phenomenological, not mathematical.

Yes, we can try to put metrics next to experiences like happiness or frustration, but you don’t feel a “3” on a scale of frustration, you feel what you feel. Given an opening, you might talk about qualities of your experience which may or may not include happiness or frustration but may involve other emotions, themes or observations. How you construe and reflect on meaning from an experience is severely constricted by the quantified researcher-defined parameters.

All you are allowed to be. via (https://pixabay.com/en/emotion-scale-emoji-icon-feedback-3404484/)

In the academic world, this is exemplified in the replication crisis, which sees psychology and its various sub-fields harmed because of the difficulty and slipperiness of measuring people's experiences (not to mention the ability to manipulate such 'objective' standards through means like p-hacking). There are good-faith efforts to address this, but the problems are glaring and deep-rooted.

In business this is extraordinarily apparent as well. Again and again, when I do UX research and analyse themes or concepts, I’m asked for the “data” that supports my analysis. Of course, the person asking me this means “Show me numbers!”

But being immersed in a contextual inquiry, or conducting qualitative user testing, allows you to notice trends and themes by carefully noting the meaning behind people's actions, words and understandings. Analysis such as this doesn't result in numbers — numbers may play a part — but the overall analysis looks to understand the depth, breadth, and relations of concepts. And these concepts might move between levels of granularity or rely on a number of variables (facial expressions, tone, distractedness, etc.). All of this means that there is no single number — or there shouldn't be — in most forms of qualitative UX research.

Yet the quantified underpinning of capitalism forms our frame of reference, as the realm of the quantified defines what we can and cannot do. In other words, our creativity and its resultant output are restricted. James Bridle refers to this as ‘computational thinking’. We think in terms of optimising local areas of systems. We think of increasing conversion. We think we can solve social ills with enough 0’s and 1’s.

It's not, I would argue, a UXer's remit to inhabit this ontology, this way of understanding the world.

UXers are ostensibly advocates for the user, not the business. Indeed, that's where they are most effective. Yes, UXers are paid by the business to make the best possible experience for that product, but the best possible experience for a user and the best possible experience for a product are not the same thing.

Take an example: a user's goal might not be to purchase an airline ticket; it might be to learn how often planes leave for a particular location (good luck trying to find this info!). Yet enormous UX resources are devoted to making that airline ticket more attractive — even to the detriment of other user needs (or feelings or states). In this way business needs and user needs often conflict. The UXer is the advocate for the user — that's why they're there.

But so what? You might say that this is all just definitional, that a UXer's job is to bring business and user needs together. That's fine, but don't mistake the ability to make money for a good user experience. A good user experience doesn't require a company to make money.

But of course capitalism does. It demands money, and it demands metrics to show how a user's experience is improving the money-making. Money is a quantity, and quantity only understands other measurable quantities. And a quantity is only measurable when it becomes a variable, made (seemingly) objective and generalisable by defining a set of parameters which determine an instance of that variable. Yet experience is personal, subjective and continuous. It is unbounded.

This is why it can be difficult or inappropriate to think about a single product's UX. A UXer must erect artificial boundaries around the context of investigation — conversion becomes the ultimate arbiter of an experience, not the actual quality of an experience. And to understand conversion, we have to measure. Our ability to examine someone's experience of the world degrades because we can't engage with the full range of experience, given the demands of quantification vis-à-vis capitalism.

For example, the web as a whole, and the applications built on it, are far more concerned with retrieving information than with helping people manage information and build personal ecologies of relevant information. Bookmarks have barely changed in two and a half decades. Ideas such as transclusion and Stretchtext, which would aid in building personal and global semantic relationships, died before they were started — and they were imagined half a century ago.

Ted Nelson’s Transclusion

This is because it's simply far more profitable to facilitate the finding of content than to help create frameworks to support personalised ecologies of information. But any UXer worth her salt will tell you of the importance of personalisation, wayfinding and sensemaking — all qualities that could be engendered much more effectively if we focused on personal, curated informational ecologies. These concepts don't exist in isolation amid the artificial boundaries of URLs. They cross channels, cross into our brains, and into our lives.

But we can't look at experiences in this way, because in digital, improvement (read: optimisation) only takes place at a hyper-local level. Even from a quantitative perspective, this isn't efficient. Geoffrey West has noted that when we look at the biological world, we sometimes see inefficiencies at the local level, but the picture starts to make a great deal more sense at the global level. Here we see how local inefficiencies often make sense — as global efficiencies. Things become more optimised at a global level at the expense of local optimums. Of course, we don't — we can't — think like that in the quantity-capital paradigm.

Yet even this 'global' optimisation thinking is about quantity rather than quality. Understanding the human quotient isn't about optimising globally either — optimising seeks quantification.

This isn't to say that an understanding of quantity is useless. Such an assertion would be absurd. In tech, quantification can tell us, in dead tones, the amounts of things: interactions, downloads, hits. It can tell us about routes taken, objects clicked. It cannot help us with vital issues of experience that exceed the parameters of measurable quantities, such as:

  • How can we help you build your life in the way you want it to be built?

  • What are the ways we aren't supporting you in doing something you need support with?

  • How can your doing a particular activity in the world make life better for all?

  • What meaning do you make out of your interactions and experiences with an activity you do?

  • What do you understand from your interactions with a particular area of your life?

  • What is the context of your experiences and interactions?

And simply, how can your life be better?

The answers to these questions can't be bounded by the variability of a single — or even multiple — measurable quantities, measured within the use of a single product. Indeed, qualitative answers to these questions may point to the fact that you shouldn't use the product in question, or might even show that we should scrap certain digital products (given how damaging they can be to our mental well-being).

How can we focus more on these quality-based questions, on the totality of experiences?

How we consume, how we prioritise incentive-based structures over all others, and how we build our economies need to change, for one. There are millions of other reasons this needs to change as well; I don't need to explain them.

I'm not particularly in favour of any other socio-economic framework, but we have to be able to imagine alternatives. It has to start somewhere, and imagining a quality-based world rather than a quantity-based world is a start. It's a place that UXers know well and are predisposed to.

When we begin to uncouple from the quantified, from capitalism, our horizons shift and our gaze follows, enabling us to see patterns, themes and causal structures that were otherwise invisible.

When we see qualities, we begin to see how things are connected, and how we form meaning in relation to other things, not just through individualised subject-object dualities.

Husserl founded phenomenology, which I allude to a lot in this article. People have been thinking about this for a long time; I'm hardly the first to discuss it.

We see that experiences aren't bound to individual minds; they're the result of a series of subjective events in an undulating temporal, physical and socio-cultural environment. The artificial boundaries that quantification inserts tend to be reductive, removing meaning.

The content and meaning of relationships that you have with human, environmental and technological systems around you reveals the very qualities of your existence.

For example, on an individual level, we use the world to help us remember, think and be creative. Browser tabs are memories embodied. Emails are externalised lists of activities we have to do. How we formulate intent and use our world helps to define us, which can only be explored qualitatively. We can’t think of software and the web as individualised elements with defined parameters, but rather part of systems that are us, that contribute to forming and creating further needs, emotions and states of being.

On a global level, qualities and relationships are unbounded as well, defined through and between systems. Global warming, political polarisation, fake news — these are all issues that require qualitative and systems-based thinking to understand how best to solve.

This isn't woolly thinking. It's well researched, involving fields such as philosophy, cognitive science, archaeology, human-computer interaction and systems theory.

Imagine if UXers, and indeed workers of all stripes, could work across digital and physical ecosystems to create qualitatively impactful experiences, rather than increasing the quantifiable measurement of a small part of a single one.

What could we create?

#23
September 10, 2018
Read more

FutureFest 2018 told us to fear the future, rather than be hopeful for it

At FutureFest 2018, the water dispensers were operated by fob.

People could move a small fob on a string to a highlighted area on a dispenser to fill up their water bottle. Four people could get water from each dispenser, given that the bulky cubic dispensers had a fob on each of their 4 sides. There were at least 4 of these squat dispensers placed throughout the event, clearly intended to show off some fancy future technology, albeit in a rather silly way.

By the end of the event, all 4 of the dispensers were out of order — only 1 fob on 1 side of a single dispenser worked.

I couldn’t help but wonder if this was intentional, given the pessimistic tone towards technology that existed at Future Fest.

Movers and shakers, futurists and artists of renown were all present at this established London Festival. The aim of the event was to “put control back into the hands of the people” and to build “bold solutions to this era’s biggest challenges.”

But the theme at FutureFest was one of trepidation and cynicism. Indeed, the cynicism was about the now as much as the future. Data, more often than not, was seen as the enemy. The Big 5 were the invisible villain. They were utterly invisible in that they were seen to be all-powerful and everywhere, even in areas of your life that you would neither expect nor sanction. But they were also invisible in that they had no representation whatsoever at the Festival, which had the effect of making many of the debates somewhat dull. This also meant that finger-pointing tended to be the order of the day, rather than collaboration.

The enemy…

Writer and speaker Douglas Rushkoff repeatedly slammed everything from artificial intelligence to quantification as a very inhuman enemy. In his polemics, however, there was a noticeable absence of concrete examples of how this was the case, except in his own anecdotes, which seemed less interesting than he perhaps imagined. “Do we really own our phones?!” he exclaimed, implying that we were bound by a bevy of privacy contracts. While this is true, it undercuts far more interesting questions of how the concept of ownership changes, and indeed why the concept of 'ownership' has meaning at all in this day and age.

Evgeny Morozov attacked big data and AI as well, but in a more nuanced way, claiming that we should collectivise and pool our data, choosing who may access it and under what terms. Still, he offered little as to how this change could come about — no actionable examples were given. There was also little to discuss, given that there wasn't anyone there who could explain what the difficulties with his solution might be.

Academics, too, seemed put off by the spectre of AI and big data, especially as instantiated by Google and Facebook. Much hand-wringing came from Professors Noel Sharkey and Rebecca Allen. Yet their arguments were often poorly articulated; concerns ranging from not wanting physical augmentation to broad worries about AI were present, but little in the way of thought-provoking solutions was posited. Brilliant people both, but their rather ambiguous, hand-wavy concerns did little to advance conversations or provoke thought.

Surprisingly, Nick Clegg seemed to offer a perspective that seemed to mirror my own: he claimed this ubiquitous doomsaying, present from both the left and right, prevented long term solutions to potential threats from technology and tech companies. A positive attitude towards technology, he claimed, could help embed legislation and political programs to develop and harness technology. A sensibility of fear, he claimed, meant that it was much more likely that successive governments would overturn programs aimed toward embracing technological development.

This dearth of solutions, and the lack of representation of this invisible 'other', set the tone, which meant that most talks were fairly predictable in both angle and content.

One solution I did see came from Anab Jain, though hers was perhaps more a way of discovering solutions than a solution in itself. She and her agency, Superflux, promoted speculative design: the process by which 'design fictions' are articulated through provocative futuristic artefacts which elicit useful feedback from participants in the research. She nicely explored this with Mantis, her AI global risk startup, which she revealed to be fake (a speculative design) after her presentation (much to the chagrin and interest of the audience).

The fake ‘Mantis Systems’ provoked thought and interest from participants, as was the goal

But I think there is much to what Nick Clegg said about the political fear that now seems embedded in our discussion of technology. This fear is especially well articulated in a (good) book I am currently reading: New Dark Age by James Bridle. In it, he claims that it is nearly impossible to understand the vast, invisible computation that governs our society. He claims that new metaphors are needed to grasp, if not understand, these forces. While he makes many good points, his gloomy outlook predisposes us against agents and organisations that may view technology positively — even if he claims that he is not anti-technology.

But this attitude reflects the sharp divide in the discourse around technology. There are the critics — sharp-edged commentators on the dystopian possibilities of tech: Zeynep Tufekci, Adam Greenfield, Douglas Rushkoff and many others. On the other side are your Silicon Valley technologists — Mark Zuckerberg, Peter Thiel and any number of startup founders, as well as journalists such as Kevin Kelly.

This antagonistic divide does little to help us. Both sides have cogent arguments, but few people encompass both sides. The critics tend to recognise technological advantages only begrudgingly, with an ever-present subsequent “but…” and the technologists tend to be tone deaf, responding to humanistic problems with technology rather than anticipating them.

This is only exacerbated when, in places like FutureFest, the angle is slanted far more toward one side than the other. Pointing fingers at vague threats tends not to be a useful enterprise.

Ultimately, genuine collaboration between technologists and critics is the only way we can smooth the bumpy present into a comfortable future.

#22
July 8, 2018
Read more

Solutions to misinformation need human-centered design

Designing news for the modern consumer can help overcome misinformation. Photo by Mike Ackerman

Where can we find the solution to the spread of digital misinformation? In technology? Media literacy? Fact-checking? Legislation?

There's no question that these are useful entry points for attacking the problem of misinformation — but what of the root of the problem? The root of misinformation at any given time involves our relationship, both conceptual and practical, with the news. We're the readers; we're the ones misinformation is for. If we want to attack the problem at the root, we have to step back and consider our 'experience' of the news.

Most proposed solutions to misinformation seem to lack this perspective. This may cause, and indeed has caused, solutions to misinformation to be ineffective. Case in point, current solutions seem to be operating with the following premise: users think news is a repository of factual information about current events.

Solutions inheriting this premise very reasonably attempt to address this problem by increasing people’s media literacy through fact checking and displaying the outcomes of said fact-checking. There have been many approaches like this:

  • The Credibility Coalition are working on implementing ‘credibility indicators.’ These indicators attempt to show how credible a news story and a source is. This endeavour is at an early stage, but in application, it would seemingly involve some sort of visual indicator to the user, which would note that a particular news source is trustworthy, untrustworthy, or somewhere in between.

  • The Trust Project also provides an indicator system, this time on news organisations' ethics and other standards for fairness and accuracy. It appears as a logo on news organisations that have been verified by the Project.

  • Even UX-first solutions have honed in on technologically-centred solutions. In this article, UX architect Jason Salowitz presents a credibility framework for news stories. He discusses a ‘validation engine’ and a number of indicators that could help users determine the validity of particular articles and news sources.

These solutions act like an objective judge of the news' validity, determining the 'truthiness' of an article or the credibility of a news organisation.
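To make the indicator idea a little more concrete, here's a rough sketch of the kind of data such a system might hang off an article. To be clear, the types, thresholds and helper below are my own assumptions for illustration; they aren't drawn from the Credibility Coalition, the Trust Project, or Salowitz's validation engine.

```typescript
// Illustrative only: a minimal shape for article/source credibility, and a
// helper that turns a numeric score into a label a reader can glance at.
type CredibilityLevel = "trustworthy" | "mixed" | "untrustworthy";

interface SourceCredibility {
  sourceName: string; // the news organisation
  score: number;      // 0 to 1, assumed to come from some upstream validation engine
}

interface ArticleCredibility {
  url: string;
  headline: string;
  source: SourceCredibility;
  articleScore: number; // per-article score, 0 to 1
}

// The thresholds here are arbitrary assumptions, not published standards.
function labelFor(score: number): CredibilityLevel {
  if (score >= 0.7) return "trustworthy";
  if (score >= 0.4) return "mixed";
  return "untrustworthy";
}

// What a visual indicator next to a story might be driven by.
function indicatorText(article: ArticleCredibility): string {
  return `This story appears ${labelFor(article.articleScore)}; its source ` +
    `(${article.source.sourceName}) is rated ${labelFor(article.source.score)}.`;
}
```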

But if we pull back, if we think with a human-centred approach, we can begin questioning the efficacy of these solutions: do they integrate with how people live their lives, and meld with their conceptualisation of the news?

So what would a human-centred view of news engagement tell us? Let’s investigate, and in doing so, we can question whether our view of the news as an abstract reporting of facts is accurate. It will also help us generate some UX takeaways that should be considered in misinformation solutions.

Principle 1: Readers don't form a particular 'intent' to consume news

In previous eras, engaging with the news was causally related to an intent to look at the news. You had to choose to pick up a newspaper or turn to a news TV program. Now, news is typically posts on social media, comments on posts, and chat messages to one another. ‘News’ tends to live as an ever-present entity that takes almost no effort to view.

This means any solutions need to mesh well with the embedded experience of the news. Solution frameworks should engage the reader in a similar manner as the news. They should be embedded in our everyday experience, not abstracted from it.

Principle 2: We are accustomed to receiving news without context

As noted, news often manifests as tweets, posts or comments that frame or respond to news articles. In this way, ‘news’ is separated from the requisite factual bedding that news stories have historically had in media such as newspapers and television. Compounding this de-contextualisation, 60 percent of people don’t read past the headline. News organisations have responded to the atomisation of news and corresponding user habits by making news articles shorter and punchier than ever, often in the form of bullet points or inflammatory headlines.

This means solutions shouldn’t provide context or other useful information through vague approaches which require users to continually chase down facts and figures and rationales. If we expect that people won’t read past the headlines, it’s unrealistic to expect users to innately want to understand the broader context of why a particular story is treated as misinformation.

This also means that asking the user to understand complicated mental models is likely to be ineffective. Self-imposed and external time pressures mean that solutions need to do what they do quickly.

Principle 3: We use the news to formulate our identity

We are more partisan than ever. Filter bubbles, the immediacy of news, user comments, memes and mobile phones have all contributed to this state of affairs. We don’t have to dig very deeply to understand who represents our views and who does not.

This means that solutions can’t simply ignore or act in contradiction to a user’s associative group structure, rather they must work within the parameters of peoples’ tribes. This isn’t to say that these tribes are good or useful, but merely that they exist and need to be accounted for. Solutions that tend towards partisanship — or even hint at it — will likely be unsuccessful.

So how might these principles be incorporated into solutions? Here are just a few ways.

Facilitate opportunities for providing context & serendipity

Incorporating more credible articles next to less credible ones can educate readers not only with a more authentic description of events, but also help them understand what accurate stories 'look like.' An objective isn't necessarily only to show more plausible contrasting accounts, but to get people to explore out of their comfort zones. This is a form of what is known in information studies as serendipitously finding information. Users can 'accidentally' come across information that is of value to them in a way that is embedded in their existing news consumption.

This has been proven successful previously, as a highly detailed and insightful report from the Shorenstein Center notes:

Experimental research by Leticia Bode in 2015 suggested that when a Facebook post that includes mis-information is immediately contextualised in their ‘related stories’ feature underneath, misperceptions are significantly reduced.

Facebook’s related articles feature

Utilise social proof

Encouraging contextual exploration and serendipity is useful, but it doesn’t mean that what users discover necessarily sits comfortably with their beliefs, identity, and associative group.

Therefore, a credibility framework can be enhanced by nudging users to explore content by showing that other people like them are also looking outside of a single information source. No one wants to feel as though they are less knowledgeable or competent than others, so messages noting that others are looking at related content could prove valuable.

Here’s a quick wireframe of how social proof and contextual articles could work together:

Including related content and articles can “nudge” users to explore more information. Mockup by Vikram Singh.

A story is rarely presented without a social layer, given that news is already filtered and editorialised by friends and people you follow. Accordingly, this approach embeds well with a user’s experience.

Engage users in the meta narrative

Misinformation thrives on ignorance and a lack of context. As such, we want users to be able to understand the broader picture of a news story, but without overloading them or categorically shutting down their political perspective, so that they can better navigate away from misinformation.

For example, take a look at a new site entitled “Kialo”, which hosts debates by topic. Each topic has arguments for and against, with each of these arguments containing sub-arguments for and against the arguments (and so on, deeper into specific sub-arguments). Each argument and sub-argument is voted on.

Here’s how the topic (the grey box) of whether the US should pay reparations for slavery is structured, with arguments and sub-arguments — green ‘for’ and orange ‘against’:

Kialo thus encourages users to navigate away from a single information source using a tree structure.

Users are able to explore the totality of a topic in a familiar format (most people are used to tree structures). In a validation framework, if we’re able to harness not only validity of content but also theme, something like this would be an exceptionally powerful way to fight misinformation.

Here’s a wireframe of how it might look:

A validation framework could include a variety of related articles on the same topic. Mockup by Vikram Singh.

Of course, this could get unwieldy and confusing to a user. This approach would likely need to be strictly limited in the number of articles present, with only those of the highest credibility appearing. Primarily, this could act as a contextual element next to articles that have poor credibility ratings. In this way, you can work with highly partisan users and their associative groups.

Conduct User Research

Users ‘in the wild’ consume news in ways that are unpredictable to the creators of news and the designers of news experiences. Our hubris leads us to imagine that we can control how people will use a system we create — but we can’t design a particular experience, we can only design for it.

The only real way to understand if solutions to misinformation are effective is to continually test them with real users and iterate on the solutions based on their feedback.

The Trust Project did some interviews to understand how people consume the news, and despite being fairly difficult to parse, their report has some good information. Unfortunately they committed the sin of letting users design the solutions rather than observing how users use the news, or watching how users use prototype solutions (get users to do, not tell):

The Trust Project’s Research Report

I’m not so clever as to think I have all the answers to misinformation, but I do believe we are not thinking broadly enough. Solutions to misinformation thus far may only be effective for people with high digital literacy and educational backgrounds, rather than for users/readers at large.

So I’d love to hear your opinions on how solutions to fake news can be better integrated into our daily experience, and your opinions on my suggested solutions. Ultimately, we’re all victims of misinformation, even if we aren’t consumers of it.

#21
January 18, 2018
Read more

The problems with the solutions to fake news — Part II: The UX

How can we effectively embed solutions to fake news into daily life?

This was the essence of the question I was asking in Part 1 of this series, where I dug into the theoretical underpinnings of our relationship with news. I’d like to answer that question here, by combining a user-centred approach with the principles for solutions I outlined previously, which indicated that solutions must:

  • Mesh well with the experience of the news. Solution frameworks should engage the reader in a similar manner as the news.

  • Be embodied in a way that is both easily understandable and easy to conceptualise for the reader.

  • Not require the understanding of new mental models or actors that could provoke questions of authority and trustworthiness for any new concepts involved in the solution framework.

  • Not disrupt readers’ sense of self-identity

  • Fit in with the reader’s associative group structure

Again, existing solutions, while generally excellent, haven’t seemed to address most of these aspects. It may be the case that solutions are not at a stage where they are able to consider these aspects, but if they continue to ignore them, it’s very unlikely that solutions to fake news will be successful.

Aside from increasing literacy, solutions to the problem that is fake news have generally centred around measuring and indicating the credibility or trustworthiness of news articles or sources.

Facebook’s ‘disputed’ label

It certainly is difficult to imagine a successful future fighting fake news without a validation/credibility framework. But the perspective from which these efforts are mounted often doesn’t seem to consider the wider paradigm of how we interact with, perceive and experience news.

The Trust Project’s ‘Trust Mark’

As I noted previously, it is difficult to understand how and why most users would care about these trust indicators, let alone trust them. Why, for instance, would a steel worker in Texas or a waiter in Nigeria engage with trust indicators the way we want them to?

Do we honestly think that credibility indicators are targeting the right people, those who often have low digital literacy and high partisanship?

So the question remains as to how we can improve these credibility/trustworthiness solutions.

I’d like to offer a series of solutions here that integrate with the above-mentioned principles. They are:

  • Facilitate opportunities for discovery & serendipity

  • Utilise social proof

  • Engage people in a meta story

  • Conduct continued user research

The idea behind these solutions is that people will be able to make use of otherwise abstract credibility indicators, which are seemingly on the way to being provided without context or narrative. By considering the following mechanisms, we can encourage users to engage in trajectories that make fake news ineffectual.

Note that these mechanisms rely on an underlying framework of credibility of articles — this isn’t about how to establish credibility, but rather how to present credibility.

Facilitate opportunities for discovery & serendipity

On their own, trustworthiness indicators are devoid of context.

Why is an article trustworthy? Says who? What part of it is trustworthy? Is the trustworthiness indicator trustworthy?

A solution is to provide context to help the user in terms of definitions, further evidence, and further debates on validity, but we then run the risk of overloading the user with cognitive labour (put simply: people are lazy), potentially causing them to ignore the indicators altogether. This is what Facebook has done with their “About the publication” efforts.

Facebook’s trust indicator project requires users to dig down into the background of an article

However, context can be provided by showing other related articles. Varying accounts of phenomena can provide context for why one account may be more factually questionable than another. The objective isn’t necessarily to show contrasting accounts, but to get people to explore out of their comfort zones.

In this way, discovery & serendipity are of huge value. Discovery is providing users the opportunity to find new information, and serendipity is encouraging them to read something useful they wouldn’t otherwise. Both fit with the engaging nature of news and are easy to conceptualise, as they are familiar mechanisms. We’re all familiar with “Related” pieces of media — situated next to videos, articles and songs.

There’s been much talk about algorithmically related content channeling users to ever more radical content. This is not to be taken lightly. That’s why only articles rated as ‘high credibility’ should be shown in discovery mechanisms.

This has been proven successful previously, as a highly detailed and insightful report from the Shorenstein Center notes:

Experimental research by Leticia Bode in 2015 suggested that when a Facebook post that includes mis-information is immediately contextualised in their ‘related stories’ feature underneath, misperceptions are significantly reduced.

How Facebook used Discovery

Utilise social proof

Encouraging discovery is very useful, but it doesn’t necessarily fit with someone’s life, with their identity, and with their associative group identity.

Therefore, nudging users to explore content by providing evidence that others are looking outside of a singular information source can help embed discovery into a user’s life. No one wants to feel as though they are less knowledgeable or competent than others, so messages noting that others are looking at additional and ancillary content could prove valuable.

This ‘social proof’ works well because it activates intersubjectivity (the meaning we make together) and a feeling of trust in others. We know that people rarely make decisions about identity by themselves; it’s a collective enterprise. Additionally, should any system of indicators be linked to social feeds, it could indicate how many of your friends read these “adjacent” articles.

Social proof could manifest as language that encourages discovery, like:

“Most people who viewed this article also viewed this one”.

“Users who read this article were interested in this article, which provides a different account”.

“This is a complex topic. Here are other accounts that are very popular with users”

“[username] read the article listed below”
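
As a purely illustrative sketch, copy along these lines could be assembled from whatever engagement signals a platform actually has; the field names and thresholds below are hypothetical, not any real product’s API.

    # Entirely illustrative: reader counts and 'friends who read a related piece'
    # are placeholder signals, and the thresholds are arbitrary.
    def social_proof_message(related_title: str, readers: int, friends_read: int) -> str:
        if friends_read > 0:
            return f'{friends_read} people you follow also read "{related_title}".'
        if readers >= 1000:
            return (f'Most people who viewed this article also viewed '
                    f'"{related_title}", which provides a different account.')
        return f'This is a complex topic. "{related_title}" is another popular account.'

    print(social_proof_message("A second account of the same events", 5200, 3))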

Here’s a quick mock up of how social proof and discovery could work together:

News is already filtered and editorialised by friends and people you follow. A story is rarely presented without a social layer. Accordingly, this approach meshes well with a user’s experience.

Engage users in the meta story

Any additional layer on the web needs to be incorporated into our existing mental models and associations. Who’s doing the assessing of fake news, and what bits are being assessed? But more than that it also needs to be incorporated into the narrative, the actual story that people tell themselves, both about how a credibility scheme fits into the meta story of news, and how it fits into their lives.

It’s easy for the players in the abstract layers of digital ecosystems to become vague and amorphous. I’ve written extensively about how people ‘satisfice’, that is, they take the first acceptable option or assumption for what a thing is. It’s fair to say that users will assume the worst if they aren’t given a strong sense of who the key players are and how they interact with their digital life-world, given the cynicism engendered by a digital framework that presents the worst in politicians, the media and digital marketers.

This is a very difficult problem, especially in that it speaks to larger questions about identity and narratology. But it provides opportunities as well: How can we allow people to situate themselves in the story, with the actors in the story, with the tellers of the story?

In the tagging of an article or news source as credible or non-credible, it seems to me that it is just as easy to tag this article as specifically situated within a dialogue. Put simply: what’s being argued here and by whom?

Imagine theming articles by topic, or by granularity of premise.

As an example, I recently came across a new site entitled “Kialo”, which hosts debates by topic. Each topic has arguments for and against, with each of these arguments containing sub-arguments for and against the arguments (and so on, deeper into specific sub-arguments). Each argument and sub-argument is voted on.

I find this to be an intelligent yet simple way of organising arguments. It’s visually easily understandable and could translate well to a large ecosystem of news.

Here’s how the topic (the grey box) of whether the US should pay reparations for slavery is structured, with arguments and sub-arguments — green ‘for’ and orange ‘against’:

Imagine one of these trees for each news topic, with each news article being an argument or sub-argument. Rather than voting, articles could be shown by credibility. Less credible articles could simply drop off the chart.
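
As a rough sketch of that idea, each topic could be a node holding ‘for’ and ‘against’ articles plus sub-topics, with a credibility threshold deciding what stays visible. The scores are assumed to come from an underlying validation framework; every name here is hypothetical.

    # A rough sketch of the tree imagined above; field names are hypothetical and
    # the credibility scores come from elsewhere.
    from dataclasses import dataclass, field

    @dataclass
    class ScoredArticle:
        title: str
        credibility: float  # 0.0-1.0, supplied by the validation framework

    @dataclass
    class TopicNode:
        claim: str  # e.g. the grey box: "The US should pay reparations for slavery"
        articles_for: list[ScoredArticle] = field(default_factory=list)
        articles_against: list[ScoredArticle] = field(default_factory=list)
        sub_topics: list["TopicNode"] = field(default_factory=list)

    def prune(node: TopicNode, threshold: float = 0.7) -> TopicNode:
        """Return a copy of the tree in which low-credibility articles have
        dropped off the chart, on both sides of every claim."""
        return TopicNode(
            claim=node.claim,
            articles_for=[a for a in node.articles_for if a.credibility >= threshold],
            articles_against=[a for a in node.articles_against if a.credibility >= threshold],
            sub_topics=[prune(child, threshold) for child in node.sub_topics],
        )

Pruning the tree this way lets less credible articles quietly fall away while both sides of the argument remain on show.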

Of course, this could very easily get unwieldy and confusing to a user. This approach would need careful curation and would likely need to be strictly limited in the number of articles present. Indeed, something like this would be suitable for only the highest-credibility articles.

Primarily, this could act as a ‘discovery’ element next to articles that have poor credibility ratings. Or it could be integrated next to low credibility articles about to be posted: “This article is about [topic], but is of low credibility. These articles have higher credibility”.

The advantage is that you can show both sides of an argument — but show only high-credibility posts. In this way, you can work with an associative group structure, with partisan users.

It’s also a very simple, visible structure that is easy to conceptualise. Of course, it still doesn’t show who is doing the credibility decision making or why particular articles are shown. Making this visible while keeping cognitive overhead to a minimum is doubtlessly a challenging task, and one that I don’t have a strong idea for at this time.

Yet the hope here is to present consumers of fake news with a familiar tree-like framework with related articles that are bipartisan and of a high quality.

There’s little worse than genuine effort producing ineffectual results. That’s why suggestions like the mechanisms I illustrate here need to be taken seriously in the creation of credibility indicators.

But perhaps most important is the need to research solutions to fake news with users.

Ultimately people make use of spaces how they want. Fake news is an incredibly basic concept, yet it was not predicted, and thus not defended against, by anyone in any meaningful way. Facebook, Twitter and indeed the world were caught unawares.

This is simply because people make use of spaces to create places and activities that we can’t perceive. We can only understand this by conducting research with people, by observing them and by seeing where trends are occurring.

Our hubris leads us to imagine that we can control how people will use a system we create — but we can’t design a particular experience, we can only design for it. In other words, users will create the places; we can only seek to encourage the creation of places with certain qualities.

So ultimately, all the recommendations I’ve listed here are moot if they are not tested first. But this goes for all solutions, as well.

The Trust Project clearly did some interviews to understand how people consume the news, and despite being difficult to parse and rather unstructured, their report contains some good information. Unfortunately they tend to commit the ultimate sin of letting users design the solutions rather than observing how users use the news, or watching how users use prototype solutions (get users to do, not tell):

An example of how the Trust Project got users to design solutions, rather than observing how they interacted with solutions or with the news generally

Exploratory, formative, and evaluative user research need to be continually conducted on any and all proposed solutions to fake news.

But there’s plenty more we can do. I’m not so clever as to think I have all the answers, but I do think we are not thinking widely enough. Solutions to fake news certainly seem to be predicated on what we think would be effective for us, rather than effective for users/readers writ large.

So I’d love to hear your opinions on how solutions to fake news can be better integrated into our daily experience, and your opinions on my suggested solutions. Because ultimately we’re all victims of fake news, even if we aren’t consumers of it.

#20
December 10, 2017
Read more

The problems with the solutions to fake news — Part I

People know this is fake. Why would they still read it?

Despite all the ugly ramifications of fake news, it has been heartening to see a herculean effort being amassed against it. The majority of efforts, however, have been directed at data, verification, and literacy.

What these solutions don’t seem to consider is our conceptual relationship with news.

Is the news still ‘the news’ to us? How do we interact, intellectually, emotionally and physically with news?

We’re all operating with the idea that people have the same idea of news as they have had in the past: news is the factual information about current events. Solutions to the fake news phenomenon approach this problem with this conceptual framework.

Take this article from Joe Salowitz. I certainly do think there are some good ideas in it. Joe’s clearly put a lot of effort into how the solution would work. In it, he discusses a way to “UX the F***out of Fake News”. He discusses a number of examples that could help users determine the validity of particular articles and news sources.

There’s also a number of organisations that are aiming to determine the validity of news stories and present this information to users: the Credibility Coalition and the Trust Project are working on ‘credibility indicators’. These indicators attempt to show how a validation engine would define how credible a news story and a news source are. Usually, this includes some sort of visual indicator to the user, noting that a particular news source is trustworthy, untrustworthy, or somewhere in between. Or, it shows a mark of authority, that a particular source is ‘trusted’. In this way, it’s sort of like an objective adjudicator of the news’ validity.

A screenshot from Joe Salowitz’s article about Fake News

But would these work? Would people trust, use, or even care about these indicators?

Facebook’s ‘disputed’ label

Joe and others see fake news framed within technology and reporting, not within the user/reader. In their eyes, making it clear what news is credible and what is not is the key to success. The experience, according to these and others, is predicated on the assumption that an abstractly assessed news source would be effective (and affective…) to the reader.

Yet this view does not take into account the peculiarities of individual readers’ experiences. Would users trust these indicators themselves? Who is the arbiter of the indicators? Would users see bias in the indicators themselves and move to other platforms?

Imagine a steel-worker in Indiana, glancing at his phone, or a teenager in Newcastle, or a water carrier in Gujarat — would each of them truly understand, care about, or engage in the intended way with the status indicators of a particular article?

But more than that, would they believe trust/credibility indicators? In an article in Vanity Fair, Maya Kosoff talks about hyper-partisan fake news articles:

But the very readers such articles are aimed at — those who subscribe to the theories they disprove — are arguably the least amenable to them. If a reader has already decided to trust a site like [Alex] Jones’s over The New York Times, for example, then Snopes’ efforts will do about as much to sway them as Facebook’s new trust indicator.

Do we honestly believe that most people are going to dig down into understanding trust indicators, into believing them, or are they more likely to just ignore them and click ‘post’?

These are explicit challenges that need addressing.

We have to start addressing them by considering the way we engage with the news. Each of us creates our news bubble: The idea of a ‘daily me’ — of a newspaper customised to a person’s individual needs — has been around a long time. However that concept has gone off the deep end — we now have the capability to control our feeds of information to the degree that we can largely exist solely in echo chambers.

But our individualised news experience is more than just a filter, a ‘daily me’.

If we drill down past the idea of a personalised filter to the theoretical underpinnings of what is happening in a user’s experience of news, we can get a better perspective on how we engage with the news, and on what factors solutions to the fake news phenomenon need to consider.

Let’s start by establishing the structure of this underpinning, as I see it, then dig down into it. Here it is, in a sentence:

The news is part of our tightly bound ecosystem of knowledge……

so we don’t make particular intentions to look at the news……

meaning news providers begin to alter their news accordingly……

so we in turn conceive of the news itself differently……

and begin to define ourselves in the news.

Let’s break that down:

The news is part of our tightly bound ecosystem of knowledge…

We like to think of the news as an abstract entity which we intentionally engage with.

But consider the last time you checked Twitter, Facebook or Googled anything. It was likely part of your ambient-level behaviour, like sipping a coffee or checking the time.

In this way, we build up a technological and informational ecosystem for ourselves that is quite literally a part of ourselves. Like a lost limb, a lost phone is an incessant, noticeable absence. Our phones, which have news as part of their techno-social structure, are embedded in our daily behaviour. Our daily behaviour is thus indistinct from a web of information, and of feeds of information. When we want any information, we have it directly at hand.

…so we don’t make particular intentions to look at the news…

What does it mean if it is embedded in our daily behaviour?

It means that we consume news differently.

Previously we would have to make conscious efforts to pick up a newspaper, or turn on the television. Even on the web, prior to social media, we would have to explore news websites. Now, the degree to which you need to decide that you are ‘looking at the news’ is of an extremely low order. Indeed, absent-mindedly looking at a gadget in your pocket, seeing the headline of an email newsletter, receiving a Whatsapp message with a link from a friend or numerous other highly passive events can all be considered being exposed to — and digesting — news.

…meaning news providers begin to alter their news accordingly…

Individualised information ecosystems have changed how news is presented and structured. News articles are increasingly brief, with bullet points bringing the facts to the fore, and videos reducing the need to even read. But beyond that, news consumption happens in a piecemeal, disconnected fashion, with ‘news’ being headlines (many people don’t click on the full article — I’m not going to link any studies here, it’s really depressing — there are just so many), tweets, editorials, or vlogs. Further, news is filtered and editorialised by our friends, family and strangers. Media has atomised into a wide array of formats such that it’s difficult to discern what news is and what news is not.

…so we in turn conceive of the news itself differently…

It’s one thing if we have a different relationship to the news, it’s another if we think about news differently.

Because of the piecemeal nature of the news and because it is so embedded in our lives, we begin to conceptualise news differently.

Think about it like this: your email client, when you first used it, might have been perceived as simply a receptacle of communication. Certainly that’s how most people perceived it. However, through your usage of this email (especially work email), you may come to see each email as an item that needs doing. In this way, your email may appear to you not as a receptacle of information, but as a to-do list. Google is clearly aware of this, with reminders, theme bundling and checkboxes all forming a structure closer to a to-do list than a communication mechanism.

This is called enactive cognition: our doing with something changes how we think of that thing. In the case of the news, the ‘doing’ is simply being a person. We conceptualise the news through our repeated access to it, given it is an embedded, atomised element in our ecosystem. Simply because it is consumed and situated in our lives differently from the broadsheet stack landing on our doorstep, we think of it in a functionally very different manner.

So in this way we’ve started thinking of the news not as news but as something else. But what?

…and begin to define ourselves in the news.

Let’s consider.

We’ve said how people curate their information ecology to be what they want. What do people want? To be the people they want to be.

The news, as it stands, is an expression of self. This ‘self’ is validated through your everyday actions. How you in-group identify: what you wear, what you like and the books you read are all expressions of who you are but more importantly who you see yourself as. What tweets you read, who you follow, what news you agree with, how you feel when you read posts, what you ignore — all of these embedded activities are solidifying you as a person.

This is known as self-categorisation. We accentuate the differences between our group and other groups — how we identify — as well as the similarities within our group. Self-categorisation is dependent on the situation, however. If I’m a cardiologist and in a room with another cardiologist and an ophthalmologist, I’ll identify as a cardiologist. But if a lawyer enters the room, I’ll be more inclined to categorise myself with the other two medical professionals as ‘doctor’.

Self-categorisation comparison effect

As our groups get tighter so does ‘who we are’. The points of comparison to others become hot button issues — easily identified shibboleths in the form of key words: Trump, SJW, woke, rape culture (and different keywords for different countries, cultures and subcultures). These and numerous others either in themselves are identifiers, or are representations of two sides.

So, within my information ecosystem, I am constantly exposed to expressions that are in opposition to my ‘self’ category. Normally, it’s easiest to simply cull those feeds, those that are the flag bearers of the ‘other side’.

The media acutely preys on this by aligning itself with categories and writing headlines and stories that are biased against outgroups.

It’s incredibly difficult not to have an opinion on these atomised micro-dialectics, which enter your information ecosystem on a minute-by-minute basis and are filled with vitriol, signalling and feedback mechanisms.

That opinion, that expression, makes the news a tool of our expression.

So, in review, reworded a bit differently than before:

  • The ‘news’ as it stands is embedded in our everyday activity in tight but rich information ecosystems

  • Which require very little intention to look at

  • Meaning news accords itself with this embeddedness and low intention-activity

  • So we think of news differently

  • And see ourselves categorised and defined through and in the news

What’s being discussed here is by no means ground-breaking. Indeed, it’s well known (I’m merely attempting to pull some threads together) but not well applied, especially with regards to the fake news phenomenon.

Just recently an extremely important and valuable report from the Shorenstein Center on fake news was released that repeated many of the points I make here:

…we must recognize that communication plays a fundamental role in representing shared beliefs. It is not just information, but drama — “a portrayal of the contending forces in the world.”

This tribal mentality partly explains why many social media users distribute dis-information when they don’t necessarily trust the veracity of the information they are sharing: they would like to conform and belong to a group, and they ‘perform’ accordingly

Check them out — they do good work

This is why it seems problematic to call this phenomenon a ‘filter bubble’: it doesn’t describe the full breadth of precisely the phenomena at work here. What that term does illustrate, though, is the nature of the problem: you can’t gently break a bubble. Once it pops, the whole thing disappears — but popping the whole thing would cause chaos; it would mean effectively destroying the news. A key, then, is to consider how this ecosystem, this bubble, could be massaged such that users could be exposed to more credible information.

So what’s the solution?

We must consider how the news fits into users’ information ecosystems. Solutions must sit within our ecosystems, and not be abstracted away from them. Solutions also must:

  • Involve little to no effort on the part of the reader to understand. The best solutions should be largely indistinct from the news itself in terms of implementation.

  • Mesh well with the experience of the news. Solution frameworks should engage the reader in a similar manner as the news.

  • Be embodied in a way that is both easily understandable and easy to conceptualise for the reader.

  • Not require the understanding of new mental models or actors that could provoke questions of authority and trustworthiness for any new concepts involved in the solution framework.

  • Fit in with the reader’s associative group structure

It’s excellent that there are so many efforts to fight fake news. Many of these, such as the Credibility Coalition, The Trust Project and others are well structured and thoughtful. Yet most don’t seem to be taking into account the life-embedded nature of news.

I believe this requires careful UX design adhering to the principles I discussed, and I will discuss how this could look in Part 2.

#19
December 2, 2017
Read more

Interesting. What do you mean “score” here? Score from a user testing perspective?

#18
November 12, 2017
Read more

The Information Architecture of Time

The Mirage of Time (Yves Tanguy)

We build our filing systems based on metadata. This metadata can often be changed: the author, the type of file, the tags, and so forth. But one thing that can’t be altered about a file is its timestamp. Time is a fastidious, stern data point that refuses to be altered. Or if it is altered, it loses meaning — the original ‘time’ of a digital artifact is of the utmost importance to us.

This became particularly apparent to me when I was dealing with Spotify. As I streamed and liked music, I realised something: my music ‘library’ is merely a chronological list of when I ‘liked’ particular songs.

These are all just lists of when items were saved

Unfortunately, this creates a rather poorly organised structure. In a list of “liked” items, there is no relation in terms of theme or any other metadata — when it was liked is the sole data point of reference. What’s more, if you accidentally unlike a liked item it is impossible to place it back where it previously was.

Now, perhaps you’re a more spontaneous information architect than me, and you group your songs into playlists. But I don’t — the act of filing and sorting, I’ve always felt, removes you from the task at hand (in this case, listening to music) and forces you to shift your focus away from what you’re doing to the act of filing.

There’s enough HCI practitioners who rally against this manual form of filing to make me feel like I’m not alone. Indeed, the Principle of Least Effort indicates that we are innately driven to find the path of least resistance in our business of living. And why shouldn’t we? The focus of our behaviour shouldn’t be on filing our life; it should be about living our life.

So those of us who don’t file our songs are forced to rely on knowing where one is by recalling when we liked that song. It’s a bit odd, making a “place” in a list out of time. So how does placemaking work when situated using only time?

We certainly don’t think “I liked that one Youtube video June 23rd, 2015”. We simply don’t think in the geometry of mathematical time, but spatially, relativistically, emotionally and episodically.

When I scan through my list of songs I know the relative time of it. I don’t know the time in an explicit numerical sense, but I can place the time of it relative to other songs and how far I have to scroll.

So, each song’s proximity to another song can help to give it a “place”. Each song has a relative distance to another of which I am at least vaguely aware. And length of time is paralleled as the distance of a scroll — the further the scroll, the more distant in the past. It’s rather odd, if you think about it: we literally create a ‘physical’ object out of time. In a way, we reify ‘time’, assigning it distance. Again, however, this distance is relative, in this case to the total number of songs liked.

Yet this is very different from how you would look at other media that are chronologically related to you. For example, if you were looking at photos, you wouldn’t need the context of a list, or other photos; you’d know from the visual content: the clothes, the quality of the picture, the people you were with, and so on all tell you when it is from. Perhaps you also feel an emotional connection to the picture, which may also help situate you.

Lists of ‘liked’ media also have an episodic-emotional layer. This layer sits on top of the relative/distance layer. It’s an emotional resonance we have with the media we imbibe and save.

For example, if you were looking at a list of your Youtube videos, you might see a bunch of videos about crocheting. You might recall that time, 2 years ago, when you were trying to learn this dark art. You gave up on it and feel a slight regret.

A point in time when I was listening to 80’s music

Another example: I was recently looking in my library of songs for “Bigmouth Strikes Again” by the Smiths. When I came across the above songs, I thought (implicitly) “oh yeah, I’m in the bit when I was listening to 80’s Post punk, it must be near here...”. I remember the ‘episode’ of my life when I was listening to 80’s post-punk, it helps situate my memories, forming a feedback loop with these songs.

Despite its shortcomings, a chronological filing system is something that we are very familiar with. When Instagram changed their feed from being chronologically ordered to one based on a cryptic algorithm, users freaked out. Indeed, cells of insurgent users have banded together to fight these algorithms by attempting to like each other’s posts in the hopes that they will have the visibility they once had.

So, a chronological list provides an important situatedness. However it doesn’t provide a good structure for exploring or grouping your music. In other words, it has both disadvantages and advantages — how can we limit the disadvantages?

Let’s consider the scope of what we mean by a “liked” entity. Each thing liked isn’t just an expression of a preference. It represents a series of data points about you — a topic, a band or perhaps a person you were interested in. A song that represented a feeling you had about someone. A video that connects to your love of physics.

Each of these forms something deeper than simply you performing a “like”. Each liked entity represents a confluence of mental, emotional and socio-cultural characteristics.

Let’s draw a parallel to language. Like a list of songs, language is also constructed using a chronological sequence of signs.

In linguistics, you’d call each word a paradigm. A paradigm (again, in the linguistics nomenclature — it has different meaning elsewhere) is a word that can be replaced with another word.

“ A sign enters into paradigmatic relations with all the signs which can also occur in the same context but not at the same time” — Langholz Leyore

So in the sentence “I like to be around cats”, cats could be replaced with other words which hold certain similarities and can grammatically fit. So cats could be replaced with “dogs”, “people” or even “fire”(!).

In linguistics, a sequence of paradigms forms a sentence, creating what’s called a syntagm. “I like to be around cats” is a basic syntagm, constructed of chosen paradigms. It’s a chain of words that adhere to an appropriate grammatical rules to create meaning.

I could change a single word (paradigm) and the sentence (syntagm) would have a slightly different meaning — “I like to be around cats”, “I like to be around fire”(!).

from: Differencebetween.com

So, let’s think of each song as a paradigm. A single song that is like other songs, that could be replaced with other songs.

And, let’s think of a library of liked songs (paradigms) as a syntagm.

This list of songs, like a syntagm, adheres to rules (of chronology and individual activity) and, as such, provides meaning.

But we can break down our library into smaller syntagms. Much as a novel is one long syntagm made of smaller ones, so too is a library of songs made up of small groupings.

But what are these groupings?

Users often like songs, or videos, in bunches. For example, a friend might tell you about a band, and you might like a bunch of songs from that band all at once. You might also be in a melancholy mood, and like a bunch of singer-songwriter music. These groupings, then, could easily be identified by a system — tagging each as a syntagm.
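
As a rough sketch of how a system might detect these bunches, likes could be grouped purely by the time gaps between them; the data shape and the gap threshold below are illustrative assumptions, not how any particular service actually works.

    # A rough sketch, not Spotify's actual data model: likes are just
    # (title, liked_at) pairs, and the six-hour gap is an arbitrary choice.
    from datetime import datetime, timedelta

    def group_into_syntagms(liked: list[tuple[str, datetime]],
                            max_gap: timedelta = timedelta(hours=6)) -> list[list[str]]:
        """Split a chronological list of likes into groups wherever the gap
        between consecutive likes exceeds max_gap."""
        syntagms: list[list[str]] = []
        previous_time = None
        for title, liked_at in sorted(liked, key=lambda pair: pair[1]):
            if previous_time is None or liked_at - previous_time > max_gap:
                syntagms.append([])  # a new listening 'episode' begins
            syntagms[-1].append(title)
            previous_time = liked_at
        return syntagms

Anything liked in a single sitting ends up in the same group, which is roughly what the listening ‘episodes’ described here amount to.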

Of course, from the user’s perspective, this grouping could be labelled in a less technical way — “group” or something metaphorical, like “suite”.

The advantage of this is that we can find similar syntagms, or similar paradigms that could fit within a given syntagm. This isn’t just generally “related” music. It’s a grouping as defined episodically — that is, a chronological segment.

Much like we can change “cat” to “dog” in a sentence so too can we switch out one or more songs for another that is structurally or thematically similar.

How syntagms might show in Spotify

In other words: in our list of liked music, if we were to change some of the music to similar music, it would change the content of that list, but that list would still have meaning, albeit slightly different from what the user knew before.

What’s vital, however, is that the sequence of syntagms stays put, that the user is situated in their chronology of songs. A book only has meaning if its syntagms are ordered in a manner that provides meaning to the reader. In a similar fashion, the sequence of liked songs, or videos, has to stay static.

This is why it’s so vital that the order of the syntagms should not be manipulated — the user has to stay in the context of their chronologically “liked” songs, because it provides meaning, both episodically and distance-relativistically, as noted previously. Don’t mess with the user’s ‘book’. “Related” songs, videos, etc. do exactly this, removing the user from the chronology.

Display-wise, syntagms would thus need to be placed within the context of the chronology of “liked” entities, either by replacing syntagms or through some sort of progressive disclosure — accordions, for instance. The mock up above shows one way this may look.

I know it’s perfectly possible to like one song at a time. For example if you are using a “Discover” or “Recommended for you” feature, then the songs may have no relation other than being generally related to your preferences. So these songs can be treated as syntagms in and of themselves.

Is our chronology so important as to be the prime intra-connector of our libraries? Well, perhaps not: general themes, or genres, can be determined. But these don’t relate to a user’s activity or chronology, missing out on leveraging what is in essence our trace on the digital world.

#17
October 25, 2017
Read more

Mimesis: Beyond mental models in HCI

Before we think, we use metaphor to conceive — how can we use this understanding in UX and HCI?

Speeding Train (Treno in corsa), 1922, Ivo Pannaggi.

#16
September 29, 2017
Read more

Beyond Mental Models: Tackling Complexity in Interaction Part II

In the first part of this series, I explored how mental models are insufficient for fully understanding human cognitive behaviour in digital systems — especially websites.

A main sticking point, I argued, was that interacting with digital systems is not a fully cognitive experience constituted of abstract models.

There’s a further step to take, however. Inasmuch as we don’t, or aren’t always capable of, mentally modelling systems, we nonetheless merge with systems in deep ways, ways that are fundamental enough to actually be part of our cognition. It’s tempting to imagine this as a science fiction conceit — our brains amalgamated with computers, our sentience spanning cables and microchips. But that’s not at all what I mean.

We regularly offload our cognition to the environment — especially our digital environment. Indeed, I’ve written at length about this in other articles. In a sense, we form such tight feedback loops with our environment that they become part of our extended mind.

Indeed, it’s difficult to consider “thinking” as occurring only somewhere in between the neurons in your brain.

Pick up your phone. Open Chrome or Safari, or whatever browser you are using — how many tabs do you have open? I’m guessing dozens. Each one of them is an environmental cognitive artifact. Each tab contains information that you fundamentally know you have access to at any given moment, and each tab also acts as a reminder or a sign of further downstream knowledge that you have access to, either in your head or within the phone itself. Importantly, you know that that particular information is there (to varying degrees) and you can rely on it being readily accessible.

As such, interacting with our environment in this particular way — our extended mind — is no longer interacting with the environment as one would swing a hammer or catch a ball. Rather, we interact with our environment to uncover thoughts or memories that we have stored externally, much as you would shift and explore thoughts in your head to reveal more thoughts or memories.

Epistemic action in Tetris: studies have shown people find it much easier and more useful to flip shapes on screen rather than in their mind to see if they’ll fit.

This is what is known as “epistemic action”. Importantly, the systems we use, especially websites, are areas of epistemic activity as much as they are systems that we use for a task. Epistemic activity is an activity of revealing information to yourself, rather than an activity that you do for a particular task. Looking at a piece of paper to read a phone number or opening a Word file to recall a password are examples of epistemic activity.

We think about and alter our informational environment, forming a feedback loop, much as we would through thinking about and altering our own thoughts. Each thought or memory in turn spurs further thoughts. Where the thinking takes place is irrelevant — what is of concern is the function the activity has in revealing information.

But to the question at hand — can and do we mentally model this epistemic activity, this extended mind?

Let’s consider: As I write this article, I have a number of browser tabs open, including the ebook, New Science of the Mind, my Onenote file with my written notes, as well as a number of other tabs about the same topic. As much as I use them for reference, they are also there as reminders for topics that I can integrate into this article. My mental model of how these systems work is largely irrelevant here because they are so implicit in my behaviour that I treat them as extensions of myself.

When you are considering a piece of information, let’s say where you should travel to Italy, you aren’t considering the structure of the webpage or the notepad or the book about Italy, you are thinking about your task, and the information involved. At this point, these feedback systems are the furthest thing from disembodied containers of information that you mentally model.

I mentioned coupling in the last article — maintaining and managing the chain of things that allow us to do something. Managing each external cognitive artifact requires that you couple with it well. As noted, coupling isn’t something that you do consciously. You don’t consciously “couple” with your own thoughts, and hence you don’t actively couple with cognitive artifacts. You just think using your thoughts; you don’t say, “I’m going to think this thought”.

So here mental models are again insufficient in describing what, in this case, a website means to us. So the question still remains: how can we better model how we couple with digital systems, especially websites?

More on that in Part III.

#15
August 17, 2017
Read more

Beyond Mental Models: Tackling Complexity in Interaction Part I

How we interact with computers is bewilderingly complicated.

A shallow examination of our most basic digital behaviour reveals this utter complexity.

To open a document, we have to understand what globs of pixels mean that somehow indicate the structure of an invisible filing system. We need to understand how (double)clicking on a particular bundle of pixels labelled with particular text will move the present state of the system to a different part of the invisible filing system.

Yet using a computer is second nature to us and thus the cocktail of perception and cognitive processing involved is utterly invisible.

Watching an older person interact with computers (especially some time ago when computers were a newer phenomenon) shines a spotlight on the complexity the digital native takes for granted. Elderly people will trepidatiously approach a computer, carefully examining each element. They misunderstand the conceptual metaphors on screen. They struggle to understand what is interactive. It’s only through usage that we become sufficiently integrated with “the digital” to use it seamlessly. Much like a rock climber able to navigate a seemingly impassable wall of rock by seeing hand and footholds where the rest of us would see jagged stone, we’re able to understand the meaning behind a wall of pixels, navigating our way through and across it.

How does this happen, this implicit understanding?

Norman’s description of the mental model

Is it a singular, unified cognitive process, rational and wholly disembodied, that we sculpt and adhere to whenever we engage in using a system? This is what Don Norman’s notion of mental models suggests. Stating that we formulate our behaviour towards interactive systems by mentally modelling how that system works, Norman sees us as rational, disembodied actors. While useful for understanding whether the broad framework of a basic application is sensible, the model fails to account for numerous other factors:

  • we only mentally model what we perceive, and we may not perceive an entire system and thus be unable to model it, especially when it comes to webpages, where non-sequentially perceived sections attract our interest

  • the messaging involved, including sales messaging, may affect a user’s view of the system

  • we rarely have the time, inclination or mental state to rationally create a model of a system

  • on most websites, functionality isn’t the main concern for a user; their task at hand is

Look at this website for the Porsche 911. What would be a user’s mental model of it? Would the user stand back and create a rational mental model of each structure and element on the page before they scroll through? Or would they scan the page for information of interest, not taking the time to form a clear, disembodied structure of what they are looking at?

As another example, take my usage of this very site. When I choose to name an article I click a button at the top of the page and a menu appears allowing me to write in the name of the post, its subtitle and description. Medium autosaves posts when you write. I expect the fields in this menu also to be autosaved when I click away.

What I see when I click the edit post name button

That’s not the case. There’s a save button and every single time I edit the fields, I forget about the save button.

I don’t see the Save button in the menu because I don’t take the time to model how the menu works — I make assumptions, I think about my actions, not about the structure of the system that is being presented to me.

Human-computer interaction, like any type of human action, is to varying degrees not a fully cognitive experience. We act using tools, rather than thinking about the tools.

At the risk of overusing an example from Heidegger many people probably are already aware of, consider a hammer: you don’t think about the hammer when you use it, you just use it to do a task. You see that it affords hammering (i.e. the shape and structure of it allows for hammering), so you hammer. There is no higher level cognition there; it’s a mere sensory perception coupled with your desire to do something. You don’t need to think about the hammer when you use it, you are thinking about what you are trying to get done; in this way Heidegger calls the hammer ready-at-hand. Indeed, you only reflect on whether it is a hammer if, upon using it, you realise it isn’t a fully functional hammer. Heidegger called this being present-at-hand with the hammer.

Our tasks are collocated and necessarily part of the objects and our world. This is called having intentionality toward something. That is, we act towards something, our thoughts are engaged towards a particular object or activity. When you think about clicking “buy” on a computer screen, you aren’t thinking about the clicking of the button, you are thinking “I’m ordering this package”. Thinking about intentionality is important because it helps us consider our actions not as abstracted away from our goals.

(As a side note, it’s an important connection to something else I wrote about — the extended mind thesis: whether the structure you act through is outside of your brain or thoughts inside your brain is often irrelevant, the task at hand is more relevant.)

What I’ve been discussing is the concept of embodied interaction. It was formulated by Paul Dourish in the late 90’s. It has a strong philosophical foundation, built on the work of philosophers such as Husserl, Heidegger, Gibson and Merleau-Ponty.

Husserl was one of the first to study the nature of our experience

Maintaining and managing what these and other philosophers describe as intentionality is a process in and of itself. We don’t just recognise a particular set of objects at hand and use them for our actions. We need to make them effective, to manage this chain of physicality.

To do this, we engage in what these philosophers have called coupling.

I’ll let Paul Dourish himself describe what coupling is:

“As I move a mouse, the mouse itself is the focus of my attention; sometimes I am directed instead toward the cursor that it controls on the screen; at other times, I am directed toward the button I want to push, the e-mail message I want to send, or the lunch engagement I am trying to make.”

So, coupling in interactive systems is not simply a matter of mapping a user’s immediate concerns onto the appropriate level of technical description. Coupling is a more complex phenomenon through which, first, users can select, from out of the variety of effective entities offered to them, the ones that are relevant to their immediate activity and, second, can put those together in order to effect action. Coupling allows us to revise and reconfigure our relationship toward the world in use, turning it into a set of tools to accomplish different tasks.”

Coupling, then, is how we continually balance and make use of the physical world, our intentionality. We couple with a series of objects at varying levels of at-handedness to fulfil our needs.

As noted previously, how we interact with computers is extraordinarily complicated, so modelling coupling would be extraordinarily difficult. It’s nearly impossible to develop a structured calculus that incorporates every existing variable. You’d have to model the level of perception or cognition of each element of an interaction (mouse, graphics on screen, etc.), and determine whether each conceptualisation was more ready-at-hand or present-at-hand — and that’s all just for a single step in any given task.

At a basic level, we can say that without a doubt we couple through an amalgam of ready-at-hand and present-at-hand conceptions. This framework activates in the triggering of our intents.

I’m going to suggest it is worth engaging in activities that involve examining whether our systems of coupling can align properly with interactive systems.

A basic example can illustrate how this is relevant in the most fundamental of tasks: reading a webpage requires you to understand the words, obviously, but it also requires you to be aligned with the structure of how the words are presented (the format, layout etc), how to see more words (e.g. scrolling) and what the system is trying to tell you (e.g, “you should read this article”).

More on that in Part II.

#14
August 5, 2017
Read more

A Semiotic Approach to the Digital, Part II: Over-interpreting the Digital

Please take a look at Part I here — it’s not necessary, but it will give you a good background on the sign-making theory of Charles Sanders Peirce.

Part II: Over-interpreting the digital

A groundhog, emerging from a long winter, peeps out of its burrow, seeking — seemingly — to detect the weather. The conditions it finds will determine the weather for the weeks to come. Should it be cloudy out, spring will arrive early. But should it be sunny (sunny enough, mind you, for a groundhog to notice its shadow) the groundhog will scurry back into its burrow and winter will persist for 6 more weeks.

What does his behaviour portend? A lot, actually.

I don’t claim to be an expert on a groundhog’s meteorological acumen or general behaviour, but it certainly seems suspect that a groundhog will:

  • check the weather at a particular time

  • at a particular place

  • and that these actions will be definitively predictive of the forthcoming climate.

Absurd, comical, whimsical — fine, ‘literal’ isn’t one of the adjectives you would use to describe Groundhog Day’s rituals. But the holiday is built from our ritualistic approach to our interpretation of signs. It’s an example of how threadbare we can make the association of a sign to its object.

We are so insistent in inserting signs into the phenomena around us that we actually layer semiotic behaviour on top of itself: the groundhog sees a sign, which then becomes a sign for us to interpret.

Needless to say, we are susceptible to over-interpreting our world.

This is damaging, because sometimes a sign actually isn’t a sign for anything at all, and other times it is a sign for something wildly different from what we think. But, more than either of these:

We far too often create an erroneous chain of meaning from a single sign

This is caused in no small part by our insatiable appetite for interpretation.

One of the founders of semiotics — Charles Sanders Peirce — saw us as actors who would inexorably see all the phenomena around us as a series of signs. He felt this way during his lifetime — and he lived the better part of his life in the 1800’s.

How would he feel now, with the utter saturation of our mental space with information from every conceivable angle about every conceivable topic?

Take this picture:

It was paraded around after the recent London attacks as evidence of Muslims’ general disinterest in the plight of those terrorised. For the purposes of this article, I’m unconcerned with the politics or truth of this statement. What I am concerned with is the poor sign-making process from which a conclusion like this results.

Initially, this was just seen as a photo of the event itself — Peirce would classify this as a type of index because it indicates (or points) to an actual event that happened. However, it is by no means just an index. Commentators began to see this as a visual metaphor — what Peirce would term a type of icon (think of a folder icon as a visual metaphor for a folder). In this case, each aspect of the picture had a metaphorical correlate: the woman is a visual representation of the Muslim population as a whole, the fallen man represents the effects of terrorism, and the other people represent the Western public. Hence, the overall metaphor would be one of Muslim indifference toward the terrorist attack.

However, over time, as this gained more and more views, and accrued more and more meaning through discussion of what it “means” writ large, it took on a new form.

People began to see this image no longer primarily as an index of an event that happened, or as a visual metaphor, but as an object within the socio-political landscape that has an affective meaning. It became a cultural artifact embodying a dialectical entity within the zeitgeist; in plain English: a collection of pixels that represent a current topic in society. It had become a symbol. A symbol, if you’ll recall, bears no visual resemblance to what it represents, but rather represents its object through cultural consensus.

Depending on your viewpoint, the entire picture may be a symbol of anti-immigration, of the racism of the West, or in my case, our over-eagerness to treat individual pieces of digital media as representative of society as a whole.

The above screenshot of a video — not even necessarily the full video — is a sign as well. Very likely, you have seen the video. A professor being interviewed via Skype on the BBC is interrupted by his children. Quite comical, yes, but it began to accrue semiotic content.

Again, a basic interpretation would be that this is a video of an event that happened — an index. However, it gathered steam as people identified individual elements as representative of a greater whole: a visual metaphor (an icon, as noted earlier). The man’s actions in the video were construed as a metaphor for his indifference toward children. But again, as it gained meaning through exposure and discourse, it became a symbol. Pictures of it, indeed its very mention, gained particular meaning.

Some saw the video as a whole as a sign of how men and women may treat children differently. Others saw it as a sign of how our work and private lives are no longer separate, and of what we should do to prepare. Once again, it inexorably became a symbol. This act of semiosis, while generally not nearly as toxic as the one involving the woman on London Bridge, still merits examining the accuracy of our interpretations.

But why are we so keen to so infinitely interpret all that we see?

The famed, late semiotician and novelist Umberto Eco called this unlimited semiosis, meaning that what we interpret always leads to further interpretants. The philosopher Jacques Lacan would argue that this successive chain of interpretants accrues until we reach a final interpretant called the master signifier, that is, a deep concept that a person identifies with. Rather than seeing each sign as just a simple index of an event, or even a metaphor for an event, each sign accrues meaning — becomes a sign for further meaning (or interpretants) — through constant discussion and overwrought semiosis. At each step, each sign bears less and less resemblance to the actual indexical object of the sign. Of course, as they stray further and further away from the initial objects, the signs usually take the form of a symbol.

An object points to a sign, which then becomes a chain of other interpretations (the “d” and “I” represent dynamic and immediate, but don’t worry about that for now, unless you’re interested)
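To make the shape of that drift a little more concrete, here is a minimal sketch in TypeScript. The names and structure are invented for illustration and the example follows the London Bridge photo described above; it is not a formalism from Peirce, Eco, or Lacan, just one way of picturing how each interpretation becomes the next sign.

```typescript
// A toy model of unlimited semiosis: each interpretation of a sign becomes
// a new sign, drifting further from the original indexical object.
// All names here are illustrative only.

type SignKind = "index" | "icon" | "symbol";

interface Sign {
  kind: SignKind;
  representation: string; // what the sign looks like or says
  object: string;         // what it is taken to stand for at this step
}

// Each interpretive step consumes the previous sign and yields a new one,
// typically more symbolic and more distant from the original event.
type Interpretant = (previous: Sign) => Sign;

function semiosisChain(initial: Sign, steps: Interpretant[]): Sign[] {
  const chain: Sign[] = [initial];
  for (const step of steps) {
    chain.push(step(chain[chain.length - 1]));
  }
  return chain;
}

// The London Bridge photo, roughly as described above.
const photo: Sign = {
  kind: "index",
  representation: "photo of a woman walking past victims on the bridge",
  object: "an event that actually happened",
};

const chain = semiosisChain(photo, [
  (prev) => ({
    kind: "icon",
    representation: prev.representation,
    object: "a visual metaphor for 'indifference to the attack'",
  }),
  (prev) => ({
    kind: "symbol",
    representation: prev.representation,
    object: "a talking point in a much larger political argument",
  }),
]);

console.log(chain.map((s) => `${s.kind}: ${s.object}`).join("\n"));
```

The point of the sketch is only that nothing later in the chain consults the original object again; each step interprets the previous interpretation.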

Then, these final interpretants, which reflect our most closely held beliefs, are used to structure our ontological assumptions and our orientation in the world. These final interpretants allow us to see the world through the structures that we know, that we think of as important. This saves us the cognitive labour of building a new ontological structure with which to understand the world, and thus provides us with actionable outcomes as these interpretations build towards or against our most important beliefs.

Building on this are the media, social media, and the various other appendages of the digital, all of which seek to reinforce this semiotic/ontological structure. The media follows and encourages these semiotic chains, never letting us believe that what we are looking at is anything less than the master signifier.

The problems with this, of course, cannot be overstated. Every mob, every pointed finger, and every reductive argument is born from seeing something that isn’t there. The lingering question, of course, is what we can do about this, and whether we can do anything that won’t inhibit otherwise useful semiotic interpretation.

The battle’s a tough one — there’s no doubt about it. But there are some things we can do.

Balance is the required tool against our unlimited semiosis, a guard against finding meaning in specious contexts. It’s a personal tool that requires us to consider — to ask — whether this is truly representative. What if an image, video, or other piece of media went viral that represented the opposite of our beliefs? Would we feel that it was still representative? Carefully considering whether we would feel the same if the semiotic activity were working against us allows us to step back from our interpretations.

Context is a vital tool in our fight against over-interpretation. It’s far too easy to look at any piece of media and jump to conclusions. But what about the context? What were the people feeling? Who else was there? What didn’t we see or hear? Holding back our impulse to impose meaning until we understand the context means that we can see the world for what it is: connected, ambiguous, and something we can only understand through the appropriate context-sensitive perspectives.

There’s an automatic, active workflow that works for us, but that also works against us. It’s the mechanism of our brain that seeks to automatically find meaning. We often don’t actively try to make meanings; rather, it is a learned behaviour (what Daniel Kahneman would call System 1) that triggers without our conscious knowledge. Much like reading, it’s an activity that fires within our brain without requiring an act of will. We have to use the more logical part of our brain (what Kahneman would call System 2) to understand whether these automatic interpretations are indeed valid.

It’s work, it’s tough, but it’s more important than it’s ever been.

#13
June 15, 2017
Read more

The Extended Mind of HCI, Part I: Thinking Through Tabs

Part I: Thinking through tabs

Our own bodies and minds allow us to experience the world, but we are also inexorably bound by them. Our bodies of experience — the apparatus of ‘us’ — are both our mediator and our prison. In a fundamental sense, we are unable to extend our physical and mental physiology into our environment.

In some distant future, we likely won’t be so bound by our biology. Our minds and bodies will be deconstructed and reconstructed across the universe. It’s difficult to imagine a scenario where this isn’t the case, in fact. Transhumanists have long felt this way.

“Humans+”, the transhumanist idea that we will be more than human

They see a future in which our minds, enhanced by computers, biology, and artificial intelligence, will be scarcely recognisable. Our bodies, formerly structurally bound by an epidermal layer, will be porous, extending into and encompassing appendages of our choosing.

But I submit — as do many a philosopher — that this future is already here. Our cognition already extends beyond the barrier of our skull.

The evidence for this is abundant, and some of it is perhaps just a glance away from you now: browser tabs.

Browser tabs have long been seen as a useful way of collocating informational experiences such that they are easily accessed. A browser’s tabs lower the amount of activity required to locate a pre-existing information source and negate the need to end engagement with it. Of course, windows have long had this ability, but tabs are less distributed across the computer ecosystem — they are more immediate representations of information artifacts.

But in the digital climate we find ourselves in, tabs also act as what is known as external cognition or computational offloading. What these terms collectively indicate is a method of using external representations to reduce the amount of cognitive effort required of a particular agent — usually a person. Essentially, under this definition, tabs are more than just a method of easily re-accessing information; they act as reminders of what you were doing.

Notes to oneself are the most typical examples of external cognition.

Like writing on a sticky note, putting your keys near the door so you won’t forget them, or highlighting some text of importance, external cognition relies on the environment to help you cognate.

But the extended mind thesis takes this a step further. It would say that:

Tabs act as processes that form cognitive feedback loops based on epistemic action.

What in god’s name do I mean by that?

It’s well established that we take incomplete pictures of information — when we glance at a wallpaper full of pictures of identical Marilyn Monroes, we don’t encode every Marilyn Monroe as a full composite picture; rather, we satisfice to get an overall understanding of what is being represented. We know that the wallpaper is “a series of images of Marilyn Monroe”; we don’t take a high-resolution image in our mind. If we were asked to recall the wallpaper and examine it in our mind to determine which Marilyn was different, we’d fail miserably.

You’re not a camera, you don’t take high resolution pictures.

To fill the gaps in these incomplete pictures, we use what’s known as epistemic action to get information “just in time”. Epistemic (i.e. about knowledge or its validation) action is the act of manipulating whatever will help us with the mental activity of a task. It is distinguished from pragmatic action, which is the activity of actually completing the task. Turning a puzzle piece around to check whether the shape will fit (rather than turning it around in our mind) is an epistemic action. Placing the piece in the puzzle is a pragmatic action.

Put another way: we use epistemic action as a way of transforming the structures in our environment so we can sample them. So, if we are doing our taxes, we might have some papers around us, because it’s easier to glance at a paper with a number on it than to remember the numbers, especially if there are many of them. I might glance at a piece of paper, or move one closer to me, or I might highlight a transaction I am uncertain of. I’ll also glance at the wallpaper of Marilyn Monroes to remember which one is different.

But another principle — the principle of ecological assembly — states that we recruit resources for cognitive sampling only as far as the task justifies. So let’s take the example of browser tabs: sometimes it may be enough to look at a tab heading to remember the pertinent information that the tab contains, or what that tab represents. At other times, it may require actually clicking the tab to get the information required. Both of these (looking, and looking and clicking) are epistemic activities that involve sampling the environment to get just what is required: just enough to fill in our incomplete picture. (Incidentally, the fact that a glance at a tab heading can recall the pertinent information within requires an effective form of semiosis on the part of both the tab and the interpreter — see my article on semiotics in HCI.)
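As a very rough way of picturing ecological assembly, here is a toy sketch in TypeScript with invented costs and confidence values (nothing here comes from the literature): recruit the cheapest action that yields just enough information for the step at hand.

```typescript
// A toy sketch of ecological assembly: recruit the cheapest resource that
// yields *enough* information for the current step of a task.
// The costs and confidence values below are invented for illustration.

interface EpistemicAction {
  name: string;
  cost: number;       // rough effort: time, attention, clicks
  confidence: number; // how complete the recovered information will be (0..1)
}

const actions: EpistemicAction[] = [
  { name: "recall it from memory", cost: 3, confidence: 0.4 },
  { name: "glance at the tab heading", cost: 1, confidence: 0.6 },
  { name: "click the tab and re-read", cost: 5, confidence: 0.95 },
];

// Pick the lowest-cost action whose confidence meets what the task needs,
// falling back to the most informative action if nothing is sufficient.
function assemble(required: number): EpistemicAction {
  const sufficient = actions
    .filter((a) => a.confidence >= required)
    .sort((a, b) => a.cost - b.cost);
  return (
    sufficient[0] ??
    actions.reduce((best, a) => (a.confidence > best.confidence ? a : best))
  );
}

console.log(assemble(0.5).name); // "glance at the tab heading"
console.log(assemble(0.9).name); // "click the tab and re-read"
```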

But is information you haven’t read in tabs part of your extended mind? Well, what matters is that you know what those tabs contain and you endorse it as effectively true. I have used this example in a previous article, but imagine Otto, who has Alzheimer’s and can’t remember where the museum is. However, he carries a notebook around with him all the time and knows that the information is in his notebook. In this case, we’d say that the notebook is part of his extended mind, as he knows — or believes — that the whereabouts of the museum are in there. Whether it is in his head or in his book, the process of retrieving the information is functionally the same (that is, it performs the same function), if not technically.

So, with tabs, I might create an environment (let’s call it by its proper academic name — a “cognitive niche”) where each tab relates to something I am thinking about. This cognitive niche may not necessarily be one I designed through an ontological structure (i.e. sorted by certain self-defined categories) but rather may be one created by chance. So I may have a series of tabs sequenced next to one another randomly — perhaps only defined by the chronology in which I opened each tab. But I am the creator of this niche, and can adjust it as need be.

Now, if this cognitive niche were in your head in the form of memories, what would be the difference? We sample and manipulate memories in a similar way to the way we sample and manipulate tabs. We regularly have incomplete pictures in our minds and have to consider and recall a variety of thoughts. The thesis here, then, is that browser tabs are part of our mind in terms of function. That is to say, what holds the information doesn’t matter; what matters is that it performs a particular function for us.

But, you might say, isn’t this boundless? Surely we can say any activity or interaction with the physical world is extended cognition. Browsing a library, shopping, or even talking to people might be included. And where would it end? Wouldn’t the chain of cognition continue increasing until the entirety of the internet or even the world is part of our extended cognition?

Yet there are a number of important factors that differentiate tabs, specifically:

  • Assumptions about the personal availability of information

  • Extremely low levels of epistemic effort required to retrieve information

  • Ecological assembly integration through multiple dimensions

1. Personal availability of information

Information in a browser tab is extremely accessible. It sits within a digital environment and can be carried on a laptop (and on a phone, though in a different format). It is predictably accessible as well — more so than a memory. Unlike a book or a piece of paper, it can have multiple instantiations, appearing in multiple different mobile and non-mobile iterations. It’s reasonable to assume that a group of books spread around you, with bookmarks and notes in each, may act as an extended mind too, but relative to tabs, a group of books would be much less easily accessible, and thus it’s a lesser form of the extended mind. Tabs have a high quotient of “being at hand”, more so than any preexisting cognitive niche.

2. Low levels of epistemic effort

Information within tabs is available with extremely low levels of effort. Tab names are initially accessible through a simple saccade and fixation of the eye — less time than it takes to access most memories. This has been well studied by Ballard, who found that most people would rather flick their eyes to a figure than try to remember it when solving a problem involving that figure. Additionally, accessing the tab itself takes less than a second — a simple mouse movement and click. Here, then, information is accessed faster than most memories. Unlike other potential forms of the extended mind — conversation, books — tabs require very little epistemic activity. However, information that requires further clicks to access is less a part of your extended mind, as it requires more epistemic activity to retrieve. It’s also less likely that you can functionally believe you know information that is several clicks away.

3. Ecological assembly integration through multiple dimensions

Let’s say you were doing research on Huskies — there was one at the shelter and you were wondering if it was right for you. You google Huskies and open 3 tabs:

  1. A tab containing the Wikipedia page about Huskies

  2. A tab containing a webpage about Huskies’ history

  3. A tab containing an online forum for Husky owners

You quickly read through each. You have an amalgamated cognitive niche about Huskies within your mind, but you’ve also created a feedback loop where you have each ‘container’ of information within each tab — a cognitive niche outside your mind. Information relating to a tab is accessible through the “reminder” of looking at the tab at the top of your window (which, as noted, can generate thought related to the information within the tab) and through clicking on the tab to actually read the information therein. So if you were reading about the history of Huskies on the Wikipedia page, you would reflect on the knowledge you’ve created within the husky history tab or the husky forum via:

  • the information within your physical brain — by remembering what you’ve read

  • your extended mind by using epistemic activity to reference that information within the tabs (by glancing at the tab at the top of your browser or by clicking on the tab and reading that information)

This happens on multiple feedback levels. You are thinking about multiple things when you research a topic — whether you want to or not — by referencing previously instantiated information (again, either by remembering in your brain or by using epistemic activity). This information is instantiated both in your brain and in tabs, and as noted, is similarly accessible and personal.

What’s key about all 3 of these factors is that they enable your brain to expect and integrate the information available via the tabs. In this way a loop is formed between your cognition and your browser tabs.
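One crude way to picture that loop, again as a sketch with invented names rather than any established cognitive model, is a single lookup that treats internal memory and the open tabs as parts of the same store, queried in order of effort, with anything clicked through folding back into memory.

```typescript
// A crude sketch of the feedback loop: one lookup over two stores, internal
// memory and the open tabs, queried in order of effort. Everything here
// (names, structure, data) is illustrative only.

interface TabInfo {
  title: string;   // visible in the tab strip: a cheap glance away
  content: string; // requires clicking through: a dearer epistemic action
}

class ExtendedNiche {
  constructor(
    private memory: Map<string, string>, // what you actually remember
    private tabs: Map<string, TabInfo>,  // what the browser is holding for you
  ) {}

  recall(topic: string): string | undefined {
    // 1. Internal memory first: no epistemic action needed.
    const remembered = this.memory.get(topic);
    if (remembered) return remembered;

    const tab = this.tabs.get(topic);
    if (!tab) return undefined;

    // 2. Often a glance at the tab title is enough to re-trigger the thought.
    if (tab.title.toLowerCase().includes(topic.toLowerCase())) {
      return `(recalled via the tab title: "${tab.title}")`;
    }

    // 3. Otherwise click through and read, then fold it back into memory,
    //    closing the loop between the niche and the head.
    this.memory.set(topic, tab.content);
    return tab.content;
  }
}

const niche = new ExtendedNiche(
  new Map([["temperament", "friendly, vocal, needs a great deal of exercise"]]),
  new Map<string, TabInfo>([
    ["history", { title: "Husky history", content: "Bred by the Chukchi people of Siberia..." }],
    ["shedding", { title: "Husky owners' forum", content: "They blow their coat twice a year..." }],
  ]),
);

console.log(niche.recall("temperament")); // straight from memory
console.log(niche.recall("history"));     // a glance at the tab title suffices
console.log(niche.recall("shedding"));    // requires clicking through; now remembered
```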

So, we have a personal, supremely accessible, customised system of inputs looped into our cognition on a multidimensional basis. Now, you might argue that you’re certain you don’t use tabs this way, and perhaps you really don’t. Or perhaps — and I’d argue this is much more likely — you do, in a way that simply isn’t apparent to you.

Remember, the nature of our brains makes it such that being aware of our cognition is unhelpful when we don’t actually intend to “meta-cognate”. In other words, self-reflection is useful, but as Heidegger noted, when we are using a tool we aren’t reflecting on that tool; we are focusing on our goal. We only focus on the tool, or in this case the tab, when something goes wrong.

So it’s very likely that you aren’t aware of your extended mind precisely because it operates effectively and unconsciously.

But, in the end, isn’t this all just a trick of language? Why does it matter what we decide is part of our mind or is not part of our mind?

Were we to consider interfaces and information as part of our cognition, this would free us from user-tool conceptual restrictions, allowing us to conceive of new and more effective ways to actually think. For example, if we were to think of tabs not just as browser functions but as cognitive feedback loops, we would perceive their utility, and hence their design, much differently.

Imagine if hovering a mouse cursor over a tab overlaid the current page with that tab’s page until the mouse was moved again. Or perhaps users could highlight areas of content within a tab, and that content would appear when the mouse cursor was over the tab. Or what if tabs themselves had better, perhaps customisable, signs on them that allowed us to recall the information therein with more ease? These are roughly thought-out examples (the first is sketched loosely below), but they reframe how we think of tabs.

Hovering over a tab could show a relevant section of text from that tab
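As a loose sketch of that first idea, here is what the hover-preview behaviour might look like in TypeScript. It assumes an in-page tab strip that a script can reach (the browser’s own tab bar would require an extension), and the `.tab` class and `data-preview` attribute are invented for the example.

```typescript
// A rough sketch of the hover-preview idea, assuming an in-page tab strip
// (e.g. a web app that renders its own tabs) rather than the browser's
// native tab bar, which page scripts can't reach without an extension.
// The .tab class and the data-preview attribute are invented for this sketch.

function enableTabPreviews(): void {
  const overlay = document.createElement("div");
  overlay.style.cssText =
    "position:fixed;inset:0;background:#fff;padding:2rem;overflow:auto;display:none;";
  document.body.appendChild(overlay);

  // Each tab element carries the snippet it should surface on hover,
  // for example a passage the user highlighted earlier.
  document.querySelectorAll<HTMLElement>(".tab[data-preview]").forEach((tab) => {
    tab.addEventListener("mouseenter", () => {
      overlay.textContent = tab.dataset.preview ?? "";
      overlay.style.display = "block";
    });
    tab.addEventListener("mouseleave", () => {
      overlay.style.display = "none";
    });
  });
}

enableTabPreviews();
```

The second idea would amount to letting users choose what goes into that preview; the third, to making the tab’s own label a better sign.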

With the extended mind in mind (sorry), our focus would be on lowering barriers to accessing information, making it more instantly accessible and more personalised. It’s much, much easier to consider how our internal and external thoughts relate to one another when we utilise the extended mind thesis.

The extended mind’s conceptual structure helps us to understand why epistemic action should be prioritised over pragmatic action in information-rich environments. The ability to quickly collocate, immediately access, and cross-reference information becomes of paramount importance.

Of course, the difficulty with this is that, as I’ve noted in a previous article, the structures of digital systems are metaphors for, or extensions of, preexisting physical systems. This means that systems are not intrinsically set up to support extended minds.

In the case of the tab, its development followed from the structure of the webpage, itself a metaphor for a physical, paper page. Webpages, in essence, are a metaphor for a millennia-old system of recording linear spoken language rather than something sensitive to the potential of new forms of cognition.

The same is true for interactive physical systems. Rather than adopting a new system of typing that could leave one hand free to engage in epistemic action, we used the keyboard, a hangover from the typewriter, as the main interaction device for the computer.

The father of HCI, Douglas Engelbart, invented a unique system for one-handed typing that allowed the other hand to use a mouse. This would have allowed that hand to be involved in epistemic action, but his vision died for being “too complicated”.

Douglas Engelbart’s one-handed “keyset”. Taken from http://web.stanford.edu

But things might slowly be changing.

Material design seems to make epistemic action important by allowing for the movement of panels of information on multiple axes.

Cross-integration of multiple programs using single sign-on allows the quick access and transfer of information.

Still, we are a long way from what could be. And because of our familiarity with the current system and, more importantly, our deterministic beliefs about what constitutes cognition, progress is slow going.

#12
April 23, 2017
Read more
 