DisAssemble

Archive

A Semiotic Approach to the Digital: Part I

When I was much younger I loved video games.

My rather strict father did not.

So I only played them when he was out of the house. When I heard him come home, I’d switch the Super Nintendo off, slide it under the TV and scamper upstairs.

Ours was a fairly noisy neighbourhood, so I’d have to listen for a particular set of sounds peculiar to his presence. His car let loose a unique groan as it heaved to a stop, and if I missed that, his weighty march up our stairs was my last cue to make a quick escape.

These were signs to me, in that they were things which represented something else that had an effect on me. The engine groaning let me know of the presence of my father’s car, which in turn had the effect of communicating the fact that my father was home and that I should quickly find a save point and shut off the Super Nintendo.

The brilliant pragmatist philosopher Charles Sanders Peirce would give this sign a definite structure using his triadic theory of semiotics (signs): the groan of the car was the sign, which was motivated by the object, my father’s car, and which told me that I should shut the games off: the interpretant.

Peirce believed we are a sign-making species, that we fathom our world as a series of shortcuts. We perceive our speech, our visual world, even our thoughts as representative of a further meaning.

Peirce spent decades formulating a complicated theory of signs, one he saw as distinct from prevailing conceptions of how we understand language (though he treated language as a library of signs as well). Of particular importance was his notion that the effect a sign has cannot be pre-determined.

This is important to Human-Computer Interaction for a variety of reasons. We input code into computers and computers work within and between themselves on a code-by-code basis, in which interpretation isn’t a factor: computers process using a term-to-term relation, with a single pre-determined correct response. Humans, however, have a very personal and individual sign-making process that results in each of us having a varying array of interpretations of the signs around us.

Nowhere is this more true than in how we perceive computers, which are systems of signs outputted to us. Were we able to process these signs unambiguously using a term-to-term relation like computers, developers’ and designers’ jobs would be much easier. Sadly, we are not able to do this — hence the field of user experience, which attempts to scry our varied interpretations.

The signs that computers reveal to us are essentially communicative mechanisms for an entire array of meaning that a developer or designer is trying to communicate. Unfortunately, the bandwidth for this communication is limited, usually to a particular set of pixels in a particular area of the screen.

Now, anything can be a sign. Anyone can take a particular set of visual stimuli to indicate something, yet within the realm of computers we often think of icons as the only signs.

An icon, a typical HCI sign

Yet what we might call ‘icons’ are actually not icons as defined in semiotics. See, Peirce came up with a further triadic breakdown of signs — ‘the sign-making process’. That is, how signs point to their objects, or how objects “motivate” their signs, as Peirce would describe it. There are numerous extremely clumsy explanations of this on the internet, so I’ll endeavour to be more accurate without being horribly convoluted:

An icon of a woman

There are icons, which share visual characteristics with their object, something that may or may not exist (a human character in a comic book would be an icon of a human).

An index of something burning

Indexes point to the occurrence of something that exists. An index simply says “here is something” (smoke is an index of fire — or at least of burning). What’s important with indexes is that their objects have to exist “deictically”, that is, within the context of the sign. Indexes are hugely important in HCI, because almost every visual artefact is structured by a designer to point to particular functionality or information.

A symbol of peace

Symbols are signs that we know through custom or law. We have to have a previous set of knowledge to understand what they mean (you wouldn’t know that a dove referred to peace unless you were told or you knew from experience).

It’s important to note that these aren’t mutually exclusive: to varying degrees, an object can motivate its sign through all three of these processes.

Let’s conduct a simple analysis to get this straight.

Take a look at the signs (I’m going to say signs rather than “icons” because as I’ve noted, an icon is a type of sign-making process) along the left side of Hootsuite, the social media management platform.

Hootsuite

Examine the 3 bars sign

If I run my cursor over it, it reveals that it is a button indicating analytics. So we can say that the object of the sign is the analytics page. But how does it indicate this? It shares a visual quality with analytics themselves by showing a part of analytics — the bar chart. Thus, we can say this is mostly an icon.

Now let’s look at the gear at the bottom of the sidebar:

I run my cursor over it and I can see that it indicates “Settings”. So how is it indicating this? It’s a gear, but the ‘Settings’ themselves don’t contain any “gears” as such — so there’s no visual similarity between the gears and the settings. It is, however, a visual metaphor for inner workings (gears are to a machine vaguely what settings are to a computer), so it is slightly iconic. But ultimately, you have to know that this is the accepted symbol for settings — an object that was used in machines and now, through some continued process of semiosis, has come to be accepted to mean “settings”. This, then, is mostly a symbol.

But now let’s look at the little puzzle piece:

If I run my cursor over it, it indicates that it is an ‘App Directory’. Does it share a visual characteristic with an ‘App Directory’? Not at all — perhaps it is a visual metaphor, but if it is, it’s a very stretched one. Is it a commonly understood symbol? I can’t imagine that anyone would say that it is a commonly understood symbol for an app directory. So we might say that this is doing a pretty poor job of its indicative process. Its object is doing a poor job of motivating its sign.

We haven’t looked at any indexes within this context. But an association with context will allow most users to understand that these are indicators of something within that context. Following Gestalt principles, we know simple ideas of proximity and bounding are important to users, and are in themselves indexical. All of the above signs say “here”; they all point, act as a reference, to a particular thing that actually exists in the context of the sign.

But let’s widen the scope of Hootsuite and think about an index that isn’t primarily an icon or a symbol — something more abstract: the grey header of a page.

If this is a sign, what’s its object? Well, it seems to be saying “here are the meta functions”. Yet it certainly doesn’t share any visual characteristics with search. Does it point to anything? Well, yes — it seems to state “here” are the objects. Is it symbolic? This seems less convincing. Certainly headers are an accepted model for meta functions, but they hardly require a user to know an existing law to understand the object. What’s more, it’s not indicating an object in the abstract, it’s indicating something within its context. The symbol, if there is one, is very mild, perhaps just saying “this is a known meta-type grouping”. It can be difficult to understand how this works with all of the other symbols involved — so let’s reduce the page to a low-fi wireframe.

The indexical signs now become clearer. We can get a feel for the overall groupings and how they point to the objects that sit within them. There might be roughly three indexical groupings: the white, the light grey and the dark grey headers.

Let’s recap. Each of the signs we’ve looked at is motivated by its object to different degrees, and through different processes, and we can rank those degrees in a bar graph. Each of these signs is, to varying degrees, an icon, an index and a symbol.

Each one of these — bar one — has one type of sign-object relationship that is stronger than the others, strong enough that it defines the sign. That is to say, the defining characteristic to the user of the analytics sign is that it looks like what it represents, the defining characteristic to the user of the gear is that it is a known symbol, and the defining characteristic of the header is that it points to objects near it.

What we can understand is that anything can have each type of sign-object relationship to some degree, but it has to be enough of at least one type to be interpreted.

One of these signs doesn’t have enough of a sign-object relationship to be properly interpreted. The puzzle piece sign, notable in the bar graph by the lack of a sufficiently long green bar, isn’t enough of an icon, index, or symbol. To a degree, it is all of an icon, an index, and a symbol, but it isn’t enough of any one of them to sufficiently represent its object. The semiotic process falls apart.

A cursory view of the signs in websites and apps will reveal all sorts of signs that fail this test. It’s safe to say, then, that developers and designers must pick one of these sign-object relationships to be the primary driver behind their sign’s meaning. And if they don’t?

Well, they’ll likely be serving up a confused array of interpretants.

More on that, and the problems with uncontrolled semiosis in Part II.

#11
April 3, 2017
Read more

On Reader-Centred Writing on the Web

What is a page?

If you speak English (and you are reading this, so let’s assume you do), it’s likely that you have quite a good grasp of what “page” means: a thing you write and thus read on.

But the etymology of “page” uncovers deeper connotations behind the word. “Page” comes, via Old French, from the Latin pagina, related to pangere, to “fasten” or to “fix the boundaries of”. Pangere was also used to describe the bounds one entered into in a contract. This is the essence of the word: to structure something so as to be presented in a singularly comprehensible way. It reflects the physical, linear nature of the book; and earlier than that, the codex; earlier than that, the scroll, and so on.

The page and its predecessors afforded the presentation of information in a linear fashion, meant for linear comprehension. There was simply no feasible way of writing in more than one dimension on these media.

Writing from thousands of years ago is fundamentally the same as today in format

Writing has been bound to its media since its inception. More than bound, the media that we write on have come to structure how we write and read, and how we expect to read and write. In writing we are “bound” or “fastened” to a medium which, as noted, has a linear format. This, however, is not reflected in how we think or how we hold conversations. Think about the idiosyncratic, branching, and unexpected way conversations proceed. Famously, Socrates refused to write anything down — he felt that “dead” paper was incapable of truly expressing thought and discourse. Our brains themselves are not even linear structures, or even branching tree structures, but rather networks of neurons and synapses that fire multilaterally.

Yet in creating the web, Tim Berners-Lee, following the lead of Vannevar Bush with his Memex, chose to replicate the concept of the page in a digital format. The web, then, like other computer applications (files, folders etc.), was based on a metaphor of the physical.

In being bound to metaphors of paper, digital text inherited the limitations of the linearity of the physical page. And despite the addition of a futuristic sounding prefix, hypertext lacked invention with regards to the fundamental character of writing and reading.

Certainly, hyperlinks embedded in text were a novel creation in that they allowed different pages to connect to one another within the context of a sentence. This of course impacted the connectivity between writing, but not the writing itself. Pages themselves were and are still read in a singularly linear format. As professor of information Andrew Dillon noted:

“Hypertexts, despite their node and link structure, are still composed of units of text and there is no reason to believe that, at the paragraph level at least, these are read any differently from units of conventional paper or other electronic text”.

But linear writing needn’t exist on the web, since the web could facilitate writing of a fundamentally different character than traditional writing — one which could cut across and through dimensions of understanding and perspective.

Dimension of literature

Dimensions: think stratified layers. Imagine these layers of writing, eroded or aggregated for different readers. Or picture writing on branches, which twist and split, yet all emanate from the same root. Imagine writing akin to a conversation, not because it is idiomatic and shorthand, but because it can go any direction — it is subject to interactions with the viewer/reader/listener. The focus, in this sense, could be participatory rather than unilaterally ascribed linearity.

But return to the page, and its linear, bounded format. This primacy of bounded linearity underscores the importance of telling, or depicting, rather than exploring. Articles, then, drive towards a primary point, the thesis, as defined by the author. The act of writing an “article” (an increasingly vague term) either implicitly or explicitly has this framework (this article included).

It is arguable, then, that writing — the bounded linear structure — serves the arguer, the writer, the teller, but not the reader. The reader, the self-driven exploratory learner, is damned to a fractured relationship between individual static texts. The reader, left to her own devices, works to find additional texts when the clarity of a single text is insufficient.

The closest we have come to user-centred reading…

Of course the primacy of this framework, increasingly, is subject to question, certainly in part due to our shorter attention spans and the simplicity with which we can be distracted by competing digital information. How does a writer-centred text structure itself within the digital sphere of feeds and notifications? The reader has ever increasing reasons to discontinue following a single thread.

But more than potentially being anachronistic, the focus-oriented linear article contains other delimiting characteristics. It assumes that each person has the same breadth and depth of knowledge; it is insensitive to the peculiarities of the reader.

Theory aside, new dimensions in writing/reading are reified in specific web applications — some of which are (unfortunately barely) in use today. These dimensions embody a branching, layered structure while doing away with the limitations of the page.

Here are just a few.

Stretch-text

Reader-centred writing is exemplified in stretchtext, a concept developed by Ted Nelson (creator of a competitor to HTML) in 1967. In essence, stretchtext allows users to determine the level of detail of a document.

Below is a really simple JavaScript example which typifies stretchtext, something that arguably should be a basic, built-in part of the web (or a given part of any CMS).
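A minimal sketch of the idea, with purely illustrative markup: any element carrying a data-stretchtext attribute swaps its terse default wording for a longer passage when clicked, and swaps back when clicked again.

// Minimal stretchtext sketch: an element with a data-stretchtext
// attribute expands in place when clicked, swapping its terse
// summary text for the longer passage held in the attribute.
document.querySelectorAll('[data-stretchtext]').forEach(function (el) {
  var summary = el.textContent;          // the terse, default wording
  var detail = el.dataset.stretchtext;   // the expanded, detailed wording
  var expanded = false;
  el.style.cursor = 'pointer';
  el.addEventListener('click', function () {
    expanded = !expanded;
    el.textContent = expanded ? detail : summary;
  });
});

So markup like <span data-stretchtext="Charles Sanders Peirce, the American pragmatist philosopher,">Peirce</span> stretches out for a reader who needs the context, and stays terse for one who doesn’t.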

It’s a simple but incredibly powerful concept: dropdowns and accordions exist as interface interactions, but not as a dimension of digital literature. There’s no reason that this is so other than the seemingly innate conservatism we have towards literacy.

This user-centred form of reading allows readers to have concepts they may not understand explained to them, and readers who do understand these concepts not to be bogged down by heavy-handed expositions. It can help battle some fundamental limitations of writing, which Socrates details quite nicely:

“When it has once been written down, every discourse roams about everywhere, reaching indiscriminately those with understanding no less than those who have no business with it, and it doesn’t know to whom it should speak and to whom it should not.”

Stretchtext also allows those readers who find particular topics fascinating to pursue them in the context and voice of the article. As George Landow says in Hypertext 3.0:

Stretchtext does not fragment the text like other forms of hypermedia. Instead, it retains the text on the screen that provides a context to an anchor formed by word or phrase even after it has been activated.

Users needn’t leave a page to pursue a topic, fragmenting their experience. Similar to Stretchtext, modular forms of writing can cultivate a reader-centred experience.

Modularity

While expandable content reaches into the author’s content repertoire, modularity reaches into the web to pull content into articles.

Take BBC labs’ Explainers.

In it, simply hovering over a keyword pulls up relevant content from other articles as a popup.

Establishing keywords as gateways to pull in content from other articles allows users to see definitions of concepts they may be disinclined to investigate should they be required to leave the page. The ease of simply resting a cursor should not be understated, nor should the barrier of commitment involved in clicking a link; numerous studies have shown users are disinclined to click links to investigate topics.
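A rough sketch of how such keyword gateways might be wired up (the data-explain attribute and the local lookup table are illustrative, not the BBC’s implementation): hovering over a marked term shows a small popup of explainer content beside the word, and moving away dismisses it.

// Illustrative keyword 'gateway': hovering a term marked with
// data-explain shows a popup pulled from a lookup of explainer
// snippets; in a real system these would be drawn from other
// articles tagged with the same concept.
var explainers = {
  'stretchtext': 'Stretchtext lets a reader expand a passage in place for more detail.',
  'ted-nelson': 'Ted Nelson coined the term hypertext in the 1960s.'
};
document.querySelectorAll('[data-explain]').forEach(function (term) {
  var popup = document.createElement('div');
  popup.className = 'explainer-popup';
  popup.textContent = explainers[term.dataset.explain] || '';
  term.addEventListener('mouseenter', function () {
    term.appendChild(popup);   // the popup appears beside the word
  });
  term.addEventListener('mouseleave', function () {
    popup.remove();            // and disappears when the cursor moves on
  });
});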

Inline Dialectics

The degree of polarisation in socio-political discourse seems to parallel the degree to which digital media is present in our lives, which is of course on a soaring upswing. Whereas The Digital once promised cosmopolitan worldliness, increasingly our news sources are filtered through outlets that represent our most niche of beliefs, and are thronged by scores of like-minded commenters banging the drum of groupthink. Over the next few years,

“the online environment may erode editorial influence over the public’s agenda as a result of the multiplications of news outlets and the resulting fragmentation of the audience”

say Pablo Javier Boczkowski and Eugenia Mitchelstein, authors of The News Gap.

Groupthink and polarisation, of course, are exemplified by the linear and the bounded. In environments with high walls intended to keep out external voices, echoes tend to be more resonant.

What I’m referring to as in-line dialectics, then, can eat away at this rabid insularity. In-line dialectics is writing that argues with itself — for each point made, an opposing, contradictory point can be seen. Take this example I developed for this article, below:
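In spirit, it works something like the sketch below, where each paragraph carries an optional counterpoint that can be toggled to slide in alongside it; the attribute and class names are purely illustrative.

// Sketch of in-line dialectics: a paragraph may carry a
// data-counterpoint attribute holding an opposing view; a toggle
// inserts that counterpoint directly beside the paragraph, so the
// reader meets the objection at the exact point it applies.
document.querySelectorAll('p[data-counterpoint]').forEach(function (para) {
  var toggle = document.createElement('button');
  toggle.textContent = 'See the opposing view';
  para.appendChild(toggle);
  var counter = document.createElement('aside');
  counter.className = 'counterpoint';
  counter.textContent = para.dataset.counterpoint;
  toggle.addEventListener('click', function () {
    if (counter.isConnected) {
      counter.remove();                                  // hide the opposing view
      toggle.textContent = 'See the opposing view';
    } else {
      para.insertAdjacentElement('afterend', counter);   // sidle it up next to the paragraph
      toggle.textContent = 'Hide the opposing view';
    }
  });
});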

Here, inline text sidles up next to the current article. Distracting, yes, but the point is to force the reader to engage with opposing viewpoints. Beyond that, a function such as this is far more immediately relevant than those digital distractions pressing upon a user at any given moment. This, at least, provides an opposing view to the reader, without requiring the reader’s ability, volition (or even intent) to seek it out.

Picture this normalised: text that was structured with opposing points built into it. Singular points from either viewpoint can be traced against one another. The reader can opt, with minimal effort, to see an inline dialectic. Where once we witnessed the words of sole demagogues, we could instead witness the dialogue of two interlocutors.

A preemptive response

There’s an argument that what I present here is tantamount to endorsing a celebration of our inability to focus, of our collective ADHD.

But focus can continue in a more superordinate sense; that is, the focus is on a larger topic with a free range to explore within that topic or argument. Focus too moves from writer to reader.

An argument could also be made that in moving the focus from writer to reader, points, arguments and syllogisms cannot be made — we would wander through information senselessly. But firstly, reader-centred writing doesn’t preclude more writer-centred reading; it can sit alongside it. Moreover, none of what has been suggested precludes the fundamental premise-conclusion format of a thesis; rather, it simply creates an interactive, branching path to that conclusion. In doing so a reader gets a more evocative, personal picture that works to inform them rather than simply telling them. Writers may protest, but with new dimensions of reading come new potentials for writers.

But aside from the potential disruption of the sacrosanct writer-reader paradigm, reader-centred writing can progress beyond relatively unimaginative conceptions of the web. Writing is a vast component of the web, but the web isn’t writing, it is information, and information is, to again reference Ted Nelson, “Intertwingled”:

“EVERYTHING IS DEEPLY INTERTWINGLED. In an important sense there are no “subjects” at all; there is only all knowledge, since the cross-connections among the myriad topics of this world simply cannot be divided up neatly.”

Information isn’t a page, bounded and linear. It is cross-cut, interwoven and multi-dimensional. Information is our lived, real world, and our world isn’t bound to singular linear focus. Our writing and reading shouldn’t be either.

#10
September 25, 2016
Read more

The Necessity of Cognitively Dissonant Information Experiences

You read an article about your absolutely favourite movie. It’s not flattering — it rips the movie apart. The article says the movie is…

….trite, overlong, hackneyed, and filled with cringe-worthy lines.

It argues that the movie contains….

….hamfisted and overtly political themes.

The author even…

…bemoans the generation that celebrates the movie.

Immediately, the synapses in a very particular part of your brain fire.

Something is happening, but you, such as “you” are, are not aware of it: your brain is trying to reduce the cognitive dissonance between this new information about the movie and your preexisting opinions and feelings about the movie.

Thoughts pop up in your head:

The movie probably offends the author’s sensibilities or it doesn’t align with his political opinion.

I was young when I liked it and it holds a special place for me; he and I are basically considering a different movie, you think.

He’s out of touch.

He’s an idiot.

There’s no good reason why he doesn’t like the movie.

Mechanisms in your brain are attempting to save you from expending energy, thinking about his points, considering them. Your brain is preventing you from expending the mental effort of holding onto two contrary opinions or taking the time to properly evaluate this new information.

Cognitive dissonance is the mental stress we feel when we hold competing information, ideas or beliefs in our head. When we get it, we have an urge to correct it, eliminate this inconsistency in our brain. Clearly this is a useful apparatus. We can’t “know” two contradictory things to be true. Practically, we don’t know what to believe or how to act if we don’t know the truth of the matter.

So when new, contradictory information comes we evaluate it against old information, ideally using a rubric of rationality and empiricism. Of course we don’t always do this.

We don’t have the time to carefully evaluate each side of an argument or search the web for a counter-argument. We don’t want to or can’t expend the effort. We’re busy. Internal and external pressures abound. And indeed, the payoff may not be worth it. Why would you spend hours and hours examining the validity of a writer's opinions and reading other sources just to determine whether he was right?

So we end up doing the above, rationalising, minimising and ignoring.

But the fact is that we do have access to reasonable opposing voices that we should listen to. The web gives us access to a multiplicity of opinion, of argument, of counter argument. Information is moving thicker and faster than it ever has. We’re flooded with information that can and should cause us to have dissonant ideas about our values, beliefs, and actions.

We can’t possibly evaluate all of these sources, yet we also shouldn’t use poor reasoning or insufficient evidence to evaluate competing opinions.

Unfortunately, the experience of the web is unconcerned with, and even opposed to, presenting balanced points of view. The common tenor among think-pieces today is one of polemics, of demagoguery. The internet think piece does not tell you to think, it tells you what to think.

A nuanced delivery of information, however, can help cognitive dissonance act as a weighted scale of sorts. Encouraging users to interact with information in new ways can strengthen the framework of their thought.

Although I mentioned earlier that we can’t “know” two contradictory things to be true, we can be faced with two opposing ideas, and sort out which one is true (or more true, as it were). Doing this, however, requires cognitive dissonance to be built in to the very information design.

Let’s say you come across an article

As you read it something happens

Another article pokes its head in, literally (and figuratively) nudging into the user’s line of sight. A user can see that there is “more to the story” just out of reach. They’re able to drag the screen over, and see the rest.

The article that is revealed contradicts the first in that it provides an opposing view, with a counterpoint for every point made in the first. Here, we are foisting cognitive dissonance upon the user.

Of course, users are not required to read the opposing article, but its very obviousness, its salience, increases the chance that an opposing view might be seen. Normally, finding such an article requires intent on the part of the reader — this experience does not. Some might call this a “digital nudge”.
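A bare-bones sketch of that nudge (the element id, the peek width, and a click in place of a proper drag gesture are all illustrative assumptions): the opposing article sits in a panel fixed at the right edge of the viewport, with just enough of it left showing to announce its presence.

// Sketch of the peeking opposing article: the panel starts almost
// entirely off-screen with a narrow strip visible as a nudge, and
// slides fully into view when the reader pulls it in (here reduced
// to a click; assumes CSS fixes the panel to the right edge).
var opposing = document.getElementById('opposing-article');
var PEEK = 40;   // pixels of the opposing article left visible
function setOffset(px) {
  opposing.style.transform = 'translateX(' + px + 'px)';
}
setOffset(opposing.offsetWidth - PEEK);   // start as a peek
var revealed = false;
opposing.addEventListener('click', function () {
  revealed = !revealed;
  setOffset(revealed ? 0 : opposing.offsetWidth - PEEK);
});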

In essence, this is a dialectic forced upon the reader, rather than a point of view forced upon the reader. This dialectic creates a cognitive dissonance that the user must sort out.

A reader might be persuaded more by their own emotions, or groupthink rather than any more rational or empirical evidence, but at least they are being exposed to an opposing point of view.

Implementing this requires a change in how we interact with information, but it also, obviously, requires a change in the mindset of how we produce content. This may seem more onerous than it actually is. Wouldn't you want the ability to see the opposing side of any argument? If anything, I believe it is a business opportunity.

But there’s more than just business. A proper and full dialectic is a necessity of good media practice, one that is intertwined with a good society.

Facilitating this means empowering people with two matched, sound points of view and making it difficult for them to rely on lazy ways to reduce their cognitive dissonance.

And UX and interaction designers can help to make this happen. All they need to give is just a bit of gentle nudging.


#9
July 2, 2016
Read more

On slouching inwards

Any historian will tell you that there’s essentially nothing uniform about progress. Divining the future is at best guesswork and at worst alarmism. But the one element that has been consistent in humanity’s progress is an overarching increase in the level of conceptual thinking. High-minded conceptual thinking, thinking bigger than yourself, naturally involves considering the underlying humanity we all share, not the superficial differences.

Some would call this march of progress ‘humanism’, others just ‘basic civility’. Certainly there are ups and downs but overall sectarian strife and inward looking groupthink have declined in the face of a deeper shared understanding of who we are.

That’s why Brexit has been so utterly depressing. Chest-thumping nationalism, blind hatred toward some confused otherness, and the angry un-tethering of joint relations are symptomatic of a downswing in deeper conceptual thought, in humanism.

It’s not difficult to lose your idealism for high-minded concepts in the face of severe pragmatic hardships, but when people are relatively well off — as they are in Britain today — inward facing tribal thought is difficult to rationalise.

In the lead-up to the referendum, vague ideas about recovering Britishness and controlling one’s own destiny were churned out by politicians. It may be argued that these ideas are conceptual — or even humanistic — and perhaps in some ways they are. But they are wrapped up in fear, in an unfounded anxiety towards a fictional future: immigrants overrunning Britain, Brussels controlling the government, and the English way of life dissipating.

Conversely, ideas about worldliness, about underlying connections, rely on humanism and deep connections involving culture, ideas, and art.

We bridge together because of these things, things bigger than us, and we divide when we look at things smaller than us. But the bridges that make smaller groups into bigger groups collapse when they aren’t supported by high-minded thoughts, and without bridges we see less of the “other”, only making us more insular and more afraid.

In the end one can only hope this is a temporary setback — a blip in the quest for the greater good — that we are facing. And whatever happens, holding on to the high-minded ideals of transnational humanism has never seemed more acutely important, especially when the English Channel seems deeper and wider than it has ever been.

#8
June 25, 2016
Read more

Prototyping the Extended Mind

There have been conversations I've had where, after a case of forgetfulness or curiosity, I've paused the conversation to look up pertinent information on my phone. “Paused the conversation” is perhaps a bit of a euphemism, “blithely ignored the other person” may be more apt. But I’m hardly unique - I’m sure you’ve done the same.

I’ve heard arguments that claim our memories will wither if we rely on smartphones to look everything up rather than attempting to remember it. I’ve listened to claim after claim that the art of conversation is sullied when people ignore others to look at their phone mid-conversation.

But allow me to take a rather provocative stance:

There is no functional difference between recalling information via your physical brain or via your phone. Our memory is as external as internal.

In academic literature, this is known as the extended mind hypothesis. The extended mind (EM) hypothesis is perhaps best exemplified by an anecdote, drawn from the originators’ journal article:

Inga hears from a friend that there is an exhibition at the Museum of Modern Art, and decides to go see it. She thinks for a moment and recalls that the museum is on 53rd Street, so she walks to 53rd Street and goes into the museum. It seems clear that Inga believes that the museum is on 53rd Street, and that she believed this even before she consulted her memory. It was not previously an occurrent belief, but then neither are most of our beliefs. Rather, the belief was sitting somewhere in memory, waiting to be accessed.

Now consider Otto. Otto suffers from Alzheimer’s disease, and like many Alzheimer’s patients, he relies on information in the environment to help structure his life. In particular, Otto carries a notebook around with him everywhere he goes. When he learns new information, he writes it down in his notebook. When he needs some old information, he looks it up. For Otto, his notebook plays the role usually played by a biological memory. Today, Otto hears about the exhibition at the Museum of Modern Art, and decides to go see it. He consults the notebook, which says that the museum is on 53rd Street, so he walks to 53rd Street and goes into the museum.

Otto believes he has access to his memory, much as Inga does. Though it may take a second or two for Inga to remember it, it may only take a short time more for Otto to open his book to the page where he wrote down the address. Isn’t this just a difference of quantity of time rather than anything more fundamental?

There are certainly differences in the experiences of retrieving this information, and anyone who has studied memory will tell you memory is not analogous to a simple filing system. Nevertheless memory is a system of recall and retrieval, as is the extended mind.

With smartphones, the plausibility of the EM hypothesis is even greater. Our friend Otto is actually at a disadvantage compared to us smartphone-equipped and able-memoried folk. Not only does Otto not know where the MOMA is, he doesn’t know exactly where in his notebook that information is. Arguably, his access is slower than Google’s (since he may need to search through his notebook), and a smartphone contains knowledge that you haven’t necessarily recorded previously — it is a repository with nearly limitless encyclopedic qualities.

The thinner the division between access and realisation (i.e. full awareness) of the information the more convincing the extended mind hypothesis becomes.

Memory is not the only cognitive capacity extended through our tools. Examine web browsing — the activity you are, or were just, engaged in. How did you get here? Twitter? Medium? Reflect on the path that you’ve taken to achieve this route. There’s a particular quality to it in that you forged it; it reflects your thoughts in that your thoughts impelled the retrieval of the information on the screen. A feedback loop forms where your thoughts are splayed out on screen in a tangible way. Each new query forms a new thought which forms a new query, and so on.

In this way, it’s reflective of not only what you are thinking about but your thinking in and of itself. Your curiosity, your need or desire to remember something, are represented by the inputting of queries, or the clicking of links. Your browsing behaviour is a map, a record of your thought, in much the same way as writing something down in a notebook is. Yet on the web, we can take it further — because you aren’t just recording your input on an empty page, you are interacting with, and reacting to, information. The content of what you are looking at and your reaction to it intertwine and become inextricably manifested to form an external mind.

But the manifestation of this thought is difficult to play with, to be “within”; it is insubstantial. The challenge is giving a corporeal form to our extended browsing mind, such that we can reflect on it and work with it as we do our own thoughts.

Yes, we have our web histories, but they are simply lists of pages, not representations of our branching, query-laden thought process.

In viewing a map of our thoughts, we can recall what we were thinking about, how various thoughts (manifested as pages, clicks, and queries) are interrelated, and reflect on the nature of our curiosities and thinking patterns. Importantly, this also lets algorithms visibly work with us, within our extended mind.

Let’s take an example. In the gif below, a user googles a word she vaguely knows, ‘acedia’:

As she searches, her browsing — her extended mind — is mapped in an area above the browser.

She enters the Wikipedia article, then goes back to the Google search.

Slowly her cognition becomes visible.

Acedia, by the way, is “a state of listlessness or torpor, of not caring or not being concerned with one’s position or condition in the world.”

Finally, we see her clicking a link to a related topic, “ennui”.

How do “ennui” and “acedia” relate? They both involve meaninglessness — a lack of purpose. Accordingly, we can see that the system recommends an article, “Leo Tolstoy on Finding Meaning in a Meaningless World”. Algorithms work to find relations between the words that she searches, finding common themes and ideas. From there she is able to find similarly themed articles, based on her history (great care, of course, needs to be taken with such a thing to avoid filter bubbles).
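One way to picture the underlying structure (purely a sketch; the node shapes and the keyword-overlap scoring are assumptions, not any particular product’s algorithm): browsing is recorded as a small graph of queries and pages, and relatedness between nodes is estimated from shared keywords.

// Sketch of an 'extended mind' browsing map: queries and pages are
// recorded as nodes, the user's moves between them as edges, and a
// crude keyword-overlap score hints at related themes.
var graph = { nodes: [], edges: [] };
function addNode(type, label, keywords) {
  var node = { id: graph.nodes.length, type: type, label: label, keywords: keywords };
  graph.nodes.push(node);
  return node;
}
function link(from, to) {
  graph.edges.push({ from: from.id, to: to.id });
}
function related(a, b) {
  // relatedness = number of shared keywords
  return a.keywords.filter(function (k) { return b.keywords.indexOf(k) !== -1; }).length;
}
// The walkthrough above, recorded as a graph:
var q1 = addNode('query', 'acedia', ['acedia', 'listlessness', 'meaning']);
var p1 = addNode('page', 'Wikipedia: Acedia', ['acedia', 'torpor', 'listlessness']);
var p2 = addNode('page', 'Wikipedia: Ennui', ['ennui', 'boredom', 'meaning']);
link(q1, p1);
link(p1, p2);
console.log(related(q1, p2));   // the shared theme ('meaning') is what a recommender could build on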

And perhaps she was feeling empty, forlorn, and was Googling these words. But on viewing her extended mind in this way, she could take a bird’s eye view, and perhaps realise something about herself she hadn’t otherwise known.

But it’s not just the work of semantic algorithms that could exist within an extended mind. Importantly, this “thinking” is all mapped for her to recall at a later date. She could tag the grouping of browsing, or have it tagged automatically. Much like one might remember the name of a friend of a friend by recalling the closer friend, new understandings can be sought by recalling how one browsed.

Pages become nodes of activity that have been determined by the cognition of the user. In this way the cognition of the user and the system work hand in hand to aid the user.

Interfaces are our bodily proxies to an intangible world — the world of information.

However, it’s difficult to be within our space in this realm, because we don’t have the phenomenological awareness that we have in real life. What does it “feel” like to think on the web?

Making this feeling more visible, more tangible, is a first step to narrowing the gap of experience. Yet this gap will never be crossed if we don’t make that first leap — the leap of understanding that our minds extend beyond the matter within our skull.

#7
June 12, 2016
Read more

Facilitating Digital Musical Exploration

Discovering new music is an experience for me like little other. It’s exciting. It’s stimulating. When I find some amazing new music, I spread its gospel to whomever will listen.

When I was doing my Masters, I spent long nights alone in front of the computer. In between studying my disjointed notes and designing wireframes, I browsed Youtube, much as one would browse through records in a record store. But unlike a record store, on Youtube I could listen to music with no more effort than it took me to look at it.

I spent hours wandering through the weird and the wonderful, coming across gems, but also a lot of crap.

Moondog, an odd spectacle of a man, was one of the gems I came across. His music, which ranged from odd chants to classical compositions to child-like rhymes, was all composed with utter virtuosity. I found out more by Googling him — he was often homeless, and would stand on New York street corners in a horned Viking helmet. An outsider musician, he was called (a fascinating subject in its own right).

#6
April 29, 2016
Read more

The User Experience of News

As much as we may wish it, informed citizens are not a natural result of a democratic society. Nor are they necessarily the result of simply wanting to be informed. In large part, this is because news and information acquired by even the most well-meaning among us is often emotionally manipulative, agenda-driven, or just simply clickbait.

For citizens to be informed, something is needed from those who disseminate the news. News organisations must ensure that the content they produce fits in with their readers’ lives, and is structured around how they consume, read and think. Even more importantly perhaps, news organisations have to work to ensure that citizens want to consume news of relevance.

This doesn’t mean news organisations must make their stories sensational, ribald or dumbed-down in order to collect as many clicks as possible. It means that news must be designed around an experience — the user’s experience.

I’m hardly the only one making such declarations. The American Press Institute recently invited 40 top thinkers in digital news to one of their Thought Leader Summits in which the theme was thinking of “news as a product”.

Thinking of news as a product gets us thinking about how users experience the news, rather than simply consuming it.

We cannot just think of readers as consumers, who are happy to simply consume news in a layout and format that is hundreds of years old in design and character. Approaching the news holistically, understanding how the editorial process integrates with the design process, means that we can leverage the properties of digital to give the user the best experience possible. Giving the user the best experience is vital for the news — there are cuts upon cuts as news leaves websites and jumps to social platforms.

There are a number of avenues that news and journalism can pursue in order to incorporate user experience into their product — here are just a few.

News as education

News is education in the sense that it allows users to experience and understand the world in terms of current events. But it can also be a gateway, a catalyst for an educational journey. If you read about Boko Haram in the news, you might realise you don’t know much about Nigeria, so you Google it, and find out facts you never knew: the country is host to 182 million people, and more than 500 ethnic groups.

Removing the Googling aspect and facilitating these educational journeys within the context of the news is well within the realm of possibility, but sadly it rarely happens.

BBC News Labs is the BBC’s “innovation incubator”, aimed at driving innovation in the organisation. Take a look at the BBC’s Explainers project, a BBC Labs initiative. In it, the BBC is trying to embed “explainer” interactions into the words of articles that would create dialogs that help define concepts, and link to other articles that are tagged with the concept.

BBC’s explainer Project. Via: http://bbcnewslabs.co.uk/projects/explainers/

Our experience of news, however, is so much more than whether we understand it. Our experience of news is tied intimately into how it’s written. News can affect our worldview simply by linguistic style, the use of particular words, or a focus on certain aspects of information.

Technology could provide us with the means to be more critical of the news. Take a look at the rough wireframe I’ve mocked up below. In it, various semantic and syntactic choices the journalist made are highlighted at the click of a button. With it, we can see how an application may detect words, syntax, and other features of language used to sway opinion.
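As a sketch of the mechanism (the word lists, categories and element ids are illustrative stand-ins; a real system would need far more sophisticated language analysis): scan the article’s text for loaded or hedging terms and wrap each match so it can be highlighted at the click of a button.

// Sketch of highlighting persuasive language: scan an article for
// words from small hand-picked lists of loaded and hedging terms and
// wrap matches in a <mark> element, coloured by category via CSS.
var loadedTerms = {
  emotive: ['outrage', 'shocking', 'disaster', 'heroic'],
  hedging: ['reportedly', 'allegedly', 'sources say']
};
function highlightLanguage(article) {
  Object.keys(loadedTerms).forEach(function (category) {
    loadedTerms[category].forEach(function (term) {
      var pattern = new RegExp('\\b' + term + '\\b', 'gi');
      article.innerHTML = article.innerHTML.replace(pattern, function (match) {
        return '<mark class="' + category + '">' + match + '</mark>';
      });
    });
  });
}
document.getElementById('highlight-language').addEventListener('click', function () {
  highlightLanguage(document.querySelector('article'));
});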

Those involved in creating the news can be taken to task with an app such as this. News organisations and journalists could be viewed with a much more critical eye. But how does this help news organisations? If readers can become more educated and critical of journalism itself, journalists and news organisations are forced to become better at their job, and produce a more accurate, robust and effective product. Users demanding more means that the news becomes better.

User Research

One of the key takeaways from the aforementioned Thought Leader Summit was the importance of user research. User research can reveal an enormous amount, but most importantly it can discover:

  • how users read the news

  • how users’ consumption of the news revolves around their daily routine

  • the formats in which users want to, and are most able to, consume the news

Many might think that, where the news is concerned, user research just means analytics, but analytics can’t describe the characteristics listed above, least of all any of the whys involved in them. For that, more qualitative user research is needed, such as user testing (in person or remote), surveys, interviews, focus groups, diary studies, guerrilla research or numerous other methods.

A map of some of the techniques that can be used for user research.

As Nieman Journalism Lab reported, both ProPublica and the New York Times have undertaken user testing, from long form diary studies to remote user testing.

Any new features aimed at enhancing UX must be tested thoroughly, with an eye toward usability and user experience metrics (such as comprehension), as well as other, more ethnographic data, such as at what point in their day someone might use a feature.

Personalisation

Integrating UX into news is a prospect rife with difficult issues. Matching news to a user’s needs risks losing objectivity, as a user with particular political stripes may only want to hear news reflecting their political outlook. Imbuing UX into news risks corralling users into “walled gardens” of news. Users may want only news on a particular subject or outlook, but as noted earlier, it’s the news’ job to make citizens fully informed of the world.

On the other hand, not all users want to hear everything, and some users want to hear more than others. Or, as Nieman Lab pointed out:

“People who know a lot about a story get bored by obligatory background; people who don’t know a lot about a story don’t get enough context”

The BBC’s app revamp in 2015 was aimed at personalising the news experience without cutting the user off from regular news feeds. The updated app allowed users to add topics to follow, providing a “My News” section beside “Most Read”, “Most Popular”, etc. In this way, users have a personalised experience, but also aren’t walled off from the rest of the news world.

There’s a fine balance between telling users what they want to know and what they need to know. But news stories can also be personalised by making certain parts of them relevant to individual people. This can only be done by breaking the various bits of news stories into granular, taggable, flexible chunks that can be reformed into stories and other narrative structures more appealing to users. One might call this atomisation.

Atomisation

The long and short form news article is a leftover from an era of the broadsheet and tabloid. These aren’t formats that leverage the capabilities of the digital. I’m not just talking about the potential of multimedia integration (video, commenting etc.), but rather an experience of the news that has the individual elements of stories structured around user needs. Content, even of individual stories, need not be the same for everyone — everyone’s information consumption habits are different.

Kevin Delaney, the editor-in-chief, president and co-founder of Quartz, a digital-only start-up, feels that the normalcy of the 800-word article has to end. He argues for the atomisation of news into pertinent, mobile chunks that can form personalised news dashboards. Delaney says this isn't too great a loss anyway:

“A lot of the 800-word stories have been padded out with the B matter. It’s called B matter because it’s B grade, not A matter, which is the focal point of the story.”

Refer back to BBC Labs again. One of their workstreams, atomized news, involves projects aimed at playing with granular elements of stories. They describe their initiative:

“Some segments of the audience find existing BBC approaches to news unwelcoming. We set out to explore if taking a completely different approach, segmenting stories into their constituent parts, would be more attractive to them”

The possibilities of such projects are endless. Imagine a news feed that is able to reach into breaking stories to look for and pull out events, people, or places that you’ve previously read about. Consider news stories, or even their headlines, reformatted to reflect details you’re interested in.
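A sketch of what an atomised story might look like as data (the field names and tags are assumptions, not any organisation’s schema): the story is stored as tagged chunks rather than one 800-word block, so a feed can filter and reassemble it around a reader’s interests.

// Sketch of an atomised story: tagged chunks that can be pulled out
// individually and recombined into a personalised view.
var story = {
  headline: 'Example story',
  chunks: [
    { type: 'event',      text: 'What happened this morning.',  tags: ['nigeria'] },
    { type: 'background', text: 'How the situation developed.', tags: ['nigeria', 'history'] },
    { type: 'person',     text: 'Who the key figure is.',       tags: ['profile'] },
    { type: 'update',     text: 'The latest development.',      tags: ['nigeria', 'breaking'] }
  ]
};
// Reassemble only the chunks matching a reader's interests.
function personalise(story, interests) {
  return story.chunks.filter(function (chunk) {
    return chunk.tags.some(function (tag) { return interests.indexOf(tag) !== -1; });
  });
}
console.log(personalise(story, ['breaking']));   // just the latest development for a well-read follower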

Of course, the risk of this is that news becomes uncompelling, lacking a strong narrative, a human voice. The app “Circa” found this out last year when it was forced to shut down. Based around chunking content into ever-updating stories, it failed to garner a following, and subsequently, enough capital. The UX was perhaps not looked at holistically — the concept was solid, but the content of the concept didn’t reflect user needs. Hopefully this will be an important lesson for future atomisers of the news.

News as reflection

What is the purpose of news? To inform readers of current events, and thus create a society of informed citizens, most would say. But being aware of current events doesn’t necessarily mean being informed. Being informed means truly understanding what is happening, why it is happening, and what it means to humanity at large.

Take a look at two unique examples.

Lapham’s Quarterly is a beautiful, thoughtful magazine that discusses topics within the tapestry of history, fascinatingly contrasting topical events and culture with historical parallels. During daylight saving time, the magazine posted a letter written by Benjamin Franklin about how to make use of the daylight available to us —

“Every morning, as soon as the sun rises, let all the bells in every church be set ringing; and if that is not sufficient, let cannons be fired in every street to wake the sluggards effectually, and make them open their eyes to see their true interest.”

— and infographics compare an English Duke’s from the 14th century to Donald Trump’s.

Slow Journalism is a magazine that covers stories that broke no fewer than three months ago. It provides long-form journalism that explores the context of a story, letting time accrue to examine how a formerly current event has panned out. In this way, it contrasts itself with other news organisations, who seek to be the first to break news.

Slow Journalism’s homepage

News as a reflection of the past, or in the context of time accrued, refocuses news away from the cult of the “breaking”. This opens up whole new experiences of the news to the user, experiences that are unique, insightful, and thought-provoking. To be among the first to do this is an attractive option for any news organisation.

Readers as Participants

Understanding readers as participants, as commonly understood, is not really a new concept. Citizen blogs pepper news sites, and front-line journalists are being replaced by whoever happens to be near breaking news with a Twitter account. But that’s reliance on readers solely as an outlet, not as an editor, curator, navigator, or sense-maker of the story.

Even more apparent is the idea that citizens are just ‘reactors’ to the news.

“Exploring the relationship between journalism and active audiences, most research has suggested that legacy news media resist rather than embrace such participation. Journalists typically see users as “active recipients” who are encouraged to react to journalists’ work but not contribute to the actual process of its creation”

write Lewis and Westlund in their paper Actors, Actants, Audiences and Activities in Cross-Media News Work.

The expansive roles that readers can play in the news have hitherto hardly been examined.

One example is The New Yorker Minute, an email bulletin periodically sent out by an anonymous few New Yorker readers that summarises each of the magazine’s stories and recommends which should be read. This is a fascinating example of atomising long-form journalism in such a way that the depth and breadth of an article is not lost; users are able to comprehend the full context of each issue, and move forward from there. However, it is New Yorker readers who take it upon themselves to write succinct summaries/reviews of New Yorker stories, not anyone who works for the magazine.

The New Yorker Minute simple signup page

But there is a more obvious example.

More often than not, news is passed through sites (Huffpo, the Guardian, NYT, Washington Post, etc.), then editorialised by our friends and family members on Facebook or Twitter. We witness content through the prism of our friends’, peers’ and thought leaders’ words. Users pick news sources to share as well, reflecting their own tastes and beliefs.

In this way users are curators, editorialisers, sense-makers and navigators of our shared news world. News organisations need to realise how their news is being filtered and steered by users — users are creating an experience for other users. They are not just reactors, they are presenters, filters, sense-makers and thought-provokers.

In order to grasp this phenomenon, a holistic understanding of how news is framed by other users must be incorporated into the user experience research and conceptual development of news UX.

None of this is far-fetched. As noted, many news organisations are already beginning to incorporate some of these ideas. Most are not. Budgets are a constraint, but news UX can certainly be done cheaply. Indeed, just an awareness of these concepts alone means that news organisations can stay one step ahead.

And news and UX aren’t so different. As Alex Schmidt notes, there’s a lot of commonalities between journalism and UX. They both require careful observation, the ability to ask questions, and a whole whack of other parallels.

It’s not unreasonable to say we are flooded with new ways to learn about the world. Ensuring these experiences are robust, effective, and enjoyable isn’t just something that’s good for news organisations, it’s good for all of humanity.

#5
April 21, 2016
Read more

To Focus on Why, not What, in User Research

A significant difficulty involved in tackling user research is finding a way into the user’s head. Many researchers avoid this entirely, and focus simply on what a user does.

Accordingly, focusing on what a user does can lead to a magnification of the importance of a single aspect that the user is engaged with.

Let’s take an example (a simplified, reduced example, but one that is illustrative nonetheless). You are user testing a retail website. A user browses to a sewing machine and clicks on photographs of it. When doing a think-aloud, she may casually tell you that she liked the photo.

Now, you take that information back and report on it: “Photos appreciated by users”. You might then prototype and test a design with more product photos.

The rationale for the user’s actions, however, is not truly understood in this scenario. This is especially important for something like this, where a usability issue is not involved. Yes, the user liked the photo, but why? Was it because she liked the aesthetics of the photo? She recognised the brand of the sewing machine? She thought she recognised something in the photo that was useful or novel?

If any of these reasons were the case, the conclusion that more photos = better may be a specious, or at least an exceedingly shallow, conclusion to draw from the data.

Continuing the example, if we had pressed the user and examined why she liked the photo, we may have found that perhaps, she was curious about the size of the sewing machine and that clicking on the photo was the best method to understand how large it was.

What are the aspects a user is interested in?

If we understood this we would glean much more than the conclusion “add more photos”, which may be fundamentally not what users are interested in, but only a superficial manifestation of a deeper rationale. Understanding this rationale, this why, allows user researchers to grasp the cognition of a user.

Returning to the sewing machine example, the user essentially wanted to understand how the sewing machine would integrate with her life, in a physical sense. In understanding that users on this site are interested in understanding how products integrate with their lives, we might want to prototype methods that facilitate this. We might want to user test a prototype that displayed pictures of multiple angles of a product, or showed the product in context with a person, or allowed users to see a video of a product in use, or even allowed users to superimpose pictures of the product on pictures of their home or person.

I recently completed my Masters thesis on this very topic — why users undertake interactions when they are using the web. After a bout of research, I developed a typology of the rationales users used to describe why they do what they do when browsing the web. These were not rationales for overall goals for web browsing, but rationales for single browsing interactions (clicking on various elements).

These rationales were divided into four categories (in actuality, the rationales were reactions to the content users were looking at, which then informed the rationale for their interactivity — but that’s perhaps needlessly in depth!):

  • Appeal: Users perform an interaction because they quite simply find something appealing or unappealing. This appeal may be due to the visuals, where it is in the hierarchy of the site (e.g. near the top of the page), the emotions it elicits, or many other aspects.

  • Apprehension: Users sometimes do things because they want to “apprehend” content. These rationales involved a user seeking to, or failing to, acquire or comprehend content. For example, this might involve a user clicking because he wants to further understand some written content, or could involve a user clicking “back” because he fails to understand content.

  • Congruence: Sometimes users are looking for something, and what they see may be congruent or incongruent with the idea of what they are looking for; thus they’ll often click on a link when its content is congruent with their expectations, and click ‘back’ when it is not. For example, a user may have the name of a particular person in mind and, not seeing it listed, hit ‘back’. This is the main rationale behind interactions when people are “finding” things. The idea is also closely related to that of “information scent”.

  • Life-world Orientation: Sometimes content that a user sees impacts their life, or the zone of experience that is their world. This content might affect their past, current, or future life (or it might not), so they perform an interaction.

Note that these typologies are not mutually exclusive — there may be multifaceted reasons as to why a user does something.

On identifying these rationales in users, we can consider design suggestions that acknowledge and reflect these rationales. I developed a framework of these suggestions for my MSc. You can check it out here. It’s lengthy, and expects that you know the sub-types of user rationales.

But a good deal of them are self-explanatory.

For example, if we did user testing where we looked at a user’s rationale, and we kept discovering that users were clicking back because they read everything on the page, we might categorise their rationale in the realm of Apprehension, in that they clicked back because they apprehended and exhausted all pertinent information.

What can this tell us?

That users are interested in the information and they likely want more. We can see that users want more of this particular type of content, and that we should structure this page, or collection of pages, around this. For example, we might work to create more related links.

It may be that it is in the site’s best interests that the users do exhaust all the information, so they perform an action (like click a call to action for more information). This is a positive outcome, and would require no change in the site, but even then, seeing this result allows us to understand and confirm that this is a quality on the page that is eliciting a preferred user interaction.

Indeed, understanding why a user does what they do helps us understand the user’s cognition as a whole. It makes it far easier to empathise with a user and model their cognitive behaviour.

Investigating the user rationale is a concept with a great deal of facets and depth — I’ll certainly write about it more. But in the meantime, should you want to learn more, click here to take a look at my Masters thesis, which has a whole lot more detail about what I have been discussing.

#4
April 10, 2016
Read more

Media Literacy: Applications and

I

The premise behind self improvement applications is that we can become better, more efficient people.

II

There are numerous applications that aid self-improvement:

#3
February 2, 2016
Read more

The Vast Emptiness of the Daily News

Our attention and emotional output are better spent elsewhere.

In 1846 Kierkegaard had a profound realization:

“Even if my life had no other significance, I am satisfied with having discovered the absolutely demoralizing existence of the daily press.”

He was struck by how the public reacted to the daily uptake of news, a relatively recent construct. He found that the public were no longer constrained by their locality — relegated to the news of the extended family and village. Instead, they were privy to the wider sphere of their existence — politics, trade, scandals and more.

#2
June 1, 2015
Read more

Neglecting Our External Minds: A Call for Better Bookmarks

Bookmarks are the key to organizing our thought as embodied in the web — and a stepping stone to the web-enabled mind.

Making our mark on our world is arguably among the most important activities we can undertake. The drive to make our mark on this world is underpinned by our most passionate hopes. It affords us the promise of being able to be proud, saying to the world, “look, here’s what I did!”.

But making our mark serves another purpose too. It provides guidance for us in that it serves as a reminder of where we’ve been, where we are, and where we want to go.

We write lists of our goals, write diaries containing our innermost secrets, and highlight text we might need for an upcoming exam. It’s a very personal activity, and involves reorganizing and tagging the world to represent how we understand and feel about it. It reflects a complex array of phenomena in our mental state — our thoughts, perceptions, and feelings.

#1
May 16, 2015
Read more