Designing news for the modern consumer can help overcome misinformation. Photo by Mike Ackerman
Where can we find the solution to the spread of digital misinformation? In technology? Media literacy? Fact-checking? Legislation?
There’s no question that these are useful entry points for attacking misinformation, but what about the root of the problem? The root of misinformation, at any given time, is our relationship with the news, both conceptual and practical. We’re the readers; we’re the ones misinformation is made for. If we want to attack the problem root and branch, we have to step back and consider our ‘experience’ of the news.
Most proposed solutions to misinformation lack this perspective, which can make them (and indeed has made them) ineffective. Case in point: current solutions seem to operate on the premise that users treat news as a repository of factual information about current events.
Solutions that inherit this premise quite reasonably attempt to address the problem by increasing people’s media literacy: fact-checking stories and displaying the outcomes of that fact-checking. There have been many approaches like this:
The Credibility Coalition is working on implementing ‘credibility indicators,’ which attempt to show how credible a news story and its source are. The endeavour is at an early stage, but in practice it would seemingly involve some sort of visual indicator telling the user that a particular news source is trustworthy, untrustworthy, or somewhere in between.
The Trust Project also provides an indicator system, this time covering news organisations’ ethics and other standards for fairness and accuracy. It appears as a logo on the sites of news organisations that have been verified by the Project.
Even UX-first approaches have homed in on technologically centred solutions. In this article, UX architect Jason Salowitz presents a credibility framework for news stories, discussing a ‘validation engine’ and a number of indicators that could help users determine the validity of particular articles and news sources.
These solutions act like an objective judge of the news, determining the ‘truthiness’ of an article or the credibility of a news organisation.
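To make the shared shape of these ideas concrete, here’s a minimal sketch of how such a judgement might surface as an indicator. The three-level scale, the threshold values, and the names are my assumptions for illustration, not any project’s actual scheme:

```typescript
// A hypothetical three-level verdict; real indicator systems may differ.
type Verdict = "trustworthy" | "mixed" | "untrustworthy";

// Map a credibility score from an assumed validation engine
// (0 = least credible, 1 = most credible) to a user-facing label.
function toVerdict(score: number): Verdict {
  if (score >= 0.7) return "trustworthy"; // assumed threshold
  if (score >= 0.4) return "mixed";       // assumed threshold
  return "untrustworthy";
}

console.log(toVerdict(0.25)); // "untrustworthy"
```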
But if we pull back, if we think with a human-centred approach, we can begin questioning the efficacy of these solutions: do they integrate with how people live their lives, and meld with their conceptualisation of the news?
So what would a human-centred view of news engagement tell us? Let’s investigate. In doing so, we can question whether our view of the news as an abstract reporting of facts is accurate, and generate some UX takeaways that any misinformation solution should consider.
In previous eras, engaging with the news required an intent to seek it out: you had to choose to pick up a newspaper or turn on a TV news program. Now, news typically arrives as posts on social media, comments on those posts, and chat messages between friends. ‘News’ tends to live as an ever-present entity that takes almost no effort to view.
This means any solution needs to mesh well with the embedded experience of the news. Solution frameworks should engage the reader in the same way the news does: embedded in our everyday experience, not abstracted from it.
As noted, news often manifests as tweets, posts or comments that frame or respond to news articles. In this way, ‘news’ is separated from the requisite factual bedding that news stories have historically had in media such as newspapers and television. Compounding this de-contextualisation, 60 percent of people don’t read past the headline. News organisations have responded to the atomisation of news and corresponding user habits by making news articles shorter and punchier than ever, often in the form of bullet points or inflammatory headlines.
This means solutions shouldn’t provide context or other useful information through vague approaches that require users to continually chase down facts, figures, and rationales. If people won’t read past the headlines, it’s unrealistic to expect them to innately want to understand the broader context of why a particular story is treated as misinformation.
This also means that asking users to understand complicated mental models is likely to be ineffective. Self-imposed and external time pressures mean that solutions need to do what they do quickly.
We are more partisan than ever. Filter bubbles, the immediacy of news, user comments, memes and mobile phones have all contributed to this state of affairs. We don’t have to dig very deeply to understand who represents our views and who does not.
This means that solutions can’t simply ignore or contradict a user’s associative group structure; rather, they must work within the parameters of people’s tribes. This isn’t to say these tribes are good or useful, merely that they exist and need to be accounted for. Solutions that tend towards partisanship, or even hint at it, will likely be unsuccessful.
So how might these principles be incorporated into solutions? Here are just a few ways.
Incorporating more credible articles alongside less credible ones can help educate readers, offering not only a more authentic description of events but also a sense of what accurate stories ‘look like.’ The objective isn’t only to show more plausible contrasting accounts; it’s to get people to explore outside their comfort zones. This is a form of what information studies calls serendipitous information discovery: users ‘accidentally’ come across information of value to them, embedded in their existing news consumption.
This approach has proven successful before, as a highly detailed and insightful report from the Shorenstein Center notes:
Experimental research by Leticia Bode in 2015 suggested that when a Facebook post that includes misinformation is immediately contextualised in their ‘related stories’ feature underneath, misperceptions are significantly reduced.
Facebook’s related articles feature
Encouraging contextual exploration and serendipity is useful, but it doesn’t mean that what users discover will necessarily sit comfortably with their beliefs, identity, and associative group.
Therefore, a credibility framework can be enhanced by nudging users to explore content: show them that other people like them are also looking beyond a single information source. No one wants to feel less knowledgeable or competent than others, so messages noting that others are viewing related content could prove valuable.
Here’s a quick wireframe of how social proof and contextual articles could work together:
Including related content and articles can “nudge” users to explore more information. Mockup by Vikram Singh.
A story is rarely presented without a social layer, given that news is already filtered and editorialised by friends and people you follow. Accordingly, this approach embeds well with a user’s experience.
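Here’s a minimal sketch, in TypeScript, of how the two ideas could combine: a low-credibility story gets paired with related, higher-credibility coverage and a social-proof message. Every name, field, and threshold here is a hypothetical assumption for illustration, not an existing product’s API:

```typescript
// Hypothetical shapes; nothing here reflects a real platform's data model.
interface Article {
  title: string;
  source: string;
  topic: string;
  credibility: number; // 0-1, from an assumed validation engine
}

interface ContextPanel {
  related: Article[];  // higher-credibility coverage of the same topic
  socialProof: string; // "others like you are exploring" message
}

function buildContextPanel(
  story: Article,
  corpus: Article[],
  readersExploring: number
): ContextPanel | null {
  const LOW = 0.4;  // assumed cutoff for "needs context"
  const HIGH = 0.8; // assumed cutoff for "credible enough to surface"

  // Only contextualise stories with poor credibility ratings.
  if (story.credibility >= LOW) return null;

  const related = corpus
    .filter((a) => a.topic === story.topic && a.credibility >= HIGH)
    .sort((a, b) => b.credibility - a.credibility)
    .slice(0, 3); // keep the panel small so it doesn't overload the reader

  if (related.length === 0) return null;

  return {
    related,
    // Social proof frames exploration as something peers already do.
    socialProof: `${readersExploring.toLocaleString()} readers also viewed related coverage`,
  };
}
```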
Misinformation thrives on ignorance and a lack of context. We therefore want users to understand the broader picture of a news story, so they can better navigate away from misinformation, but without overloading them or categorically shutting down their political perspective.
For example, take a look at a new site called Kialo, which hosts debates by topic. Each topic has arguments for and against; each argument contains sub-arguments for and against it (and so on, into ever more specific sub-arguments); and each argument and sub-argument is voted on.
Here’s how the topic (the grey box) of whether the US should pay reparations for slavery is structured, with green arguments ‘for’ and orange arguments ‘against’:
Kialo thus encourages users to navigate away from a single information source using a tree structure.
Users are able to explore the totality of a topic in a familiar format (most people are used to tree structures). If a validation framework could harness not only the validity of content but also its theme, something like this would be an exceptionally powerful way to fight misinformation.
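As a rough illustration of that structure, here’s how a Kialo-style argument tree might be modelled; the shape, field names, and voting field are assumptions of mine, not Kialo’s actual data model:

```typescript
type Stance = "for" | "against";

interface ArgumentNode {
  claim: string;
  stance: Stance;           // green 'for' or orange 'against' the parent claim
  votes: number;            // assumed community rating of the argument
  children: ArgumentNode[]; // sub-arguments for/against this claim
}

interface Topic {
  question: string; // the grey box: the debate's central question
  arguments: ArgumentNode[];
}

// Print an indented outline of the debate: the familiar tree format
// users already know from file explorers and threaded comments.
function printDebate(topic: Topic): void {
  console.log(topic.question);
  const walk = (node: ArgumentNode, depth: number): void => {
    const marker = node.stance === "for" ? "[for]" : "[against]";
    console.log(`${"  ".repeat(depth)}${marker} ${node.claim} (${node.votes} votes)`);
    node.children.forEach((child) => walk(child, depth + 1));
  };
  topic.arguments.forEach((arg) => walk(arg, 1));
}
```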
Here’s a wireframe of how this might look in a validation framework:
A validation framework could include a variety of related articles on the same topic. Mockup by Vikram Singh.
Of course, this could get unwieldy and confusing for a user. The approach would likely need to strictly limit the number of articles present, with only those of the highest credibility appearing. Primarily, it could act as a contextual element next to articles with poor credibility ratings. In this way, you can work with highly partisan users and their associative groups.
Users ‘in the wild’ consume news in ways that are unpredictable to the creators of news and the designers of news experiences. Our hubris leads us to imagine that we can control how people will use a system we create, but we can’t design a particular experience; we can only design for it.
The only real way to understand if solutions to misinformation are effective is to continually test them with real users and iterate on the solutions based on their feedback.
The Trust Project did some interviews to understand how people consume the news, and despite being fairly difficult to parse, its report has some good information. Unfortunately, the Project committed the sin of letting users design the solutions, rather than observing how users consume the news or watching them use prototype solutions (get users to do, not tell):
The Trust Project’s Research Report
I’m not clever enough to have all the answers to misinformation, but I do believe we are not thinking broadly enough. Solutions to misinformation thus far may only be effective for people with high digital literacy and strong educational backgrounds, rather than for readers at large.
So I’d love to hear your thoughts on how solutions to fake news can be better integrated into our daily experience, and your opinions on my suggested solutions. Ultimately, we’re all victims of misinformation, even if we aren’t consumers of it.