There’s not enough disobedience online. Yes, that’s right, you heard me: there’s not enough space for disobedience online.
This may seem like a ludicrous claim, especially when leveled at the major social media platforms, which are currently struggling to deal with overwhelming tides of misinformation, harassment, conspiracy, demagoguery, and more. But these content moderation issues are symptoms of a greater problem with how the platforms are governed: in an extremely hierarchical, top-down manner.
Platform authoritarianism is fractal. Content moderators are low-wage workers whose critiques of moderation guidelines are valued even less than the moderators’ (largely ignored) mental and physical health. Product teams treat users as subjects to be surveilled rather than partners in design. The companies that own the platforms craft their goals around the needs of distant shareholders rather than immediate stakeholders such as workers and users. Even at the societal level, we’ve chosen to reify these power structures by protecting platforms from responsibility for bad decisions (see Section 230 of the Communications Decency Act) and by criminalizing users’ efforts to work around these poorly designed systems (see the Computer Fraud and Abuse Act). From lines of code to lines of law, there’s very little room for disobedience.
It’s easy to see that as a good thing when we’re visualizing a Twitter troll, flouting the site’s content policy in order to spit invective at an activist. But there are many different kinds of disobedience. One kind, civil disobedience, has helped rectify some of the greatest injustices in our history.
The hallmark of civil disobedience is the acceptance of legal consequences. In this way, the protester challenges the injustice of a given law, institution, or set of practices, while recognizing the legitimacy of the overall system. Civil disobedience is a warning sign for society that something’s gone wrong; it surfaces information about our failures and asks us, collectively, to act.
In a well-functioning system, there’s no need for mass civil disobedience, because this information is surfaced and acted upon quickly and easily, with mistakes fixed and reparations made. Unfortunately, and I think few would dispute this, we are not currently in a well-functioning system. There is suffering, inequality, and injustice everywhere, and it is very, very hard to get the powerful to act, whether they’re in Congress or on Facebook’s Board of Directors.
I mentioned above that platform authoritarianism is fractal. We need to make room for disobedience and dispute at each of these levels. At the societal level we need different laws and better regulation; at the company level we need unions, worker cooperatives, and alternative institutional forms that include all stakeholders; at the product level we need participatory design and community planning.
I want to focus for now on the lowest level: how the code that runs social media platforms constrains how people can interact with each other. This level is, of course, shaped by all the others, and by the way those higher systems are cut off from feedback, from pushback, from disobedience.
To that lowest level now:
When you go to a site like Twitter, you see a range of actions open to you: tweet, retweet, follow, unfollow, block, report, etc. These are the actions you’re allowed to take. But what about the actions you’re not allowed to take? They are never surfaced to you by the platform, and so are hidden from you behind an impenetrable wall. “Do it anyway and await the consequences” just isn’t an option for you.
There are two different kinds of ‘disallowed actions’ I’m speaking of here. There are explicitly disallowed actions - that is, actions which have been conceptualized as something users might want to do and built into the system, but which you have been deemed ineligible to take. For example, messaging a stranger who has their DMs closed, or replying to people who have replies turned off, are explicitly disallowed actions. You’ve also been explicitly disallowed from taking actions reserved for platform moderators, such as deleting content, labeling it as misinformation, or suspending a user.
Implicitly disallowed actions cover a much wider space. These are hypothetical or counterfactual actions which the product designers never built into the platform in the first place. On a site like Twitter, these counterfactual actions include things like adding a user to a collectively managed block list or casting a vote on whether content should be hidden.
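To make the distinction concrete, here’s a minimal sketch in Python, using entirely hypothetical names rather than any platform’s real code, of how permission-gated interfaces typically work: explicitly disallowed actions exist in the code but fail a permission check and are hidden from you, while implicitly disallowed actions have no code path at all.

```python
# A hypothetical sketch, not any platform's actual code: actions the user lacks
# permission for are simply never rendered, and actions that were never built
# have no code path at all.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    can_dm_strangers: bool = False
    is_moderator: bool = False

# The platform's full vocabulary of actions, each paired with a permission check.
ACTIONS = {
    "tweet":       lambda user: True,
    "send_dm":     lambda user: user.can_dm_strangers,  # explicitly disallowed for most users
    "delete_post": lambda user: user.is_moderator,      # explicitly disallowed for non-moderators
    # "vote_to_hide_content" appears nowhere here: it is implicitly disallowed.
}

def visible_actions(user: User) -> list[str]:
    # Only actions the user is permitted to take are ever surfaced;
    # everything else is hidden, so "do it anyway" is not an option.
    return [name for name, allowed in ACTIONS.items() if allowed(user)]

print(visible_actions(User("ordinary_user")))           # ['tweet']
print(visible_actions(User("mod", is_moderator=True)))  # ['tweet', 'delete_post']
```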
It’s impossible to make all possible actions explicit. There will always be counterfactual actions that can only be made available by influencing product development to encode them; hence the need for participatory planning.
But what about explicitly disallowed actions? Should they be hidden away from users, or should users be able to take them anyway? Should they be able to disobey?
I think, in the context of Twitter as-is, the answer is no. We can’t fix an authoritarian system by acting only at the lowest level - there would need to be bigger, deeper changes first. But Twitter’s not the only game in town. We can explore what disobedience would look like on an anti-authoritarian platform.
For the past several years I’ve been working on a project, Kybern, which is designed to give communities more choice in their governance systems. Until recently, I’d been designing the interface to do just what Twitter does: show an action if a user has permission to take it, and hide it if they don’t. But now I’m considering showing the Kybern equivalent of Tweet buttons to everyone, even to users who don’t have permission to take those actions. I want users to have the knowledge and space they need to contest that lack of permission.
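As a rough illustration of that alternative (hypothetical names only, not Kybern’s actual interface code), the change amounts to labeling unpermitted actions rather than hiding them:

```python
# A hypothetical sketch: every explicit action is surfaced to every user, with
# the ones they lack permission for labeled rather than hidden.
def render_action_buttons(permissions: dict[str, bool]) -> list[str]:
    buttons = []
    for action, permitted in permissions.items():
        if permitted:
            buttons.append(action)
        else:
            # Shown anyway, so the lack of permission is visible and contestable.
            buttons.append(f"{action} (not permitted; click to contest)")
    return buttons

print(render_action_buttons({"add_post": False, "comment": True}))
# -> ['add_post (not permitted; click to contest)', 'comment']
```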
This approach is facilitated by Kybern’s action history structure, which creates a record, complete with discussion space, for every action taken on the platform. If a user can create an action, even a rejected, un-implemented action, that’s a record of their desire for a different governance system, a record which can potentially be acted on. Users can also optionally include a note whenever they take an action. In the case of someone who wants to add a post but isn’t allowed to, that note could say, “I know I don’t have enough points to post yet, but I think this is really important.”
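Here’s a minimal sketch of what such an action record might look like, using assumed field names rather than Kybern’s actual data model: even a rejected action leaves a trace, along with the actor’s note and a space for discussion.

```python
# A hypothetical sketch of an action-history record: every attempted action is
# logged, even when rejected, so the record itself becomes evidence of demand
# for a different rule. (Assumed names, not Kybern's actual data model.)
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActionRecord:
    actor: str
    action: str
    status: str                 # e.g. "implemented", "rejected", "flagged"
    note: str = ""              # optional explanation from the actor
    discussion: list[str] = field(default_factory=list)  # space to contest the outcome
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

history: list[ActionRecord] = []

# A rejected action still leaves a record the community can act on later.
history.append(ActionRecord(
    actor="new_member",
    action="add_post",
    status="rejected",
    note="I know I don't have enough points to post yet, but I think this is really important.",
))
```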
Communities could even choose to make some rules completely disobeyable. In such a case, if I lack permission to post, a community could be configured to let me post anyway, with the violation automatically flagged for review later.
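Sketched in the same hypothetical terms, a disobeyable rule just changes what a failed permission check does: instead of rejecting the action outright, it lets the action through and flags it for later community review.

```python
# A hypothetical sketch of a "disobeyable" rule; the flag and status names are
# assumptions for illustration, not Kybern's actual API.
def attempt_action(user_has_permission: bool, rule_is_disobeyable: bool) -> str:
    if user_has_permission:
        return "implemented"
    if rule_is_disobeyable:
        # The action goes through anyway; the violation is recorded for review.
        return "implemented_flagged_for_review"
    return "rejected"

print(attempt_action(user_has_permission=False, rule_is_disobeyable=True))
# -> implemented_flagged_for_review
```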
Obviously such a system opens up lots of opportunities for abuse. But Kybern is designed to empower communities to handle abuse themselves, in a transparent and collectively determined manner. It is up to them to decide how to discourage bad behavior and repair harm, not up to me as the project’s developer to decide for them. They will get things wrong a lot of the time, yes, but so would I.
And when they get things wrong, there is space on the platform for people to disobey, to dispute, to contest. And we will all be stronger for it.