Regulation, an independent scholar, the semantic web, and SCALE (Across the Sundering Seas, #11)
Hello again, readers-of-newsletters! This week returns to the more “traditional” format: lots of good links from things I read this week!
-
A Regulatory Framework for the Internet – first up is Ben Thompson’s weekly article: probably the best take on how to actually regulate the internet in a way that isn’t ultimately just awful for everyone. Because it turns out that swaths of the internet are in fact in need of some degree of regulation… but most of the regulatory approaches on offer to date have seemed (or proven!) likelier to end up affecting the wrong players in the market, to cement the existing dominance of the worst companies, and to be tools of abuse for bad actors. That goes, in many ways, for everything from the DMCA two decades ago through last year’s GDPR and this year’s early proposals by Elizabeth Warren. Thompson makes a clear case for differentiating between different segments of the market based on specific attributes of those segments—in ways that would keep Facebook from abusing its position and force its hand on consumer protection, while still allowing ISPs to operate in reasonable ways, for example. Perhaps most importantly, for those skeptical of regulation as needful here (or anywhere), he traces out why regulation is appropriate:
This is, in its own way, a market failure, albeit not, to be clear, in an economic sense: the allocation of goods and services by a Super-Aggregator is not only efficient, but also generates significant consumer surpluses. The failure, rather, comes from videos like that of the Christchurch massacre, or problematic YouTube content. It is not good for society that terrorists be able to freely broadcast their videos, or that child-exploitation videos spread on YouTube.
The problem is that there is no way to check this behavior: the vast majority of Facebook and YouTube users self-select away from this content, and while advertisers raise a fuss if they find out their ads are alongside this content, they have no incentive to leave the platforms entirely. That leaves Facebook and YouTube themselves, but while they would surely like to avoid PR black eyes, what they like even more is the limitless supply of attention and content that comes from making it easier for anyone anywhere to upload and view content of any type.
The “market” as such is succeeding perfectly in strictly economic terms here! …and that’s the problem. The whole thing is worth your time.
-
Nadia Eghbal: yes, here I’m simply linking a writer, because I think her work is of sufficient interest that, if you’re the kind of person who reads my newsletter, you should almost certainly keep an eye on the variety of very interesting things she’s doing. Eghbal first came across my radar through a talk she gave in 2017 on open source software sustainability—a wide-ranging, extremely interesting, and approachable talk—using the metaphor of city infrastructure to show how open source software has become the infrastructure of our digital ecosystems. This week I enjoyed both the latest issue of her newsletter and a blog post on patronage. Her Research page is chock full of interesting things she’s dug into over the last few years; in the course of writing this blurb I read “An alternate ending to the tragedy of the commons”—and immediately shared it with my Winning Slowly cohost Stephen Carradini as of interest for our work this season (as well as just being interesting in general):
Ostrom believes that given the right conditions, actors can work together to sustainably manage the commons. That means not leaning on government, foundations, or businesses to solve the problem, but rather recognizing (and trusting!) the community’s ability to regulate itself, so long as individuals have high mutual trust and a low discount rate (i.e., long-term interest in the community). Managing at scale doesn’t mean stuffing more and more people into the same community, but acknowledging boundaries and working together to govern at multiple levels.
Not only did she put Ostrom on my radar, but she gave me more helpful (sharper, better!) tools for making the kinds of arguments I’ve been trying to make already—something I deeply appreciate.
She’s also interested in making independent scholarship viable (and working as an independent scholar herself!), and as someone who dreams of being an independent scholar someday… well, I’m a fan!
-
Whatever Happened to the Semantic Web? (Sinclair Target/Two-Bit History)—a fascinating deep dive (and a long read, therefore!) into some ideas that could have but ultimately did not shape the history of the web since roughly 2000. Those ideas have ended up both mostly obviated—and also, insofar as they were technically useful, absorbed—by Google’s and Facebook’s dominance of the web: the little bits of the “semantic web” that made it through in practice are Google’s “knowledge” cards and Facebook’s OpenGraph tools, which make fancy previews of your blog post or video or whatever else show up in the News Feed. The hope was for something much richer. Why didn’t we get it? “Whatever Happened to the Semantic Web?” makes an excellent effort at tracing out the history of people and tech that left us where we are today. One of the points raised even in the early history of the idea:
Even if users were universally diligent and well-intentioned, in order for the metadata to be robust and reliable, users would all have to agree on a single representation for each important concept. Doctorow argued that in some cases a single representation might not be appropriate, desirable, or fair to all users.
This basic challenge remains. It is one I have run into repeatedly in programming tasks. The world simply is too complex a place—especially when humans are involved—to be easily or accurately reduced to a set of XML tags, no matter how carefully designed. That does not mean there is no future for these ideas; I might just be drawing on a few of them myself in a project I’m working on. But it means that if we do use them, we have to do so deeply aware of their limitations, with built-in “escape hatches” to deal with those.
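To make the single-representation problem concrete, here is a minimal, entirely hypothetical Python sketch. The metadata keys are invented for illustration (loosely echoing Dublin Core and Open Graph naming conventions), and the mapping table stands in for the kind of “escape hatch” I mean: a consumer that must cope with multiple vocabularies describing the same concept.

```python
# Hypothetical sketch: two sites describe the same concept ("author")
# using different metadata vocabularies, so a consumer has to maintain
# an ad hoc mapping between them. Keys here are illustrative only.
site_a = {"dc:creator": "Cory Doctorow"}   # Dublin Core-style key
site_b = {"og:author": "Cory Doctorow"}    # Open Graph-style key

# The consumer's "escape hatch": an explicit, maintained list of every
# key it knows can mean "author". There is no single agreed representation.
AUTHOR_KEYS = ("dc:creator", "og:author")

def extract_author(metadata):
    """Return the author under any known vocabulary, or None if no key matches."""
    for key in AUTHOR_KEYS:
        if key in metadata:
            return metadata[key]
    return None  # graceful failure when the representation is unrecognized

print(extract_author(site_a))  # -> Cory Doctorow
print(extract_author(site_b))  # -> Cory Doctorow
```

The point of the sketch is that the mapping itself is the fragile part: every new vocabulary (or disagreement about what “author” even means) forces a human to extend `AUTHOR_KEYS`, which is exactly the coordination problem Doctorow identified.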
A final, short and (not-so?) sweet note here from Alan Jacobs, responding to Jeffrey Zeldman asking if the “indie web” can scale:
I think that’s the wrong question. Of course the indie web cannot scale. But that’s a feature, not a bug. Scale — as-big-as-possible, universal-not-local, something-for-everyone scale — is the enemy. It’s the biggest enemy that community and fellowship and friendship can possibly have. If it scales, I want no part of it.
Although yes, this is indeed tracking along lines related to the points I made last week, I’m not at quite the same point as Jacobs here. I still use (and indeed work on) some platforms which do indeed scale. But I share his view that “does it scale?” is wholly the wrong question. Scale may be manageable under certain specific circumstances. But it is by no means anything like a categorical imperative for tech… and indeed, it may be the opposite.