Connection Problem S03E19: The Reckoning
Last week, I forgot to include two items I meant to share. Let me start with those and then move on to the other news. Also, I'm again sending this on Thursday because I won't get to it tomorrow. Apologies; we'll be back on a Friday schedule soon.
×
As always, a shout out to tinyletter.com/pbihr or a forward is appreciated!
×
The Forgotten Items
Activist in residency (in motherhood)
Michelle decided to add another layer to her parental leave, a counterweight to the often rewarding, sometimes stressful, but also sometimes somewhat braindead aspects of caring for a newborn: she started a self-directed residency to do research into utopias. This is awesome. More people should do that. I might do the same while I'm spending more time at home with the little one.
Solarpunk: Against a Shitty Future
Michelle made me aware of Solarpunk, an emerging genre of utopian science fiction based on the premise that the world lives on infinite solar energy. (I'm afraid I might be short-changing the genre with this summary, but bear with me.) It sounds interesting in that it's decidedly and unabashedly utopian and celebrates diversity and inclusivity; it also sounds like it might easily turn out to be a little boring. I haven't yet read any of it, which I'll change as quickly as I can; watching a new genre emerge is always interesting. This review is an interesting way to frame it, I think. Also, I have a hunch the author is onto something with this quote that I'll end on: "We can start with what we might call the fascism of utopia. The fact that there is no abject, no excrement, in Solarpunk belies a sinister omission."
×
A Trustmark for IoT: First Thoughts
As many of you know, (1) I've been working on a trustmark for IoT and (2) I believe in working and thinking in the open. In that spirit I shared first thoughts on how I'd like to develop this trustmark over the next few months. Feedback welcome!
×
Openness, ethics, tech
Malavika Jayaram kindly invited me to a small workshop with Digital Asia Hub and the Humboldt Institute for Internet and Society (abbreviated HIIG, after its German name). It was part of a fact-finding trip to various regions and tech ecosystems to figure out which issues are most important from a regulatory and policy perspective, and to feed the findings from these workshops into policy conversations in the APAC region. This was super interesting, especially because of the global input. I was particularly fascinated to see that Berlin hosts all kinds of tech ethics folks, some of whom I knew and some of whom I didn't, so that's cool.
×
The Elephant in the Room
Ok, I guess I can't avoid talking about this story any longer: Cambridge Analytica, maybe the most talked-about story right now. I don't have any hard analysis to add, so I just want to share these purely anecdotal and subjective impressions:
(1) Everything that's bad in the world right now. From the sucking dry of Facebook's not-nearly-secure-enough data vaults to the way their apps allow developers to bleed users data-dry through Less-Than-Entirely-Meaningful Consent, this scandal feels representative of all that's bad in the world right now: sleazy, unethical actors exploiting both people and infrastructure for ill-gotten gains; companies that do the bare minimum to protect their users; and unholy alliances of shady characters and organizations conspiring in new and unsavory ways. This is a snapshot of the connected world as if straight out of a cheap spy thriller, only, alas, it's non-fiction.
(2) The immune system of the networked world seems to be kicking in, with ever more, and ever more skilled, researchers looking into these things. It still feels like a fundamentally unbalanced, one-sided fight, but at least more and more of these activities are being dragged into the sunlight. One data leak or #metoo scandal at a time.
(3) The sinner-turned-savior narrative is getting a little old. Someone does something bad, recognizes the error of their ways, and turns around to battle the monster they unleashed. It's a narrative of bad-guy-turned-hero, and I see why it appeals, but oh boy is it annoying that these same people could have just not done the damage in the first place. Chris Wylie, who's behind this round of leaks, was a central figure in everything that happened here. The Guardian describes him as young and naive, a guy who "didn't have a clue what he was walking into", but also as incredibly smart, "Machiavellian", and able "to plan 12 moves ahead". So which is it: someone who knowingly hangs out with Bannon and his ilk and discusses psy-ops, or an innocent data nerd? Everybody has the chance, every day, not to do something that enriches them at other people's expense; the choice not to pioneer a new way of hurting people. Yet we strangely celebrate those who first do the damage and then turn around more than those who never did it in the first place. I'm not quite sure what to make of that. It just feels a little... stale? (Cue last week's read, Hail the Maintainers.)
I guess what I'm saying is, I'm glad this story is out there; but I'm not so sure about the choice of heroes and villains, and it'd be great to see some of the villains suffer the consequences of their actions.
And by the way, we already know who hasn't (yet?) suffered any consequences for their actions: Mark Zuckerberg, who went first on Facebook and then on CNN to deliver a big "sorry not sorry, we didn't do anything wrong". As Ben Thompson points out (potentially paywalled), Facebook isn't the victim here; it opened up its users' data to developers consciously and strategically, in order to attract them to its platform: "And so began the great five year Facebook data giveaway to developers: If you build your apps on our platform, we’ll give you more user data than you could possibly imagine. And that’s what happened."
How. Many. More. Chances. Do. They. Get. Sigh.
×
The Reckoning (Part 1)
NatGeo asked a historian to investigate their coverage of people of color in the U.S. and abroad. Here's what he found. (Spoiler alert: It was very, very bad.) I truly applaud this kind of introspection, especially done publicly and without glossing over any of the grossness that happened there in the past.
×
The Reckoning (Part 2)
Yonatan Zunger discusses how computer science, unlike physics, chemistry, or other big areas of science, hasn't yet had its "reckoning". His hypothesis: every field inevitably has a moment of reckoning, like physics did with the bomb and chemistry did with chemical weapons. But computer science hasn't had that yet. It's not yet a mature field of science because it hasn't yet developed any of the safety measures and ethics that come after such a horrific moment: meaningful ethics training, certification boards with teeth, and a culture that's wary of how its tech might be weaponized.
I find this narrative really compelling, in a pretty terrifying way.
×
Some AI-y Things
(1) Baidu says they can synthesize your voice based on just a few seconds of sample data. I haven't tried it, but this sure is a fast-moving space to watch.
(2) On Twitter, @samim shared this photo of (presumably) a Google team in India "focused only on creating labels for new machine learning datasets." This, indeed, is what machine learning looks like:
(3) Remember William Gibson's concept of the Ugly T-Shirt, i.e. apparel designed to confuse facial recognition? Well, what happens when people wear those around autonomous cars, or use face-detection camouflage make-up? Autonomous cars are trained to recognize humans who look and behave the way humans usually do. But if privacy-conscious folks in such camouflage wandered the streets, would they just get run over? Alas, it's not as outrageous a question as it sounds, because we just had a sad premiere: the first human was killed by an autonomous car. An Uber car in autonomous mode ran over a woman who was crossing the street (and no, she wasn't wearing an Ugly Tee). Cue Citylab: what will happen if we just accept that a certain number of pedestrian deaths is an inevitable part of adopting autonomous vehicles?
(4) Politico has tough words on Europe's AI delusion.
(5) Over on The Verge, read the tragic story of a person with cerebral palsy whose health care was cut drastically by an algorithmic assessment. Another case for keeping a human in the loop, and another instance of institutions hiding behind algorithmic opacity.
×
Some Blockchain-y Things
(1) Child abuse imagery found within bitcoin's blockchain
So here's an interesting potential issue with blockchains. Again (like recent notes about blockchains potentially violating the GDPR in some cases), this one is about the blockchain's core strength: its immutability. It turns out bad actors can store illegal (and societally unacceptable) content on the Bitcoin blockchain. That's especially problematic because the blockchain is distributed, which means every participant might inadvertently be in technical possession of said content, and involved in distributing it. And since this is the Bitcoin blockchain, that content might bring down Bitcoin itself. From the article:
"[R]esearchers have discovered unknown persons are using bitcoin’s blockchain to store and link to child abuse imagery, potentially putting the cryptocurrency in jeopardy."
The researchers continue:
“Our analysis shows that certain content, eg, illegal pornography, can render the mere possession of a blockchain illegal,” the researchers wrote. “Although court rulings do not yet exist, legislative texts from countries such as Germany, the UK, or the USA suggest that illegal content such as [child abuse imagery] can make the blockchain illegal to possess for all users.”
This is a fascinating instance of unintended consequences in action. (For the technically curious, the sketch below shows how arbitrary data gets embedded in the chain in the first place.)
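For context on the mechanism: Bitcoin transactions can carry small amounts of arbitrary data, most commonly via a provably unspendable OP_RETURN output, and once a transaction is mined those bytes are replicated to every full node. Here's a minimal sketch in Python of how such a data-carrying output script is built; it only constructs the script bytes for illustration, doesn't create or broadcast a real transaction, and the 80-byte cap reflects common relay policy rather than a consensus rule.

```python
# Minimal sketch: embedding arbitrary bytes in a Bitcoin output script
# via OP_RETURN. Builds the script only; no transaction is created.

OP_RETURN = 0x6a      # opcode marking an output as an unspendable data carrier
OP_PUSHDATA1 = 0x4c   # opcode for pushes longer than 75 bytes

def op_return_script(data: bytes) -> bytes:
    """Build an OP_RETURN output script that embeds `data`."""
    if len(data) > 80:
        # Standard relay policy (not consensus) caps OP_RETURN payloads;
        # larger files get chunked across many outputs/transactions.
        raise ValueError("exceeds standard OP_RETURN relay limit")
    if len(data) <= 75:
        push = bytes([len(data)])                 # direct push opcode
    else:
        push = bytes([OP_PUSHDATA1, len(data)])   # OP_PUSHDATA1 + length
    return bytes([OP_RETURN]) + push + data

script = op_return_script(b"hello, blockchain")
print(script.hex())  # -> 6a1168656c6c6f2c20626c6f636b636861696e

# Every full node that stores the chain ends up holding a copy of these
# bytes, which is exactly the problem the researchers point to.
```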
×
So where from here?
(1) Tim Berners-Lee: we must regulate tech firms to prevent 'weaponised' web A week or two ago, Tim Berners-Lee wrote an open letter to mark the 29th anniversary of the WWW. (Awkward: 29 years? What kind of anniversary is that? But anyway, it's a good and important letter, so whatever the occasion.) He openly calls for regulating the web lest it be weaponized, and for less centralization. And while he's at it, he tries to dispel two myths that might otherwise limit our thinking:
“The myth that advertising is the only possible business model for online companies, and the myth that it’s too late to change the way platforms operate. On both points we need to be a little more creative”
Aye. All of the above.
(2) Annotate the Web Good friend Boris Anthony has been involved with the Rebus Foundation, which has been exploring how we can fix reading (and especially annotating) things digitally. Here's a first in-depth report on their work. I haven't read the whole thing yet (it's quite extensive), but I believe this is a super important issue. Also, I'm not sure if or how the two are directly related, but a new W3C standard enables standardized web annotations (see the sketch below)!
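To give a flavor of that standard: the W3C Web Annotation Data Model expresses an annotation as a JSON-LD document with a body (the note) and a target (the thing being annotated). Here's a minimal sketch built in Python; the source URL, quoted text, and comment are made-up placeholders.

```python
import json

# Minimal sketch of a W3C Web Annotation (JSON-LD) per the Web
# Annotation Data Model. Source URL and texts are placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",              # the note itself
        "value": "This paragraph is the key argument.",
        "format": "text/plain",
    },
    "target": {
        "source": "https://example.com/article",   # the annotated page
        "selector": {                               # which part of it
            "type": "TextQuoteSelector",
            "exact": "fix reading (and especially annotating)",
        },
    },
}

print(json.dumps(annotation, indent=2))
```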
×
Yours truly,
Peter
PS. Please feel free to forward this to friends & colleagues, or send them to tinyletter.com/pbihr