S03E29 of Connection Problem: AI, weapons, databases
Sitrep: I'm writing this on a hot, sunny afternoon in Brooklyn. NYC is the first of multiple stops on a trip that's going to last about a month, working from multiple locations. Expect me to post outside my usual rhythm.
×
As always, a shout-out or a forward to tinyletter.com/pbihr is appreciated!
×
Personal updates
Been on a whirlwind tour of meetings in NYC, to be continued in Toronto and beyond. Swung by Data & Society and Sidewalk Labs, met up with friends, collaborators and folks from the more extended tribe, and discussed everything from AI to art to smart cities. So many of these discussions kept coming back to responsible tech and the responsibility of tech companies. Either my filter bubble is extremely well honed or there's a sea change, a big wave of smart thinking about tech, trust & responsibility. Which is great!
Also, I just posted the announcement for the next ThingsCon Salon Berlin: Trustmark Edition (17 July). Come swing by!
Travel: NYC 6-11 June, Toronto 11-15 June, Spokane, WA for the rest of June.
×
AI, weapons, databases
A few things this week lined up almost too nicely. Let's chop 'em up & mix 'em up, stir-fry style. The theme that ties them all together is centralized systems of data collection & analytics, turned up to 11 through machine learning. Also, flawed premises.
(1) Globalise identity, not Aadhaar: Using one single database and identity management scheme for everything will not work. Aadhaar, India's centralized identity database, is based on the premise that strong and fast identity verification helps deliver better basic services to all citizens—especially the poorest and most vulnerable—but it has already been shown to be insecure. Not that it needed proof; it's an inherent weakness of highly centralized databases of valuable data. So we have a good idea here, implemented with huge and dangerous flaws. Should we scale it up worldwide? Eben Moglen and Mishi Choudhary nail it:
"Because the premise of Aadhaar is correct, the Indian government has an enormous political stake in ignoring the flaws and shutting down public conversation. Globalising Aadhaar’s ambition is a worthy goal for the world’s social welfare policy makers, including the World Bank and Gates Foundation. Imitating a system that has barely reached version 1.0 and is already showing serious architectural flaws would be serious policy malpractice."
See also: The smart city paradigm of centralization and efficiency is flawed.
(2) AI at Google: our principles. Google announced their principles for approaching AI. There's quite a bit about what Google aims to pursue, but I'd like to focus instead on the things Sundar Pichai says they will not pursue:
- Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
He notes that they will continue to work with governments (presumably multiple) and the military (presumably US only?) on anything but "AI for use in weapons". So there's a big area of potential engagement that might still be problematic—where do "weapons" begin and end?—but it's interesting and pretty good anyway. What I'm not sure about is the alignment with one country's military but not others: Is that good, bad, understandable, necessary, or maybe a really bad idea? I'm honestly not sure at this point. Needs more mulling over. But overall, these principles are pretty much what you'd want to see.
(Full disclosure: I've worked with Google on multiple occasions.)
(3) Watch this drone use AI to spot violence in crowds from the sky: Researchers at Cambridge University are using drones and machine learning to spot violence in crowds. Two things in this article stand out to me:
First, "The system—for now limited to a low-flying consumer Parrot AR Drone—hasn’t been tested in real-world settings or with large crowds yet, but the researchers say they plan to test it around festivals and national borders in India." I think we'll see more and more instances of algorithms being trained in geographies that don't have strong privacy protection for various reasons. And that has a strangely colonial smell to it, and troubling implications.
Second, "There are (...) lingering privacy concerns about how this and other AI-based technologies could be used. Civil libertarians have warned that when applied to photos and video, AI technology is often inaccurate and could enable unwanted mass surveillance."
This might be just a pet peeve of mine, but I don't think this "could enable" mass surveillance; it is mass surveillance. I think it's essential we don't pretend this is just one step towards mass surveillance but pretty much the end state of it. By flying drones over crowds and having algorithms scan and analyze the video feed, we've gone all the way. The concerns cannot be about "enabling" mass surveillance anymore, but only about the types of abuse it might bring.
(4) UK homes vulnerable to 'staggering' level of corporate surveillance: This Guardian article starts so strongly that I'd like to just let it speak for itself. Note the subject matter, but also the terminology used in describing it, all of which is really powerful:
"British homes are vulnerable to “a staggering level of corporate surveillance” through common internet-enabled devices, an investigation has found. Researchers found that a range of connected appliances – increasingly popular features of the so-called smart home – send data to their manufacturers and third-party companies, in some cases failing to keep the information secure. One Samsung smart TV connected to more than 700 distinct internet addresses in 15 minutes. The investigation, by Which? magazine, found televisions selling viewing data to advertisers, toothbrushes with access to smartphone microphones, and security cameras that could be hacked to let others watch and listen to people in their homes. The findings have alarmed privacy campaigners, who warn that consumers are unknowingly building a “terrifying” world of corporate surveillance."
×
Things that caught my attention
A few words on Doug Engelbart. "Almost any time you interpret the past as "the present, but cruder", you end up missing the point. But in the case of Engelbart, you miss the point in spectacular fashion." A fantastic bit of thinking about how to consider the work—and thinking!—of Douglas Engelbart, about which I feel I know a fair bit but not nearly (!) enough. The point about designing for a shared intellectual space especially strikes me as really powerful.
Hailo raises a $12.5M Series A round for its deep learning chips. It seems there's a lot of pretty groundbreaking stuff happening in chips for machine learning these days, and I feel like I should learn more about it.
The Uberization of telcos: "Among the top 100 most trusted brands globally, you will find companies of almost any industry, except telco. You will find our serial disruptors, big brand consumer packaged goods, car manufacturers — even banks, payment companies and healthcare service providers. But you won’t find telcos. In their battle for growth, telcos globally have largely alienated their customers for the sake of managing yield and profitability." And that's just it, isn't it? Nobody can get by without telcos, they provide absolutely essential services, and yet they manage to make everyone despise them. It's as if your plumbing tried to upsell or trick you constantly. Can't live without them, can't live with them either. So yeah sure, disrupt away.
×
I wish you an excellent weekend.
Yours truly,
Peter
PS. Please feel free to forward this to friends & colleagues, or send them to tinyletter.com/pbihr
×
Who writes here? Peter Bihr explores the impact of emerging technologies — like Internet of Things (IoT) and artificial intelligence. He is the founder of The Waving Cat, a boutique research, strategy & foresight company. He co-founded ThingsCon, a non-profit that fosters the creation of a responsible Internet of Things. Peter is a 2018 Mozilla Fellow. He tweets at @peterbihr. Interested in working together? Let’s have a chat.
×
This picture and the one at the top are via Unsplash (alice donovan rouse & ckturistando).