Connection Problem S03E13: New Futures
Note: This week's Connection Problem might easily be the longest yet (it's just shy of 2,600 words), so I took the liberty of sending it out on a weekend. A lot has been happening, and a lot has been on my mind, too! But rather than cutting it in half again, let's try a different approach this time: Here's a quick overview of what to expect so you can jump and skip around the newsletter to your heart's delight:
- New futures?
- Privacy Imperialism
- Privacy no more
- Small delights
- Who's responsible for my AI?
- Smart-ish cities
- The gift that keeps on giving: Open protocols as infrastructure
- No! Ads! On! Alexa!
- Some things to ponder
×
As always, a shout-out to tinyletter.com/pbihr or a forward is much appreciated!
×
New futures
As I mentioned in last week's slightly-more-ranty-than-usual newsletter, I've been somewhat disenchanted with the Silicon Valley future narratives we've seen unfold over the last few years. Which made me wonder about alternatives.
Science fiction authors—especially William Gibson—had a line for a while about the lack—or rather loss—of capital-F Futures: Big, interesting Futures with a Vision. Combined with a crazier-and-more-volatile-than-ever present, they focused on the Very Near Future: Gibson once called it science fiction set 5 minutes in the future (before moving his novels 5 minutes into the past).
Now, I'd argue, we do see capital-F Futures again, but they suck. They're entirely predictable; we just don't know which one of them will play out. The big one to rule them all is what Bruce Sterling refers to as "old people in big cities afraid of the sky" (aging population, urbanization, global warming).
Others that are more politically focused are along the lines of:
- globally dominant China (hyper-capitalism, dominant state, political and commercial mass surveillance)
- globally dominant Russia (oppression, rogue state, political mass surveillance)
- global Silicon Valley (no state, surveillance capitalism, i.e. now but turned up to eleven)
- strong but inward-focused Europe (technocratic and somewhat boring yet wealthy, but with brutal outer borders)
So these all seem like valid, not entirely unlikely Futures, but they're kinda boring and pretty sad.
On the other hand, maybe there are more interesting options, too. Smaller, more decentralized. Less, I don't know how to put it, integrated?
×
Privacy Imperialism: With a dramatic opening, POLITICO discusses Europe's new privacy and data protection framework: "Europe wants to conquer the world all over again. Only this time, its killer app isn’t steel or gunpowder. It’s an EU legal juggernaut aimed at imposing ever tougher privacy rules on governments and companies from San Francisco to Seoul."
Now, theatrics aside, this is an interesting piece on Europe taking a global leadership role. Personally, I'm very, very happy to see this happen in the space of consumer and data protection. Not only do I think it's good to see Europe engage in such a benign field; Europe appears to be the only global player with both the sway AND the interest to protect user data at all. (Personally, I'd also argue that this protection should in fact apply not just to commercial but also to governmental data collection, because democracy, but maybe I'm old school that way.)
Anyway, interesting read!
×
Privacy no more
While Europe is ramping up its efforts to protect privacy and consumer data (at least from commercial exploitation, if alas not from governmental surveillance), other locales are now doubling down on AI-powered facial recognition by police:
(1) Police in a city in central China are using special glasses with facial-recognition software to help search for wanted criminals passing through a railway station during the Lunar New Year holiday travel rush.
(2) Cameras with facial recognition software will identify wrongdoers in Dubai: "Dubai Police will add tens of thousands of surveillance cameras fitted with artificial intelligence software across Dubai ahead of Expo 2020. Using facial recognition software and their ability to track and analyse movements, the cameras will issue verbal warnings to those they suspected of wrongdoing.
'Criminal activity-related cameras will capture footage of people involved in crimes, recognise their faces and analyses the crime,' said Brig Jamal Al Muhairi, deputy director of administrative affairs at Dubai Police. 'Of those criminal activity-related cameras, microphones will be connected to cameras to warn criminals. If a person is trying to steal, for example, a voice message from the microphone will tell the person that he or she is being watched by policemen.'"
(It's always the World Expos, the Olympics and soccer world cups where blatant mass surveillance systems are introduced. If we agree not to have these three public mass events, can we then continue to walk down the street without a police person high on AI yelling at us with a robot voice?)
Sigh. Where to even start with these? Absolutely everything about these initiatives is wrong, wrong, wrong! History will judge these efforts harshly; in fact, it already has. We know of too many historical incidents to doubt that highly centralized government surveillance systems have abuse built in. Combined with the power and scale of real-time data capture and machine learning, this is a nightmare waiting to happen.
×
Small delights
Finding things hidden in the jungle: We (the collective we, as in The Whole of Resourceful Humanity) just used LiDAR to discover a whole new league of ancient Maya cities. Maybe if we get more of this, we might just get out of this mess with an interesting future, one that doesn't suck. We might end up rewriting the future by rewriting the past.
Hiding things in plain sight: The Chinese-speaking internet is always good for a delightful, cute circumvention of automated censorship: Rice bunny (Mi Tu / #metoo)! The cutest censorship circumvention technique ever? The Conversation has more on how social media users are campaigning in China. (via Joanna Chiu 趙淇欣)
Image: Marcella Cheng/The Conversation NY-BD-CC
The future is female: Turns out there's a remarkable new species of crayfish (the marble crayfish) that didn't exist 25 years ago but is now spreading like crazy all over the world. That's slightly terrifying in an ecosystem sense, but the reason is super fascinating: Due to a mutation, these crayfish are all female and clone themselves: "instead of reproducing sexually, the first marbled crayfish was able to induce her own eggs to start dividing into embryos. The offspring, all females, inherited identical copies of her three sets of chromosomes. They were clones."
×
Who's responsible for my AI?
Two bits here that popped up side by side on my radar:
(1) Google gave the world powerful open source AI tools, and the world made porn with them [Quartz]. Quartz says Google is responsible for what happens with the open source AI tools they've been developing: "Since the software can run locally on a computer, large tech companies relinquish control of what’s done with it after it leaves their servers. The creed of open source, or at least how it’s been viewed in modern software development, also dictates that these companies are freed of guilt or liability from what others do with the software. In that way, it’s like a gun or a cigarette."
It's hard to fully disagree with this—certainly there's some level of responsibility for what you put into the world—but this strikes me as shortsighted and somewhat off: If we start blaming the publishers of knowledge for what happens with that knowledge, then I guess code schools are bad because people could write bad-faith bots with them? Journalism schools are responsible if one of their students later takes a job as Chief Propaganda Minister or something?
My point is this: A gun, while it may have legitimate use scenarios, has one primary purpose, and that purpose is killing people. It has clear, intentional affordances. A cigarette, while it may have culturally acceptable use scenarios, has such overwhelmingly clear negative side effects as to render any other uses (by and large, and increasingly) unacceptable.
With AI, this isn't the case at all. And while we might get to "hard coded" ethics for AI at some point, we're nowhere near that point yet, and I frankly doubt ethics can ever be hard-coded. For now, I think of AI as a technology more along the lines of a knife: Knives are very much open-ended technology in that they could be used to injure people, but tend to be used more for benign purposes like preparing food, and hence as a society we make knives pretty much universally accessible and consider that a good thing.
Google's machine learning tools have been used for some crappy things. They will continue to be used for crappy things. We should try to avoid the crappy things, and punish those who use these tools for illegal purposes.
That said, we see a true moment of democratizing access to a new set of technologies and skills at play: Just like general access to reading and writing led to a whole slew of ill effects but overall much more good than bad, and just like general access to the tools of content distribution (zines! the internet!) led to a whole slew of crap content along with infinite amounts of empowerment for individuals, and hence a net positive by most standards, access to machine learning tools will lead to some nasty outcomes and a tremendous gain in global knowledge.
Democracy is messy by design, and democratic access to tools, skills and knowledge represents this messiness perfectly.
Case in point 👇
(2) Finding non-horrible uses for fake videos. Remember the story about using machine learning to paste celebrities' faces into porn movies? By now there appears to be an active community developing more advanced tools to do this more easily. No surprise there, obviously. The technique is called deepfake.
Even though this community's focus has been on fake porn, it appears some folks are discovering the potential (for fun? for mayhem? who knows) of this more democratic access to the means of deception and video manipulation. There are now a number of (still small) reddit threads around "SFWdeepfakes" or the more boringly named "fake videos". The results so far bear all the markings of an early technology, a proof of concept: a Trump face overlaid on a Merkel speech, knee-deep in the uncanny valley; Lucy Liu copied & pasted into an old kung-fu movie.
Trying to withhold any judgement here, one thing is clear: The tech is here, and it's getting much more powerful and much easier to use by the day. It's the kind of true democratization of the means of production that is genuinely empowering, and hence it will soon be used by a great number of people for a very wide range of intentions. Presumably, hilarity and attacks on the pillars of democracy ensue.
Note: All SFW videos I had bookmarked have been pulled off of Youtube.
And speaking of taking responsibility: "A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, are banding together to challenge the companies they helped build." [NYTimes] Good. Yet it also sucks a bit to think that this (in their own words) "world-class team of deeply concerned former tech insiders and CEOs" made a ton of money on this and then chose to do a feel-good thing about it. Ah well. I mean, look, I applaud any effort to sort this stuff out. It just feels a little icky at times, especially if it appears that there's more to be gained by switching sides. Might be worth keeping an eye on, though: humanetech.org
×
Smart-ish cities
We've all seen the hand-wavy, buzzword-laden and mostly horrible videos of what passes for smart cities these days. (Mostly efficiency drivers and public surveillance of sorts; there are exceptions where citizens rather than vendors are empowered.) Well, as Bruce Sterling discovered, some cities in India have already been harshly stripped of their "smart city" label:
"These [Indian smart cities] have proposed to take up various project, including smart roads, rejuvenation of water bodies, cycle tracks, walking paths, smart classrooms, skill development centres, upgradation of health facilities and pan-city projects like integrated command control centre.
Uttar Pradesh’s Meerut, Ghaziabad, Rampur and Raebareli were among cities that failed to make the cut for Smart City tag."
As Bruce Sterling rightfully points out, "That’s impressive. That’s the first time I’ve ever seen a city claim to be “smart” and to be judged as having failed to do it."
We live in strange times indeed.
×
The gift that keeps on giving: Open protocols as infrastructure
Is blockchain going to be the next generation of open infrastructure or just a honeypot for charlatans? The NYTimes has a pretty great overview of blockchain beyond the bitcoin bubble that I found well worth the read. It's nicely hype-free and gives a solid overview of hopes, fears and potentials. Two paragraphs stood out:
(1) "Along with Wikipedia, the open protocols of the internet constitute the most impressive example of commons-based production in human history." A hundred times yes! Infrastructure is where it's at.
(2) "(...)herein lies the cognitive dissonance that confronts anyone trying to make sense of the blockchain: the potential power of this would-be revolution is being actively undercut by the crowd it is attracting, a veritable goon squad of charlatans, false prophets and mercenaries. Not for the first time, technologists pursuing a vision of an open and decentralized network have found themselves surrounded by a wave of opportunists looking to make an overnight fortune."
And that pretty much sums up the state of the blockchain in 2018.
Mozilla announces an open gateway for IoT. Here's the project website, and here's TechCrunch's summary. I've yet to dig into the details, but I know a few folks who work on this and am confident they know what they're doing. (Also, because this concerns Mozilla, it's full disclosure time: My partner works for the Mozilla Foundation, I've done some work for them before, and might do more in the near future.)
×
No! Ads! On! Alexa! No ads on Alexa (for now). That's all-around good news: Ads in voice-enabled smart assistants would absolutely break that ecosystem, so let's hope Amazon and the others stick to the no-ad rule (and don't sneak in "partner content").
There are ways to monetize voice. But in smart assistants, and especially smart search, there is only ever going to be one result: one search, one result. If there's an ad in the top spot, the search just became useless.
×
Some things to ponder
Brain implants for better memory: Scientists have developed a brain implant that noticeably boosted memory in first test runs [NYTimes], which might in the future help offset some of the effects of dementia and traumatic brain injuries.
Pace layering in how complex systems learn [JoDS / MITPress]: "From the fastest layers to the slowest layers in the system, the relationship can be described as follows: Fast learns, slow remembers. Fast proposes, slow disposes. Fast is discontinuous, slow is continuous. Fast and small instructs slow and big by accrued innovation and by occasional revolution. Slow and big controls small and fast by constraint and constancy. Fast gets all our attention, slow has all the power." I found this strangely beautiful. Even though the author uses double blank spaces.
×
Have a great weekend.
Yours truly,
Peter
PS. Please feel free to forward this to friends & colleagues, or send them to tinyletter.com/pbihr