Connection Problem S03E11: Ready for take-off
Sitrep: I'm typing these first words at Berlin TXL, at the gate and about to board my flight to SFO for a work week with Mozilla, because reasons. On the way in, the cab drove by a giant poster ad for "Projekt Tegel"—the projected afterlife of TXL once the airport ceases operation, that is, once Berlin's "new" international airport "opens" (projected for 2012 originally, now around 2020, status pending). The poster, with the sweet irony that only Berlin can pull off, features actual Back to the Future style hover boards. Aaaaanyway, I'll be in SF most of the week. If you are, too, say hi!
×
AI, electricity, fire
AI (as a research area for humanity) is "more profound than, I don’t know, electricity or fire". This article is essentially just a teaser built around that money quote (AI is bigger than electricity), but Pichai isn't exactly known for hyperbole. He might be on to something; at the very least, as Google CEO he has insights into this field that are second to none.
×
Walk-in computer
Dynamicland, a computer (and community space) you can physically walk into in Oakland, looks amazing and lovely in all the right ways. Educational collaborative physical computing at its best, if you will.
×
Funders be funding (but should do it right!)
Bret Victor (who's also behind Dynamicland) has a fascinating post with excerpts from Alan Kay emails. (I think? That's what the headline suggests at least; some might be his own commentary; for our purposes it doesn't matter.) The theme is that those who make a lot of money on the back of advanced technologies stand on the proverbial shoulders of giants, who shared their work for the public to use one way or another. And that a lot of the most valuable work in research and development in fact comes out of undirected research:
"The "golden age" funding included a lot of funding for "problem finding" -- which means the funders were not vetting specific proposals or funding "directed research". The points of agreement were on a "vision of desired future states", not goals or routes. An example of the vision was Licklider's "The destiny of computers is to become interactive intellectual amplifiers for all humans, pervasively networked world-wide". This vision does not state what the amplification is like or how you might be able to network everyone in the world."
More significantly, he goes on to consider how, if you benefit from other people's freely shared work and build your fortunes on it, you might want to consider paying it forward: you already got a freebie, so why try and claim much more by "investing" your money rather than giving it to more free research?
"Licklider "just funded" period.
As I pointed out in a previous email, Engelbart couldn't get funding from the very people who made fortunes from his inventions.
It strikes me that many of the tech billionaires have already gotten their "upside" many times over from people like Engelbart and other researchers who were supported by ARPA, Parc, ONR, etc. Why would they insist on more upside, and that their money should be an "investment"? That isn't how the great inventions and fundamental technologies were created that eventually gave rise to the wealth that they tapped into after the fact.
It would be really worth the while of people who do want to make money -- they think in terms of millions and billions -- to understand how the trillions -- those 3 and 4 extra zeros came about that they have tapped into. And to support that process."
I find this a simple and incredibly powerful point. The obsession with commercial investments rather than more fundamental "investment" in research—knowledge infrastructure, if you will—can be damaging and exploitative, and frankly seems a little small-minded.
×
From the design guidelines archives
"Ubicomp", remember that term? Anyway, Adam Greenfield's design principles for IoT (at the time, he referred to it as Everywear—this was in 2006) hold up nicely today:
- Principle 0 is, of course: first, do no harm.
- Principle 1. Default to harmlessness. Ubiquitous systems must default to a mode that ensures their users’ (physical, psychic and financial) safety.
- Principle 2. Be self-disclosing. Ubiquitous systems must contain provisions for immediate and transparent querying of their ownership, use, capabilities, etc., such that human beings encountering them are empowered to make informed decisions regarding exposure to same.
- Principle 3. Be conservative of face. Ubiquitous systems are always already social systems, and must contain provisions such that wherever possible they not unnecessarily embarrass, humiliate, or shame their users.
- Principle 4. Be conservative of time. Ubiquitous systems must not introduce undue complications into ordinary operations.
- Principle 5. Be deniable. Ubiquitous systems must offer users the ability to opt out, always and at any point.
We could all do a lot worse than taking those as a baseline. (Tipping my hat to Thomas Amberg for the pointer.)
×
Missile UI
By now you have most likely seen some screenshots of the user interface mess that led to the recent false alert for a ballistic missile attack on Hawaii. Here are some details on that, and the systems behind it. Let it be a reminder that interfaces matter.
×
Voices of the Generative Adversarial Network
Interesting piece [Intercept] about the NSA's advances in voice identification. Reading this made me wonder what's going to develop faster: AI-based voice identification (which is what this article is about), i.e. "we recognize this person by their voice"; or AI-based voice generation, i.e. "computer, read this text in voice A, B, or C". In other words, while the NSA is working on voice fingerprinting, surely some other parties are working on spoofing voice fingerprints?
See also: Scientific American: New AI Tech Can Mimic Any Voice
×
AI voice ethics "ethics"
While we're talking about AI generated voice, here's a true gem: Lyrebird's "ethics" statement that essentially says don't abuse this, and be glad we published this and no one with worse intentions. Or in their own words: "Imagine that we had decided not to release this technology at all. Others would develop it and who knows if their intentions would be as sincere as ours: they could, for example, only sell the technology to a specific company or an ill-intentioned organization. By contrast, we are making the technology available to anyone and we are introducing it incrementally so that society can adapt to it, leverage its positive aspects for good, while preventing potentially negative applications."
Team Montréal: are they any good despite their odd grasp of the language around ethics?
×
There's no party like a collective legal action party
Max Schrems, who became well-known in European privacy circles after winning privacy-related legal battles including one against Facebook and one that brought down the US/EU Safe Harbor Agreement, is launching a non-profit: noyb (short for None Of Your Business). They aim to enforce European privacy protection through collective enforcement, which is now an option because of GDPR, and they're fundraising for the org. The website looks quite basic, but I'd say it's a legit endeavor and certainly an interesting one.
×
Silence!
Want to purge your voice data? This article has you covered.
×
21c aphorisms
Some gems in this Minor Literature(s) collaboration: 50 aphorisms for the 21st century.
Image by Minor Literature(s)
×
Have a great week.
Yours truly,
Peter
PS. Please feel free to forward this to friends & colleagues, or send them to tinyletter.com/pbihr
PPS. Most images in this one are from the commons; especially the archives of NASA and of the Bell Telephone Magazine are truly marvelous and worth browsing in their own right.