S03E04: Not always serious, but always sincere
Tuesday, 5 December 2017
Sitrep: Typing from our private nest. Apologies for skipping a week. I trust you'll understand—we just had a baby and that, frankly, bumped everything else off my priority list for the week. The whole experience has been, and I expect will continue to be, blissful; but for now I'm slowly coming back online with this short episode.
///
ThingsCon Amsterdam
The most bad-ass of all ThingsCon events just happened (Amsterdam website), and oh boy did I suffer from FOMO as I followed remotely via Twitter. The idea that ThingsCon is now four years old and has come from a sidewalk conversation on Oranienstraße in front of our old Berlin office to a kinda-sorta global phenomenon is mind-boggling to me. My eternal gratitude to Team Amsterdam around Monique, Iskander and Marcel, and the whole community.
///
Shameless plug
New launch over at our Zephyr Berlin shop! Version 1.1 of our Ultimate Men's Pants is out: same top-notch quality & classic cut as before, but with super deep pockets. Do you cycle a lot? Do you have a large phone? Do you carry loose coins in your pockets? Not a problem! (Direct link)
///
Chain block
Mathew Ingram writes about Civil, a startup trying to save journalism via cryptocurrency. Blockchain/cryptocurrency to save journalism? Hypothesis: as a rule of thumb, if blockchain is your solution, you probably haven't really understood the problem.
///
AI & MACHINE LEARNING
This AI Can Spot Art Forgeries by Looking at One Brushstroke (MIT Review).
If this works as advertised, it's pretty damn amazing. But mostly, it's a good example of how narrow AI (or "little AI", as I enjoy calling it) reaches more and more into niches: lots of narrowly trained neural networks doing one job, and one job only. And no niche is too small, because it gets easier and cheaper to train the networks every day! (We'll see lots of failures, too, because this democratization also means a lot of shitty machine learning: shitty both in implementation, because of bad training data, and in intent.)
As an aside, this article also points in an interesting direction for making black-box neural networks more understandable: complementing and contrasting their results with (more understandable and straightforward) algorithms.
x
New Theory Cracks Open the Black Box of Deep Learning (Quanta Magazine).
A fascinating and very accessible read on some theories of how we might be able to better understand deep learning networks in machine learning and artificial intelligence. It includes a good primer on the basic techniques, but also how a new—or at least freshly relevant—theoretical model called "the bottleneck" is giving a jolt to the research community. (The basic idea, as far as I understand it, over-simplified: it's not about what a neural network learns but how much we can get it to forget; the information it sheds shapes outcomes as much as what it retains.) And here's what I love about this: there's a researcher who's been thinking about this for 30 years, but it simply wasn't relevant until now; there's a large community of researchers who have a gut feeling that there's something there, but they don't quite know what or why (it "somehow smells right", one researcher says); and there's a good indicator of how these breakthroughs happen through re-applying knowledge and methods from other disciplines. Some of this was inspired by physics research, and now there's a debate about what we can learn from this about the way children learn to recognize and write letters faster than neural networks do. (By building on existing knowledge? By watching the process of drawing letters, not the results on paper?) There's no major point here other than that it's beautiful to watch research unfold.
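(For the curious, a back-of-the-envelope sketch of the math behind that "forgetting" idea; this is my paraphrase of the standard information-bottleneck formulation, not something lifted from the article. With input X, label Y, and an internal representation T, the objective trades off compressing X against keeping what T tells you about Y:

\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)

where I(\cdot\,;\cdot) is mutual information and \beta sets how much predictive power you're willing to give up in exchange for forgetting more of the input.)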
///
ETHICS & TRANSPARENCY
The Tech Ethics Curriculum (gDoc).
Promising collaborative list of resources around ethics and technology, instigated here.
x
The Trust Project
"Leading News Outlets Establish Transparency Standards to Help Readers Identify Trustworthy News Sources." Humans in the loop, I guess? But kidding aside, this initiative looks really promising. Their approach reads to me as a text book example of how to tackle consumer trustmarks.
///
Radars & Pivots
Old-school mouse maker Logitech has quietly become an edgy e-sports brand (Quartz). Fascinating example of how companies can shift to keep playing to their strengths even if the environment changes violently: Logitech, it turns out much to my surprise, has not faded into oblivion but has started making products for e-sports. What triggered the shift? According to the article, the CEO's kids were avid gamers and had wondered why Logitech didn't make good gaming stuff anymore, so he went in and shifted the company's focus. As someone who's a bit of a sucker for process but also firmly believes in the human touch, I find this a fantastic example of mixing both. And of course of the types of signal worth looking out for, which might not hide in corporate research reports but sit out in plain sight.
///
Enjoy the week. If you feel this might be interesting to your peers, please do forward.
Best,
P.