News from the Front Porch Republic
Greetings from the Porch,
We got our first substantial snow of the year this week, so school was closed for a day and sledding, fort building, and snowball throwing ensued in our neighborhood.
- In this week's Water Dipper, I recommend essays about Milosz, Butz, and Han.
- Elizabeth Stice proposes that citizen science might provide a model for humanistic work: "It’s time we started talking about citizen humanists. While the so-called professional humanities in higher education are imperiled, there are many other, non-academic ways to be involved in the humanities, and many people are already participating in endeavors related to the humanities."
- Adam Smith wrestles with the need to make vital distinctions and to recognize nuance: "A child mixes up the ugly with the beautiful all the time. If we’re called to be childlike, are we called in some sense to blur all the lines?"
- Jon Schaff reviews Nadya Williams's new book, Christians Reading Classics: "Her goal, it seems to me, is to give readers who may be either loosely familiar with or even quite ignorant of the authors she treats a brief introduction to their importance and what beauty can be found in each of them. This serves to whet the appetite, hopefully encouraging her readers to seek further by picking up these great works of antiquity."
- Andrew Mercer stakes out reasons to resist AI in all its forms: "There are a multitude of reasons not merely to approach AI with caution but to engage in determined opposition to it."
- Campbell Frank Scribner compares today's LLM-enabled dialogue with various forms of dialogue that educators have used in the past: "In trying to systematize relationships between words and humans, both medieval scholasticism and today’s automated dialogue sterilize the sources of human vitality."
- Michial Farmer listens to songs about light in this dark season of the year.
I’ve been reading a lot of books on AI, and one of the clearest accounts of the technical workings of machine learning is Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. Mitchell worked with Douglas Hofstadter and follows him in writing clear prose and in foregrounding both the value of and the need for human thought. Near the end, she speculates about various possibilities regarding the future of AI:
In any ranking of near-term worries about AI, superintelligence should be far down the list. In fact, the opposite of superintelligence is the real problem. Throughout this book, I’ve described how even the most accomplished AI systems are brittle; that is, they make errors when their input varies too much from the examples on which they’ve been trained. It’s often hard to predict in what circumstances an AI system’s brittleness will come to light. In transcribing speech, translating between languages, describing the content of photos, driving in a crowded city—if robust performance is critical, then humans are still needed in the loop. I think the most worrisome aspect of AI systems in the short term is that we will give them too much autonomy without being fully aware of their limitations and vulnerabilities. We tend to anthropomorphize AI systems: we impute human qualities to them and end up overestimating the extent to which these systems can be fully trusted.
Thanks for spending some time with us on the Porch,
Jeff Bilbro