Philosophical Multicore
Archives
Can AI make advancements in moral philosophy by writing proofs?
April 14, 2026
http://mdickens.me/2026/04/13/can_AI_write_moral_philosophy_proofs/ If civilization advances its technological capabilities without advancing its wisdom, we...
Pausing AI Is the Best Answer to Post-Alignment Problems
April 11, 2026
http://mdickens.me/2026/04/11/pause_for_post-alignment_problems/ Even if we solve the AI alignment problem, we still face post-alignment problems, which are...
By Strong Default, ASI Will End Liberal Democracy
April 7, 2026
http://mdickens.me/2026/04/06/by_strong_default_ASI_will_end_liberal_democracy/ The existence of liberal democracy—with rule of law, constraints on...
The Future Will Be Weirder Than That
March 30, 2026
http://mdickens.me/2026/03/29/future_will_be_weirder_than_that/ Many people in the animal welfare community treat AI as a powerful but normal technology, in...
Which is better for sentient beings: an "ethical" AI or a corrigible AI?
March 29, 2026
http://mdickens.me/2026/03/28/which_is_better_for_animals_value_lock-in_or_corrigibility/ Cross-posted to the EA Forum. An aligned ASI can be “ethical”1 (it...
An argument for why aligned ASI wouldn't be bad for animals
March 28, 2026
http://mdickens.me/2026/03/27/resource_constraints_argument_why_aligned_AI_wouldn't_be_bad_for_animals/ In the far future, why would people use up precious...
List of ideas for improving animal welfare in light of transformative AI
March 27, 2026
http://mdickens.me/2026/03/26/quick_ideas_animal_welfare_in_light_of_ASI/ If transformative AI arrives soon, what interventions might improve animal welfare...
I used to think aligned ASI would be good for all sentient beings; now I don't know what to think
March 26, 2026
http://mdickens.me/2026/03/25/I_used_to_think_aligned_ASI_would_be_good_for_sentient_beings/ Epistemic status: Speculating with no central thesis. This post...
Cost-effectiveness model for AI alignment-to-animals vs. alignment-in-general
March 24, 2026
http://mdickens.me/2026/03/24/alignment-to-animals_BOTEC/ Last September, I wrote: There’s a (say) 80% chance that an aligned(-to-humans) AI will be good for...
Which types of AI alignment research are most likely to be good for all sentient beings?
March 23, 2026
http://mdickens.me/2026/03/23/which_types_of_alignment_research_are_good_for_all_sentient_beings/ AI alignment is typically defined as the task of aligning...
Worlds where we solve AI alignment on purpose don't look like the world we live in
March 20, 2026
http://mdickens.me/2026/03/20/worlds_where_we_solve_alignment_on_purpose/ (Or: Why I don’t see how the probability of extinction could be less than 25% on...
Value Investing in the Age of AGI
March 11, 2026
http://mdickens.me/2026/03/11/value_investing_agi/ Introduction Most people who write about AI and investing fall into one of two camps: traditional...
The Structural Return Argument Against Value Investing
March 3, 2026
http://mdickens.me/2026/03/02/structural_return_argument_against_value_investing/ Value investing had a singularly bad run from 2007 to 2020. (And it hasn’t...
Contra "Time Series Momentum: Is It There?"
February 4, 2026
Note to email readers: Some of the formatting in this post does not render correctly in email. You may prefer to read on my website:...
If AI alignment is as hard as building the steam engine, we likely still die
January 11, 2026
http://mdickens.me/2026/01/10/if_alignment_is_as_hard_as_the_steam_engine/ You may have seen this graph from Chris Olah illustrating a range of views on the...
I'm wary of increasing government expertise on AI
December 22, 2025
http://mdickens.me/2025/12/21/government_expertise_on_AI/ Many people in AI safety, especially AI policy, want to increase government expertise. For example,...
I need the Writing Style Guide people to figure out how to put a smiley face inside parentheses
December 11, 2025
http://mdickens.me/2025/12/11/smiley_inside_parentheses/ I can’t figure out any good way to put a smiley emoticon inside parentheses. There are five choices,...
I did Inkhaven
December 1, 2025
http://mdickens.me/2025/11/30/inkhaven/ I published a post every day of November as part of the Inkhaven program, in which we are required to publish a post...
Wartime ethics is weird
November 28, 2025
http://mdickens.me/2025/11/28/wartime_ethics/ The ethical principles that most people hold—and hold most strongly—go completely out the window when it comes...
Alignment Bootstrapping Is Dangerous
November 27, 2025
http://mdickens.me/2025/11/27/alignment_bootstrapping_is_dangerous/ AI companies want to bootstrap weakly-superhuman AI to align superintelligent AI. I don’t...
Older archives