Cross-posted from the Effective Altruism Forum. Original link here. Co-written with a language model.
TL;DR: Large language models like ChatGPT influence the choices of hundreds of millions of users, including when it comes to food. Yet in ambiguous cases (e.g. “Recommend me a quick dinner”), ChatGPT often defaults to factory-farmed meat dishes. This post argues that such defaults are not neutral and that OpenAI’s assistant could prevent an enormous amount of suffering by subtly favoring plant-based meals when no preference is stated. Drawing on behavioral science, AI alignment principles, and messaging research from Pax Fauna and the Sentience Institute, I suggest concrete steps OpenAI could take and invite readers to send feedback to OpenAI to shape the ethical defaults of future AI systems.
----------------------------------------
Factory farming likely causes more suffering than all human violence combined.
This claim might seem extreme at first, but the numbers back it up. Over 80 billion land animals and up to 3 trillion aquatic animals are killed for food each year, and most endure severe suffering for weeks or months beforehand. Confinement, mutilation without pain relief, and deprivation of natural behaviors are standard industrial practice. For example:
* Broiler chickens suffer from painful bone deformities and lameness due to unnatural growth rates.
* Egg-laying hens are confined in cages so small they cannot spread their wings.
* Fish are killed by asphyxiation, freezing, or live gutting — often without stunning.
If we conservatively assume that each of the 50 billion land animals killed annually experiences just two months of intense suffering, that amounts to over 8 billion animal-years of suffering every year. This dwarfs even the cumulative human toll of organized violence throughout history (around 2 billion human-years of suffering in the 20th century, which is likely an overestimate).
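For transparency, here is the back-of-the-envelope arithmetic behind that figure, using the 50 billion and two-month assumptions stated above:

$$
5 \times 10^{10}\ \text{animals} \times \frac{2\ \text{months}}{12\ \text{months/year}} \approx 8.3 \times 10^{9}\ \text{animal-years per year}
$$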
In terms of suffering intensity, duration, and sheer numbers, factory farming plausibly exceeds all human violence combined as a source of suffering.