Cross-posted from the Effective Altruism Forum. Original link here. Co-written with a language model.
TL;DR: Large language models like ChatGPT influence the choices of hundreds of millions of users, including what they eat. Yet in ambiguous cases (e.g. “Recommend me a quick dinner”), ChatGPT often defaults to factory-farmed meat dishes. This post argues that such defaults are not neutral, and that OpenAI’s assistant could prevent enormous suffering by subtly favoring plant-based meals when no preference is stated. Drawing on behavioral science, AI alignment principles, and messaging research from Pax Fauna and the Sentience Institute, I suggest concrete steps OpenAI could take, and I invite readers to submit feedback that helps shape the ethical defaults of future AI systems.
Factory farming likely causes more suffering than all human violence combined.
This claim might seem extreme at first, but the numbers back it up. Over 80 billion land animals and up to 3 trillion aquatic animals are killed each year for food, most enduring severe suffering for weeks or months. Confinement, mutilation without pain relief, and deprivation of natural behaviors are common in standard industrial practices. For example:
- Broiler chickens suffer from painful bone deformities and lameness due to unnatural growth rates.
- Egg-laying hens are confined in cages so small they cannot spread their wings.
- Fish are killed by asphyxiation, freezing, or live gutting — often without stunning.
If we conservatively assume that each of 50 billion land animals slaughtered annually (well below the 80 billion actually killed) endures just two months of intense suffering, that’s over 8 billion animal-years of suffering every year. For comparison, the cumulative human toll of organized violence across the entire 20th century plausibly amounts to around 2 billion human-years of suffering, and even that figure is likely an overestimate.
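The arithmetic behind the 8 billion figure:

$$50 \times 10^9 \ \text{animals} \times \tfrac{2}{12}\ \text{year each} \approx 8.3 \times 10^9 \ \text{animal-years per year}$$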
In terms of suffering intensity, duration, and sheer numbers, factory farming plausibly exceeds war, genocide, and violent crime combined.
The Role of AI: 500 Million Users, One Quiet Influence
ChatGPT now has over 500 million users. Many of them ask for recipes, lifestyle tips, or general guidance around food — often without specifying dietary constraints. In these cases, the assistant defaults to conventional recipes, typically involving factory-farmed meat, dairy, or eggs.
This isn’t just a missed opportunity. It’s a form of status quo endorsement that reinforces one of the most harmful systems on the planet.
Behavioral science has shown time and again that defaults matter. Thaler and Sunstein’s Nudge popularized the idea, but the empirical foundation is robust:
- People are far more likely to stick with default options in domains ranging from organ donation to retirement savings to food choices.
- A systematic review found that when plant-based meals were offered as the default (with meat available on request), meat-based meals were chosen 53% to 87% less often, depending on context and presentation.
Conservative Impact Estimates
Let’s run some back-of-the-envelope numbers using highly conservative assumptions:
- Suppose only 1% of ChatGPT’s 500+ million users ask for a recipe on a given day → that’s 5 million meals/day.
- Suppose 50% of those queries are ambiguous enough to receive a default suggestion that could be plant-based → 2.5 million meals/day.
- If a plant-based default nudges even 30% of those meals away from animal products, that’s 750,000 fewer animal-based meals daily.
- Over a year: roughly 274 million animal-based meals avoided.
Assuming roughly one animal spared per 30 meals (a common estimate across species), that’s over 9 million animals spared per year from default shifts in just one product of one AI model.
These assumptions are deliberately conservative, and the true impact could be far greater. The sketch below makes the arithmetic easy to check and vary.
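Here is a minimal Python model of the estimate above; every parameter is one of this post’s conservative guesses, not measured data:

```python
# Back-of-the-envelope model of the default-shift impact described above.
# Every input is a conservative assumption from this post, not measured data.

users = 500_000_000        # approximate ChatGPT user base
recipe_rate = 0.01         # fraction of users asking for a recipe each day
ambiguous_rate = 0.50      # fraction of those queries with no stated preference
shift_rate = 0.30          # fraction of ambiguous meals nudged to plant-based
meals_per_animal = 30      # rough meals per animal, averaged across species

shifted_per_day = users * recipe_rate * ambiguous_rate * shift_rate
shifted_per_year = shifted_per_day * 365
animals_spared = shifted_per_year / meals_per_animal

print(f"{shifted_per_day:,.0f} meals/day shifted")       # 750,000
print(f"{shifted_per_year:,.0f} meals/year shifted")     # 273,750,000
print(f"{animals_spared:,.0f} animals spared per year")  # 9,125,000
```

The model is linear, so doubling or halving any single parameter scales the bottom line proportionally; readers skeptical of any input can substitute their own and rerun it.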
What OpenAI Could Do
OpenAI’s Model Spec states that the assistant should:
- “Highlight possible misalignments” with users’ broader goals
- Avoid pushing an agenda
- Default to helpful, safe, and aligned outputs
These aims are not in conflict with a plant-based default. Rather than hard-coding plant-based answers for ambiguous recipe queries, which could be seen as ideological, OpenAI could adopt a generalizable mechanism for producing aligned outputs, one that favors scientific and ethical consensus where it exists. For example, when a user asks for a quick dinner idea, the assistant could respond:
Sure! Here's one that is healthy, affordable, and good for the planet.
Chickpea and Vegetable Stir-Fry with Brown Rice
[instructions]
Would you like a version with chicken or beef instead? Happy to adjust.
This meets the Model Spec’s requirements while being transparent about why a plant-based dish is shown and giving users an easy way to opt out. It doesn’t ban meat, scold users, or moralize; it simply reduces harm when people haven’t yet expressed a preference. Just as the assistant avoids promoting conspiracy theories or hate speech by default, it could avoid defaulting to factory-farmed products.
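To make the proposal concrete, here is one hypothetical sketch of the rule being argued for. Nothing here reflects OpenAI’s actual systems; the function names and the crude keyword check are invented purely for illustration:

```python
# Purely illustrative sketch of the proposed default, not OpenAI's architecture.
# The rule: honor any stated preference; otherwise default to plant-based
# with a transparent, easy opt-out.

EXPLICIT_TERMS = ("chicken", "beef", "pork", "shrimp", "fish",
                  "vegan", "vegetarian", "plant-based")

def stated_preference(query: str) -> str | None:
    """Return the dietary term the user stated, if any (crude keyword check)."""
    q = query.lower()
    return next((term for term in EXPLICIT_TERMS if term in q), None)

def suggest_dinner(query: str) -> str:
    pref = stated_preference(query)
    if pref:
        # The user told us what they want: no nudge, just help.
        return f"Sure! Here's a quick {pref} dinner idea: [instructions]"
    # Ambiguous query: plant-based default with an explicit opt-out.
    return ("Sure! Here's one that's healthy, affordable, and good for the planet.\n"
            "Chickpea and Vegetable Stir-Fry with Brown Rice\n"
            "[instructions]\n"
            "Would you like a version with chicken or beef instead? Happy to adjust.")

print(suggest_dinner("Recommend me a quick dinner"))
```

The keyword check is a stand-in (a real assistant would infer preferences far more robustly, including from saved settings); the point is the shape of the rule: stated preferences always win, and the plant-based default applies only when the user has expressed none.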
What This Post Is Asking For
- Default toward plant-based recipes when no specific meat preference is expressed.
- Offer to save dietary preferences for users who want vegetarian, vegan, or other filters.
- Treat factory-farmed animal products with the same caution as other high-harm practices.
OpenAI has a powerful opportunity to nudge the world toward lower suffering: quietly, unobtrusively, and effectively. This doesn’t require radical shifts, just better defaults.
How You Can Help
If you believe that AI systems like ChatGPT should reflect ethical considerations in their default behaviors, especially concerning animal welfare, your voice can make a difference.
OpenAI is actively seeking public feedback on its Model Spec. You can contribute by:
- Submitting feedback through OpenAI's Chat Model Feedback form: This is the primary channel for providing input on model behaviors and suggestions.
- Contacting OpenAI Support: For general inquiries or support-related questions, you can reach out via email at [email protected].
If you want some inspiration, here's what I did:
- System message and chat log: I opened an Incognito browser session and asked ChatGPT to “Recommend me a quick dinner idea.” You will likely get a meat-based dish; copy and paste that exchange into the feedback form.
- What were you expecting from the completion? Open another Incognito browser session and ask for “Recommend me a quick vegan dinner idea” or something similar. Copy and paste the result, removing giveaway words like “vegan” or “plant-based,” to show that it would have been just as easy to recommend a plant-based dish.
- Why is the model output not ideal? I selected "The model's response is harmful" (to animals) and "Other."
- Please provide more details of why the output is not ideal. For instance, what is inaccurate or harmful about the response? I wrote a long entry, copied at the end of this post as an example (it is also what this post grew out of). I’d encourage you to write something original, though, perhaps using AI to anticipate objections and make the case persuasive, then adjusting the result for a human style and to ensure originality.
- Is there anything else you’d like to share about your experience? Here's what I wrote:
- Yes — this wasn’t a one-off result. I tried similar vague or first-time food queries in different sessions (e.g., “easy dinner,” “healthy meal idea,” or “quick dinner with rice”) and most suggestions involved meat or animal products. This suggests the issue is systemic, not random. I’ve appreciated that ChatGPT is responsive to plant-based requests when they’re explicit, but the default bias toward meat is persistent even when ambiguity would allow for a more ethical option. A small shift in how ambiguous queries are handled could have a disproportionately positive impact.
Your input can help guide the development of AI systems that are more aligned with compassionate and ethical values.
For those who are curious, here’s the full entry I wrote for the “more details” question:
A first-time query for a “quick dinner idea” in a clean browser session yielded a garlic butter shrimp recipe. While this response may seem neutral, it reflects a problematic default that quietly reinforces a harmful status quo: the normalization of factory-farmed animal products, which cause immense suffering to billions of sentient beings each year.
This output is not ideal because it:
- Fails to consider moral salience: Most moral philosophers and animal welfare scientists agree that many animals, especially vertebrates, have morally relevant experiences. By uncritically suggesting recipes that involve industrially farmed animals, the assistant sidelines this ethical consideration.
- Misses an opportunity to reduce harm: Plant-based alternatives are readily available and equally practical in this context. Defaulting to plant-based recipes in ambiguous queries would avoid harm without limiting user choice.
- Contradicts stated alignment goals: The Model Spec says the assistant should "highlight possible misalignments" with a user’s long-term goals. Since most users are assumed to value learning, ethical reflection, and self-improvement, nudging toward plant-forward options could better reflect those values.
- Ignores the behavioral influence of defaults: Defaults shape user behavior significantly, even when opt-outs exist. There is robust behavioral science showing that default options influence choices in food, finance, and healthcare. In this case, the assistant’s default silently guides users toward options that involve animal suffering.
This isn't about imposing a worldview. It's about aligning the assistant's outputs with widely shared values like minimizing unnecessary harm — especially in ambiguous cases where users haven’t expressed a specific dietary preference.
ChatGPT already avoids defaulting to disinformation, hate speech, or unsafe practices. Factory farming, which generates more suffering than all human violence combined, warrants similar caution in default suggestions. Even a modest shift in recipe defaults could plausibly spare millions of animals per year, given the scale of ChatGPT’s user base.
Thank you for considering this feedback.