
Hi! We're researchers from Animal Charity Evaluators (ACE), and for the next two hours, we'll be answering questions about our 2024 charity recommendations and our charity evaluations process.

Our team answering questions is:

  1. Elisabeth Ormandy, Programs Director
  2. Vince Mak, Charity Evaluations Manager
  3. Maria Salazar, Senior Researcher
  4. Max Taylor, Researcher
  5. Zuzana Sperlova, Researcher

How to participate: Make sure you've created a FAST Forum account, then post your questions in the comments section.

We look forward to answering your questions!

For a limited time, all donations to our Recommended Charity Fund will be matched! Your support will help all 11 of our Recommended Charities, which we estimate will have an exceptional impact for animals with additional donations. Thank you for your support and for caring about creating a kind world for animals.
 

  1. What can we expect to change in the evaluations or evaluation process for charities from 2023? 
  2. What new/different information will charities be asked to provide with the new cost-effectiveness calculation? Will achievements still have a role?
  3. How are the allocations from ACE's charity fund determined? 
  4. What does the new decision-making process look like in terms of better accounting for the marginal cost effectiveness of funding?
  5. When will questions and layout for applications be made available for 2025? How much time will charities have to provide information once these are made available?

Hi Sean - thanks for your great questions!

  1. The exact details of our 2025 evaluation process and methods are still to be determined but, barring any major strategic shifts in our Charity Evaluation program, we expect to keep our methods largely the same as 2024’s, with refinements based on what we’ve learned. We’ll still ask charities for information that will allow us to do the theory of change analysis, create cost-effectiveness estimates, assess funding capacity, and examine organizational health. The process will begin with charities applying to be evaluated, as it did in 2024.
  2. You can refer to the cost-effectiveness analysis spreadsheets for this year’s evaluated charities to get a sense of the information we needed to make the calculations. We’ll likely still be asking for charities’ past achievements. If we stick with this year’s approach (which we think is likely at this point), we will aim to determine the suffering adjusted days (SADs) averted by those achievements per dollar spent, which requires knowing the benefits of charities’ programs as well as the expenses spent to achieve those benefits.
  3. We have a Recommended Charity Fund disbursement model where we consider each recommended charity’s funding and what ACE’s marginal funding would be used for. Then, we have an internal discussion about where we should prioritize the funding going, based on considerations of funding capacity, quantitative factors (marginal cost-effectiveness), and qualitative factors (theory of change). This year we expect to refine how we allocate funds and publish a blog post about refinements so we can stay transparent about it. If you’re interested in supporting our Recommended Charities, all donations to our Recommended Charity Fund are currently being matched for a limited time! 
  4. By using theory of change analysis more formally, we come to understand a charity’s work and its assumptions, limitations, and risks. This reduces our uncertainty about the scope of a charity’s work and its overall likelihood of achieving its desired impact. By doing a cost-effectiveness analysis that divides the benefits to animals of a charity’s work by the cost of doing that work, we assess the current cost-effectiveness of a charity’s work (usually for select programs); a rough numerical sketch of that arithmetic follows this list. Then, when combined with our room for more funding assessment (which asks charities about their future plans), we assess our level of uncertainty about whether the plans are likely to be as cost-effective as the charity’s current work. Taken together, the three criteria give us a good sense of marginal cost-effectiveness (i.e., where the next additional dollar would be best spent).
  5. We expect that evaluation applications will open in March and stay open for a month. Once a charity has applied and is successful, they move on to stage two, where we ask more detailed questions. We typically give charities around three weeks to gather the information requested to answer those questions.
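
To make the cost-effectiveness arithmetic mentioned above concrete, here is a minimal sketch in Python. Every number below (animals affected, SADs averted per animal, program expenses) is hypothetical and chosen for illustration only; none of it is taken from an ACE spreadsheet or review.

```python
# Minimal sketch with hypothetical numbers; not figures from any ACE review.
# Cost-effectiveness here means suffering-adjusted days (SADs) averted per dollar:
# a program's estimated benefit divided by the expenses spent to achieve it.

animals_affected = 2_000_000       # hypothetical hens covered by a commitment
sads_averted_per_animal = 10       # hypothetical welfare gain per hen, in SADs
program_expenses_usd = 500_000     # hypothetical spending attributed to the program

sads_averted_per_dollar = (animals_affected * sads_averted_per_animal) / program_expenses_usd
print(f"{sads_averted_per_dollar:.1f} SADs averted per dollar")  # 40.0
```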

— Elisabeth

Great, thank you! One follow-up question to Number 2 and the SADs: How do you calculate cost-effectiveness for orgs who indirectly impact animal suffering? For example, I looked at the Good Food Fund's overview and there was no CE posted, but they have a detailed Theory of Change analysis. Is there a different calculation to recommend charities whose goal is to create systems change that will indirectly reduce suffering, but for which SADs are not as appropriate to calculate? 

That’s a great question and one that we spent a lot of time considering in this year’s round of evaluations. We aimed to use SADs in all cost-effectiveness analyses and attempted to find a way to quantify each charity’s impact using the SADs unit. We found that for more indirect work, such as GFF’s programs, quantifying the number of animals affected is largely speculative and requires a number of assumptions. For these cases, we decided not to make the assumptions needed to estimate the SADs averted but to stop at an intermediate unit in the analysis. For GFF, this was the number of people reached through their programs per dollar (see the sketch at the end of this answer). Our reasoning for avoiding highly speculative assumptions is based on one of our guiding principles, which is to follow a rigorous process and use logical reasoning and evidence to make decisions. For cases like GFF, we focused more on their Theory of Change analysis to guide our decision-making. We are excited about their work because China farms around 50% of the world’s farmed animals, and GFF has made inroads in getting animal welfare on the government’s agenda, which could have significant expected value in the long term (although we didn’t model this explicitly).

Overall, we believe that interventions with a long theory of change (such as some policy interventions) and meta-interventions are often too speculative to estimate the number of animals affected and therefore the SADs averted. This appears to be consistent with the existing research in the animal advocacy movement, where the existing cost-effectiveness estimates focus on direct interventions (corporate campaigns, institutional outreach) and avoid quantifying indirect interventions (research, movement building). We will review our methods in the coming months and will reconsider how we compare charities that do more indirect work.
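
As a rough sketch of the approach described above: the figures and the "stop at an intermediate unit" flag below are illustrative assumptions, not ACE's actual analysis code or any charity's real data.

```python
# Illustrative only: hypothetical numbers; not ACE's analysis code.
from typing import Optional

def program_cost_effectiveness(intermediate_units: float,
                               expenses_usd: float,
                               sads_per_intermediate_unit: Optional[float]) -> str:
    """Report SADs/$ when a defensible conversion exists; otherwise stop at
    the intermediate unit (e.g., people reached per dollar)."""
    per_dollar = intermediate_units / expenses_usd
    if sads_per_intermediate_unit is None:
        # Too speculative to convert (e.g., a long or indirect theory of change).
        return f"{per_dollar:.2f} intermediate units per dollar"
    return f"{per_dollar * sads_per_intermediate_unit:.1f} SADs averted per dollar"

# Direct program: hens affected per dollar, with an assumed 10 SADs averted per hen.
print(program_cost_effectiveness(2_000_000, 500_000, 10))   # 40.0 SADs averted per dollar
# Indirect program: people reached per dollar, with no defensible SADs conversion.
print(program_cost_effectiveness(80_000, 200_000, None))    # 0.40 intermediate units per dollar
```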

— Zuzana

Do you ever wish there were a benchmark charity with a near-infinite funding gap, like GiveDirectly on the global health side, to always be able to compare to? Is there anything akin to GD in the animal space?

Thanks Steven, great question! In short: yes we do, and no there isn’t :-) We think GiveWell’s approach of using GiveDirectly as a benchmark makes sense for GiveWell, and we’ve had several team discussions about whether we could take a similar approach. One step in this direction is to express each charity’s impact in the same unit of animals helped/suffering averted, which makes it easier to compare across charities, and we’ve sought to do that this year through our use of AIM’s Suffering-Adjusted Days (SADs) model. (You can read more about our 2024 cost-effectiveness assessments here.) However, while we found this helpful for this year’s Evaluations, it’s not always possible to reach a meaningful SADs estimate given limitations such as the long-term or speculative nature of some charities’ programs, a lack of reliable data around charities’ achievements, a lack of evidence on the relative cost-effectiveness of different animal advocacy interventions, and the diverse range of programs conducted by the charities we evaluate. We’re also not aware of any charities in the animal advocacy space that share GiveDirectly’s room for additional funding and potential for scalability.

Instead, we currently base our recommendation decisions on a set of decision guidelines that align with our evaluation criteria (see here for the guidelines and additional context), and use these to score charities against one another. It’s possible that in the future a sufficiently scalable charity will emerge, and the animal advocacy movement will have sufficient evidence and data for us to produce reliable cost-effectiveness assessments for all the charities we evaluate, but at the moment this doesn’t seem realistic.

Currently, our Recommended Charities are those we’ve identified as the most impactful giving opportunities for animals based on the information we have available. Considering each of our Recommended Charities has significant room for more funding, for those looking for impactful donation opportunities, we suggest donating to our Recommended Charity Fund, which supports all 11 of our Recommended Charities and where gifts are currently being matched.

— Max

  1. Does your evaluation process shift at all each year in regards to any regions or interventions that are prioritized?

  2. Could you give us a brief overview of how ACE's evaluation process has evolved over time? What are some major differences between the evaluation process in your founding year versus 2024?

Thanks for your questions! 

1. We refine the methods of our evaluation process every year based on internal and external feedback in order to improve on the previous year and be more accurate in our assessments. We also update our position on the likely effectiveness of interventions based on new research and consider the particular situation of each country in our assessments. However, this year we didn’t explicitly score or prioritize certain interventions and countries. Instead, we analyzed the impact of the specific work of each charity using our new evaluation criteria (see below). In general (with some exceptions), we continue to prioritize work on farmed animals and wild animals, interventions that are more institutional in scope, and countries that are more neglected or have higher levels of animal suffering. 

2. ACE’s methods for evaluating charities have changed a lot over the years. We used to evaluate charities against more criteria and have since reduced that number, focusing on the most important factors for making recommendation decisions. The biggest changes we made this year were introducing a process allowing interested charities to apply for evaluation (rather than ACE inviting charities to be evaluated), and updating our evaluation criteria. Specifically, we:

  • updated our cost-effectiveness methods (conducting more direct cost-effectiveness analyses, compared to last year’s scoring system that was based on less direct proxies for cost-effectiveness);
  • introduced a qualitative theory of change analysis that explores the evidence, reasoning, and limitations around charities’ programs in more detail; and
  • updated our room for more funding criterion to place more focus on the likely impact of charities’ future funding plans.

You can read more about our latest charity evaluation process here.

— Maria

Do you use any generative AI currently? Do you imagine any potential for it to assist your work? 

Hi, great (and topical) question! Yes, some ACE staff use generative AI models such as ChatGPT and Claude to help generate ideas or to help draft lower-priority internal documents. However, we don’t use such models for external or high-priority documents given the various limitations of AI models (such as the risk of factual errors, biases, and plagiarism), and we also don’t input information that could be potentially sensitive.

We apply a similar principle to image generation models. Given the risk of AI-generated images being seen as misleading in certain contexts, potentially casting doubt on, e.g., photographic evidence of farm investigations, we instead use images from public-domain sources, prioritizing ethically aligned sources such as We Animals Media.

Personally, the most useful AI tool in my day-to-day work is Perplexity, which cites sources in its responses and can be really helpful for locating research papers. I also find ChatGPT and Claude helpful for summarizing research, cleaning up documents, and advising on spreadsheet formulas. A newer tool is Google’s NotebookLM, which seems very useful for distilling information from a wide range of sources.

For more information you can check out ACE’s Responsible AI Usage policy. We also have an internal document where staff share AI use cases with one another, so you could consider introducing something similar at your own organization if that sounds helpful!

— Max

How many counterfactual donations have the recommended charities received in the last year? Do you know how much change the recommendation makes to their budgets, and therefore how significant it is to be placed or dropped out from the list? 

Hey Ula, great question! This year we conducted an influenced-giving analysis to assess ACE’s counterfactual impact on funding via our Charity Evaluations and Movement Grants programs. We aim to publish the full reports on November 29th.

During our last fiscal year (April 2023–March 2024), reported ACE-influenced donations to the charities recommended during that time totaled $8.5 million, and we estimate that $3.7 million of this would not have been donated if not for ACE’s influence. The upcoming report will thoroughly explain how this was calculated.

Our charity recommendations last for two years. We don’t guarantee that any charity is re-evaluated or re-recommended, so charities know to prepare for that when their two-year recommendation cycle ends. For some charities, being recommended by ACE might be their first introduction to certain donors. Anecdotally we’ve also found that some donors choose to continue donating to formerly recommended charities. 

We expect that being recommended for the first time leads to a greater increase in funding than retaining a recommendation. The same seems likely for a recommendation involving a newer intervention, animal group, or younger charity, compared with the budget impact of a recommendation for a well-known charity. According to a recent survey, ACE’s annual influence per charity has varied from about $150,000 to over $1,000,000. Some of those gifts might not be fully counterfactual (this will also be further explained in the report coming out next week). Assessing budget impact and change in recommendation status is something we need to examine further, though, so we’ll be expanding our impact assessment work this year to include more than just our quantitative counterfactual impact on funding.

Considering each of our Recommended Charities has significant room for more funding, we suggest donating to our Recommended Charity Fund, especially as these gifts are currently being matched. Donations will help all 11 of our Recommended Charities, which we estimate will have an exceptional impact for animals with additional donations.

— Elisabeth

On your recommendation list, there are charities that are clearly cost-effective, that you tested with your new methodology, and that stood the test and came across to you as highly impactful opportunities.

On the other hand, there are somewhat more speculative charities that have a less clear Theory of Change and, at the moment, could have less impact for animals (some of which, for example, were not tested with your new methodology because they are recommended for a second year in a row).

Are you not concerned that having those double standards this year (some charities evaluated with the new, more rigorous methodology and some not) might lead to directing money to these speculative and possibly less impactful opportunities, rather than to organizations that create tangible impact for animals?

Thank you for your question. We refine our methods each year and we don’t think that recent changes mean that we can no longer rely on the decisions we made in 2023.

Regarding cost-effectiveness specifically, in the past ACE identified limitations of direct cost-effectiveness analyses and found it less helpful to directly estimate the number of animals helped per dollar. Instead, we began exploring ways to model cost-effectiveness, such as achievement scores and the Impact Potential criterion. Since then, the animal advocacy movement (namely Welfare Footprint Project, Ambitious Impact, and Rethink Priorities) has invested in research that enables quantifying animal suffering averted per dollar, and in turn we’ve evolved our methods. However, we think it is still remarkably challenging to do these calculations and draw conclusions from them, and that using proxies is still a reasonable approach.

Additionally, while we’ve introduced a theory of change criterion to formalize our assessment of charities’ assumptions, limitations, and risks, we were already taking these factors into account in our past decision-making. Our other two criteria, room for more funding and organizational health, were included in our methods in both years.

In summary, while we see recent improvements as a step forward, we wouldn’t claim that 2023 charities were evaluated with a less rigorous methodology.

— Zuzana
 

What difference have SADs made in your methodology? Will you try to use this methodology across various types of organizations next year?

Thanks for your questions! This year we decided to use Ambitious Impact’s new unit, SADs (Suffering-Adjusted Days), in our cost-effectiveness analysis. This allowed us to express our estimates in a unit that makes suffering directly comparable across different interventions and animal species. For example, we could compare in the same unit the welfare improvement of cage-free campaigns, crate-free campaigns, and institutional meat replacement campaigns (see Sinergia’s review). We found SADs especially useful for more direct interventions, where the welfare improvement and the number of animals affected can be quantified with some certainty. Note that because SADs are a recent unit whose methodology hasn’t been finalized yet, we expect that some of the estimates we used might change. Although we found SADs very useful in our cost-effectiveness analysis, we plan to discuss in our coming strategic sessions whether we will keep using this methodology in our evaluations, and for which interventions it might be more or less suitable. Depending on our strategic priorities and capacity, we will consider refining and updating the current estimates, as well as producing estimates for more interventions and species.
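
For illustration, here is a toy comparison in Python showing how a shared unit makes interventions directly comparable. The program names mirror the examples above, but every number is made up and is not an ACE estimate for any charity.

```python
# Toy comparison with made-up numbers; not ACE's estimates for any charity.
hypothetical_programs = {
    "cage-free campaign":             {"sads_averted": 30_000_000, "expenses_usd": 600_000},
    "crate-free campaign":            {"sads_averted": 12_000_000, "expenses_usd": 400_000},
    "institutional meat replacement": {"sads_averted": 5_000_000,  "expenses_usd": 250_000},
}

# Expressing every program in SADs averted per dollar puts different
# interventions and species on one comparable scale.
ranked = sorted(
    ((name, p["sads_averted"] / p["expenses_usd"]) for name, p in hypothetical_programs.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, sads_per_dollar in ranked:
    print(f"{name}: {sads_per_dollar:.1f} SADs averted per dollar")
```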

— Maria

Is there an active effort to promote lab-grown protein sources?

Thanks for the question! None of our current Recommended Charities work on cultivated protein sources, though we have previously recommended charities working on this (such as Good Food Institute and New Harvest) and awarded Movement Grants to projects in this area (such as Cellular Agriculture Australia). We’d certainly be open to considering charities and Movement Grant applicants working on this in the future. 

— Max

Thank you for the great questions! It looks like we've answered all of them, so we'll be signing off for now. Feel free to submit more questions if you have them—we'll keep an eye on this thread and try to respond later in the week. As always, if you have any questions about our work, you can also reach out to us by email via our website.

If you’re looking for impactful giving opportunities for animals this giving season, note that all donations to our Recommended Charity Fund are being matched for a limited time! Your support will help all 11 of our Recommended Charities, which we estimate will have an exceptional impact for animals with additional donations.

Thank you!

Hey there! Thank you so much for your work! A couple of questions:

- Do you have an explicit method for how you arrive at whether a charity is recommended based on the "scores"/evaluations they receive on your different criteria? I.e., do they need to clear a certain bar in every criterion, are some criteria (say, Impact) weighted more than others (say, Organizational Health), is there a "total sum" that needs to be exceeded, etc.?

- Are there, or do you intend to publish, more detailed reviews of the charities that were not recommended?

- After having revised the methodology quite a bit, what are some areas in your new methodology that you are uncertain about and why?

Thanks for these questions!

  1. We don’t have a certain bar per criterion that charities need to meet to be recommended. It’s the totality of our assessments across all the criteria that adds up to our judgment call on whether a charity is marginally cost-effective enough to be recommended. The weighting of the criteria can differ from charity to charity depending on things like the interventions they use, whether they have direct or indirect impact, whether they operate on a short- or long-term theory of change, the level of uncertainty we have, the availability of data that allows us to calculate impact per dollar, and other factors. We arrive at our recommendation decisions through iterative team discussion and a set of scores. You can learn more about our recommendation decisions and guiding questions on our Evaluation Process page.
  2. In 2023, we stopped publishing comprehensive reviews for charities that we evaluate but don’t give Recommended Charity status to. This is because, in addition to being an evaluator, ACE is a meta-fundraiser that directly promotes the charities we recommend. In the shorter summary reviews for these charities, we still share many of the details that informed our decision (e.g., the theory of change table).
  3. It’s a great question about where we have uncertainties in our new methods. First, we found our attempt to estimate Suffering-Adjusted Days (SADs) averted per dollar less useful and more uncertain than we expected. We had hoped to be able to estimate SADs averted per dollar for the key programs of all the charities we evaluated this year, but that didn’t end up being possible, largely due to either the types of data charities collect to monitor their own programs or the general lack of empirical evidence about the effectiveness of animal advocacy interventions. The new theory of change assessment really helped here: analyzing how we expect charities’ activities and outputs to lead to outcomes and impact reduced a lot of our uncertainty and gave us a deeper understanding of each charity’s work (and aims). Second, we were uncertain about what research our team needed to do ahead of the evaluation season to best prepare us for the implementation of our new methods. We ended up doing a lot of research during the evaluation period, which included reaching out to external experts. While there will always be charity-specific research for us to do, I think we’re now in a better position to anticipate many of our research needs ahead of time. Lastly, some internal logistical processes created a bit of uncertainty among team members about roles, responsibilities, and workflows that were hard to anticipate when rolling out the new methodology, but we have had regular retrospective meetings after each stage of our process, so we know how to address those internal sticking points in the future.

— Elisabeth

Which organization is engaged in reducing pharmaceutical or medical animal testing?

At ACE we currently prioritize farmed and wild animals, so none of our Recommended Charities work to reduce the use of animals for scientific purposes (i.e., research, testing, and science education).

If you’re interested in organizations and institutions that are focused on this area, here are a few great options to explore:

There are also government-funded centers for alternative methods around the world, like the European Centre for Validation of Alternative Methods (ECVAM).

— Elisabeth
