After reading about Michael VanDervort’s “HR Carnival for Haiti” at Punk Rock HR and considering which charity I wanted to research for the carnival, I came to the conclusion that a lot of folks out there have already done a great deal of research and recommended charities for immediate disaster relief. The American Red Cross, CARE, Save the Children, Doctors Without Borders–USA, and faith-based charities such as the Salvation Army, Lutheran World Relief, the United Methodist Committee on Relief, Catholic Relief Services, the American Friends Service Committee, and American Jewish World Service are some that I seem to hear mentioned in various places and that are given a high rating by charitywatch.org (more on that later). I am not sure how much I can add to that conversation.

On the other hand, development charities might be a good focus for my own contribution to the carnival since I am an environmental engineer. In my two most recent jobs, we did giving campaigns for Water for People, and I am interested in the work of Engineers Without Borders–USA and Architects Without Borders, which are perhaps somewhat lesser-known. I’ll follow up this post with the information I could find on Engineers Without Borders.

As I tried to look further into these charities, though (and this is really the topic of this first post), it soon became clear that all was not cut-and-dried in the world of evaluating charity performance. My starting point was charitywatch.org, a project of the American Institute of Philanthropy. I recently heard the Institute’s president, Daniel Borochoff, on NPR‘s Talk of the Nation. He described the AIP’s criteria for evaluating charities, which are based primarily on the dollars a charity spends to raise $100, as calculated after an in-depth review of the charity’s financial statements. In other words, the claim “95% of our funds raised go directly to programs” would not be taken at face value but recalculated based on the AIP’s evaluation of how funds had actually been allocated.

As an example, an emailer to the show stated that she had selected Food for the Poor for her Haiti donation dollars because the organization represents that 97% of its funds go to programs. Borochoff responded that in fact, by AIP’s analysis, only 55% of Food for the Poor’s cash budget goes to programs, earning it a “C” grade. This seemed great to me—charitywatch.org was doing the work of delving into charities’ financial statements and coming up with a more accurate representation of organizational efficiency.
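To make the arithmetic behind these ratings concrete, here is a minimal sketch (in Python, with entirely made-up numbers) of the kind of recalculation a watchdog might do: reclassify some expenses the charity reported as “program” spending, then recompute the program-spending percentage and the cost to raise $100. The figures and the reclassification rule are hypothetical illustrations only, not AIP’s actual methodology.

```python
# Hypothetical illustration of how an evaluator's recalculation can shift a
# charity's headline numbers. All figures are invented; the reclassification
# rule below is NOT AIP's actual methodology.

reported = {
    "program": 97_000_000,         # what the charity counts toward "97% to programs"
    "fundraising": 2_000_000,
    "management": 1_000_000,
    "contributions": 103_000_000,  # cash donations raised
}

# Suppose the evaluator decides that part of what the charity booked as
# "program" spending (e.g. solicitation mailings with an educational message)
# really belongs under fundraising.
reclassified_to_fundraising = 40_000_000  # hypothetical amount

adjusted_program = reported["program"] - reclassified_to_fundraising
adjusted_fundraising = reported["fundraising"] + reclassified_to_fundraising
total_spending = adjusted_program + adjusted_fundraising + reported["management"]

program_pct = 100 * adjusted_program / total_spending
cost_to_raise_100 = 100 * adjusted_fundraising / reported["contributions"]

print(f"Program spending: {program_pct:.0f}% of total expenses")   # ~57%
print(f"Cost to raise $100: ${cost_to_raise_100:.0f}")             # ~$41
```

The point of the sketch is simply that the headline percentage depends heavily on which expenses get counted as “program” spending, which is exactly the judgment call the watchdog groups are making on donors’ behalf.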

But I started to get a little more confused when I tried to find information on Engineers Without Borders for this post. See, I love the idea of EWB, but it has often seemed rather sparsely realized to me. There are not a ton of chapters, and much of the organization seems to be driven by student activity, which can be great but can also mean that a chapter started by well-meaning kids 10 years ago is by now inactive. In other words, this was just the type of nonprofit I would love to have some help evaluating, or finding alternatives to if EWB itself was not highly regarded. I checked the sites I knew of (charitywatch.org, Charity Navigator) and found no entry for the group. I then googled it (unfortunately finding no independent evaluation, so in my next post I will simply offer some of its history and activities for folks to evaluate on their own), and that is where I started to find various links challenging the use of fundraising “efficiency” in general as a measure of a charity’s effectiveness or integrity.

This was not something I had considered previously, but the arguments made—that groups that do things the most cheaply aren’t necessarily the best in the same way that the cheapest car doesn’t necessarily represent the best value, or that a somewhat high salary for a charity CEO can be justified depending on the organization—also made sense to me. After all, I have long been suspicious of bidding processes that place a premium on the lowest bid; I have seen large, experienced firms lose out to start-ups that have little experience in the project area and that are highly unlikely to be able to complete the scope at the bid price, to say nothing of doing a good job. If they are really the best candidate, fine, but the lowest bid is not always the best bid.

But I still don’t think I am seeing the whole picture. Some of the charity evaluation organizations that are critical of traditional efficiency ratings (e.g. GiveWell) have rated only a few charities to date, and I don’t necessarily follow all of their reasoning. For example, GiveWell commends Doctors Without Borders–USA for its honesty about a program that ended up failing, but then gives it 1 star out of 3 overall without really explaining why. Both GiveWell and the University of Pennsylvania’s Center for High Impact Philanthropy highly recommend a group called Partners in Health for Haiti disaster response in particular, because they say the group has a strong history in Haiti and is well plugged in to local communities. But invariably, commenters on blog posts about these topics make similarly compelling cases for other groups and then argue back and forth about whether they are legit (e.g. here, here, and here). The recommendations are fragmented, and it is hard to see a consensus beyond the big disaster relief players that we have all heard of.

My issue with taking recommendations that are not, or are only partially, based on data, even from experienced aid workers, is the possibility of bias. I’m sure every charity thinks it does great work, when the reality could range from “great” to “needs improvement.” Maybe the GiveWell rep has a good relationship with the Partners in Health rep, and that contributes to their respect for the group. (And hey, maybe that is justified.) Perhaps not surprisingly, I am much more comfortable with numbers and data than with text describing why a particular charity is best suited for a particular disaster response. Otherwise it seems like the kind of thing where, especially on the Internet, there could be a million right answers and a million wrong answers, and very little way of getting at the “truth.”

It also seems that those who share this school of thought are disinclined (see comments) to trust large American or international organizations, when it seems to me that such groups, though not perfect, often are at least effective at mobilizing large numbers of volunteers and supplies quickly. I think I am suffering from a bit of information overload here.

I have the following questions that I hope will spark some discussion in the comments.

  • Is there any reason why I should believe subjective assessments of charities, even when they come from experienced aid workers or from organizations like GiveWell? After all, everyone has different agendas, goals, biases, and beliefs.
  • In your opinion, is there any objective measure of a charity (e.g. a numerical formula) that is at all a reliable evaluation of its fitness to receive your donor dollars?
  • This question may have been hashed out to the point of futility, but what is your opinion of the American Red Cross as a charity, and would you personally be comfortable directing your donation to it? (Incidentally, here is an article that was helpful to me in distinguishing between the different Red Crosses.)
  • Here’s the part where you do my work for me 🙂 : Which engineering/architecture/development charities would you recommend, if any, and why?