Do online videos of farmed animal cruelty change people’s diets and attitudes? Mercy For Animals, an organization that invests heavily in getting people to watch such videos online, recently contracted an independent research firm to investigate this question.
ACE (Animal Charity Evaluators), an organization that specializes in assessing which groups, campaigns, strategies, etc. are effective at making things better for animals, called the MFA study the “highest quality randomized controlled trial (RCT) so far of an animal advocacy intervention.”
Perhaps somewhat unexpectedly, the results of the research did not confirm that people who watched the video ate fewer animal products than those who had seen an unrelated video instead (the control group).* This was, of course, disappointing, especially since online videos are a relatively cheap, practical, and measurable way of campaigning.
Mercy For Animals explains that several factors make it difficult to draw any concrete and practical conclusions from the study:
- the study could only distinguish increases or decreases in consumption of at least ten percent
- the sample size may need to be a lot bigger
- self-reports on dietary choices are very unreliable
- the study only took into account people who clicked the ad, while apparently the majority of the impact comes from people merely seeing the ad.

The bottom line is that MFA feels the results can’t provide any practical guidance and hence will not cause MFA to reallocate funding for their online advertising.
Now my point in this post is not at all to tell you that having people watch these videos is of no use. I don’t think we can say that yet. My point is on a meta-level, about the act of researching itself. Here are some things that are great about what’s happened here:
- an organization (MFA in this case) really wants to know if its allocation of resources (money for the Facebook ads etc) is efficient
- donors are giving money to carry out that research
- MFA contracted an independent firm to help guarantee a professional study design and execution
- the research results, including raw data, have been shared with other groups like ACE, and results can be used by our whole movement
- lessons on doing research have been learned, and new research questions have arisen.
But mostly, and this is the point of my post, MFA was not afraid to publish results that did not confirm the strategies they have been heavily investing in.
This may sound very obvious: we want to help the animals, right? So we do something and check if it works, right? And if it doesn’t seem to work, we stop doing it, right?
I hope you can see that in our movement, this kind of attitude is actually not so obvious at all. Not every one of us is results-driven, and some of us are more interested in staying true to an ideology or a long-followed strategy. Most groups don’t spend all that much on research and on assessing what works. Many of us will cherry-pick, using, publishing, and talking about only the research that suits us. Confirmation bias may lead us to accept too quickly those results that confirm our investments, and to reject too quickly those that contradict them. While MFA and other organizations investing in online videos obviously should not disregard the results of their own study too quickly (which I feel they won’t do), I have already seen other voices doing the opposite: they believe this research confirms that online videos don’t work.
We are all, to some degree, invested in our attitudes, our organization, our groups, our ideologies, our rules, our lifestyle, our identity. Confrontation with things that contradict whatever we are invested in may be uncomfortable. But we should be willing to feel uncomfortable at times if we really want to help animals.
I have used this Tolstoy quote before, but I want to use it again here:
“I know that most people, including those at ease with problems of the greatest complexity, can seldom accept the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they had proudly taught to others, and which they have woven, thread by thread, into the fabrics of their lives.”
* actually, if anything, the results showed that people who had watched the videos ate slightly more animal products than the control group. It seems very unlikely that this result was significant, which is why it is here in a footnote 🙂
I can’t understand people actually eating more meat after watching the abuse videos. Earthlings, Meet Your Meat, really? Hope isn’t lost though. Something is happening where it seems a lot more people are currently aware of both the health benefits and the ethics of being vegan. I know that for some there’s a “cool” factor, but hopefully the action will stick and people will continue to become more educated on the issues.
Peace
My own personal experience is that a Mercy for Animals video on the dairy industry changed me from a vegetarian to a vegan. I don’t think those who do not empathize with the suffering of living creatures would have responded in the same way though. These people can be reached, if at all, with health benefit tactics.
I couldn’t agree more, Tobias. I think perhaps the single biggest factor in being effective at anything is the ability to be honest with ourselves about what is or isn’t working. This is true for anything we wish to be effective at or accomplish, whether making it a more humane world or fixing a broken sink.
I think another big factor is that many activists don’t want to be honest and admit something isn’t working, because then they think they’d be admitting that they themselves are “wrong”.
I think it’s very important to try to remember that when we do find that something isn’t working, that doesn’t mean it’s “wrong”, or that we ourselves are “wrong”. The actions we’ve chosen are the only thing that’s “wrong”, and only in the sense that they aren’t effective or working in the current situation.
The only time I personally consider something to be “wrong” is when you do realize something isn’t working but choose to continue doing it anyway. As Albert Einstein said:
“Insanity is doing the same thing over and over again and expecting different results.”
If I can take the liberty of using your sentence above, Tobias:
“This may sound very obvious: we want to help the animals, right? So we do something and check if it works, right? And if it doesn’t seem to work, we stop doing it, right?”
Now fill in the blank with whatever you want to accomplish in the blank below:
“This may sound very obvious: we want to ____________, right? So we do something and check if it works, right? And if it doesn’t seem to work, we stop doing it, right?”
Put in “fix the sink”
or “stop fighting with our spouse”
or “train my dog to sit”
or “get my child to eat their broccoli”
or “take over the world”
🙂
While it’s nice that they tried to do some research… I don’t think it was a sincere attempt to discover the truth of the matter. Their reaction says it all: they had the study conducted, yet when the results weren’t as expected they had a list of excuses and aren’t going to change any of their actions. Why in the world did they fund a study whose results were going to be too weak to act on? I reckon if the study had ended differently, we’d be hearing a much different story. This is all exactly what you’d expect when someone conducts “research” on something they have a financial interest in…
This is why vegan groups are no match for industry; industry understands these issues and will take goal-directed actions based on hard data.
So vegans, what is your goal, how are you going to track it, and how long is it going to take you to change strategies when they’re proven to be ineffective? The clock is ticking… industry is miles ahead of you.
First of all, I completely agree with Mr. Toad. It seems that MFA is trying to get the upside of using research – appearing serious about evaluating effectiveness – without the downside of actually having to take that research into account when it contradicts their assumptions.
Second, while it’s good to test what tactics are most effective in changing diet, it ignores a more fundamental and maybe more important question: should this movement be so heavily focused on diet in the first place? Is converting people into reducetarians, vegetarians, and vegans actually helping to build a movement that can effect policy change? By restricting our attention to questions which can be answered with rigorous methods like RCTs, we are perhaps distracting ourselves from questions that are more difficult to answer, but much more central to the fate of the movement.
I believe that the two of you are very much mistaken about the nature of statistical inference. First of all, if you obtain a surprising result from a statistical study, you should not abandon whatever you previously believed. Frequentist methods do not allow us to be exact on this matter, but Bayesian methods provide tools to quantify our prior certainty. For example, suppose that you survey 10 people and ask each if they are atheists. You have prior knowledge about the prevalence of atheism, so when none of them answers yes, you should not jump to the conclusion that there are no atheists (which would be the maximum likelihood estimate for the fraction of atheists in the world, although more sophisticated frequentist methods allow for a more nuanced view, e.g. constructing a Wilson score confidence interval). Instead you should update your prior beliefs to take into account this new evidence, which suggests that there are fewer atheists than you thought. If you google Bayesian inference I’m sure you will find examples where these notions are made exact by Dirichlet priors and so forth.
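The conjugate update the comment gestures at can be sketched in a few lines of Python, using a Beta prior (the two-category case of the Dirichlet prior mentioned above). The prior parameters here are made-up numbers for illustration only, not anyone’s actual beliefs about atheism:

```python
# Illustrative sketch: Bayesian updating with a Beta-Binomial model.
# All numbers below are hypothetical assumptions, not real survey data.

# Hypothetical prior belief about the fraction of atheists:
# Beta(alpha=2, beta=18) has mean 2 / (2 + 18) = 0.10.
alpha, beta = 2.0, 18.0

# Observed data: 10 respondents, none of whom says they are an atheist.
n, successes = 10, 0

# Conjugate update: posterior is Beta(alpha + successes, beta + failures).
post_alpha = alpha + successes
post_beta = beta + (n - successes)

mle = successes / n                                      # frequentist MLE: 0.0
posterior_mean = post_alpha / (post_alpha + post_beta)   # 2/30, about 0.067

print(f"MLE estimate:   {mle:.3f}")
print(f"Posterior mean: {posterior_mean:.3f}")
```

The point of the example: the posterior mean shrinks toward the prior rather than jumping to the extreme “no atheists exist” conclusion that the raw sample proportion would suggest.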
I dunno, I could write a lot more on this theme, but you guys are so fundamentally wrong that I don’t know where to begin. I suggest that you never decide anything on the basis of statistical evidence until you have read some popular introductions to probability (better yet, study a non-popular text).
“Why in the world did they fund a study where the results were going to be too weak to act on?”
How did they know beforehand what the results were going to be, in your view? Sure, they geared the power of the test towards a 10% effect, but it might have been, for example, that 97% of those who viewed the videos dropped meat entirely. Had that occurred, the research would have given strong evidence in favor of the efficacy of the videos.
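To make the power point concrete, here is a hypothetical sketch (none of these numbers come from the actual MFA study) of how the required sample size depends on the effect size one wants to detect, using a standard two-sample normal approximation. A study powered only for large effects can still detect a dramatic result like the 97% scenario above:

```python
# Hypothetical power sketch; effect sizes and SDs are illustrative assumptions.
import math

def n_per_group(effect, sd, z_alpha=1.959964, z_beta=0.841621):
    """Approximate n per group for a two-sided two-sample z-test.

    Defaults correspond to significance 0.05 (z_alpha = Phi^-1(0.975))
    and power 0.80 (z_beta = Phi^-1(0.80)).
    """
    return math.ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

# A small effect (one tenth of a standard deviation) needs a huge sample...
print(n_per_group(effect=0.1, sd=1.0))   # 1570 per group
# ...while a dramatic effect (one full standard deviation) needs very few.
print(n_per_group(effect=1.0, sd=1.0))   # 16 per group
```

The design choice follows from the formula: required n grows with the inverse square of the effect size, so powering a study to detect only effects of 10% or more is far cheaper than powering it for subtle ones.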
Hi asdfas =) I am well aware of Bayesian inference. Did this study use Bayesian methods? Did they report their priors before conducting the study, update conditional on the evidence, and conclude based on the result that they shouldn’t change their behavior at all? No. In fact they effectively say that the study provides *no information* about what they should do, which is quite un-Bayesian.
I don’t think they effectively said that; imo they effectively said “this study lessens our confidence in the utility of posting videos, and especially gives strong evidence against the proposition that it’s super-effective, but given the uncertainties caused by the design we will not jump to the surprising conclusion that posting videos is entirely pointless”, which to me seems quite reasonable.