
How Much Should Vegans Focus on Purity?

I recently found out that most brands of condoms and birth control pills are not vegan. They both contain animal products and are tested on animals. Sigh. One more way I will never be a perfect vegan. But I'm okay with that. I think focusing on vegan purism is unhelpful, unrealistic, and harmful. It is not an effective way to help animals.

Firstly, when you start cutting out animal products from your diet you quickly hit a point of diminishing returns in reducing animal suffering. This is because the lesser-known animal products that vegans try to avoid, like casein, cochineal, gelatin, isinglass, and lanolin (thank you Wikipedia), are by-products of the meat industry. Factory farmers only make a fraction of their profits from these products; the majority comes from the more well-known products like meat, eggs, and dairy. If no one ate these by-products*, there would still be factory farming, it would just be slightly less profitable, meaning meat would be slightly more expensive and a small percentage of animals would be spared. If everyone stopped eating meat but remained fine with eating by-products, factory farms' only way of making money would be selling those by-products. Their fixed costs would remain about the same, but their revenue would be much smaller, making the by-products so expensive that cheaper non-animal alternatives would likely be used instead.

Secondly, and tying into the first point, it is completely unrealistic to be a 100% pure vegan. Unfortunately, animal products, or products that involve animal cruelty, are everywhere: sugar, orange juice, [more stuff here]; harvesting wheat and other grains kills field mice and other wildlife; almost every pharmaceutical drug or medical procedure was at some point tested on animals.

The time requirements and the reduction in quality of life needed to be a 100% pure vegan are much higher than for simply not eating meat, dairy, and eggs. And while it makes me very happy that people are willing to work that hard to help animals, I don't think it is the most effective use of their altruistic budget. One way of helping animals that I think is extremely neglected in the vegan community is donating money to effective animal charities. An example is Vegan Outreach, which produces leaflets and coordinates their distribution by volunteers on university campuses. I have not yet researched the exact numbers, but it is entirely possible that donating a few hundred dollars to an effective animal charity would cause the same reduction in animal suffering as being vegan for a year. So if your primary concern is reducing animal suffering, I think this is a much better path to go down than vegan purism.

Another reason to avoid purism is the risk of relapse. For psychological reasons, humans tend to have an all-or-nothing mentality about being vegetarian or vegan. I don't know anyone who only eats 3 meat meals a week. When my friend quit being vegetarian, she didn't try having meat a few days of the week to see if she could manage that; she went straight back to full meat consumption. There are also terrifying statistics on vegetarian/vegan recidivism. According to a study done by the Humane Research Council, "86% of people who go vegetarian lapse back into meat-eating, and 70% of those who go vegan lapse." Even adjusting for people who go vegetarian for health reasons and then decide to stop, those are scary numbers. So if there is even a small chance that trying to be a pure vegan will make you burn out, give up, and go back to eating meat, then you shouldn't do it. Long-term thinking is important here: think about your impact over your whole lifetime, not just this year.

The final reason I think vegan purism is unproductive is how it affects the perceptions of meat eaters. Converting meat eaters to veganism should be a big priority for all vegans: if you convert one meat eater to being vegan for the rest of their life, you have doubled the impact you have on animal welfare from being vegan yourself. So anything that makes the meat eaters in your life less interested in veganism, for example the vegans they know obsessing over minute traces of animal products or refusing to eat birthday cake at an office party, will probably do much more harm to animals than buying something with gelatine in it once a month.

I think the intentions of purist vegans are worthy of positive reinforcement, but I think they are mistaken that vegan purism is the best way to help animals, and that it is in fact unproductive relative to a more relaxed form of veganism. But different things work for different people, so if you feel vegan purism is right for you then go for it. Just remember to focus on what will help animals, not what will make you personally feel better. Valuing the personal good feeling you get from vegan purism over animals' lives isn't that different from what meat eaters do.

* To avoid misinterpretation, I am not making an argument from universalizability. You should base your actions on their marginal effect rather than on the hypothetical world where everyone does the same as you. I am talking about what would happen if everyone stopped eating animal by-products to illustrate the economic effect more clearly.

A Moral Dilemma Dilemma

The following quote by Peter Singer presents a moral thought experiment:

To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.

I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.

Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself? Virtually all agree that distance and nationality make no moral difference to the situation. I then point out that we are all in that situation of the person passing the shallow pond: we can all save lives of people, both children and adults, who would otherwise die, and we can do so at a very small cost to us: the cost of a new CD, a shirt or a night out at a restaurant or concert, can mean the difference between life and death to more than one person somewhere in the world – and overseas aid agencies like Oxfam overcome the problem of acting at a distance

So Singer presents two situations, saving a drowning child and donating to a charity to save the life of a child in a developing country, and then argues that we should take our moral intuitions in the first case and apply them to the second case because the differences, such as physical location, are not morally relevant.

This is the basic strategy I have been using for as long as I can remember when thinking about moral questions. If two intuitions contradict, I think of hypothetical situations and use them to analyse what it is I value. Another example of this is the trolley problem.

Unfortunately, I am feeling less confident in this method than I used to be. My problem is that there is no good way of knowing in which direction you should universalize your moral intuitions/values. What if a student responded to Peter Singer with:

Well, clearly there is a contradiction between my intuition that I should save the child and my intuition that I am not obligated to give to charity. So I will universalize my intuitions, and because there is no morally relevant difference between the child in the pond and the children in developing countries, I clearly shouldn't care about the former, just like I don't seem to care about the latter.

Another way of stating this problem comes from a Less Wrong comment that I read a while ago but can't find anymore. The user was saying how he cares a lot when he hears about one person dying or being injured but doesn't seem to care as much when he hears about a million people dying (definitely not a million times as much). The commenter was wondering whether they should "Shut Up and Multiply", meaning they should take the intuitive value they assign to the individual and multiply that by a million to find the actual value of the million, or whether they should "Shut Up and Divide", meaning they should take the value of the million and divide it by a million to reach the actual value of the individual.

One way I can think of to solve this is by letting the stronger intuition win. But often the intuitions are very close to equal (otherwise the contradiction would have been solved by now), and I am worried that the initial conditions of my reflection (the exact details of the hypothetical, how it would affect my other beliefs and life decisions, even how I am feeling that day) may have large effects on the conclusions I reach.

Another way is to go with the "Near" intuitions (those generated by smaller numbers and more real-world, practical examples) over the "Far" intuitions (the opposite), on the justification that evolution has made us better suited to reason about things near us. This is a good approximation of what I have already been doing, so it has the emotional upside of agreeing with most of the intuitive reasoning I have done so far. But my moral intuition that suffering is bad was also produced by evolution, and I don't believe that the source of someone's values alone should affect whether or not they endorse them.

Finally, I can just accept that, in the same way that values are subjective (so if one person values happiness and another disvalues happiness, neither is wrong; they just have different subjective preferences), strategies for reflecting on values are also neither right nor wrong but are determined by subjective preferences. I rejected objective morality too long ago to remember whether I felt any emotional loss at no longer being able to tell people who want to torture and kill babies that they are wrong, but I think I feel a similar feeling in not being able to tell someone who chooses to ignore the child in the pond, the "Shut Up and Divide" side, that they are wrong.

But I want my beliefs to match reality, not what I wish reality was like.

“You shouldn’t feel guilty for being born with so much more than others”

A friend of mine once had a semi-emotional breakdown about the fact that the world is so horrible, there are so many people suffering, etc. In a way I was kind of insecure about this, because I consider myself to care more about that kind of thing than most people and I am doing more to help than she is, yet I don't experience these negative emotions to the same degree she did. But then I reminded myself that 1) outward bursts of emotion like the one she had aren't an accurate sign of a person's emotional state and 2) it doesn't matter how strongly I feel about something or how much I want something beyond how much that motivates me to act. What matters is what I actually do to steer the future in a better direction.

A friend of hers told her (paraphrased obviously):

"You shouldn't feel guilty about the fact that you have so much more than other people. You didn't choose to be born into a rich country with well-off parents etc."

(Her emotions at the time seemed to be more of the form of “I have so much, others have so little, I feel guilty” whereas mine are usually closer to “others have so little, actually, no one really has anything compared to the ideal situation, I need to do everything I can to make it better”)

When she told me about this I disagreed. Firstly, I don't have a guilt-based moral system, but even if I did, this argument wouldn't completely absolve me of my hypothetical guilt. The example I gave was to imagine that everyone is created in a box, all able to see each other's boxes but unable to leave their own. Each box is a different size and has different amounts of food and other resources delivered to it each day. In this scenario it would indeed be pointless to feel guilty for being created in a larger, more resource-filled box than the others you can observe.

But if the scenario were changed so that you could divert resources from your box to other boxes and chose not to, then clearly you should feel guilty, because you are choosing for them not to have resources they need more than you do.

Clearly we happen to be in the universe where you can divert resources from your box to others.

But, to help her through the emotional lows she was experiencing, I added that the way I get around thoughts of "oh god, I'm not doing enough, I'm bad, arrrg, self-loathing" is to remind myself that I am much more motivated by positive emotions than by negative ones.

For example, when I was in high school and had an assignment, if it was behind schedule and I was worried I wouldn't finish it on time, I would hide in my room under my covers and not do anything. But if I thought I could achieve my goal of completing the assignment, I was much more likely to actually try and do it. In the same way, if every time I thought about EA stuff I felt bad for not doing more, I would just not do EA stuff, or not think about EA stuff.

There is a part of me that is worried that this isn't true and that I am just rationalizing to avoid going down the unpleasant path of guilt as a motivator, even if that path does more good. I guess we'll see what happens.

Thoughts on Antinatalism Part 4: Benatar’s Asymmetry

This post is a response to another post by Tremblay defending Antinatalism, Benatar's Asymmetry. In the post, Tremblay summarises and defends an argument from Better Never to Have Been by David Benatar. The summary of the argument is:

“(1) If a person exists, then eir pain is a bad thing.
(2) If a person exists, then eir pleasure is a good thing.
(3) What does not exist cannot suffer (therefore this non-existing pain is a good thing).
(4) What does not exist cannot be deprived of any pleasure (therefore this non-existing pleasure is not a bad thing).”

Due to the asymmetry between 3 and 4, Benatar and Tremblay argue that creating new people is bad.

This asymmetry seems incorrect to me; the logic that seems to be behind (3) and (4) is not consistent.

Tremblay writes: "[people who reject (4)] argue that to not start new lives is a deprivation of pleasure. But for whom is this a deprivation? It cannot be a deprivation to the non-existent, since that which cannot exist cannot be deprived. Is it a deprivation to the parent, or to humanity?"

But isn't this argument also an argument against (3)? If non-existence isn't a deprivation for a person who would have experienced pleasure, how can non-existence be salvation for a person who would have experienced pain?

Tremblay writes: “We can imagine that the world might contain 12 billion people. That’s a whole 5 billion people that do not actually exist. And yet no one is mourning the loss of pleasure of these 5 billion imaginary people. A mother may regret that an expected child was stillborn, but the person whose death she regrets exists solely in her imagination. That which does not exist cannot be a person, or anything else.”

But what if circumstances were such that the mother didn't want the child to be born because she knew it would be a person who would experience immense suffering (maybe she is trapped in a forced labour camp), and so is very relieved that the baby is stillborn? Is this irrational because "the person whose death she [celebrates] exists solely in her imagination"?

Here is the way I think about it. I have a choice to either create Person A or not create them. If I create A and they are happy, that is good; if I create A and they suffer, that is bad. If I don't create A and they would have suffered had I created them, that is good. And if I don't create A and they would have been happy, that is bad.

This is because of the opportunity cost of the choice to create A. The opportunity cost of a choice is the value of the highest-value option that you didn't choose. So if I don't create A, the opportunity cost is the value of A existing and being happy. Because I value that more than the option where they don't exist, I should choose to create them.
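Here is a minimal sketch of the symmetric valuation I am describing, with made-up numbers (the +1/0/-1 values and the prospect labels are illustrative assumptions of mine, not anything from Benatar or Tremblay):

```python
# Illustrative values (assumed for the example): existing and happy is +1,
# existing and suffering is -1, not existing is 0 either way.
OUTCOME_VALUE = {
    ("create", "happy"): 1,
    ("create", "suffering"): -1,
    ("don't create", "happy"): 0,      # forgone happiness: opportunity cost of +1
    ("don't create", "suffering"): 0,  # avoided suffering: opportunity cost of -1
}

def better_choice(prospect):
    """Return whichever choice has the higher value for a given prospect."""
    create = OUTCOME_VALUE[("create", prospect)]
    dont = OUTCOME_VALUE[("don't create", prospect)]
    return "create" if create > dont else "don't create"

# The choice flips symmetrically with the prospect, so under these values there
# is no asymmetry between forgone pleasure and avoided pain.
print(better_choice("happy"))      # -> create
print(better_choice("suffering"))  # -> don't create
```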

To illustrate my point further, imagine a Paperclip Maximiser, a superintelligent AI whose terminal value is to create paperclips. But this is a special Paperclip Maximiser: it values the creation of red paperclips but disvalues the creation of paperclips of all other colours. For simplicity's sake, let's say these values and disvalues are even, so it cares about creating one red paperclip the same amount as it cares about preventing one non-red paperclip from being created.

So if this Paperclip Maximiser were given a process that creates paperclips, but it wasn't sure whether the process made red paperclips or blue paperclips, would it also feel there was an asymmetry in the different possible outcomes? It seems clear that it wouldn't. The Paperclip Maximiser only wants to increase the number of red paperclips and minimise the number of non-red paperclips. It would view a situation where a red paperclip could have been created but wasn't (analogous to (4)) as bad.

Now, just to clarify my opinion: I reject (4) not because I think that nonexistent babies are floating around in nonexistent space being deprived of pleasure; it is because I value a universe with a happy person in it more than a universe without one, all other things being equal, just like the Paperclip Maximiser values a universe with a red paperclip more than one without a red paperclip, all other things being equal. So I disagree with the "(therefore this non-existing pleasure is not a bad thing)" part of (4).

Tremblay has made another post defending Benatar's argument, Clearing out confusion about Benatar's Asymmetry. The post rephrases some content from the original; the first argument in the second post that is not in the first is that "(4) cannot be worse than (2) because pleasure in fulfilment of a need is not any better than the absence of need in the first place." Tremblay appears to be saying that pleasure/happiness has the same value as the non-existence of the person experiencing that pleasure/happiness. This may be what Tremblay values (although in a comment on part 2 of this series he told me he agreed with the statement "happiness is good"; maybe he meant only for people who already exist?), but I value someone existing in a state of happiness more than someone not existing, and if I could choose I would choose the former, all other things being equal.

Thoughts on Antinatalism Part 1: Introduction + Strong and Weak Antinatalism

This is the first of several posts I will be making about Antinatalism. This post introduces Antinatalism and my position towards it; in later posts I will directly engage with Antinatalist arguments and finally present a way that Antinatalism doesn't have to lead to a controversial conclusion.

Antinatalism is the ethical position that the creation of new (usually human) life is morally wrong. Before arguing against this belief I would like to make a distinction between two different types of Antinatalism. It may be a distinction that already exists or it may not (I don't remember seeing anyone make a formal distinction between them), and I'm not sure how Antinatalists would feel about me making it, but I think it will assist in our conceptual analysis of Antinatalism.

I am going to define Weak Antinatalism as the position that creating new life in our "current situation" is immoral, but that it is not immoral in principle, and there could be situations in the future where it would not be immoral to create new people. On the other hand, Strong Antinatalism is the position that creating new life is immoral in principle, regardless of the circumstances.

A Weak Antinatalist may use arguments such as "we currently have overpopulation, so more children would be bad right now" or "the resources you use to raise one child could be used to help hundreds of children that already exist", and is more likely to identify with a consequentialist moral framework. A Strong Antinatalist may use, along with the previously mentioned arguments used by Weak Antinatalists, arguments such as "a non-existent person can't consent to being created, so by creating them you are violating their rights" or "it is always morally wrong to create harm, and we can never guarantee that a person will not experience harm during their lifetime, therefore creating new people is immoral", and is more likely to identify with a deontological framework.

(Note: an Antinatalist doesn't necessarily think that it is never the best available option to create new life, such as in a situation where someone says "make a baby or I'll torture the whole human population for a million years." Antinatalists might see creating new life as a negative that can be outweighed by greater negatives, but they would never see it as a positive in itself.)

I make this distinction partly because I think it is a real distinction that relates to different groups of arguments and different underlying ethical frameworks and partly because I agree quite strongly with Weak Antinatalism and disagree quite strongly with Strong Antinatalism. I think that in the current situation that I am in (and that people like me are in) it is extremely selfish and immoral to have children and I will be making posts defending this position in the future. But I think that there are certain circumstances (that I hope will one day exist) where creating new humans would have extremely positive value.

The Antinatalist arguments I will be arguing against in future posts are all Strong Antinatalist arguments, and I will usually refer to their proponents simply as Antinatalists. But when the distinction is relevant I will make it, and after arguing against Strong Antinatalism I will make posts arguing for Weak Natalism that will cause disagreement with Antinatalists and Natalists alike.