The following quote by Peter Singer presents a moral thought experiment:
To challenge my students to think about the ethics of what we owe to people in need, I ask them to imagine that their route to the university takes them past a shallow pond. One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.
Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself? Virtually all agree that distance and nationality make no moral difference to the situation. I then point out that we are all in that situation of the person passing the shallow pond: we can all save lives of people, both children and adults, who would otherwise die, and we can do so at a very small cost to us: the cost of a new CD, a shirt or a night out at a restaurant or concert, can mean the difference between life and death to more than one person somewhere in the world – and overseas aid agencies like Oxfam overcome the problem of acting at a distance.
So Singer presents two situations, saving a drowning child and donating to a charity to save the life of a child in a developing country, and then argues that we should take our moral intuitions in the first case and apply them to the second case because the differences, such as physical location, are not morally relevant.
This is the basic strategy I have been using for as long as I can remember when thinking about moral questions. If two intuitions contradict, I think of hypothetical situations and use them to analyze what it is I value. Another example of this is the trolley problem.
Unfortunately, I am feeling less confident in this method than I used to. My problem is that there is no good way of knowing in which direction you should universalize your moral intuitions/values. What if a student responded to Peter Singer with:
Well, clearly there is a contradiction between my intuition that I should save the child and my intuition that I am not obligated to give to charity. So I will universalize my intuitions, and because there is no morally relevant difference between the child in the pond and the children in developing countries, I clearly shouldn’t care about the former, just as I don’t seem to care about the latter.
Another way of stating this problem comes from a Less Wrong comment that I read a while ago but can’t find anymore. The commenter said that they care a lot when they hear about one person dying or being injured but don’t seem to care nearly as much when they hear about a million people dying (definitely not a million times as much). They were wondering whether they should “Shut Up and Multiply”, meaning take the intuitive value they assign to the individual and multiply it by a million to find the actual value of the million, or whether they should “Shut Up and Divide”, meaning take the value they assign to the million and divide it by a million to reach the actual value of the individual.
One way I can think of solving this is by letting the stronger intuition win. But often the competing intuitions are very close to equal (otherwise the contradiction would have been resolved by now), and I am worried that the initial conditions of my reflection (the exact details of the hypothetical, how it would affect my other beliefs and life decisions, even how I am feeling that day) may have large effects on the conclusions I reach.
Another way is to go with the “Near” intuitions, the intuitions generated by smaller numbers, more real-world/practical examples, etc., over the “Far” intuitions (the opposite of Near intuitions), on the justification that evolution has made us better suited to reason about things near us. This is a good approximation of what I have already been doing, so it has the emotional upside of agreeing with most of the intuitive reasoning I have done so far. But my moral intuition that suffering is bad was also produced by evolution, and I don’t believe that the source of someone’s values alone should affect whether or not they endorse them.
Finally, I can just accept that, in the same way that values are subjective (if one person values happiness and another disvalues happiness, neither is wrong; they just have different subjective preferences), strategies for reflecting on values are also neither right nor wrong but are determined by subjective preferences. I rejected objective morality too long ago to remember whether I felt any emotional loss at no longer being able to tell people who want to torture and kill babies that they are wrong, but I think I feel a similar feeling in not being able to tell someone who chooses to ignore the child in the pond (the “Shut Up and Divide” side) that they are wrong.
But I want my beliefs to match reality, not what I wish reality was like.