Thoughts on Antinatalism Part 2: Means to an End + Do No Harm

In this post I will be moving on to actually addressing arguments in favour of Strong Antinatalism. The source I will be using for Antinatalist arguments is the blog post The case for anti-natalism by Francois Tremblay. Today I will be addressing the arguments in part 1, and tomorrow I will discuss the arguments in part 2.

  • Argument 1) “Future human lives are used as a means to an end”

Tremblay argues that because future people are currently non-existent, meaning their values are also non-existent, the creation of new people is always because of the values of already existing people. Therefore, when adults create children they are using them as a means to their own ends. Tremblay believes that the ethical principle “don’t use people as a means to an end” is so strong that he does not need to defend it, only to argue that creating new people is in fact an example of it. He even goes so far as to say “for a proponent of natalism to simply declare that it is good to use people as means to an end would be argumentative suicide.”

I’m not sure why he believes this. The principle “Don’t use people as a means to an end” is a very deontological claim, the kind that a consequentialist will quite often reject. And it seems to me that the position “in some situations it is morally acceptable to use people as a means to an end” isn’t as controversial as Tremblay seems to think. Although of course the popularity of an argument can at best provide us with weak inferential evidence in support of that argument and does not determine its truth.

I value personal autonomy and self-determination, but these values can be outweighed by other values such as the prevention of suffering or the promotion of wellbeing. Lying to people is often cited as an example of using people as a means to an end, and I am fine in principle with lying to save lives, for example in the classic scenario “You are hiding Jewish people in your basement and Nazis come and ask you if you are; what do you tell them?” (although there are complications such as the existence of ethical injunctions). In a more hypothetical case: if there were a person whose blood contained a cure to a disease that was killing millions of people, but who did not want to donate blood for religious reasons, I personally would choose to remove a sample of their blood without their consent rather than let millions of people die (although that is clearly a less universally held position).

Of course, if Tremblay says “No, I value not using people as a means to an end more than preventing harm/promoting wellbeing or anything else” (which I don’t think he would, because from his writing he seems to dislike suffering), then I can’t convince him he is wrong, as it would seem to be a difference in terminal values, which can’t be resolved by argumentation or evidence. But assuming that the Antinatalist agrees that using people as a means to an end is acceptable in some circumstances, they have moved from Strong to Weak Antinatalism, and we just need to figure out the circumstances in which it is okay to create new people.

  • Argument 2) “Creating new human lives also creates more harm in this world.”

Tremblay argues that “Do no Harm” is a “fundamental ethical principle.” This again is a deontological argument, which I reject. Like most people, I value both the nonexistence of harm and the existence of happiness/wellbeing. If there were a drug that caused some large positive effect most of the time but caused a significant amount of harm to 0.0001 of the people who took it, I would be fine with people taking this drug, which would not be the case if I only cared about preventing harm.

Tremblay writes “The main point is that, while we do not have a duty to create pleasure, we do have a duty to not create harm.” But he does not really say why this is true. For me personally, my moral values don’t really include terms like duty or obligation. I want people to be happy, so I should take actions that make them happy, just as, if I wanted a chocolate cake, I would take actions that would help me get the chocolate cake.

Next Tremblay argues that viewing the creation of new people as a net positive, rather than as a negative or merely neutral, would lead to the conclusion that we should create as many new people as possible. He writes “The only logical outcome of this approach is the quiverful doctrine, that we should simply breed as much as we are possibly able, regardless of ethical or practical considerations.”

As I have previously stated, I don’t think creating new people right now is a good thing to do, but in some future society where material scarcity has been eliminated, would I advocate creating as many new people as possible? As long as it didn’t decrease the quality of life of already existing people, absolutely. If at some point in the future there are a trillion people living happy, productive, worthwhile lives, then I see no reason why doubling the population of that society (or increasing it by a factor of 10, or 100) would not be better. It is possible that at some point new lives would have negative value because they would be essentially replications of already existing people, once we reach the limits of how much variation in personality and experience can exist within the bounds of the people we want to create (this is a much more complex issue that I will post about in detail in the future), but this is no problem for me, because my position is “If creating new people in the current situation has positive value, then create new people; if it has negative value, then don’t.” So I would advocate the creation of new people until any additional people would mean a universe that I prefer less than a universe with no additional people.

I’ll be discussing the other Antinatalist arguments that Tremblay presents in the second part of his Case for anti-natalism in tomorrow’s post.

39 thoughts on “Thoughts on Antinatalism Part 2: Means to an End + Do No Harm”

  1. Francois Tremblay

    It does seem that you disagree with fundamental human values as they’ve been promoted by all societies. If that’s the case, it seems to me like you have some burden of showing why we should even care about what you have to say about ethics, no? I mean, that’s my reaction to this entry: I’d like to talk about it but what really is there to talk about?

    1. hopefullythishelps Post author

      (Sorry for the late response!)
It’s definitely possible that my values are different from those of the majority of societies, but it seems that, from my perspective at least, so are yours. And this may just be due to the Typical Mind Fallacy, but I think more people would agree with my values than with yours (not that that means you are wrong; I don’t think it’s possible to be wrong about terminal values). For example I think most people would agree with both “suffering is bad” and “happiness is good,” whereas it seems you would only agree with the first?

Also, the final post in this series that I am planning to write is a theoretical way for society to accept antinatalism without leading to the extinction of human life, and I would be very interested to hear whether or not it fits into your system of values.

      1. Francois Tremblay

        So you believe that most people would accept that people should be used as a means to an end?

        “For example I think most people would agree with both “suffering is bad” and “happiness is good,” whereas it seems you would only agree with the first?”
        No… I agree with both statements.

      2. hopefullythishelps Post author

        “So you believe that most people would accept that people should be used as a means to an end?”

I think it depends on how we phrase the question. I think I would be able to construct moral thought experiments where the majority of people would choose the option that would count as “using people as a means to an end,” just like I could construct moral thought experiments where the majority of people would not choose that option (just as people make different choices on trolley problems based on details that *should be* irrelevant). Most people haven’t thought about ethical issues to the extent that we have, so they are not used to finding contradictions between different values.

I think I value most of what you value; I just also value other things/value some things relatively more/less than you. And I think that most people are closer to my set-of-values-weighted-by-relative-importance than to yours.

        “No… I agree with both statements.”

        So do you think that the goodness of happiness can’t outweigh the badness of suffering if the amount of happiness is sufficiently larger than the amount of suffering?

      3. Francois Tremblay

        All I’m saying is that every society has systems of rules or laws which prohibit using each other as means to an end. The principle may not be explicitly formulated but it is always there.

        “So do you think that the goodness of happiness can’t outweigh the badness of suffering if the amount of happiness is sufficiently larger than the amount of suffering?”
        Why do you think happiness and suffering cancel each other out, that they are on some kind of scale? Benatar debunked that in his book, to begin with.

      4. hopefullythishelps Post author

I don’t necessarily think they are on a scale, I just value the existence of happiness as well as the non existence of suffering. In the same way, I value being healthy, but I also value doing things I enjoy, i.e. not exercising. But I am willing to trade some of my free time if it makes me healthier. In the same way I am willing to trade a certain amount of suffering in the world for a significantly large amount of happiness. If you value two things doesn’t it make sense that a sufficient amount of Thing A is more valuable than a much smaller quantity of Thing B, even if you value B more than A?

      5. Francois Tremblay

        “I don’t necessarily think they are on a scale, I just value the existence of happiness as well as the non existence of suffering.”
        That doesn’t tell us anything. Who DOESN’T value happiness and disvalue suffering? We may value very specific kinds of suffering (e.g. masochists), but in general your statement is always true.

        “In the same way I am willing to trade a certain amount of suffering in the world for a significantly large amount of happiness.”
        If it’s not YOUR suffering, then you have no business “trading” any of it. You have no reason or right to bring any amount of suffering into existence, no matter what the justification. That’s why we have laws against murder, theft, assault, rape, etc.

        “If you value two things doesn’t it make sense that a sufficient amount of Thing A is more valuable than a much smaller quantity of Thing B, even if you value B more than A?”
        Does it? I can’t say your description is precise enough to really let anyone make such an evaluation.

      6. hopefullythishelps Post author

        “Who DOESN’T value happiness and disvalue suffering?”

        As far as I know negative utilitarians only care about reducing suffering and don’t care at all about increasing happiness.

        “If it’s not YOUR suffering, then you have no business “trading” any of it. You have no reason or right to bring any amount of suffering into existence, no matter what the justification. That’s why we have laws against murder, theft, assault, rape, etc.”

Ah, okay I think I may have spotted the confusion between us. It seems that you value happiness and disvalue suffering (just like me) but that you also believe in some form of “rights.” So not only do you disvalue suffering but you also believe we have a moral obligation to prevent suffering which is COMPLETELY SEPARATE from your subjective feelings about suffering? Is that accurate?

        “Does it? I can’t say your description is precise enough to really let anyone make such an evaluation.”

I guess it depends if we are talking about having something or it just existing (I want to own both muffins and spaceships, but no amount of muffins could outweigh one spaceship). But let’s assume we are discussing existing rather than not existing. Can you name an A and a B such that you prefer one unit of A or one unit of B to exist rather than not exist, but you would prefer one unit of A to exist more than any number of units of B? (This only works for things we value terminally rather than instrumentally.)

      7. Francois Tremblay

        “As far as I know negative utilitarians only care about reducing suffering and don’t care at all about increasing happiness.”
        I’m talking as people, not as philosophers. I would think there are very few human beings out there who don’t want to be happy.

“Ah, okay I think I may have spotted the confusion between us. It seems that you value happiness and disvalue suffering (just like me) but that you also believe in some form of “rights.” So not only do you disvalue suffering but you also believe we have a moral obligation to prevent suffering which is COMPLETELY SEPARATE from your subjective feelings about suffering? Is that accurate?”
        Rights exist as a matter of logical necessity for beings that live in society. In the face of acts of violence, we have to be able to determine which acts are justified (e.g. self-defense) and which are not (e.g. assault). This is true whether suffering exists or not.

“I guess it depends if we are talking about having something or it just existing (I want to own both muffins and spaceships, but no amount of muffins could outweigh one spaceship). But let’s assume we are discussing existing rather than not existing. Can you name an A and a B such that you prefer one unit of A or one unit of B to exist rather than not exist, but you would prefer one unit of A to exist more than any number of units of B? (This only works for things we value terminally rather than instrumentally.)”
        I suppose one could make up scenarios where that can happen, but so what? Are you trying to prove something?

      8. hopefullythishelps Post author

        “I’m talking as people, not as philosophers. I would think there are very few human beings out there who don’t want to be happy. ”

        Ah I understand. I agree. But when I said I valued happiness I meant not just that I want to be happy but that I value other people being happy.

        “Rights exist as a matter of logical necessity for beings that live in society. In the face of acts of violence, we have to be able to determine which acts are justified (e.g. self-defense) and which are not (e.g. assault). This is true whether suffering exists or not. ”

        But there are different systems of rights, and there is no objective way to choose between them right? Like if a group of people played chess by a slightly different set of rules, they aren’t “wrong.” In the same way if two people support different systems of rights, neither is wrong, they are just choosing systems based on what they value. So you chose a system of rights that says “You have no reason or right to bring any amount of suffering into existence, no matter what the justification” because you disvalue suffering right?

        “I suppose one could make up scenarios where that can happen, but so what? Are you trying to prove something?”

        I’m trying to argue that if you value several different things it makes sense to trade one value for another, for example allowing a small amount of suffering to exist in exchange for a large amount of happiness.

      9. Francois Tremblay

        “Ah I understand. I agree. But when I said I valued happiness I meant not just that I want to be happy but that I value other people being happy.”
        It would be better for you, on the whole, if other people were happy than if they were not, even from a simple utilitarian perspective. Your livelihood depends on society as a whole, and people can help your interests better if they are themselves happy.

        “But there are different systems of rights, and there is no objective way to choose between them right? Like if a group of people played chess by a slightly different set of rules, they aren’t “wrong.””
        Chess is a game by definition. Various parts of life are a game, but life itself is not a game, it’s a biological process. You can’t make a metaphor there. If you mean social life, it’s still not a game.
        Beyond that, I’d say you’re simply wrong insofar as we do evaluate chess rulesets based on various criteria (I used to be part of the chess variant community myself). It is very clear that some variants are better than others, and we can explain (in terms of depth of play, clarity of positions, complexity of the rules, whether it’s solvable or not, and so on) why this is so.

        “In the same way if two people support different systems of rights, neither is wrong, they are just choosing systems based on what they value. So you chose a system of rights that says “You have no reason or right to bring any amount of suffering into existence, no matter what the justification” because you disvalue suffering right?”
        Of course I disvalue suffering. Everyone does. Most people just don’t think that breeding is a creation of suffering. They are wrong for the same reasons that the non-identity problem is wrong, which I’ve already written about at length.
Your problem is that you think chess and social life are both systems based on preferences and that there are no facts to examine. But this is clearly false. My statement that you have no right to bring suffering into existence is based on the “don’t use people as means to an end” principle. I don’t really want to get into a political discussion here because it is really beside the point and you are veering rather far afield to try to justify your belief that people can be used as means to some end, which is very silly. We’re not going to convince each other on this point.
        Anyway, my original point was that you have the burden of proof because every single society that has ever existed has sought to prevent the creation of suffering through rules, mores and/or laws to that effect.

        “I’m trying to argue that if you value several different things it makes sense to trade one value for another, for example allowing a small amount of suffering to exist in exchange for a large amount of happiness.”
        Again, you are treating suffering and happiness as things that cancel out. They aren’t. I refer you again to Better Never to Have Been, chapter 3, for an explanation of this premise.

      10. hopefullythishelps Post author

        “It would be better for you, on the whole, if other people were happy than if they were not, even from a simple utilitarian perspective. Your livelihood depends on society as a whole, and people can help your interests better if they are themselves happy. ”

        I agree but I still value other people’s happiness completely separately to the benefit it gives me. If there were a group of people that I can no longer interact with (example: they are moving away from me at the speed of light) I would still prefer them to be happy.

        “Beyond that, I’d say you’re simply wrong insofar as we do evaluate chess rulesets based on various criteria (I used to be part of the chess variant community myself). It is very clear that some variants are better than others, and we can explain (in terms of depth of play, clarity of positions, complexity of the rules, whether it’s solvable or not, and so on) why this is so.”

I agree that there are criteria that can be used to judge different rulesets, but I think those criteria are subjective and it’s impossible to say which is right or wrong. Keeping with the chess metaphor, if someone says “I like really complex rulesets, so I prefer to play ruleset X because it is very complicated” and another person says “well, I hate games ending in a draw, so I prefer ruleset Y because there is always a defined winner,” both players are choosing a ruleset based on their subjective preferences; neither is right or wrong.

In the same way, if two people have a disagreement over two mutually exclusive systems of rights, it seems that they are just evaluating those systems based on different criteria. There are many systems of rights that I dislike because they lead to bad consequences (Example: unequal distribution of resources). But I don’t think any of them are “wrong” (in the sense that I think if someone said “2+2=5” they are “wrong”).

        “Of course I disvalue suffering. Everyone does. Most people just don’t think that breeding is a creation of suffering. They are wrong for the same reasons that the non-identity problem is wrong, which I’ve already written about at length.”

        Again this could just be me falling for the Typical Mind Fallacy, but I feel like at least a lot of people would say “yeah I guess creating a new person creates some amount of suffering, but it also creates a lot of happiness so I think it’s worth it.” I may ask some people who haven’t heard of antinatalism before.

“Your problem is that you think chess and social life are both systems based on preferences and that there are no facts to examine. But this is clearly false. My statement that you have no right to bring suffering into existence is based on the “don’t use people as means to an end” principle. I don’t really want to get into a political discussion here because it is really beside the point and you are veering rather far afield to try to justify your belief that people can be used as means to some end, which is very silly. We’re not going to convince each other on this point.”

        But how did you choose that principle? There are many ethical principles that exist in our society. For example vegans follow the principle “don’t harm animals” because they value animal wellbeing. If I recall correctly you eat meat, so you choose not to follow this principle. What process did you use when choosing which principles to follow and which not to follow? Isn’t the decision based on your subjective preferences for which principle you like more?

Also, I remain open to the idea that ethical principles can be objectively chosen; I just don’t find the arguments that I have heard for that position very convincing so far. But if we can’t convince each other, hopefully we can help increase each other’s understanding of the opposing view.

        “Again, you are treating suffering and happiness as things that cancel out. They aren’t. I refer you again to Better Never to Have Been, chapter 3, for an explanation of this premise.”

I’m still not sure I am. If a friend and I each have an activity that we prefer doing when we spend time together, and we decide to do the two activities in proportion to how much we value them (let’s say she has a strong preference for A and I have a weak preference for B, so we compromise and do A 80% of the time and B 20%), do you view this as putting them on a scale and as saying that A and B cancel out?

Could you summarise the arguments in that book? Or do you have a blog post doing so that you can link me to?

      11. Francois Tremblay

        “I agree but I still value other people’s happiness completely separately to the benefit it gives me. If there were a group of people that I can no longer interact with (example: they are moving away from me at the speed of light) I would still prefer them to be happy.”
        Yea of course. I was just going for the simpler example first.

“I agree that there are criteria that can be used to judge different rulesets, but I think those criteria are subjective and it’s impossible to say which is right or wrong. Keeping with the chess metaphor, if someone says “I like really complex rulesets, so I prefer to play ruleset X because it is very complicated” and another person says “well, I hate games ending in a draw, so I prefer ruleset Y because there is always a defined winner,” both players are choosing a ruleset based on their subjective preferences; neither is right or wrong.”
        That’s not my point though. My point is that we can evaluate a ruleset through a set of defined criteria and determine that some are better than others. Then based on that, other people can pick what criteria are most important to them. Some people find “not having draws” more important, and as such may be willing to play slightly worse games just so they don’t experience draws. Just like how we can evaluate face symmetry or other criteria of beauty mathematically, but that doesn’t mean everyone will judge those criteria to be the most important. You have to distinguish factual evaluation and value judgments.

        So as a matter of ethics we can point to factual evaluations that we can make about the kind of rules that are harmonious with the co-existence of individuals (all with different value systems) in a society. For instance, we can point out that it’s a good thing to have laws against murder, theft, assault, rape, etc. because equality and freedom are commitments we need to make for the social system to work in everyone’s interest and stopping aggression is a logical consequence of those commitments. That being established, it may be that some people have varying ideas of what, for example, theft is, because they have different conceptions of ownership. So they will favor rulesets which implement different relations of ownership. But by doing so they are not debunking factual evaluations, they are selecting based on those evaluations.

“In the same way, if two people have a disagreement over two mutually exclusive systems of rights, it seems that they are just evaluating those systems based on different criteria. There are many systems of rights that I dislike because they lead to bad consequences (Example: unequal distribution of resources). But I don’t think any of them are “wrong” (in the sense that I think if someone said “2+2=5” they are “wrong”).”
        Then you need to explain what exactly you mean by “wrong.” Any system of ethics which does not start with equality as a commitment is necessarily “wrong” because equality is a necessary commitment for us all to benefit maximally from living in society. I mean, human societies have always operated from some socially constructed conception of “fairness,” because fairness is a basic human intuition. If you operate outside of that, then you’re not just “wrong,” you’re putting yourself outside of humanity, period.

        “Again this could just be me falling for the Typical Mind Fallacy, but I feel like at least a lot of people would say “yeah I guess creating a new person creates some amount of suffering, but it also creates a lot of happiness so I think it’s worth it.””
        That’s also true. So both the non-identity problem and the ice cream problem are relevant, yea.

        “But how did you choose that principle? There are many ethical principles that exist in our society. For example vegans follow the principle “don’t harm animals” because they value animal wellbeing. If I recall correctly you eat meat, so you choose not to follow this principle. What process did you use when choosing which principles to follow and which not to follow?”
        That depends on the principle. I chose the “people are not means to an end” principle because it accords with egalitarianism and freedom. The principle “don’t harm animals” is a generally correct principle but its application as veganism is incorrect, since vegan diets, like all diets that exist, harm animals (this is why I would formulate it as “minimizing harm to animals,” since as long as humans exist, other animals will be harmed as well). I try to follow a diet that inflicts less harm to animals, but I am guilty of promoting animal harm like every other human being on this planet.

        “Isn’t the decision based on your subjective preferences for which principle you like more?”
        Well, where’s the subjectivism in what I just said? Can you point it out? I try to base my ethics on facts and logic, not preference. If I am wrong, then I’d like to hear how.

“I’m still not sure I am. If a friend and I each have an activity that we prefer doing when we spend time together, and we decide to do the two activities in proportion to how much we value them (let’s say she has a strong preference for A and I have a weak preference for B, so we compromise and do A 80% of the time and B 20%), do you view this as putting them on a scale and as saying that A and B cancel out?”
        What exactly would be canceling out in this case?

“Could you summarise the arguments in that book? Or do you have a blog post doing so that you can link me to?”
        If you’re an antinatalist, you really should get the book. I would encourage you highly to do so. I have not written on this topic and probably should add it to my list of future topics.

      12. hopefullythishelps Post author

        “Then you need to explain what exactly you mean by “wrong.””

By “wrong” I mean that the meaning of the statement doesn’t correlate to reality. For example, the sentence “snow is blue” is wrong because in reality snow is not blue. For a long but intuitive explanation of this I recommend Eliezer Yudkowsky’s writing on the subject.

So we have reality and we have a model of reality in our heads. If the model correlates with reality we call the model truth. So in order for moral statements to be given a truth value, we need to define them in such a way that they are attempting to model something in reality. When I say “X is immoral” what I mean is “I disvalue X,” in the same way that when I say “Y is tasty” I mean “when I eat Y I personally have a subjective enjoyment of the taste.” Tastiness and morality are not facts about X and Y; they are facts about our subjective preferences and values.

Some people say that “X is immoral” means “X violates standard Z,” where Z could be, for example, “don’t use people as a means to an end.” But these people are just pushing things back one level. How did they decide on this meaning of “X is immoral” rather than the meaning “X violates standard Q”? They chose standard Z over standard Q based on their subjective preferences.

        “Well, where’s the subjectivism in what I just said? Can you point it out? I try to base my ethics on facts and logic, not preference. If I am wrong, then I’d like to hear how.”

        It can be hard to see but it is there. Take this quote: “Any system of ethics which does not start with equality as a commitment is necessarily “wrong” because equality is a necessary commitment for us all to benefit maximally from living in society.” Why should what is ethically right and wrong be derived by what is necessary for us all to benefit maximally from living in society? Utilitarians for example think what is right and wrong should be derived from what causes the most utility (which they define in different ways). From my perspective, both you and the utilitarian chose your two standards for what is ethically right and wrong based on your own personal subjective preferences.

        “If you’re an antinatalist, you really should get the book. I would encourage you highly to do so. I have not written on this topic and probably should add it to my list of future topics.”

        I will but I already have a very long reading list so I will be unlikely to get to it in the near future. I look forward to you making a post on the subject.

      13. Francois Tremblay

“By “wrong” I mean that the meaning of the statement doesn’t correlate to reality.”
        Okay, problems with correspondence theory aside, I thought we were talking about ethics “wrong,” not epistemic “wrong.” What does that use of wrong have to do with our subject?

        “So we have reality and we have a model of reality in our heads. If the model correlates with reality we call the model truth.”
        Well no, we have no way to check if anything correlates with reality. Correspondence theory only makes sense at all if you assume truth must be absolute. Metaphors We Live By (apart from being a great book on language and conceptualization) has a great discussion about this. I quote:
        ‘Any correspondence between what we say and some state of affairs in the world is always mediated by our understanding of the statement and of the state of affairs… [W]e are able to make true (or false) statements about the world because it is possible for our understanding of a statement to fit (or not fit) our understanding of the situation in which the statement is made.
        Since we understand situations and statements in terms of our conceptual system, truth for us is always relative to that conceptual system.’ (p. 180)

        *skip further statements based on correspondence theory*

        “How did they decide on this meaning of “X is immoral” rather than the meaning “X violates standard Q”? They chose standard Z over standard Q based on their subjective preferences.”
        No, it leads you to a further justification, which leads to a further justification, which eventually leads to our ethical intuitions, which are the starting point (such as our intuition of fairness).

        “It can be hard to see but it is there. Take this quote: “Any system of ethics which does not start with equality as a commitment is necessarily “wrong” because equality is a necessary commitment for us all to benefit maximally from living in society.” Why should what is ethically right and wrong be derived by what is necessary for us all to benefit maximally from living in society?”
        Because ethics is about the rules we should adopt as social agents. As such, the nature of society should be our starting point. What is society? Society is composed of people who come together to fulfill their needs in common. Okay, that’s a fact of human nature, and primate nature. Are you denying it? If not, then that’s the fact we can both start from.

        “Utilitarians for example think what is right and wrong should be derived from what causes the most utility (which they define in different ways). From my perspective, both you and the utilitarian chose your two standards for what is ethically right and wrong based on your own personal subjective preferences.”
        False. Utilitarianism is logically impossible because inter-subjective comparison is impossible. This is a fact of logic that has nothing to do with preferences.
        You claim I am being subjective, but you have to demonstrate it. So far you have not. Either way, it seems to me that you’re diverting from the topic in order to score points against me, but I am not your opponent.

        “I will but I already have a very long reading list so I will be unlikely to get to it in the near future. I look forward to you making a post on the subject.”
        Okay well let me give you a simple example to illustrate the topic. A person P has a car accident that leaves them paraplegic. P also wins ten million dollars (let’s say both happen at the same time, to simplify the example). Do both cancel out and leave P with the equivalent of neutral happiness? Or are the fact that P is now paraplegic and the fact that he has no more financial worries two different events that have completely different impacts on his psyche, one negative and one positive?

        Another example. A woman gets brutally raped by someone she thought was a friend. On the same day, she secures the high-paying job she tried to get for weeks. On the evening of that day, is her “hedonistic balance” currently at zero? Or is the concept of a hedonistic balance basically bullshit? I believe the latter.

      14. hopefullythishelps Post author

        Sorry for the misunderstanding, I don’t want to derail the conversation by getting into correspondence theory, so I’ll leave this for now.

        “No, it leads you to a further justification, which leads to a further justification, which eventually leads to our ethical intuitions, which are the starting point (such as our intuition of fairness).”

        If two people have different ethical intuitions, is one right and the other wrong (right and wrong in the epistemic sense, not in the moral good/bad sense)? If so, how do we decide who is right and who is wrong?

        “Either way, it seems to me that you’re diverting from the topic in order to score points against me, but I am not your opponent. ”

        I didn’t mean to try to score points against you; sorry if it seemed like that. (If you are referring to when I said your standards are based on your own personal subjective preferences, I didn’t mean to imply that was a bad thing. My moral standards are based on my own personal subjective preferences too.) I am enjoying this conversation and would like to thank you for taking the time to discuss these issues with me.

        “Okay well let me give you a simple example to illustrate the topic. A person P has a car accident that leaves them paraplegic. P also wins ten million dollars (let’s say both happen at the same time, to simplify the example). Do both cancel out and leave P with the equivalent of neutral happiness? Or are the fact that P is now paraplegic and the fact that he has no more financial worries two different events that have completely different impacts on his psyche, one negative and one positive?”

        Well, it seems that the two main ways of measuring happiness that I can think of (asking P how happy they are, and brain scans of the chemicals that seem to correlate with happiness) may or may not lead to the two events “cancelling out.” But when I originally wrote:

        “I’m trying to argue that if you value several different things it makes sense to trade one value for another, for example allowing a small amount of suffering to exist in exchange for a large amount of happiness.”

        I didn’t mean in the sense of a hedonic scale, I meant in the sense of what choices I would make. For example, if I was offered the choice between both receiving a very large amount of money (let’s say 10 billion dollars) and getting into a car crash and becoming paraplegic OR nothing happening, I would choose the money and car crash, donate 99% of the money to effective charities and then proceed to be very sad for the rest of my life (because even though I value the good that that money would do much more than I value not being paraplegic, the latter would have a much stronger and more direct effect on my emotional state). But obviously if the money was low I wouldn’t make the same choice. This is what I mean by making choices based on how much I value different things.

      15. Francois Tremblay

        “If two people have different ethical intuitions, is one right and the other wrong (right and wrong in the epistemic sense, not in the moral good/bad sense)? If so, how do we decide who is right and who is wrong?”
        The intuitions we are talking about here (fairness, sociability, desire to prevent aggression) are present in all known societies and in other primate species. They are basically human universals. If two people differ on these points, then it is most likely due to socialization and/or religious dogma.

        “I didn’t mean to try to score points against you; sorry if it seemed like that. (If you are referring to when I said your standards are based on your own personal subjective preferences, I didn’t mean to imply that was a bad thing. My moral standards are based on my own personal subjective preferences too.)”
        My opinion on that is that I think you are by far underestimating your own intelligence.

        ““I’m trying to argue that if you value several different things it makes sense to trade one value for another, for example allowing a small amount of suffering to exist in exchange for a large amount of happiness.”
        I didn’t mean in the sense of a hedonic scale, I meant in the sense of what choices I would make. For example, if I was offered the choice between both receiving a very large amount of money (let’s say 10 billion dollars) and getting into a car crash and becoming paraplegic OR nothing happening, I would choose the money and car crash, donate 99% of the money to effective charities and then proceed to be very sad for the rest of my life (because even though I value the good that that money would do much more than I value not being paraplegic, the latter would have a much stronger and more direct effect on my emotional state). But obviously if the money was low I wouldn’t make the same choice. This is what I mean by making choices based on how much I value different things.”
        Okay, but what exactly is the relevance of this to the topic?

      16. hopefullythishelps Post author

        “The intuitions we are talking about here (fairness, sociability, desire to prevent aggression) are present in all known societies and in other primate species. They are basically human universals. If two people differ on these points, then it is most likely due to socialization and/or religious dogma.”

        So, three questions: 1) What makes this intuition different from an (arguably) universal preference, such as all humans having a preference for happiness over sadness? 2) If the difference is that the intuition has content and expresses something about the world, why should we care about this intuition, considering many universal intuitions of this kind (like the existence of the supernatural, which I would argue arises naturally from the brain trying too hard to look for causation) turn out to be wrong? 3) If we encounter aliens that don’t share these moral intuitions but instead have another set of moral intuitions that we don’t have, is one group right and the other wrong? If so, how can we decide who is right and who is wrong?

        “My opinion on that is that I think you are by far underestimating your own intelligence.”

        Thank you for the compliment, but can you expand on what you mean by this?

        “Okay, but what exactly is the relevance of this to the topic?”

        Because originally I was talking about happiness and suffering as the two things I value and disvalue. In the same way that I will trade the creation of something of negative value (being paraplegic) for something of positive value (the good that can be done with the money), I will also trade the creation of some suffering if it means the creation of a lot more happiness.

      17. Francois Tremblay

        “So, three questions: 1) What makes this intuition different from an (arguably) universal preference, such as all humans having a preference for happiness over sadness?”
        Intuitions are universal preferences, and universal preferences are intuitions, unless you know of any other source for universal preferences (I know of some failed attempts to do so, but not of any other successful ones) .

        “2) If the difference is that the intuition has content and expresses something about the world, why should we care about this intuition, considering many universal intuitions of this kind (like the existence of the supernatural, which I would argue arises naturally from the brain trying too hard to look for causation) turn out to be wrong?”
        And yet belief in the supernatural is not universal. But here’s the bigger problem with what you said: basic logic itself is based on intuition (and if Chomsky is right, language is modeled by the brain as well). Babies do not, and obviously cannot, learn the basic laws of logic first and then make sense of their experiences; they are able to make sense of their experiences because the brain functions along what we might call logical lines. So you must rely on intuitions in order to figure out validity in the first place.
        The only criterion we can adopt is coherency, not validity, and coherency can be measured by pitting one intuition against other intuitions, e.g. in our modern world tribalism and fairness clash, and based on my other intuitions I reckon that tribalism should be held as secondary to fairness.

        “3) If we encounter aliens that don’t share these moral intuitions but instead have another set of moral intuitions that we don’t have, is one group right and the other wrong?”
        No.

        “Thank you for the compliment, but can you expand on what you mean by this?”
        It is my experience that very few people who claim to support moral subjectivity actually follow such a principle. And this is a very good thing.

        “Because originally I was talking about happiness and suffering as the two things I value and disvalue. In the same way that I will trade the creation of something of negative value (being paraplegic) for something of positive value (the good that can be done with the money), I will also trade the creation of some suffering if it means the creation of a lot more happiness.”
        Okay, but none of that proves that suffering and pleasure cancel out in any meaningful way. You’re equivocating between conscious, calculated trade and unconscious calculus.

      18. hopefullythishelps Post author

        “Intuitions are universal preferences, and universal preferences are intuitions, unless you know of any other source for universal preferences (I know of some failed attempts to do so, but not of any other successful ones) .”

        Ah, okay. I just wanted to clarify that you weren’t talking about universal content-based intuitions like people’s intuition of Aristotelian physics.

        Well, if you are asking me for the historical cause of the apparent universality of certain preferences in humans, then I would say it is clearly due to evolution causing us all to have almost identical genes.

        “And yet belief in the supernatural is not universal. But here’s the bigger problem with what you said: basic logic itself is based on intuition (and if Chomsky is right, language is modelled by the brain as well). Babies do not, and obviously cannot, learn the basic laws of logic first and then make sense of their experiences; they are able to make sense of their experiences because the brain functions along what we might call logical lines. So you must rely on intuitions in order to figure out validity in the first place.
        The only criterion we can adopt is coherency, not validity, and coherency can be measured by pitting one intuition against other intuitions, e.g. in our modern world tribalism and fairness clash, and based on my other intuitions I reckon that tribalism should be held as secondary to fairness.”

        These are intuitions about the universe, not preferences for how we want the universe to be. They can be tested using sensory data: if the universe didn’t behave logically, it seems like our experiences would be different from what they are. Preferences can’t be tested because they aren’t claims about the universe; they are desires for how we want the universe to be.

        “No.”

        But if neither our universal preferences nor the aliens’ universal preferences can be shown to be objectively better than each other, doesn’t that mean they are subjective?

        “It is my experience that very few people who claim to support moral subjectivity actually follow such a principle. And this is a very good thing.”

        But if my values are altruistic (which they mostly are) how is me pursuing my subjective preferences a bad thing from your perspective?

        “Okay, but none of that proves that suffering and pleasure cancel out in any meaningful way. You’re equivocating between conscious, calculated trade and unconscious calculus.”

        Can you expand on what you mean by this? They “cancel out” in the sense that I can trade one for the other; that is meaningful from my perspective because it affects how I behave.

      19. Francois Tremblay

        “Ah, okay. I just wanted to clarify that you weren’t talking about universal content-based intuitions like people’s intuition of Aristotelian physics.”
        I don’t know if that’s evolutionary intuition as much as mental metaphor based on experience. I think you have to be careful to separate the two. Our concept of particles as billiard balls may just be an artefact of the fact that, in our daily life, we only deal with concrete entities operating in Newtonian ways. The trouble is that mental metaphors and models are usually unexamined and therefore remain invisible.

        “Well, if you are asking me for the historical cause of the apparent universality of certain preferences in humans, then I would say it is clearly due to evolution causing us all to have almost identical genes.”
        Yes, that’s fine.

        “These are intuitions about the universe, not preferences for how we want the universe to be.”
        Ethical intuitions and logical intuitions are both the same kind of thing. There’s no difference in our brains.

        “They can be tested using sensory data: if the universe didn’t behave logically, it seems like our experiences would be different from what they are. Preferences can’t be tested because they aren’t claims about the universe; they are desires for how we want the universe to be.”
        But you need the ability for logical thinking first in order to “test” anything. I already explained that in my previous comment. Your reasoning is circular because you want to test intuitions on the basis of what ultimately reduces to intuitions.

        “But if neither our universal preferences nor the aliens’ universal preferences can be shown to be objectively better than each other, doesn’t that mean they are subjective?”
        “Objectively better” on what standard? No matter who’s doing the evaluating, they’re going to be either human or alien, and therefore do the evaluation from that point of view. You can’t escape the fact that *thinking* is an action undertaken by someone. That doesn’t make it “subjective.” “Objective” and “subjective” knowledge are both constructed by individuals.
        If you bring in an arbiter, they will evaluate both cultures from their own perspective as well, which does not solve anything.

        “But if my values are altruistic (which they mostly are) how is me pursuing my subjective preferences a bad thing from your perspective?”
        Because I’d rather you be wrong for the right reasons than right for the wrong reasons. The ability to reason and criticize is more important than randomly stumbling on the right answers. The former is self-correcting, the latter is not.
        I used to be an Objectivist, which you’d think would be a very bad thing. But in this process I developed my critical faculties more and eventually realized the fallacies in the ideology. Now I’ve rejected vulgar individualism and uphold egalitarianism, but most importantly, I understand *why* one should do so.

        “Can you expand on what you mean by this? They “cancel out” in the sense that I can trade one for the other; that is meaningful from my perspective because it affects how I behave.”
        Okay, but that has no relevance to the issue of antinatalism in general. When we say it doesn’t cancel out, we mean that you can’t assume there is one hedonistic level which is the sum total of all positives and negatives.

      20. hopefullythishelps Post author

        By Aristotelian physics I was referring to //en.wikipedia.org/wiki/Na%C3%AFve_physics

        “Ethical intuitions and logical intuitions are both the same kind of thing. There’s no difference in our brains. ”

        They are the same kind of thing in the sense that they are both things in our brains, but I think it’s useful to draw a conceptual distinction between them. Do you object to making a distinction between beliefs and preferences?

        ““Objectively better” on what standard? No matter who’s doing the evaluating, they’re going to be either human or alien, and therefore do the evaluation from that point of view. You can’t escape the fact that *thinking* is an action undertaken by someone. That doesn’t make it “subjective.” “Objective” and “subjective” knowledge are both constructed by individuals.
        If you bring in an arbiter, they will evaluate both cultures from their own perspective as well, which does not solve anything.”

        Exactly! But if we disagreed with the aliens over a scientific matter, it seems like we could decide who was right by comparing evidence and doing experiments. If we disagreed about the taste of a food, we couldn’t convince them they were wrong, because you can’t be wrong about a personal preference, only a belief. This is why I think that moral intuitions are much more similar to subjective preferences than to empirically testable beliefs.

        “Because I’d rather you be wrong for the right reasons than right for the wrong reasons. The ability to reason and criticize is more important than randomly stumbling on the right answers. The former is self-correcting, the latter is not.”

        Fair enough, I don’t think I agree with this in every possible situation but definitely is a good point.

        “Okay, but that has no relevance to the issue of antinatalism in general. When we say it doesn’t cancel out, we mean that you can’t assume there is one hedonistic level which is the sum total of all positives and negatives.”

        So my position is that I personally value happiness and disvalue suffering, and if I was given the choice to create a new person and I thought it likely that they would experience a lot more happiness than suffering, I would make the choice to create them (as long as that choice doesn’t have other negative consequences, which in our current situation it would). This doesn’t require me to propose a hedonic scale where happiness and suffering cancel out.

  2. Francois Tremblay

    “By Aristotelian physics I was referring to things”
    Okay?

    “They are the same kind of thing in the sense that they are both things in our brains, but I think it’s useful to draw a conceptual distinction between them.”
    If we’re listing the kinds of intuitions, sure. Otherwise, there’s not much point.

    “Do you object to making a distinction between beliefs and preferences?”
    Intuitions are neither beliefs nor preferences. They are not reducible to any other mental construct.

    “Exactly! But if we disagreed with the aliens over a scientific matter, it seems like we could decide who was right by comparing evidence and doing experiments.”
    You’re not getting it, are you. You can only “decide who’s right” by adopting someone’s epistemic standards, and those are social constructs. By doing that, you are therefore deciding which culture is right from the get-go, again. You set up an impossible scenario, stop trying to wriggle out of it.

    “This is why I think that moral intuitions are much more similar to subjective preferences than to empirically testable beliefs.”
    Again, intuitions are not reducible to preferences or beliefs. They are an entirely different category of mental constructs. E.g. intuitions are innate, preferences and beliefs are not.

    “So my position is that I personally value happiness and disvalue suffering, and if I was given the choice to create a new person and I thought it likely that they would experience a lot more happiness than suffering, I would make the choice to create them (as long as that choice doesn’t have other negative consequences, which in our current situation it would). This doesn’t require me to propose a hedonic scale where happiness and suffering cancel out.”
    First of all, you can’t predict the future, so this is just delusional gibberish. But apart from that, I don’t care what your values are. I am not only concerned with facts.

    Reply
    1. hopefullythishelps Post author

      “Okay?”

      Sorry, see my edited comment.

      “You’re not getting it, are you.”

      In my defence, you seem to believe in objective morality but not objective truth, which is a fairly rare combination. And you are both “only concerned with facts” and think you can’t predict the future. I think it is understandable that I don’t fully understand your position.

      “You’re not getting it, are you. You can only “decide who’s right” by adopting someone’s epistemic standards, and those are social constructs. By doing that, you are therefore deciding which culture is right from the get-go, again. You set up an impossible scenario, stop trying to wriggle out of it.”

      Would you agree that some agents with specific epistemic standards would systematically achieve their goals more effectively than agents with other epistemic standards?

      “Again, intuitions are not reducible to preferences or beliefs. They are an entirely different category of mental constructs. E.g. intuitions are innate, preferences and beliefs are not. ”

      Is there any reason I should care about what my intuitions are then, if they contradict both my beliefs and preferences?

      “First of all, you can’t predict the future, so this is just delusional gibberish. ”

      Surely you mean “You can’t predict the future with perfect accuracy”? I find it hard to believe you actually think there is no way of predicting the future better than chance. But if you mean the perfect-accuracy version, I completely agree, but I don’t see how that stops me from trying to make the world a better place. If I give medicine to a sick person I can’t predict with perfect accuracy that they will get better, but I should still give them the medicine.

      “But apart from that, I don’t care what your values are.”

      And you don’t have to. I’m just explaining why I think the way I do.

      “I am not only concerned with facts.”

      You didn’t mean to put the “not” there right? Otherwise sorry for misquoting you above.

      Reply
      1. Francois Tremblay

        “By Aristotelian physics I was referring to //en.wikipedia.org/wiki/Na%C3%AFve_physics”
        I realized that.

        “In my defence, you seem to believe in objective morality but not objective truth, which is a fairly rare combination.”
        I didn’t say I believe in “objective morality.” I am an intuitionist. Usually “objective morality” is used to designate some version of naturalism, but I believe the is-ought problem is logically unsolvable.
        If you mean that I believe in knowledge about morality, then I also believe in other forms of knowledge. Moral truths are just another species of truth.

        “Would you agree that some agents with specific epistemic standards would systematically achieve their goals more effectively than agents with other epistemic standards?”
        Hypothetically you can posit any scenario, sure, but in practice societies adopt epistemic standards (and the kinds of knowledge they’re concerned with) which are adapted to their environment and social roles.

        “Is there any reason I should care about what my intuitions are then, if they contradict both my beliefs and preferences?”
        The question is basically nonsensical; you cannot not care about them, because they are innate and an integral part of who you are. You may not be happy about them, but you can no more “choose” to care about them than you can “choose” for your blood to go through your heart.

        “Surely you mean “You can’t predict the future with perfect accuracy”? I find it hard to believe you actually think there is no way of predicting the future better than chance.”
        We are talking specifically about the future of a specific human life that does not even exist yet. At that level of precision, no, you cannot predict the future at all. Prediction only works on large numbers or on simple systems. You can’t predict the weather tomorrow, but you can predict that temperatures are going to go down in winter.

        “But if you mean the perfect accuracy version I completely agree but I don’t see how that stops me from trying to make the world a better place. If I give medicine to a sick person I can’t predict with perfect accuracy that they will get better, but I should still give them the medicine.”
        That’s because it’s been tested on a large number of people (again, large numbers). It could still make a specific person just as bad or worse, which is why we often have multiple alternatives of medication for any specific treatment.

        “You didn’t mean to put the “not” there right? Otherwise sorry for misquoting you above.”
        Yes, that was an error. I really couldn’t care less what your values are. Antinatalism is not concerned with your values, it is an issue of fact.

      2. hopefullythishelps Post author

        “I didn’t say I believe in “objective morality.” I am an intuitionist. Usually “objective morality” is used to designate some version of naturalism, but I believe the is-ought problem is logically unsolvable.
        If you mean that I believe in knowledge about morality, then I also believe in other forms of knowledge. Moral truths are just another species of truth.”

        So you believe that moral facts are “is” statements rather than “ought” statements? So moral facts come in the form of “X is the case,” not “you should do X”? So are the intuitions also in the same form, as in you have an intuition that “X is the case”?

        “We are talking specifically about the future of a specific human life that does not even exist yet. At that level of precision, no, you cannot predict the future at all. Prediction only works on large numbers or on simple systems. You can’t predict the weather tomorrow, but you can predict that temperatures are going to go down in winter.”

        But we can predict the weather tomorrow. Obviously not with perfect accuracy, but using science we can predict what the weather will be tomorrow with more accuracy than random chance or someone guessing. Why can’t I see that most people who exist are happy (which isn’t necessarily true now but could conceivably be true at some point in the future) and so predict (with imperfect accuracy) that a new person I create will also probably be happy?

        “The question is basically nonsensical; you cannot not care about them, because they are innate and an integral part of who you are. You may not be happy about them, but you can no more “choose” to care about them than you can “choose” for your blood to go through your heart.”

        “That’s because it’s been tested on a large number of people (again, large numbers). It could still make a specific person just as bad or worse, which is why we often have multiple alternatives of medication for any specific treatment.”

        It COULD, but it is more likely to help than not, because it has helped more people than it has harmed. So because of this we can predict, with imperfect accuracy, that it will help another person we give it to.

  3. Francois Tremblay

    “So you believe that moral facts are “is” statements rather than “ought” statements? So moral facts come in the form of “X is the case,” not “you should do X”?”
    You mean moral truths? Facts and truths are two completely different things. But yes, I believe that moral truths are ought statements, and are not reducible to is statements. But in that regard they are no different than any other form of truth: logical truths are specific kinds of statements that are not reducible to non-logical statements, mathematical truths are specific kinds of statements that are not reducible to non-mathematical statements, esthetic truths are specific kinds of statements that are not reducible to non-esthetic statements, etc.

    “But we can predict the weather tomorrow. Obviously not with perfect accuracy, but using science we can predict what the weather will be tomorrow with more accuracy than random chance or someone guessing.”
    You can roughly give probabilities over what the weather might be over a large enough area, sure.

    “Why can’t I see that most people who exist are happy (which isn’t necessarily true now but could conceivably be true at some point in the future) and so predict (with non-perfect accuracy) that a new person I create will also probably be happy?”
    Again, we are dealing in probabilities, not facts. There are innumerable inherent risks in all human lives.
    As long as you run the risk of creating harm to another human being, you should not do it. Would you force your friend to play Russian Roulette with you? Now imagine playing Russian Roulette with millions of guns at the same time.

    “It COULD, but it is more likely to help than not because it has helped more people than it has harmed. So because of this we can predict with non-perfect accuracy that it will help another person we give it to.”
    I never disputed that. So what? It’s not enough to say you think you might probably not harm someone else. You better be damn sure you are not going to harm anyone through your actions. This is just common sense.

    1. hopefullythishelps Post author

      “You mean moral truths? Facts and truths are two completely different things.”

      What is the difference?

      “But yes, I believe that moral truths are ought statements, and are not reducible to is statements.”

      So assuming ought statements exist, how would we know about them? I can see how evolution would cause us to have accurate beliefs about IS statements, because it seems obvious that an agent with accurate beliefs would be better at achieving its goals (which evolution also gave us) but why would we evolve a certain set of intuitions that can detect these ought statement moral truths?

      “You can roughly give probabilities over what the weather might be over a large enough area, sure.”

      Exactly. So would you say it is possible (maybe not now but theoretically possible) to give probabilities over how much happiness and suffering a person might experience in their lifetime?

      “Again, we are dealing in probabilities, not facts. There are innumerable inherent risks in all human lives.
      As long as you run the risk of creating harm to another human being, you should not do it. Would you force your friend to play Russian Roulette with you? Now imagine playing Russian Roulette with millions of guns at the same time.”

      I let my friends play Russian roulette every time I let them drive a car, but my friends and I both agree that the large likelihood of the car trip being a good thing (they get to their destination faster) outweighs the small chance of it being a bad thing (they crash and die). If I followed the principle “don’t do anything with even the smallest chance of harming someone” I would never leave my house, which would be bad for me but would also dramatically decrease the amount of good I am able to do in the world.

      “I never disputed that. So what? It’s not enough to say you think you might probably not harm someone else. You better be damn sure you are not going to harm anyone through your actions. This is just common sense.”

      So in my epistemic framework I can never be absolutely certain of anything; I can never assign probability 0 or 1 to any event. But I can still make decisions based on what will probably be the right thing to do.

      1. Francois Tremblay

        “What is the difference?”
        A fact is an actual object or event. A truth is a proposition made by an individual.

        “So assuming ought statements exist, how would we know about them? I can see how evolution would cause us to have accurate beliefs about IS statements, because it seems obvious that an agent with accurate beliefs would be better at achieving its goals (which evolution also gave us) but why would we evolve a certain set of intuitions that can detect these ought statement moral truths?”
        I’m not sure what you’re asking. There is no “why” to evolution. If you’re asking how they evolved, according to evolutionary intuitionism morality evolved as a by-product of long-term planning.

        “Exactly. So would you say it is possible (maybe not now but theoretically possible) to give probabilities over how much happiness and suffering a person might experience in their lifetime?”
        Yes, sure.

        “I let my friends play Russian roulette every time I let them drive a car, but my friends and I both agree that the large likelihood of the car trip being a good thing (they get to their destination faster) outweighs the small chance of it being a bad thing (they crash and die).”
        Wrong. You don’t make that decision for them. They make that decision for themselves. That’s not equivalent at all and you know it.

        “If I followed the principle “don’t do anything with even the smallest chance of harming someone””
        Again, not really the point. Of course everything we do has a chance of harming someone else. But we don’t consciously and willingly expose others to harm. You are failing to make the difference between driving a car and driving a car with TNT and a campfire in the trunk.

        “So in my epistemic framework I can never be absolutely certain of anything; I can never assign probability 0 or 1 to any event. But I can still make decisions based on what will probably be the right thing to do.”
        And harming innocent people is never the right thing to do.

      2. hopefullythishelps Post author

        “A fact is an actual object or event. A truth is a proposition made by an individual.”

        “I’m not sure what you’re asking. There is no “why” to evolution. If you’re asking how they evolved, according to evolutionary intuitionism morality evolved as a by-product of long-term planning.”

        Yes, sorry if I anthropomorphised evolution too much; what I meant by “why” was “what kind of selection pressure could generate organisms that have intuitions about ought statements?” Is there any further reading you could link me to on how it could have arisen as a by-product of long-term planning?

        “Wrong. You don’t make that decision for them. They make that decision for themselves. That’s not equivalent at all and you know it.”

        But I choose not to use force to attempt to stop them, whereas I would use force to stop them if I did think the risk of them harming themselves (and others) was too great, for example if they were drunk.

        “You are failing to make the difference between driving a car and driving a car with TNT and a campfire in the trunk”

        But the difference is quantitative, not qualitative. And I would argue that creating a new person (under the circumstances in which I would want to do so) is closer to driving a car than to driving a car with TNT etc.

        “And harming innocent people is never the right thing to do.”

        So let’s say I am buying a child a toy, and there is an X% chance that the toy is malfunctioning and will harm them. If the toy harms them then I have harmed an innocent, which is wrong; if it doesn’t then I have presumably improved a child’s life and made them happier, which I view as a good thing. So my argument is that if X is a very low number like 0.001% then I should give the child the toy, but if X is high like 50% then I shouldn’t. Do you agree?
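        The toy example is just an expected-value threshold. Here is a minimal sketch of that reasoning; the benefit and harm magnitudes are arbitrary, made-up utility numbers chosen for illustration, not anything from the discussion:

        ```python
        # Expected value of giving the toy:
        #   benefit * P(no malfunction) + harm * P(malfunction)
        # Give the toy only if this comes out positive.
        def should_give_toy(p_harm, benefit=1.0, harm=-100.0):
            # benefit and harm are made-up utilities for illustration
            return benefit * (1 - p_harm) + harm * p_harm > 0

        print(should_give_toy(0.00001))  # True: a 0.001% malfunction risk is worth it
        print(should_give_toy(0.5))      # False: a 50% risk is not
        ```

        The same rule reproduces both of my verdicts: the only thing that changes between the two cases is the number X, not the kind of reasoning.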

        Also, just to let you know, I have now completed my series on Antinatalism, although I am really enjoying our discussion and hope it continues 🙂

      3. Francois Tremblay

        “Yes, sorry if I anthropomorphised evolution too much; what I meant by “why” was “what kind of selection pressure could generate organisms that have intuitions about ought statements?” Is there any further reading you could link me to on how it could have arisen as a by-product of long-term planning?”
        I got the idea from Evolutionary Intuitionism, by Brian Zamulinski, which details his specific intuitionist explanation of morality. He bases it on evolutionary theory, but argues explicitly against morality being an adaptation (but rather the side-product of an adaptation).
        His explanation is very convincing to me, esp. since I have looked at adaptationism (through evolutionary psychology) and, as an advocate of social constructionism, have already rejected that avenue. On the other hand, I have no doubt that evolution was involved in the process. So Zamulinski’s explanation makes the most sense to me.

        “But I choose not to use force to attempt to stop them, whereas I would use force to stop them if I did think the risk of them harming themselves (and others) was too great, for example if they were drunk.”
        The analogy with forcing people to play Russian Roulette still doesn’t work. In fact, as you just said, you’d force someone to NOT take a risk, not force someone TO take a risk, which is the whole point of the analogy.

        The properties of the analogy that are important are:
        * There is a fatal risk.
        * One person is forcing another person to be subjected to that risk.

        “But the difference is a quantitative not qualitative.”
        Well no, my point is that it is a qualitative difference in that you are intentionally creating the risk of harm for other people, as opposed to not intending any harm to anyone.

        “And i would argue that creating a new person (under the circumstances that I would want to do so) is closer to driving than driving a car with TNT etc.”
        How so? You keep making claims about the analogy but you’re not backing them up.

        “So let’s say I am buying a child a toy, and there is an X% chance that the toy is malfunctioning and will harm them. If the toy harms them then I have harmed an innocent, which is wrong; if it doesn’t then I have presumably improved a child’s life and made them happier, which I view as a good thing. So my argument is that if X is a very low number like 0.001% then I should give the child the toy, but if X is high like 50% then I shouldn’t. Do you agree?”
        I don’t really want to get into issues of parenting because I don’t think we’re going to agree on that and it’s just going to waste time away from the antinatalist argument. But one of the problems in your scenario here is that you’re not quantifying the harm. What exactly are we talking about? “It could make it trip” versus “the lead-based paint could poison it for life”?

        “Also, just to let you know, I have now completed my series on Antinatalism, although I am really enjoying our discussion and hope it continues”
        That’s fine.

      4. hopefullythishelps Post author

        Ok so let’s take a step back to abstraction to make this clearer.

        In a simplified model (where there are only two possible outcomes from an action) there are three things that seem to matter (to me at least): how good the good outcome is, how bad the bad outcome is, and the relative probability of these two outcomes happening. The basic formula I would propose for figuring out whether an action is good (or to put it another way, the model I use to decide whether or not to do an action) is:

        (Goodness of Good Outcome)*(Probability of Good Outcome) + (Badness of bad outcome)*(Probability of Bad Outcome) = Expected Value of the Action.

        By goodness and badness of the outcomes I am highlighting that some good outcomes are better than other good outcomes (and the same with bad) and that this difference matters. After making this calculation for all actions I choose the one with the highest expected value.

        Now, for someone who works with the above model, can you see how it would seem that there is only a quantitative difference between Russian roulette and driving a car?

        Russian roulette:

        (Fun?/respect?/not sure what?)*(5/6) + (Death)*(1/6) = very negative expected value

        Driving a car (with made-up numbers)

        (Getting to location conveniently)*(9,999/10,000) + (Death)*(1/10,000) = very positive expected value

        So, as we can see, due to the relative difference in probabilities and outcomes the expected value is different. But they are still the same basic type of thing (an action with a chance of a good outcome and a chance of a bad outcome).

        So in the case of creating new people, I believe there are possible future circumstances that would look like:

        (Person has a happy life)*(999,999/1,000,000)+(Person has a life full of suffering)*(1/1,000,000)= Positive value

        And if this were the probability, I would be OK with creating a new person (assuming there were no other negative effects).

        Obviously the above model is overly simplistic, but hopefully it gets the point across.
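        The two calculations above can also be run directly. The probabilities are the made-up numbers from the text; the utilities (and in particular the large negative value assigned to death) are arbitrary placeholders chosen only so the signs come out as described:

        ```python
        # EV = (goodness of good outcome)*P(good) + (badness of bad outcome)*P(bad)
        def expected_value(good, p_good, bad, p_bad):
            return good * p_good + bad * p_bad

        DEATH = -10_000  # arbitrary large negative utility, for illustration only

        # Russian roulette: small payoff, 1-in-6 chance of death
        roulette = expected_value(1, 5 / 6, DEATH, 1 / 6)

        # Driving a car: useful trip, 1-in-10,000 chance of a fatal crash
        driving = expected_value(10, 9_999 / 10_000, DEATH, 1 / 10_000)

        print(roulette)  # very negative
        print(driving)   # positive: same formula, different numbers
        ```

        The point of the sketch is that both actions go through the exact same formula; only the probabilities and magnitudes differ, which is what I mean by a quantitative rather than qualitative difference.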

      5. Francois Tremblay

        “So in the case of creating new people, i believe there is possible future circumstances that would look like:

        (Person has a happy life)*(999,999/1,000,000)+(Person has a life full of suffering)*(1/1,000,000)= Positive value”

        According to this page
        http://www.nationmaster.com/country-info/stats/Lifestyle/Happiness-net
        You are way, way off.
        Besides, no one has a “happy life” or a “life full of suffering.” All lives, even the best and worse, have pleasure and suffering. Unless you address that, there’s no point in continuing this line of reasoning.

        I already disagree with the rest, but you already know why. You haven’t really addressed anything, merely repeated your position. That’s fine, but I already get it.

      6. hopefullythishelps Post author

        “According to this page
        http://www.nationmaster.com/country-info/stats/Lifestyle/Happiness-net
        You are way, way off.”

        That probability wasn’t meant to be the current probability, but the probability at a future point in time where I would be happy to create new people. Remember, I don’t support creating people in the current state of affairs.

        “Besides, no one has a “happy life” or a “life full of suffering.” All lives, even the best and worse, have pleasure and suffering. Unless you address that, there’s no point in continuing this line of reasoning.”

        I know; this was only meant to be a simplified model to demonstrate that the difference between different risks is quantitative, not qualitative. We could change “happy life” to “life that I think is overall worth living” and “life full of suffering” to “life I think is overall not worth living.” Or we could construct a probability distribution over all possible combinations of happiness and suffering. But I don’t think that is necessary to make the point I was trying to make.

        “I already disagree with the rest, but you already know why. You haven’t really addressed anything, merely repeated your position. That’s fine, but I already get it.”

        So you still think there is a qualitative difference between two different risks? Can you explain to me why you think that, perhaps by explaining your alternative model?

        And just to make sure we are meaning the same things by the words we use: you would agree that the difference between being short and being tall is quantitative, not qualitative, right?

      7. Francois Tremblay

        “That probability wasn’t meant to be the current probability, but the probability at a future point in time where I would be happy to create new people. Remember, I don’t support creating people in the current state of affairs.”
        You said:
        (Person has a happy life)*(999,999/1,000,000)+(Person has a life full of suffering)*(1/1,000,000)= Positive value
        I mean what is that crap, what ass did you pull these numbers from.
        If you’re gonna establish probabilities that have anything to do with reality, then use the numbers I’ve provided.

        “I know; this was only meant to be a simplified model to demonstrate that the difference between different risks is quantitative, not qualitative.”
        Well, it’s too simple, and the simplicity again hides the fact that it’s not an issue of pleasure and suffering canceling out. We already talked about that point.

        “So you still think there is a qualitative difference between two different risks? Can you explain to me why you think that, perhaps by explaining your alternative model?”
        I don’t have a model. All I have been telling you is that there is a difference between deliberately exposing other people to risk and going about one’s daily life. Apparently you are unable to make the difference between the two, or you want to take it absolutely and completely literally.

        “And just to make sure we are meaning the same things by the words we use: you would agree that the difference between being short and being tall is quantitative, not qualitative, right?”
        Insofar as height is concerned, yes. But “short” and “tall” are also to no small extent social constructs, because they are used to classify and discriminate against individuals as well.

      8. hopefullythishelps Post author

        “You said:
        (Person has a happy life)*(999,999/1,000,000)+(Person has a life full of suffering)*(1/1,000,000)= Positive value
        I mean what is that crap, what ass did you pull these numbers from.
        If you’re gonna establish probabilities that have anything to do with reality, then use the numbers I’ve provided.”

        I think you misunderstood what I said. I don’t think we should create new people now, but I think there may be a point in the future where we can create new people ethically. So the numbers you quoted were the hypothetical probabilities at that hypothetical future point. I didn’t use the numbers you gave me because I agree with you that we shouldn’t create new people now.

        “Well, it’s too simple, and the simplicity again hides the fact that it’s not an issue of pleasure and suffering canceling out. We already talked about that point.”

        Too simple to show that Russian roulette and driving a car are only quantitatively different in terms of risk, not qualitatively different? I disagree, but OK:

        Let L1, L2, L3 … Ln be every possible life that can be created by action A. U(Ln) is the value of that life if it existed, calculated from how much happiness and how much suffering the life contains. P(Ln) is the probability that action A will create that life. So the decision algorithm I use is:

        Do action A if: P(L1)*U(L1) + … + P(Ln)*U(Ln) = positive value

        We now have a model that can handle the fact that there are many, many different possible combinations of suffering and happiness.
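        That decision rule is short enough to state in code. The distribution over lives here is entirely hypothetical; the probabilities and utilities are made-up numbers for illustration:

        ```python
        # Do action A iff the sum over all possible lives of P(L_i) * U(L_i) is positive.
        def should_do(outcomes):
            """outcomes: list of (probability, utility) pairs over all lives A could create."""
            return sum(p * u for p, u in outcomes) > 0

        # Hypothetical distribution (probabilities must sum to 1)
        lives = [
            (0.90, 50),    # mostly happy life
            (0.09, 10),    # mixed life, still worth living
            (0.01, -200),  # life dominated by suffering
        ]
        print(should_do(lives))  # True: 45 + 0.9 - 2 = 43.9 > 0
        ```

        Shift enough probability mass onto the suffering-dominated lives and the same rule says no, which is exactly why I oppose creating people under current conditions while leaving the door open for better ones.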

        “I don’t have a model. All I have been telling you is that there is a difference between deliberately exposing other people to risk and going about one’s daily life. Apparently you are unable to make the difference between the two, or you want to take it absolutely and completely literally. ”

        Yes, and I have shown you why I disagree with you and outlined my views in detail; you responded by just saying that you disagreed, without actually presenting arguments against my position.

        “Insofar as height is concerned, yes. But “short” and “tall” are also to no small extent social constructs, because they are used to classify and discriminate against individuals as well.”

        I agree. With height we have all heights represented on a scale. Then people project categories like short and tall onto arbitrary points on that scale (and then sadly discriminate based on those categories). What I am saying is that risk works the same way. Every action can be placed on a scale (or on more than one dimension, if you want to keep things like the chance of the risk occurring and the harm it would cause separate rather than compressed into one unit of expected value), and then society projects arbitrary categories onto the scale like “risky” and “safe.” If you think this analogy doesn’t work, can you explain why rather than just saying you disagree with it?

      9. Francois Tremblay

        Okay. I think at this point we’ve gone around the circle enough times to give everyone else an idea of what our respective arguments are.

        I gotta say, this was a stimulating conversation. Thank you.

        I’ve put links up on the antinatalist forum:
        http://antinatalism.yuku.com/topic/161/Debates-on-the-asymmetry-and-antinatalist-ethics#.U_RIZxgyGm4

        If you ever want to discuss further, I’m also a member on this antinatalist chat room:
        http://efilism.chatango.com/
