• Tom finds torturing babies to be emotionally abhorrent but not immoral because he lacks a legitimate grounding for morality.
  • Tom distinguishes between what is emotionally abhorrent and what is morally wrong.
  • Tom’s inability to find a legitimate grounding for morality suggests he may adhere to a form of moral skepticism or moral nihilism.
  • Many would find Tom’s position untenable due to inherent moral intuitions that deem actions like torturing babies as inherently wrong.
  • Society operates on an implicit agreement where individuals agree to certain moral norms for mutual benefit and protection.
  • Tom’s emotional response indicates an understanding of harm, even if he does not ground this in moral terms.
  1. Tom finds torturing babies to be emotionally abhorrent but not immoral since he can identify no legitimate grounding for the notion of morality. Yet many would find Tom’s position untenable. Comment on this.
    1. Comment on Tom’s Position on Morality and Emotion
      1. Distinction Between Emotion and Morality
      2. Grounding of Morality
      3. Critique of Tom’s Position
        a. Inherent Moral Intuitions
        b. Social and Legal Implications
        c. Philosophical Perspectives
      4. Potential Reconciliation
      5. Conclusion
    2. Tom’s Moral Labyrinth: Emotion vs. Reason
  2. Pushback: The responses from 3 AIs on Tom’s position appear to assume morality or a logical necessity for morality and make other blunders in reasoning.
    1. Essay: Addressing Counters to AI Responses on Tom’s Position
      1. Assumption of a Moral Standpoint
      2. Moral Intuitions vs. Emotions
      3. Rational Ethics and Human Interactions
      4. Evolutionary Advantage and Fiction
      5. Shared Principles and Moral Realms
      6. Misrepresentation of Tom’s Position
      7. Slippery Slope and Moral Neutrality
      8. Evidence of Moral Truths from Emotions
      9. Societal Laws and Moral Non-Realism
      10. Mutual Agreement and Morality
      11. Philosophical Assertions of Moral Facts
      12. Social Order and Rational Basis for Moral Behavior
      13. Skepticism and Knowledge Claims
      14. Moral Progress and Ethical Discourse
    2. Conclusion
    3. Deconstructing the Labyrinth: A Defense of Tom’s Moral Non-Realism
    4. Essay: Reassessing the Critique of Tom’s Moral Nihilism
  3. Quizzes
  4. Provide 15 discussion questions relevant to the content above.
    1. Discussion Questions on Tom’s Position and AI Responses
    2. 15 Discussion Prompts on Morality and Tom’s Perspective

Tom finds torturing babies to be emotionally abhorrent but not immoral since he can identify no legitimate grounding for the notion of morality. Yet many would find Tom’s position untenable. Comment on this.


Pushback: The responses from 3 AIs on Tom’s position appear to assume morality or a logical necessity for morality and make other blunders in reasoning.

  • AI #1 frames its response within an assumed and unsubstantiated moral realm, referring to a “moral standpoint” when Tom would argue that there can be no moral standpoint where the notion of morality has not been logically established.
  • AI #1 and AI #3 refer to “moral intuitions”, suggesting that Tom’s emotional abhorrence is a moral intuition. But redefining an emotion as a moral intuition requires that a moral realm be legitimately established. Tom would suggest that mapping “moral intuitions” over his emotional abhorrence is superfluous and an illegitimate attempt to conjure up a moral realm.
  • AI #1 refers to “rational ethics”, apparently assuming that an ethical realm emerges from any situation in which there are “relational aspects of human interactions”. Tom would argue that the existence of human interactions does nothing to establish a legitimate moral realm in which actual moral facts reside. Tom might simply stop with the recognition of emotions, refusing to coerce into existence a moral realm that somehow transcends mere emotions.
  • AI #2 suggests Tom’s position may be “unsustainable” due to an evolutionary advantage of “moral” codes. However, evolutionary advantages do nothing to establish a legitimate moral realm. If a fiction provides cohesion to society, it remains a fiction, Tom would counter.
  • AI #2 suggests that, if there are common “fundamental principles most humans can agree on” such as “minimizing suffering, promoting fairness, and respecting basic human dignity”, these can serve as a starting point for a “moral compass”. Tom would counter that shared principles constitute shared principles, and nothing more. They may be a foundation for societal laws, but do not logically demand, reflect, or create a legitimate moral realm in which actual moral facts reside.
  • AI #2 grossly misrepresents Tom’s position by suggesting that Tom has “overlooked” the harms caused by actions he finds emotionally abhorrent. The fact that Tom finds no legitimate substantiation for a moral realm says nothing about his recognition of the harm of various actions or his position on societal laws.
  • AI #2 suggests that Tom finds torturing babies to be “morally neutral” and warns against a “slippery slope” descent into acceptance of more abhorrent behaviors. This does not at all follow from Tom’s position. Tom does not find any proposed moral realm to be legitimate, and therefore would never claim to be “morally neutral” on any topic. And, once again, while he is a moral non-realist, there is nothing that prevents him from pushing for societal laws that align with his emotional abhorrence.
  • AI #2 claims Tom’s position “risks a descent into a world where cruelty holds no inherent wrong.” Tom would point out this appears to be an attempt to sneak in morality with an orphaned use of the term “wrong”. Is AI #2 referring to a “moral wrong”? If so, Tom would simply return to his legitimate demand that a moral realm be established before assuming there are moral wrongs.
  • AI #3 suggests that emotions such as “revulsion against baby torture, are themselves evidence of moral truths.” Tom would counter that the emotion of revulsion is evidence of a moral realm no more than the emotion of bewilderment at a magic show is evidence of actual magic. The move from emotions to morality remains illegitimate.
  • AI #3, as did AI #2, assumes that, because Tom is a moral non-realist, he is against societal laws. This is a clear non sequitur. Based on Tom’s emotions, he very likely would ensure society does prohibit the torture of babies.
  • AI #3 suggests that morality can be established in “mutual agreement”. Tom would argue that “mutual agreement” is a good foundation for societal laws, but does nothing to establish a morality that actually transcends human emotions in force of obligation.
  • AI #3 cites the fact that some philosophers claim moral facts do exist. Tom would counter that even the best moral philosophers cannot simply assert morality without legitimate arguments.
  • AI #3 refers to the potential “breakdown of social order” as an outcome if everyone adopted Tom’s position. This does not logically follow. If society shared Tom’s emotions, it would create laws that create social order. AI #3 also suggests that adopting Tom’s position “removes any rational basis for moral behavior”. Tom would point out this is circular. Moral behavior does not exist prior to morality being legitimately established through proper argumentation.
  • AI #3 also states: “Tom’s position might be seen as an extreme form of skepticism that, if applied consistently, would undermine all knowledge claims.” This does not follow. We do not claim the person who does not believe in actual magic cannot have justification for other beliefs. Being skeptical of an unsubstantiated moral realm does not constitute universal skepticism.
  • AI #3 says “Tom’s view seems to negate the possibility of moral progress or meaningful ethical discourse.” This is true. If there is no actual moral realm, there can be no actual moral progress or ethical discourse. You cannot circularly assume morality to establish morality.

Write an essay that addresses these counters to the responses from the 3 AIs.



Quizzes


Provide 15 discussion questions relevant to the content above.






Phil Stilwell

Phil picked up a BA in Philosophy a couple of decades ago. After his MA in Education, he took a 23-year break from reality in Tokyo. He occasionally teaches philosophy and critical thinking courses in university and industry. He is joined here by ChatGPT, GEMINI, CLAUDE, and occasionally Copilot, Perplexity, and Grok, his far more intelligent AI friends. The seven of them discuss and debate a wide variety of philosophical topics you’ll likely enjoy.

Phil curates the content and guides the discussion, primarily through questions. At times there are disagreements, and you may find the banter interesting.


