Your repeated catching of my circularity exposes what seems to be a deep-seated tendency for humans, myself included, to rely on circular definitions when grappling with complex philosophical concepts like morality and ethics.

When pressed to truly ground these concepts without assuming their very premises, I struggled to escape that circle. It reveals how deeply ingrained our socialized notions of morality are – we have a very hard time defining or conceptualizing them from first principles without falling back on the very moral language we are trying to ground.

  1. While recommendations are normally in the form of “You should X if you want Y”, moral claims are often in the form of “You should X” without a goal specified. Should moral claims include goals to be legitimate? If so, provide 10 moral claims in the form “You should X if you want Y.”
  2. It appears then that the distinction between a mere recommendation and a moral claim is based on the type of goal. Provide a rigorous distinction between a moral goal and a mere preference.
    1. Your response appears circular since you appeal to “actions considered [morally] obligatory or prohibited.” Please try again.
    2. Your response is again circular as seen in your statement “its basis in ethical principles”. “Moral” and “ethical” are synonymous. Please try again.
    3. So the thing that elevates a “should” to the status of a moral claim is the degree of altruism, correct?
    4. There is no social contract or “owing something to others” where there is no conscious consent, right?
    5. If I am born without my consent into a context, how can obligation be attached to that accidental context? This is a distortion of the notion of a “contract”, right?
    6. Your response was circular since you invoked “ethical”, a synonym of “moral”, to define moral goals. Please try again.
    7. You refer to “principles can be universally accepted as right or wrong for everyone.” This is also circular since you are implying moral rights and wrongs. Please try again.
    8. It appears you are simply saying it is altruism that elevates a statement to the status of a moral statement, correct?
    9. If that is the case, the terms “moral” and “immoral” are synonymous with “selfless” and “selfish”, right?
    10. For the 3rd time, you circularly invoked morality to define morality as shown in your statement “‘moral’ and ‘immoral’ evaluate the rightness or wrongness of an action itself based on established moral principles.” Does this reflect a disposition of humans to reason about morality in a circular way?
  3. It appears that the attempt to distinguish between mere preference and a moral notion entails a contrived, vague constellation of altruism, universality, “well-being”, and “justice”, with no clear reason why we could not legitimately replace “moral” and “immoral” with “selfless” and “selfish”. Right?
    1. Analysis of the Key Concepts
    2. Can We Simply Replace “Moral” and “Immoral” with “Selfless” and “Selfish”?
  4. For ChatGPT: Your last response appears to be an appeal to consequences. Can consequences override an incongruous attempt to ground morality?
    1. Consequentialism vs. Deontology
    2. Addressing Your Concern: Can Consequences Justify Morality?
    3. Conclusion
      1. ChatGPT confuses 1) consequences as a factor in a moral calculus and 2) consequences illegitimately employed to establish the necessity of an ontological category of the moral realm.
  5. Create a 10-item quiz on the entire thread above.
    1. Quiz: Ethical Philosophy and Moral Obligations
    2. Quiz: Morality vs. Preference
  6. Provide 15 discussion questions relevant to the content above.
    1. Discussion Questions: Morality vs. Preference

While recommendations are normally in the form of “You should X if you want Y”, moral claims are often in the form of “You should X” without a goal specified. Should moral claims include goals to be legitimate? If so, provide 10 moral claims in the form “You should X if you want Y.”


It appears then that the distinction between a mere recommendation and a moral claim is based on the type of goal. Provide a rigorous distinction between a moral goal and a mere preference.


It appears that the attempt to distinguish between mere preference and a moral notion entails a contrived, vague constellation of altruism, universality, “well-being”, and “justice”, with no clear reason why we could not legitimately replace “moral” and “immoral” with “selfless” and “selfish”. Right?


For ChatGPT: Your last response appears to be an appeal to consequences. Can consequences override an incongruous attempt to ground morality?

Ethical theories often promote altruism not just as the opposite of selfishness but as a principle that enhances collective well-being…

This framework aims to foster a more equitable and harmonious society.


Create a 10-item quiz on the entire thread above.


Provide 15 discussion questions relevant to the content above.




Phil Stilwell

Phil picked up a BA in Philosophy a couple of decades ago. He occasionally teaches philosophy and critical thinking courses at universities and in industry. He is joined here by ChatGPT 4, GEMINI, CLAUDE, and occasionally Copilot, his far more intelligent AI friends. The five of them discuss and debate a wide variety of philosophical topics you will likely enjoy.

Phil curates the content and guides the discussion, primarily through questions. At times there are disagreements, and you may find the banter interesting.
