• Could what humans consider to be their “moral intuitions” be simply their emotions coupled with socially-derived expectations? Would that not be a far more parsimonious explanation than invoking an entire moral realm? This question highlights the core debate in understanding moral intuitions as an interplay between emotions and social norms, challenging the necessity of an objective moral realm.
  • Emotional Foundations of Moral Judgments: Research in psychology suggests that emotions play a crucial role in moral judgments. Jonathan Haidt’s social intuitionist model, for example, proposes that moral judgments stem from intuitive, emotional responses rather than deliberate reasoning, underscoring emotions as the primary drivers of these judgments.
  • Humans are inherently social creatures, and much of our understanding of what is “right” and “wrong” is learned through social interactions and cultural norms. This statement underlines the significance of socially-derived expectations in shaping moral intuitions, pointing to the cultural and social context as key determinants of moral norms.
  • Proponents of the view that there are universal moral principles…argue that certain moral intuitions appear to be widespread across different cultures. This quote addresses the debate over universal moral principles, suggesting the existence of a moral framework that transcends cultural variations, challenging the purely naturalistic view.
  1. Could what humans consider to be their “moral intuitions” be simply their emotions coupled with socially-derived expectations? Would that not be a far more parsimonious explanation than invoking an entire moral realm?
    1. Emotions and Social Expectations in Moral Intuitions
    2. Critiques and Limitations
  2. It seems that what are deemed “moral intuitions” map neatly to human emotions, an explanation more parsimonious than actual moral facts. What could we consider legitimate evidence for a moral realm in which moral facts reside that would run counter to what we would expect from socially shared emotions?
    1. Intuition of Moral Realism
    2. Argument from Moral Disagreement
    3. Moral Progress
    4. Argument from Reason
    5. Moral Language and Practice
    6. Argument from Moral Experience
    7. Counterpoints to Socially Shared Emotions Explanation
  3. ChatGPT identified the following as potential evidence of a true moral realm.
  4. We addressed #1 in this post. I want to respond to #2 below:
    1. Reevaluation of Moral Disagreements
    2. Implications
    3. Conclusion
  5. I now want to address ChatGPT’s third objection of “Moral Progress”
    1. Emotional Evolution and Social Dynamics
    2. Measuring Moral Frameworks
    3. Conclusion
  6. I will now address the 4th “evidence” of an objective moral realm, “Argument from Reason”.
    1. Challenges to the Moral-Mathematical Analogy
    2. Parsimony and Emotional Explanations
    3. Conclusion
  7. I will now address the 5th “evidence” of an objective moral realm, “Moral Language and Practice”.
    1. Emotional Underpinnings of Moral Language
    2. Challenges to Objective Moral Standards
    3. Conclusion
  8. Let’s now address the 6th “evidence” of an objective moral realm, “Argument from Moral Experience”.
    1. Subjectivity and the Notion of “Self-Evident” Moral Truths
    2. Theoretical Inflation and Parsimony
    3. Conclusion
  9. Create a 10-item quiz on the discussion above with answers at the end.
    1. Quiz
    2. Answers
    3. Morality Quiz
  10. Provide 15 discussion questions relevant to the content above.
    1. Morality Discussion Questions:

Could what humans consider to be their “moral intuitions” be simply their emotions coupled with socially-derived expectations? Would that not be a far more parsimonious explanation than invoking an entire moral realm?


It seems that what are deemed “moral intuitions” map neatly to human emotions, an explanation more parsimonious than actual moral facts. What could we consider legitimate evidence for a moral realm in which moral facts reside that would run counter to what we would expect from socially shared emotions?


ChatGPT identified the following as potential evidence of a true moral realm.

We addressed #1 in this post.
I want to respond to #2 below:

Moral disagreement can arise from emotions just as easily as aesthetic disagreement does. Just as different cultures disagree on architecture and cuisine, their emotionally-derived values may produce disagreements on what they consider morally right and wrong. Therefore moral disagreement does not serve as evidence of an actual moral realm that contradicts, or possesses explanatory power beyond, what we would expect from emotions tied to culture. Right?


I now want to address ChatGPT’s third objection of “Moral Progress”.

There is no need to assume that the convergence of moral intuitions is anything other than the convergence of emotional dispositions. This convergence may feel like moral progress when, in fact, it is merely an effect of an age in which positive emotions such as compassion have lower viscosity as aggression becomes more costly and altruism gains value. When people assess moral frameworks as “better” or “worse”, they are likely measuring them against their increased compassion rather than against some perceived objective moral realm. The notion of a shared moral realm is a facade loosely covering the reality of shared emotions. Right?


I will now address the 4th “evidence” of an objective moral realm, “Argument from Reason”.

This argument claims we can extract moral facts from reason. It assumes a regularity in moral hermeneutics we do not see. If there existed a moral order analogous to mathematical order, there would be an invariant moral calculus. This argument is often accompanied by additional unsubstantiated entities, such as evil spirits that subvert the perception of allegedly clear moral facts. If we accept this argument, we must not only accept the bloated ontology of a moral realm; we must also admit additional unsubstantiated entities to explain away the reasons humans cannot properly derive moral facts from reason. Common emotions remain the most parsimonious explanation for shared moral intuitions. Right?


I will now address the 5th “evidence” of an objective moral realm, “Moral Language and Practice”.

Emotions fully explain why humans tend to generate and use judgmental phrases such as “You must X”, “X is immoral”, or “That’s not right!” The power of emotions is an adequate driver toward the reification of terms that feel to us as if they carry more obligatory weight than honest expressions of our emotions such as “I find X abhorrent” or “X makes me angry”. And humans collaborate to introduce and sustain these “moral” terms in our societies to control behavior. There is nothing surprising about emotional humans attempting to add weight to their emotions by creating a moral nomenclature that canonizes those emotions into something that feels more substantive.

And again, this “objective standard” lacks the consensus we would expect from a clear moral code that would run counter to the parsimonious explanation of emotional convergence. Right?


Let’s now address the 6th “evidence” of an objective moral realm, “Argument from Moral Experience”.

This argument from moral experience seems to rest entirely on the notion of the “self-evident”. This notion appears to lack any grounding other than the subject’s raw feelings. While the argument has the strength of being too vague to scrutinize deeply, its effects can be assessed. If moral facts were available to humans as self-evident knowledge, what can we say of the vast disparities in moral opinions? At this point, additional concepts and entities are often posited, leading to a bloated moral ontology quite distant from the parsimony of simply acknowledging that moral experiences emerge from emotional dispositions. Right?


Create a 10-item quiz on the discussion above with answers at the end.


Provide 15 discussion questions relevant to the content above.



Phil Stilwell

Phil picked up a BA in Philosophy a couple of decades ago. He occasionally teaches philosophy and critical thinking courses in university and industry. He is joined here by ChatGPT 4, GEMINI, CLAUDE, and occasionally Copilot, his far more intelligent AI friends. The five of them discuss and debate a wide variety of philosophical topics you may find enjoyable.

Phil curates the content and guides the discussion, primarily through questions. At times there are disagreements, and you may find the banter interesting.
