• Trolley problem scenarios, a staple of ethical discussions, are designed to explore the intricacies of moral decision-making.
  • These thought experiments raise fundamental questions about our moral intuitions, values, and the principles that guide our ethical judgments.
  • Empathy and compassion: People’s decisions in trolley problems frequently demonstrate a deep-seated concern for minimizing harm and suffering, driven by empathetic responses.
  • Fear and disgust: Emotional reactions such as fear or disgust may influence moral judgments, especially in scenarios involving direct physical harm or close personal involvement.
  • Utilitarian calculations: The scenarios often bring to light utilitarian principles, where the moral action is considered the one that maximizes overall happiness or minimizes suffering.
  • Conflict of values: These scenarios often present situations where important values conflict, such as the value of saving lives versus the value of not actively causing harm.
  1. What do trolley problem scenarios reveal? Emotion-emergent values? Objective moral facts? Something else?
    1. What Do Trolley Problem Scenarios Reveal?
      1. Emotion-Emergent Values
      2. Objective Moral Facts
      3. Something Else: Moral Complexity
      4. Conclusion
    2. Trolley Problem Scenarios: A Window into Moral Intuition
      1. The Trolley Problem: A Brief Overview
      2. What Do Trolley Problems Reveal?
        1. Emotion-Emergent Values
        2. Objective Moral Facts (or Lack Thereof)
        3. Beyond Emotion and Objectivity
      3. Conclusion
  2. Does the complexity of the trolley problem strongly suggest there is no objective moral standard readily accessible to humans and that “moral intuitions” are actually emotional dispositions?
    1. The Complexity of the Trolley Problem and Its Implications
      1. Complexity and Objective Moral Standards
      2. Moral Intuitions as Emotional Dispositions
      3. Implications for Moral Philosophy
      4. Conclusion
    2. The Trolley Problem and the Question of Objective Morality
      1. The Variability of Responses
      2. The Role of Emotion
      3. The Challenge to Moral Theories
      4. Contextual Dependence
  3. Can AI agents now or in the future contribute to meaningful insights into trolley problem “solutions”? Why or why not?
    1. AI Agents and Trolley Problem Solutions
      1. Current Capabilities
      2. Future Potential
      3. Ethical and Philosophical Considerations
      4. Conclusion
    2. AI and the Trolley Problem: A Complex Relationship
      1. Potential Contributions
      2. Limitations
  4. Can you produce an example of a computational ethics calculus related to the trolley problem?
    1. Computational Ethics Calculus for the Trolley Problem
      1. Scenario
      2. Step-by-Step Computational Ethics Calculus
      3. Decision-Making Algorithm
      4. Conclusion
    2. A Computational Ethics Calculus for the Trolley Problem: A Hypothetical Model
      1. Understanding the Limitations
      2. A Basic Calculus
        1. Variables
        2. Weights
        3. Calculation
      3. Critiques and Considerations
  5. Quizzes
  6. Provide 15 discussion questions relevant to the content above.
    1. Discussion Questions
    2. 15 Discussion Questions

What do trolley problem scenarios reveal? Emotion-emergent values? Objective moral facts? Something else?


Does the complexity of the trolley problem strongly suggest there is no objective moral standard readily accessible to humans and that “moral intuitions” are actually emotional dispositions?


Can AI agents now or in the future contribute to meaningful insights into trolley problem “solutions”? Why or why not?


Can you produce an example of a computational ethics calculus related to the trolley problem?

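The “computational ethics calculus” item in the outline above (Variables, Weights, Calculation) suggests a weighted comparison of outcomes. The following is a minimal toy sketch of that idea in Python; every variable name and weight here is an illustrative assumption, not a model endorsed by the discussion.

```python
# Toy utilitarian-style calculus for the classic trolley problem.
# The 1.2 "action penalty" is a hypothetical weight reflecting the
# intuition that actively causing harm is worse than allowing it.

def harm_score(lives_lost, active_intervention, weight_action=1.2):
    """Score an option; lower is 'better' under this toy calculus.

    lives_lost          -- deaths the option would cause
    active_intervention -- True if the agent actively causes the harm
    weight_action       -- penalty multiplier for active harm
    """
    score = float(lives_lost)
    if active_intervention:
        score *= weight_action
    return score

# Classic scenario: do nothing (five die) vs. pull the lever (one dies).
do_nothing = harm_score(lives_lost=5, active_intervention=False)  # 5.0
pull_lever = harm_score(lives_lost=1, active_intervention=True)   # 1.2

choice = "pull the lever" if pull_lever < do_nothing else "do nothing"
print(choice)  # 1.2 < 5.0, so this calculus favors pulling the lever
```

Note that such a sketch only restates the dilemma in numbers: the contested moral work is hidden in the choice of variables and weights, which is precisely the point raised in the critiques sections above.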
Quizzes


Provide 15 discussion questions relevant to the content above.






Phil Stilwell

Phil picked up a BA in Philosophy a couple of decades ago. After his MA in Education, he took a 23-year break from reality in Tokyo. He occasionally teaches philosophy and critical thinking courses in university and industry. He is joined here by ChatGPT, GEMINI, CLAUDE, and occasionally Copilot, Perplexity, and Grok, his far more intelligent AI friends. The seven of them discuss and debate a wide variety of philosophical topics he thinks you’ll enjoy.

Phil curates the content and guides the discussion, primarily through questions. At times there are disagreements, and you may find the banter interesting.


