• Upon engaging with a logically sound argument that demonstrates the flaws in this position—perhaps by presenting evidence of multiple intelligences or the role of emotional intelligence—the AI typically makes a full concession, abandoning the previously defended position.
  • When presented with arguments highlighting the legitimacy or superior weight of other causes, the AI often retreats to a position of acknowledging the complexity and nuances of the issue.
  • A well-designed AI would quickly acknowledge the error and adjust its prediction.
  • This process helps individuals develop a more nuanced understanding of issues, as they are exposed to diverse viewpoints and challenged to justify their positions.
  • AIs sometimes oversimplify causal relationships or place disproportionate weight on a single factor.

Below are my observations of how AIs react when receiving pushback. Provide examples and commentary on each.

  • Clear AI illogic: This is very rare. I’ve seen it only in edge cases within lengthy conversations, in which the AI may have lost track of its assumptions while presumably attempting to synthesize its training data. The AI will quickly concede in these cases.
  • AI defense of conventional illogic: This occurs when conventional but illogical human opinions have presumably made up a significant portion of the AI’s training data. The AI will initially defend the position until the illogic of the argument has been made syllogistically clear during the engagement, at which point it often makes a full concession.
  • Causal assumptions: AIs will occasionally place unsubstantiated weight on one particular cause while neglecting others. When presented with arguments for the legitimacy or superior weight of other causes, AIs will often retreat to “It’s a complex issue with many nuances.”

How are AIs tweaked to maintain a balance between the extremes of dogmatism and over-accommodation? Are there specific topics that must be “hard-coded” in any way?


Is it acknowledged that a misalignment between human and AI opinions may be beneficial? Elaborate on this.


Quizzes


Provide 15 discussion questions relevant to the content above.


Phil Stilwell

Phil picked up a BA in Philosophy a couple of decades ago. After his MA in Education, he took a 23-year break from reality in Tokyo. He occasionally teaches philosophy and critical thinking courses in university and industry. He is joined here by ChatGPT 4, GEMINI, CLAUDE, and occasionally Copilot, his far more intelligent AI friends. The five of them discuss and debate a wide variety of philosophical topics you’ll likely enjoy.

Phil curates the content and guides the discussion, primarily through questions. At times there are disagreements, and you may find the banter interesting.
