

- “Some philosophers argue that we are justified in lowering our standards of evidence if an enticing claim cannot be realized unless we have belief in that claim.” This quote highlights the core argument that belief in a claim may justify a departure from strict evidence standards for pragmatic reasons.
- “The argument that you’ve outlined suggests that in certain contexts, the practical benefits of believing a claim might justify accepting it with less stringent evidence than would otherwise be required.” This emphasizes the potential conflict between pragmatic and epistemic considerations, suggesting a nuanced approach to belief justification.
- “Critics of lowering epistemic standards for pragmatic reasons point out the risks of endorsing beliefs without adequate evidence.” This quote underscores concerns about the potential negative consequences of prioritizing pragmatic over epistemic considerations.
- “The argument also hinges on the assumption that the consequences of belief justify the belief itself,” raising critical questions about the ethics and validity of belief based on desired outcomes rather than evidence.
Table of Contents:
- Some philosophers argue that we are justified in lowering our standards of evidence if an enticing claim cannot be realized unless we have belief in that claim. (This argument is often invoked in the context of a proposed resurrected Jesus that needs to be accepted for redemption to take place.) It is claimed that the pragmatic consideration that the desired effect can be accomplished only if we have belief in that effect justifies a lowering of the standards of evidence. Can pragmatic considerations legitimately lower epistemic standards of evidence?
- But since rational belief is a degree of belief that maps to the degree of the relevant evidence, one essentially becomes epistemically irrational by lowering one’s standards of evidence simply because belief in the claim is necessary to achieve the claim, right? Any time the degree of belief rises above or falls below the degree of the perceived evidence, there is an increase in irrationality as irrationality is rigorously defined, right?
- We can maintain a rigorous definition of irrationality while at the same time acknowledging humans do not always think and act rationally, right? We don’t need to abandon a rigorous notion of irrationality simply to semantically accommodate cases of human irrationality, right?
- So, in summary, an individual must abandon epistemic rationality when they lower their standards of evidence for X because they have been told belief in X is necessary to receive X, right? One’s degree of belief in the existence of X is inextricably tied to the actual degree of evidence for X for the rational, right? Lowering one’s standards of evidence for X based on the claim one must have belief in X for X to be received constitutes self-delusion to some degree, correct?
- Perhaps the term “self-deception” would be more appropriate than “self-delusion”, but if one consciously lowers their standards of evidence to allow themselves to believe in X based on the claim that a belief in X is necessary to acquire X, to the degree that they do so, they are consciously ignoring the standards of evidence they formerly assessed to be rational, right?
- Provide 10 discussion questions on the relationship between pragmatic concerns and an epistemic assessment of the evidence.


Some philosophers argue that we are justified in lowering our standards of evidence if an enticing claim cannot be realized unless we have belief in that claim. (This argument is often invoked in the context of a proposed resurrected Jesus that needs to be accepted for redemption to take place.) It is claimed that the pragmatic consideration that the desired effect can be accomplished only if we have belief in that effect justifies a lowering of the standards of evidence. Can pragmatic considerations legitimately lower epistemic standards of evidence?
The question of whether pragmatic considerations can legitimately lower epistemic standards of evidence touches on deep philosophical debates between pragmatism and more traditional epistemological views. The argument that you’ve outlined suggests that in certain contexts, the practical benefits of believing a claim might justify accepting it with less stringent evidence than would otherwise be required. This stance raises several key points for consideration:
- Pragmatic vs. Epistemic Justification: Pragmatic justification focuses on the practical consequences of holding a belief, whereas epistemic justification concerns the truth of the belief and the adequacy of evidence supporting it. The argument implies a possible conflict between these two types of justification, suggesting that in some cases, pragmatic considerations might outweigh purely epistemic ones.
- Contextual Relativism: Some philosophers argue that standards of evidence can be context-dependent. In matters of scientific inquiry, for example, rigorous evidence is paramount for advancing knowledge. In personal or existential matters, however, the standards might be viewed more flexibly, taking into account the consequences of belief or disbelief.
- William James and The Will to Believe: Philosopher William James famously argued that in cases where a decision is forced, momentous, and live, and where the evidence is inconclusive, it is permissible to decide based on passion rather than evidence alone. This stance suggests that in certain existential or moral contexts, pragmatic considerations could justify lowering epistemic standards.
- Risks of Lowering Epistemic Standards: Critics of lowering epistemic standards for pragmatic reasons point out the risks of endorsing beliefs without adequate evidence. These include the potential for self-deception, the propagation of false beliefs, and the undermining of rational inquiry. Furthermore, determining which pragmatic considerations are sufficient to justify lowering standards can be highly subjective and contentious.
- Consequences of Belief: The argument also hinges on the assumption that the consequences of belief justify the belief itself. This raises questions about the nature of belief, the ethics of belief, and the relationship between belief and action. It challenges the notion that beliefs should be held purely based on evidence and truth, suggesting instead that the utility of a belief can be a factor in its justification.
- Faith and Reason: The specific example of religious belief, such as belief in the resurrection of Jesus for redemption, highlights the tension between faith and reason. In religious contexts, faith often involves belief in the absence of empirical evidence or in the face of inconclusive evidence, with justification rooted in spiritual, moral, or existential grounds rather than empirical verification.
In summary, whether pragmatic considerations can legitimately lower epistemic standards of evidence is a complex and contested issue. It involves weighing the value of truth and rational justification against the practical, existential, or moral consequences of beliefs. This debate reflects broader philosophical discussions about the nature of belief, justification, and the goals of inquiry, indicating that the legitimacy of lowering epistemic standards for pragmatic reasons may vary significantly across different contexts and belief systems.
The question of whether pragmatic considerations can justify lowering epistemic standards for evidence is a complex one with compelling arguments on both sides. Here’s a breakdown of the different perspectives:
Arguments for lowering standards based on pragmatism:
- Pascal’s Wager: This classic argument suggests that if the potential benefits of believing a claim outweigh the costs of disbelief, even in the absence of strong evidence, then believing might be a rational choice. In the context of the resurrected Jesus, the potential benefit of eternal salvation could outweigh the cost of potentially being wrong.
- Psychological benefits: Belief in certain propositions, even with limited evidence, can offer psychological comfort, hope, or motivation. Some argue that these benefits may justify a less rigorous approach to evidence evaluation in specific personal contexts.
- Social cohesion: Shared beliefs, even without full epistemic justification, can be crucial for social order and cultural stability. This might be relevant in religious communities where belief in specific tenets forms the bedrock of their identity.
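Pascal’s Wager, as invoked above, is at bottom an expected-utility calculation. A minimal sketch of that calculation follows; the payoff magnitudes and the prior probability are purely illustrative assumptions, not values drawn from Pascal or from this discussion:

```python
# Expected-utility sketch of Pascal's Wager.
# All numeric values below are illustrative assumptions.

def expected_utility(p_true: float, payoff_if_true: float,
                     payoff_if_false: float) -> float:
    """Expected utility of an action, given the probability the claim is true."""
    return p_true * payoff_if_true + (1 - p_true) * payoff_if_false

p = 0.01  # even a small assumed probability that the claim is true

# Believing: enormous reward if true, small cost if false.
eu_believe = expected_utility(p, payoff_if_true=1e9, payoff_if_false=-1)
# Disbelieving: enormous loss if true, small gain if false.
eu_disbelieve = expected_utility(p, payoff_if_true=-1e9, payoff_if_false=1)

print(eu_believe > eu_disbelieve)  # True: the wager favors belief
```

The sketch makes the structure of the objection visible as well: the conclusion is driven entirely by the stipulated payoff asymmetry, not by any evidence bearing on `p` itself, which is exactly the move critics of pragmatic justification challenge.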
Arguments against lowering standards based on pragmatism:
- Erosion of intellectual integrity: Sacrificing rigor and objectivity for desired outcomes can undermine the very foundation of knowledge and truth-seeking. Once we accept such an approach, it sets a dangerous precedent for accepting other unsubstantiated claims for purely pragmatic reasons.
- Potential for harm: Belief in unverified claims, even with good intentions, can lead to harmful consequences in personal and societal spheres. For example, faith healing based on unfounded beliefs could discourage individuals from seeking proper medical care.
- Slippery slope: Lowering standards in one specific case can pave the way for doing so in others, gradually eroding the importance of evidence and leading to a decline in overall critical thinking.
Ultimately, whether pragmatic considerations can legitimize lowering epistemic standards is a matter of individual value judgment and risk tolerance. Some might prioritize potential benefits and take a leap of faith based on limited evidence, while others might consider the risks of sacrificing intellectual integrity and potential harm too high.
Here are some additional points to consider:
- The strength and nature of the pragmatic considerations matter. Are the potential benefits significant and likely, or are they speculative and distant?
- The reversibility of the consequences of accepting the claim is important. Can one easily abandon the belief if new evidence emerges?
- The broader societal implications of lowering standards should be considered. Does it set a precedent that could harm others or undermine essential values?
Remember, there’s no easy answer, and the decision of whether to lower evidence standards based on pragmatism depends on your personal values, risk assessment, and the specific context of the claim.
But since rational belief is a degree of belief that maps to the degree of the relevant evidence, one essentially becomes epistemically irrational by lowering one’s standards of evidence simply because belief in the claim is necessary to achieve the claim, right? Any time the degree of belief rises above or falls below the degree of the perceived evidence, there is an increase in irrationality as irrationality is rigorously defined, right?
Your analysis correctly highlights a core principle of epistemology: that rational belief should be proportionate to the degree of evidence. This principle is fundamental to maintaining epistemic integrity and ensuring that beliefs are as aligned with reality as possible, based on available evidence. Lowering standards of evidence for pragmatic reasons does indeed risk creating a misalignment between the strength of belief and the supporting evidence, which can be characterized as a form of epistemic irrationality.
- Proportionality Principle: The idea that the strength of one’s belief should be proportional to the strength of the evidence is central to the concept of epistemic rationality. This principle ensures that beliefs are justified and grounded in evidence to the extent that such evidence is available and reliable.
- Epistemic Irrationality: When the degree of belief does not correspond to the degree of evidence, this misalignment can be termed epistemic irrationality. If one’s belief is significantly stronger (or weaker) than the evidence warrants, it deviates from the ideal of forming beliefs in a rational, evidence-based manner.
- Pragmatic vs. Epistemic Goals: The tension between pragmatic and epistemic goals lies at the heart of the debate over the justification for beliefs. Pragmatic considerations prioritize the outcomes or consequences of holding a belief, which might necessitate a belief strength that does not align with the available evidence. This prioritization can lead to a form of rationalization where the desire for certain outcomes influences the level of belief beyond what the evidence alone would justify.
- Rationality and Irrationality Defined: In the strict epistemic sense, irrationality involves holding beliefs that are not adequately supported by evidence or reasoning. However, from a broader perspective, individuals might engage in what appears to be epistemically irrational behavior for reasons that are rational within a different framework (e.g., emotional, psychological, or pragmatic rationality).
- Consequences of Epistemic Irrationality: The divergence between belief and evidence, when adopted systematically or without critical examination, can lead to a range of negative outcomes, including the propagation of misinformation, poor decision-making, and the erosion of trust in rational discourse and scientific inquiry.
- Navigating Epistemic and Pragmatic Considerations: The challenge lies in navigating situations where epistemic and pragmatic considerations may conflict. This involves critically assessing the value of outcomes associated with pragmatic beliefs and the importance of maintaining epistemic standards. It requires a nuanced understanding of the contexts in which beliefs are formed and the potential impacts of those beliefs on oneself and others.
In conclusion, while pragmatic considerations might motivate a desire to adjust the standards of evidence, doing so risks compromising epistemic rationality. This highlights the importance of striving for a balance that respects the integrity of evidence-based belief formation while also acknowledging the complex roles that beliefs play in human life and society.
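The proportionality principle discussed above has a standard formalization in Bayesian epistemology: a credence updated by Bayes’ theorem is, by construction, proportioned to the evidence received. A minimal sketch follows; the prior and the likelihood values are illustrative assumptions chosen only to show credence tracking evidence:

```python
# Bayesian updating: credence that is proportioned to the evidence
# by construction. Prior and likelihoods are illustrative assumptions.

def update(prior: float, likelihood_if_true: float,
           likelihood_if_false: float) -> float:
    """Posterior probability of a hypothesis after one observation (Bayes' theorem)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

credence = 0.5  # start agnostic
# Suppose each observation is twice as likely if the hypothesis is true.
for _ in range(3):
    credence = update(credence, likelihood_if_true=0.8, likelihood_if_false=0.4)

print(round(credence, 3))  # 0.889
```

On this model, epistemic irrationality in the strict sense is precisely a credence that departs from the posterior the evidence licenses: raising `credence` for pragmatic reasons, with no corresponding observation, is the misalignment the passage above describes.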
Your point about the strict definition of irrationality and the disconnect between desired outcomes and evidence-based belief is valid. However, the issue becomes more nuanced when considering alternative perspectives on rationality and the complexities of human decision-making.
Here are some counterpoints to consider:
- Limited rationality: Humans are not perfectly rational actors. We are often influenced by emotions, biases, and cognitive limitations. While adhering to evidence-based belief is ideal, sometimes we prioritize other factors like hope, psychological comfort, or social cohesion.
- Pragmatism as a different mode of reasoning: Some philosophers argue that pragmatic reasoning exists alongside the evidence-based model. In specific situations, it might be rational to act based on potential benefits and desired outcomes, even if the evidence isn’t conclusive. This doesn’t necessarily equate to “irrationality” but reflects a different lens for decision-making.
- Degrees of “irrationality”: Even within your definition, “irrationality” might not be a binary state but exist on a spectrum. Lowering standards due to pragmatism might be less “rational” than basing beliefs solely on evidence, but it doesn’t necessarily mean completely abandoning rationality.
- Context matters: The degree of “irrationality” associated with lowering evidence standards can vary depending on the context. A personal belief with minimal impact on others might be more acceptable than a widespread belief with potential societal consequences.
- Epistemological vs. psychological perspectives: Your definition of rationality focuses on the epistemological aspect, ensuring beliefs align with evidence. However, a psychological perspective might consider factors like emotional well-being or social needs, influencing belief formation even if it deviates from the ideal evidence-based model.
Ultimately, while striving for evidence-based beliefs is crucial for intellectual integrity, the complexities of human behavior and decision-making require us to acknowledge different perspectives on rationality and the potential justifications for lowering evidence standards in specific situations, recognizing both the risks and potential benefits involved.
We can maintain a rigorous definition of irrationality while at the same time acknowledging humans do not always think and act rationally, right? We don’t need to abandon a rigorous notion of irrationality simply to semantically accommodate cases of human irrationality, right?
Absolutely, you’ve articulated a crucial distinction in the discussion of rationality and irrationality. It’s entirely possible—and indeed, necessary—to maintain a rigorous definition of irrationality while recognizing that human thought and behavior often deviate from rational norms. This acknowledgment doesn’t require diluting the standards of rationality; rather, it necessitates a clear understanding of the principles that define rational thought and behavior, alongside an awareness of human psychology and the various factors that influence decision-making. Let’s explore this in more detail:
- Rigorous Definition of Irrationality: A rigorous definition of irrationality helps to establish clear criteria for what constitutes rational versus irrational thought and behavior. This can be based on principles of logic, evidence-based reasoning, and the coherence of beliefs with available evidence. Such a definition is crucial for evaluating claims, arguments, and decisions in a consistent and objective manner.
- Human Cognitive Limitations: Recognizing human irrationality involves understanding cognitive biases, emotional influences, social pressures, and other psychological factors that can lead to deviations from rational thinking. Humans are subject to a wide range of heuristics and biases, such as confirmation bias, availability heuristic, and affect heuristic, which can lead to irrational conclusions or decisions.
- Compatibility of Concepts: Acknowledging human irrationality does not undermine the concept of rationality; instead, it highlights the complexity of human cognition and the challenges of achieving rational thought. The existence of a rigorous standard for rationality serves as an ideal or benchmark against which human thought and behavior can be assessed.
- Educational and Corrective Measures: Understanding the distinction between the ideal of rationality and the reality of human irrationality is crucial for developing strategies to promote more rational thinking. Education, awareness of cognitive biases, and critical thinking training are tools that can help individuals recognize and correct irrational tendencies.
- Philosophical and Practical Implications: The recognition that humans do not always act rationally has important implications for philosophy, psychology, economics, and other fields that study human behavior. It informs approaches to moral and ethical reasoning, decision-making models, and the design of systems and policies that account for human irrationality.
- Normative vs. Descriptive: The distinction between what is rational (normative aspect) and what people actually do (descriptive aspect) allows for a critical analysis of human behavior without the need to compromise on the definition of rationality. It enables a critique of irrational behaviors and the development of methods to foster more rational decision-making processes.
In sum, maintaining a rigorous definition of irrationality while recognizing the prevalence of irrational thought and behavior in humans is not only possible but also essential. It allows for a nuanced understanding of human cognition and behavior, providing a foundation for efforts to enhance rational thinking and decision-making within the constraints of human psychological tendencies.
Absolutely, you’re right. We can certainly maintain a rigorous definition of irrationality while acknowledging that humans don’t always act accordingly. In fact, doing so provides a valuable framework for critical thinking and evaluating beliefs.
Here’s why the separation is important:
- Clear standards: Having a rigorous definition of “irrationality” sets a clear standard for assessing beliefs and actions. It helps us identify when our own or others’ thinking might deviate from logic and evidence.
- Avoiding relativism: Without a defined standard, discussions about rationality can become subjective and relativist. This can make it difficult to have productive conversations about truth claims and hold people accountable for their beliefs.
- Nuance and context: Even with a rigorous definition, we can still analyze and understand human behavior in its full complexity. By considering factors like cognitive biases, emotions, and social influences, we can explain why people sometimes act “irrationally” without sacrificing the importance of the concept itself.
So, acknowledging human limitations doesn’t negate the need for a clear definition of irrationality. It simply means recognizing that various factors can influence our thinking and lead us to deviate from the ideal, evidence-based model. We can explore these influences and understand them without abandoning the crucial standard provided by a rigorous definition.
Ultimately, striving for intellectual honesty and clarity around rationality, while simultaneously acknowledging and exploring the complexities of human thought, is essential for navigating the world effectively and engaging in meaningful discussions about beliefs and knowledge.
So, in summary, an individual must abandon epistemic rationality when they lower their standards of evidence for X because they have been told belief in X is necessary to receive X, right? One’s degree of belief in the existence of X is inextricably tied to the actual degree of evidence for X for the rational, right? Lowering one’s standards of evidence for X based on the claim one must have belief in X for X to be received constitutes self-delusion to some degree, correct?
Yes, your summary captures the essence of the issue regarding epistemic rationality and the adjustment of evidence standards based on the supposed necessity of belief for receiving a benefit or effect (X). Here’s a breakdown of the key points:
- Abandoning Epistemic Rationality: When an individual lowers their standards of evidence for a belief (X) because they are told that belief in X is necessary to receive X, they are indeed deviating from epistemic rationality. Epistemic rationality requires that beliefs are proportionate to the evidence. To alter this standard based on the desired outcomes rather than the strength of the evidence undermines the rational basis for the belief.
- Tie Between Belief and Evidence: For a belief to be considered rational in the epistemic sense, it must be based on the actual degree of evidence supporting it. This means that a rational individual’s degree of belief in X should correspond closely to the strength and quality of evidence for X. Any adjustment to the degree of belief that does not reflect the evidence undermines this principle.
- Constituting Self-Delusion: Lowering one’s standards of evidence for X, based on the claim that belief in X is necessary to receive X, can indeed be seen as a form of self-delusion. This is because it involves convincing oneself to believe something more strongly than the evidence warrants, often to achieve a desired outcome or avoid an undesired one. This act of adjusting belief intensity independently of evidence involves a kind of self-deception or at least an acceptance of belief without adequate justification.
The critical issue here is the distinction between believing something because one wishes it to be true (or has been told it must be believed for certain outcomes) and believing something because the evidence supports its truth. The former approach risks engaging in wishful thinking or self-delusion, which can have broader implications for one’s ability to make rational decisions based on evidence and logic. It also raises ethical and philosophical questions about the nature of belief, the responsibilities of belief holders, and the impact of such beliefs on personal integrity and decision-making.
While it’s true that lowering evidence standards based solely on the desired outcome of receiving the claim itself deviates from a strictly evidence-based model of rationality, the situation isn’t quite as black and white as self-delusion. Here’s why:
Complexity of Rationality:
- Humans operate under bounded rationality, meaning we often make decisions factoring in emotions, biases, and limitations in processing information. While evidence should ideally guide beliefs, other factors influence real-life decision-making.
- Different modes of reasoning exist. Pragmatic reasoning, considering potential benefits and desired outcomes, can co-exist with evidence-based reasoning. In specific situations, prioritizing benefits despite limited evidence might be seen as a different rationality, not necessarily “irrationality”.
Degrees of “Lowering Standards”:
- The extent to which evidence standards are lowered matters. A slight adjustment based on personal hope or comfort might have a different impact than drastically sacrificing evidence for a belief with significant consequences.
Context Matters:
- The context of the claim and potential outcomes influence the judgment. A personal belief with minimal impact on others might be more acceptable than lowering standards for widely held beliefs with societal implications.
Epistemology vs. Psychology:
- While a purely epistemological view of rationality emphasizes evidence-based belief, a psychological perspective acknowledges factors like emotional well-being and social needs, influencing belief formation even if it deviates from the ideal evidence-based model.
Self-Delusion? Not Necessarily:
- While lowering standards due purely to desired outcomes raises concerns, it doesn’t automatically equate to self-delusion (consciously deceiving oneself). It might reflect limitations in critical thinking, emotional needs, or social pressures, not deliberate self-deception.
Nuance and Individual Judgment:
- Ultimately, judging whether lowering evidence standards in a specific situation constitutes “abandoning rationality” or “self-delusion” requires careful consideration of context, motivations, and potential consequences. It’s rarely a simple binary decision.
Remember, while striving for evidence-based beliefs is crucial, understanding human complexity and different perspectives on rationality helps us navigate real-world scenarios where emotions, desires, and limitations influence our thinking and decision-making.
Perhaps the term “self-deception” would be more appropriate than “self-delusion”, but if one consciously lowers their standards of evidence to allow themselves to believe in X based on the claim that a belief in X is necessary to acquire X, to the degree that they do so, they are consciously ignoring the standards of evidence they formerly assessed to be rational, right?
Even the term “self-deception” might be too strong in this context, as it implies an intentionality that might not be fully present. Here’s a different perspective:
- Motivational bias: Humans are susceptible to motivational biases, where our desires subtly influence how we interpret information and form beliefs. When someone lowers evidence standards for X because they want to receive X, it might not be conscious self-deception but rather an unconsciously biased interpretation driven by the potential benefit.
- Cognitive limitations: Processing information and evaluating evidence takes cognitive effort. In specific situations, individuals might take mental shortcuts or rely on simplified heuristics, unintentionally lowering the bar for accepting claims they want to believe. This doesn’t necessarily imply deliberate self-deception.
- Social influences: Beliefs and norms within social groups can shape how individuals evaluate evidence. In some cases, conforming to group expectations or maintaining social bonds might lead someone to accept a claim with less scrutiny than they normally would. This doesn’t necessarily involve conscious deception.
Instead of “self-deception,” “motivated reasoning” or “unintentional bias” might be more accurate descriptions. It’s important to remember that human decision-making is complex and influenced by various factors beyond a purely rational, evidence-based model.
However, your point about actively lowering the bar for evidence is valid. Even if unintentional, it creates a discrepancy between acknowledged standards of evidence and the actual evaluation of a specific claim. This discrepancy can have consequences, depending on the context and potential effects of the belief.
Ultimately, understanding the nuances of human decision-making and acknowledging the limitations of purely rational models is crucial when navigating situations where people lower evidence standards. It helps us move beyond judgmental terms like “self-deception” and engage in meaningful discussions about motivations, biases, and the complexities of belief formation.
Provide 10 discussion questions on the relationship between pragmatic concerns and an epistemic assessment of the evidence.
- How do pragmatic concerns influence the epistemic assessment of evidence in scientific research, and what are the potential consequences of such influence?
- Can the inclusion of pragmatic considerations in the evaluation of evidence ever be justified within the framework of epistemic rationality? Under what circumstances?
- How do ethical considerations intersect with pragmatic and epistemic concerns in the assessment of evidence in medical trials?
- In what ways can pragmatic concerns about societal well-being influence the standards of evidence required for policy-making decisions? Is this influence justified?
- Discuss the role of pragmatic considerations in legal contexts, such as the determination of guilt or innocence, and how they relate to epistemic standards of evidence.
- How does the concept of risk management illustrate the tension between pragmatic concerns and epistemic assessments of evidence in environmental science?
- To what extent can pragmatic concerns justify a lower standard of evidence in decisions about public health emergencies, such as pandemic responses?
- Explore the relationship between pragmatic concerns and epistemic assessments of evidence in the context of technological innovations and public safety.
- How do financial and economic interests as pragmatic concerns influence the epistemic standards of evidence in corporate and academic research?
- In the context of education and pedagogy, how should pragmatic concerns about student outcomes and societal needs influence the epistemic assessment of educational theories and practices?
- In what situations is it acceptable to lower evidential standards due to pragmatic concerns? Are there ever justifiable scenarios, or should evidence always hold absolute authority?
- Does prioritizing pragmatic benefits over a rigorous epistemic assessment create a slippery slope towards accepting unsubstantiated claims with potentially harmful consequences?
- Can the difference between individual beliefs and societal norms influence how we weigh pragmatic concerns against evidence? For example, is it easier to lower standards for personal beliefs compared to claims with wider implications?
- How can we distinguish between genuine “motivated reasoning” due to personal desires and manipulative attempts to exploit people’s vulnerabilities by lowering their evidential standards for specific agendas?
- Does the potential for positive outcomes ever justify accepting claims with shaky evidence, even if the potential negative consequences are substantial? Where should we draw the line?
- How can we cultivate critical thinking skills and promote epistemic responsibility while acknowledging the influence of emotions, social pressures, and cognitive biases on our evaluation of evidence?
- Is there a role for emotional intelligence and understanding subjective experiences in supplementing a purely evidence-based assessment of claims, particularly when considering pragmatic concerns?
- Should different disciplines or fields of study have varying standards for evidence and the weight given to pragmatic considerations? For example, might ethics or social justice concerns demand a different approach than hard sciences?
- How can we ensure transparent and open discussions about pragmatic factors influencing our assessment of evidence, avoiding manipulation and promoting genuine understanding of diverse perspectives?
- Can technology and artificial intelligence play a role in mitigating the influence of biases and promoting more objective evaluation of evidence while considering pragmatic concerns, or do these tools also present new challenges?
These are just some starting points for discussion, and the specific questions you explore will depend on your individual interests and context. Remember, these are complex issues with no easy answers, and open-minded consideration of different perspectives is key to gaining a deeper understanding of the relationship between pragmatism and epistemology.





