- “Rational belief updating should ideally be based on the evidence available at the time. Since A’s degree of belief does not map to the changing evidence, it would suggest that A was not engaging in a rational process of belief updating.”
- “By continuously updating one’s priors with new evidence, one maintains a belief system that is responsive and adaptive to new information, which is essential for rational decision-making.”
- “The most rational approach would be to map one’s degree of belief, whether it be belief or disbelief, in precise accordance with the degree of the reliably accruing evidence at every step, just as the premise states.”
- “Individual A maintaining a high degree of belief throughout the entire period, despite the evidence fluctuating and only reaching the same high level at the end, could be interpreted as a case of confirmation bias rather than rational updating of belief according to the evidence.”
Table of Contents: (Click any link below to navigate to that section.)
- Describe the content of this graphic.
- Can individual A rationally say at the end of the period, ‘I knew it all along’ if they maintained their high degree of belief without mapping it to the relevant evidence as it fluctuated?
- Was individual C rational to wait until the evidence reached a particular threshold to flip from disbelief to belief?
- Provide three scenarios in which the cost of belief would be high, epistemically justifying disbelief.
- In your three scenarios, what would be the actual danger of mapping one’s belief to the degree of the evidence?
- But a credence is not a decision. Wouldn’t more accurate credences better inform the decisions?
- All things being equal, including the credibility of the evidence, is it not better to update your priors as new confirming and disconfirming evidence arrives?
- Then, based on your last response, it would have been more rational for C to have updated their credences to map to the changing balance of evidence, right?
- You said the following:
- However, there was no suggestion in my scenario that the evidence was anything other than reliable evidence. There is no “whim”, and therefore, C would be irrational not to constantly adjust their degree of belief as the degree of the non-whimsical evidence arrived, right?
- Provide a salient reformulation of the notion that rational belief is a degree of belief that maps to the degree of the relevant evidence.
- For a final confirmation, explain why individual B was or was not the most rational of the three individuals.
- If the degree of B’s belief did map to the degree of the evidence, B would be most rational of the three individuals, correct?
- Then it is correct to say rational belief is a degree of belief that maps to the degree of the relevant evidence. Right?
- Create an 8-item quiz on the discussion above.
- Provide 15 discussion questions relevant to the content above.
Describe the content of this graphic.
The graphic is a line chart with a title and two key questions related to the depicted data.
Title:
“Continuously mapping the degree of belief to the degree of the evidence”
Key questions:
- “Can A rationally say at the end, ‘I knew all along’?”
- “Assuming individuals A, B, and C have the same access to the changing balance of evidence, which individual is most rational over time?”
- “Is C right to wait until the balance of evidence is high to flip his disbelief to belief?”
The chart shows three lines, each representing the degree of belief over time for individuals A, B, and C, against a background line representing the changing balance of evidence.
- The Changing Balance of Evidence line is green.
- Individual A’s Degree of Belief is represented by a purple line.
- Individual B’s Degree of Belief is represented by a blue line.
- Individual C’s Degree of Belief is represented by an orange line.
The evidence line begins at a low degree of evidence and increases in steps, indicating a change in the available evidence over time. Correspondingly, the individuals’ belief lines change over time:
- Individual A’s belief starts high and remains consistently high, matching the final degree of evidence even before the evidence supports this belief.
- Individual B’s belief starts low, increases in steps somewhat in sync with the evidence, but lags behind each increase in evidence before reaching the final degree of belief.
- Individual C’s belief remains at 0% until the very end when the evidence is [close to] 100%, at which point their belief jumps [close] to 100%.
The chart suggests a comparison of the rationality of belief updating in response to changing evidence among the three individuals.
This graphic depicts the concept of continuously mapping the degree of belief to the degree of evidence over time. It shows the changing balance of evidence (green line) and the corresponding degree of belief for three individuals, A (purple line), B (blue line), and C (orange line).
The question posed is whether individual A can rationally say “I knew it all along” at the end, given their degree of belief aligning with the evidence from the start.
It also asks which individual, A, B, or C, is the most rational over time, assuming they all have access to the same changing balance of evidence.
Finally, it questions whether individual C is right to wait until the balance of evidence is high before flipping their disbelief to belief, as indicated by their degree of belief remaining low until the evidence accumulates significantly.
Can individual A rationally say at the end of the period, ‘I knew it all along’ if they maintained their high degree of belief without mapping it to the relevant evidence as it fluctuated?
Individual A maintaining a high degree of belief throughout the entire period, despite the evidence fluctuating and only reaching the same high level at the end, could be interpreted as a case of confirmation bias rather than rational updating of belief according to the evidence. If A’s belief did not adjust to the evidence as it changed but was only confirmed at the very end, A’s claim of “I knew it all along” may not be based on a rational evaluation of the evidence but rather on a steadfast belief that happened to coincide with the final evidence.
Rational belief updating should ideally be based on the evidence available at the time. Since A’s degree of belief does not map to the changing evidence, it would suggest that A was not engaging in a rational process of belief updating. Therefore, while A can claim they “knew it all along,” such a statement would not be supported by a rational evaluation of the evidence, but rather by chance or possibly by having prior knowledge that is not represented in the chart.
Based on the graphic, individual A’s degree of belief starts and remains high throughout, even as the balance of evidence fluctuates considerably over time. While A’s belief aligns with the high evidence at the end, their steadfastly high belief did not seem to continuously map or adjust to the changing evidence along the way.
So no, individual A cannot rationally claim “I knew it all along” because their unwavering high belief did not appropriately track and respond to the shifting evidence over time. A rational belief should flexibly update in proportion to the accumulating evidence, rather than stubbornly holding a fixed position regardless of counterevidence arising. A’s belief appears more dogmatic than rationally informed by the full body of evidence as it evolved.
Was individual C rational to wait until the evidence reached a particular threshold to flip from disbelief to belief?
Individual C’s approach to flipping from disbelief to belief only when the evidence reaches a certain high threshold can be seen as a form of threshold model in belief updating. This strategy may be rational in scenarios where the cost of a false belief is high, and therefore it’s prudent to wait for strong evidence before committing to a belief. It can also be rational in situations where the evidence is expected to reach a reliable threshold within a certain time frame, making it unnecessary to respond to each fluctuation in the evidence.
However, the rationality of this strategy also depends on the context of the decision being made and how the evidence is expected to change over time. If incremental changes in evidence are informative and the decision requires ongoing updating, then C’s approach might be less rational compared to a more Bayesian approach, where belief is continuously updated as evidence is accumulated.
In the chart, C maintains complete disbelief regardless of the evidence presented, only changing their belief to complete belief when the evidence is overwhelming. This approach ignores intermediate evidence, which may not be rational if intermediate evidence is reliable and should influence belief to some degree. The rationality of C’s approach also depends on what is at stake with the belief and how they use this belief to make decisions.
Overall, if C’s goal is to minimize the chance of having any false beliefs, then waiting for strong evidence might be rational. However, if the goal is to have a belief system that is sensitive to and reflective of the available evidence at all times, C’s approach might not be the most rational strategy.
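The difference between the three strategies can be made concrete with a small sketch. The chart’s actual values are not specified, so the evidence series and the scoring rule below are illustrative assumptions, not data from the graphic:

```python
# Hypothetical stepwise "balance of evidence" series; the chart's actual
# values are not given, so these numbers are illustrative only.
evidence = [0.1, 0.2, 0.4, 0.5, 0.7, 0.9]

def dogmatic(e):
    """A's strategy: a fixed high belief regardless of the evidence."""
    return [0.9] * len(e)

def threshold(e, t=0.85):
    """C's strategy: total disbelief until the evidence crosses a threshold."""
    return [1.0 if x >= t else 0.0 for x in e]

def tracking(e):
    """The ideal under discussion: belief mapped to the degree of evidence."""
    return list(e)

def miscalibration(beliefs, e):
    """Average absolute gap between belief and evidence across the period."""
    return sum(abs(b - x) for b, x in zip(beliefs, e)) / len(e)
```

By construction the tracking strategy scores a miscalibration of 0; under this particular series the dogmatic strategy averages a gap of roughly 0.43 and the threshold strategy roughly 0.33, quantifying the sense in which both depart from the evidence for most of the period.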
Provide three scenarios in which the cost of belief would be high, epistemically justifying disbelief.
- Medical Diagnosis:
When a medical professional is diagnosing a patient, the cost of incorrect belief can be extremely high. For instance, if a set of symptoms could be attributed to either a common illness or a rare but serious condition, acting on the belief that the patient has the serious condition without substantial evidence could lead to unnecessary, invasive procedures, causing harm to the patient. Therefore, it would be epistemically justified for the medical professional to maintain disbelief in the serious condition until all requisite tests are conducted and the evidence conclusively points to that diagnosis.
- Criminal Justice:
In a criminal justice context, particularly when determining guilt, the standard is ‘beyond a reasonable doubt’ for good reason. The cost of a wrongful conviction based on an incorrect belief about a person’s guilt is very high — leading to loss of freedom for the individual, and a failure of justice. As such, a juror should maintain disbelief in the defendant’s guilt until the evidence presented removes all reasonable doubt.
- Scientific Research:
In scientific research, particularly in fields dealing with public health, such as epidemiology or environmental science, the cost of belief based on insufficient evidence can have wide-ranging consequences. For example, prematurely believing in a causal link between a specific environmental factor and a health outcome without strong evidence could lead to costly policy changes, public panic, and possibly the diversion of resources from more pertinent issues. Scientists should therefore remain skeptical and maintain disbelief in the causal link until multiple high-quality studies converge on the same conclusion.
In your three scenarios, what would be the actual danger of mapping one’s belief to the degree of the evidence?
In the three scenarios I provided, mapping one’s belief too closely to the degree of evidence without considering the quality, reliability, or completeness of that evidence can lead to significant dangers:
- Medical Diagnosis:
If a healthcare professional were to map their belief too closely to preliminary or non-conclusive evidence, they might initiate a treatment plan based on an incorrect diagnosis. This could lead to administering unnecessary medication, performing unwarranted surgeries, or overlooking the actual condition that requires treatment, thereby causing harm to the patient and potentially resulting in medical complications or death.
- Criminal Justice:
For jurors and law enforcement officials, mapping belief too closely to evidence without a critical assessment could result in confirmation bias, where they interpret all subsequent evidence in light of their initial belief. This could lead to ignoring or undervaluing exculpatory evidence, resulting in a wrongful conviction or the real perpetrator remaining at large. The dangers here are miscarriages of justice, the destruction of an innocent person’s life, and a loss of faith in the judicial system.
- Scientific Research:
In scientific research, closely aligning belief to the degree of evidence without rigorous scrutiny can lead to the dissemination of false or premature conclusions. The dangers include undermining public trust in scientific findings, misguiding public policy, wasting resources on ineffective or harmful interventions, and potentially causing public health crises if, for example, the research pertains to incorrect medical guidance or environmental hazards.
But a credence is not a decision. Wouldn’t more accurate credences better inform the decisions?
Indeed, a credence is not a decision; it’s a degree of belief or subjective probability one assigns to a proposition being true. More accurate credences, when properly informed by evidence, do contribute to better decision-making. The key is in the word “accurate”—credences should be based on a careful and critical evaluation of the available evidence.
The danger arises when the process of forming these credences is not sufficiently critical or discerning, particularly in situations where the evidence is complex, incomplete, or can be interpreted in various ways. If the evidence is not thoroughly vetted or if its reliability is overestimated, the resulting credences may be misguided, and thus the decisions based on these credences could be flawed.
For instance, in the scenarios I described:
- Medical Diagnosis:
More accurate credences would indeed help a doctor make better decisions regarding treatment. However, if a doctor updates their credences with every new piece of evidence without considering the quality of that evidence, they might make a diagnosis or treatment decision based on misleading or incomplete information.
- Criminal Justice:
Jurors should update their belief in the guilt or innocence of a defendant as new evidence is presented, but they also need to critically assess the quality and relevance of each piece of evidence. Overconfidence in weak evidence might lead to a wrongful conviction, while undue skepticism might result in a guilty person being acquitted.
- Scientific Research:
For scientists, having accurate credences based on solid evidence is crucial for advancing knowledge. However, if researchers adjust their beliefs too readily based on preliminary or non-peer-reviewed evidence, they might pursue fruitless lines of inquiry, overlook better hypotheses, or publicly endorse findings that don’t hold up under scrutiny.
In summary, while more accurate credences informed by a critical evaluation of the evidence can indeed lead to better decisions, there’s a risk when those credences are formed on an uncritical acceptance of evidence. Therefore, it is essential to weigh the evidence, considering its reliability and the potential consequences of being wrong before updating one’s beliefs and making decisions based on those beliefs.
All things being equal, including the credibility of the evidence, is it not better to update your priors as new confirming and disconfirming evidence arrives?
Absolutely, under the assumption that all things are equal, including the credibility of the evidence, it is better to update your priors as new confirming and disconfirming evidence arrives. This principle is grounded in Bayesian reasoning, which suggests that individuals should adjust their beliefs proportionally to the weight of the new evidence, while also taking into account their prior beliefs.
By continuously updating one’s priors with new evidence, one maintains a belief system that is responsive and adaptive to new information, which is essential for rational decision-making. This process allows for a more nuanced and informed understanding of the situation at hand, and typically leads to better outcomes because decisions can be made based on the most current and comprehensive assessment of the evidence available.
This methodology promotes a dynamic approach to knowledge and decision-making, one that is self-correcting and always seeking alignment with the best representation of reality as evidenced by the latest, most credible data.
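The Bayesian updating described above can be sketched in a few lines. The likelihood values (0.8 for the evidence given the hypothesis, 0.2 given its negation) are illustrative assumptions, not anything specified in the discussion:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from a prior and two likelihoods, via Bayes' theorem."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Start agnostic; each confirming observation raises the credence.
credence = 0.5
for _ in range(3):
    credence = bayes_update(credence, 0.8, 0.2)  # assumed likelihoods
# credence ≈ 0.985 after three confirming observations
```

The same function handles disconfirming evidence: swapping the likelihoods (evidence more probable if the hypothesis is false) lowers the credence, so the belief moves in proportion to each arriving piece of evidence rather than jumping only at a threshold.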
Then, based on your last response, it would have been more rational for C to have updated their credences to map to the changing balance of evidence, right?
Yes, based on the principle of rational belief updating, it would have been more rational for individual C to adjust their credences in line with the changing balance of credible evidence. Even if C had a high threshold for belief due to the potential costs of being wrong, completely disregarding incremental evidence until it became overwhelming might not be the most rational approach.
Updating beliefs incrementally allows for a more refined and responsive understanding, which can better inform decision-making at each point in time. It acknowledges that while the evidence might not be strong enough to warrant full belief, it could be sufficient to justify a partial adjustment of one’s credence. This stance avoids the pitfalls of both gullibility and stubborn skepticism, instead embracing a flexible yet cautious approach to the evidence.
Based on the graph, individual C’s approach of waiting until the balance of evidence reached a high threshold before flipping from disbelief to belief does seem rational and justified.
Unlike individuals A and B whose degrees of belief fluctuated considerably even when the evidence was low or mixed, C maintained a stance of disbelief until the cumulative evidence became overwhelmingly high. This suggests C was not swaying back and forth on a whim, but rationally updating their degree of belief only when the evidence reached a convincing level.
Holding out for substantial evidence before accepting a claim, rather than believing on sparse or contradictory evidence, is generally considered a rational approach from an epistemological standpoint. It avoids making hasty conclusions and aligns belief proportionally with the actual evidence available.
So while C was “late” to belief compared to A and B, their strategy of waiting for a high evidential threshold was a prudent, rationally justifiable stance given the initially weak and mixed evidence presented. C’s approach best exemplifies the ideal of continuously mapping belief to the substantive evidence at hand.
You said the following:
This suggests C was not swaying back and forth on a whim…
However, there was no suggestion in my scenario that the evidence was anything other than reliable evidence. There is no “whim”, and therefore, C would be irrational not to constantly adjust their degree of belief as the degree of the non-whimsical evidence arrived, right?
You make a fair point. If we assume the “changing balance of evidence” shown is based on fully reliable, non-whimsical evidence, then C’s approach of waiting for an extremely high evidential threshold before updating their belief would indeed be irrational.
In that scenario, the rational approach would be to continuously update one’s degree of belief in lockstep with the accumulating evidence, no matter how incrementally it changes over time. Holding an unwavering stance of disbelief despite mounting reliable evidence would violate the principle of proportioning belief to the evidence.
My previous characterization of C’s approach as “prudent” was based on an assumption that the initial evidence was potentially unreliable or inconclusive, thus making it rational to withhold belief until a high evidential threshold was met. However, if the evidence is stipulated to be fully reliable throughout, then you are correct – C’s refusal to update their belief continuously in response to the non-whimsical evidence would be an irrational stance.
The most rational approach would be to map one’s degree of belief, whether it be belief or disbelief, in precise accordance with the degree of the reliably accruing evidence at every step, just as the premise states. Thank you for the clarifying perspective.
Provide a salient reformulation of the notion that rational belief is a degree of belief that maps to the degree of the relevant evidence.
Here is a salient reformulation of the notion that rational belief is a degree of belief that maps to the degree of the relevant evidence:
Rational belief calibrates conviction proportionally to justification.
In other words, the strength of one’s belief in a proposition should align with the weight of the evidence supporting or contradicting that proposition. Belief that outstrips the evidence is irrational – whether that takes the form of excessive skepticism in the face of ample proof or unfounded certainty despite scant or ambiguous evidence.
Rationality demands that as new evidence arises, one updates their degree of confidence, tempering belief if disconfirming evidence emerges or reinforcing belief when additional confirmation arrives. The ideal is a credence that maps precisely to the cumulative evidential balance, avoiding arbitrary fixity.
Ultimately, rational belief reflects reasoned judgment seamlessly tuned to the substantiating grounds, rather than manifesting rigid conviction disconnected from reality’s actual epistemic signals. It is belief calibrated to the philosophical motto: proportion belief to the evidence.
For a final confirmation, explain why individual B was or was not the most rational of the three individuals.
Based on the graphic, individual B does not appear to be the most rational of the three individuals in mapping their degree of belief to the changing balance of evidence over time.
While B’s belief curve tracks the evidence more closely than A’s unwavering high belief, it still exhibits some irrational overshooting and lagging behind the evidence at various points.
Early on, B’s belief rises higher than warranted by the relatively low evidence level. Then B’s belief plunges lower than the evidence supports as it begins increasing again. Towards the end, B’s belief shoots up drastically before the evidence has caught up.
In contrast, individual C, despite waiting longer before elevating their belief, does seem to map their shift in belief more proportionally to the incremental accumulation of evidence over time. C’s belief remains low until the evidence becomes compelling, then rises steadily in tight alignment.
The hallmark of rationality is continuously adjusting one’s confidence to stay coherently calibrated with the evidential balance as it evolves, without overcorrecting or lagging. By that standard, C exhibits the most rationally disciplined belief revision of the three individuals across the full timeline.
B’s belief pattern, while better than A’s stagnant overconfidence, still displays some overshooting irrationality in both directions relative to the actual evidence accrued at various points. So B would not be considered the most rational of the three in this evidential mapping scenario.
If the degree of B’s belief did map to the degree of the evidence, B would be most rational of the three individuals, correct?
Yes, you’re correct. If individual B’s degree of belief mapped precisely to the degree of the changing evidence at every point in time, then B would be considered the most rational of the three individuals in this scenario.
The key premise is that a rational belief should continuously adjust its degree to match the cumulative evidential weight as new information emerges. If B’s belief curve perfectly overlaid and synchronized with the green “changing balance of evidence” line, updating in lockstep as the evidence waxed and waned, then B would exemplify the ideal of rational belief revision.
My previous assessment of B as not maximally rational was based on observing some overcompensations and lags in B’s belief relative to the evidence timeline. However, if we stipulate that B’s belief did in fact map exactly to the evidence degree, without any disconnects, then by definition B would be enacting the most rational belief updating process of the three individuals depicted.
You have rightly pointed out that a belief curve perfectly mirroring the evidence curve would be the most rational stance. Thank you for the clarifying premise – I now understand that if B’s belief was calibrated to seamlessly match the evidence, B would be the exemplar of rationality in this case.
Then it is correct to say rational belief is a degree of belief that maps to the degree of the relevant evidence. Right?
Yes, that is correct. The core principle being illustrated is that rational belief is a degree of belief that precisely maps to and corresponds with the degree of the relevant supporting evidence.
If an individual’s level of belief (or disbelief) towards a proposition aligns exactly with the weight of the evidence for or against that proposition, continuously adjusting as new evidence accumulates, then that individual is exhibiting rational, well-calibrated belief revision.
Deviations from mapping belief tightly to the evidential strength, whether that means believing too strongly despite weak evidence, or being overly skeptical in the face of robust evidence, would be considered irrational positions divorced from justification.
So in summary, you are absolutely right – the key takeaway is that rational belief is a degree of belief that stays coherently mapped and proportioned to the degree of the relevant evidence as it changes over time. Maintaining that tight mapping is the hallmark of reasonable, epistemically sound belief updating.
Create an 8-item quiz on the discussion above.
- What does the term ‘credence’ refer to in the context of belief updating?
- Why is it potentially dangerous to align one’s belief too closely with every piece of evidence?
- In the context of a medical diagnosis, what is the danger of acting on a belief based on preliminary evidence?
- In a criminal justice setting, what is the standard of proof required to convict, and why does this necessitate careful belief updating?
- How can overconfidence in the interpretation of scientific evidence impact public policy and public health?
- What is confirmation bias and how might it affect decision-making?
- Why is a critical evaluation of evidence important before updating one’s credences?
- According to Bayesian reasoning, how should one adjust their beliefs in response to new evidence?
Answers:
- A degree of belief or subjective probability one assigns to a proposition being true.
- It may lead to hasty decisions based on incomplete, misleading, or low-quality evidence.
- It could result in unnecessary treatments or surgeries, and the actual condition might be overlooked.
- ‘Beyond a reasonable doubt’; because it aims to prevent wrongful convictions by ensuring that the evidence is sufficiently convincing.
- It can lead to misguided decisions that waste resources, cause panic, or result in harmful interventions.
- It is the tendency to search for, interpret, and remember information in a way that confirms one’s preconceptions, leading to systematic errors in judgment.
- To ensure that belief updates are based on solid and reliable information, avoiding misinformed decisions.
- Beliefs should be adjusted proportionally to the weight and credibility of the new evidence, taking prior beliefs into account.
- Which individual’s belief most closely tracked the changing balance of evidence?
- Is it rational to maintain an unwavering high degree of belief regardless of fluctuations in evidence?
- If evidence is reliable and non-whimsical, is it rational to wait for an extremely high evidential threshold before updating one’s belief?
- What is the key principle regarding rational belief emphasized in the discussion?
- If an individual’s belief curve perfectly mirrored the evidence curve, would that be considered rational or irrational?
- Deviations from mapping belief tightly to evidential strength are considered what type of positions?
- Believing too strongly despite weak evidence is an example of a rational or irrational position?
- Maintaining the tight mapping of belief to evidence over time is the hallmark of what kind of belief updating?
Answers:
- B
- No
- No
- Rational belief is a degree of belief that maps to the degree of the relevant evidence.
- Rational
- Irrational positions divorced from justification
- Irrational
- Reasonable, epistemically sound belief updating
Provide 15 discussion questions relevant to the content above.
- How does the concept of credence differ from simple belief or disbelief?
- In what ways can mapping beliefs too closely to fluctuating evidence lead to poor decision-making?
- Discuss the potential risks of acting on a high degree of belief in a medical diagnosis without substantial evidence.
- Why is the standard of ‘beyond a reasonable doubt’ crucial in criminal justice, and how does it relate to belief updating?
- How might premature or misinformed scientific conclusions impact public trust and policy?
- Explain the role of confirmation bias in belief formation and decision-making. Can it ever be beneficial?
- What are the implications of not critically evaluating evidence before updating one’s beliefs?
- According to Bayesian reasoning, why is it important to adjust beliefs in response to new evidence? Give an example.
- Discuss a scenario where maintaining skepticism despite accumulating evidence might be rational.
- How do the costs associated with false beliefs vary across different domains, such as medicine, law, and science?
- What strategies can be employed to avoid the pitfalls of overadjusting or underadjusting one’s beliefs in response to new evidence?
- How does the concept of a “threshold for belief” play into the rational updating of beliefs? Is there a universal threshold, or is it context-dependent?
- In what ways might individual C’s approach to belief updating be seen as both rational and irrational?
- Discuss how the principle of updating one’s priors could be applied in a non-academic, everyday context.
- What are the ethical considerations in managing one’s degree of belief, particularly in professional settings like healthcare or justice?
Here are 15 discussion questions relevant to the content on rational belief and mapping belief to evidence:
- What are some real-world examples where people’s beliefs seem to deviate from the available evidence?
- Is it ever rational to maintain a degree of belief despite contradictory evidence, and if so, under what circumstances?
- How can we improve our ability to objectively evaluate the strength of evidence and calibrate our beliefs accordingly?
- What are the potential consequences of holding irrational beliefs that are not proportioned to evidence?
- How should we handle situations where the evidence is incomplete, ambiguous or conflicting?
- Are there certain types of beliefs (e.g. religious, political) where people tend to be more resistant to updating based on new evidence?
- Can emotions or personal biases ever lead to rational belief updating, or do they inherently distort our ability to map belief to evidence?
- Is it possible for two rational individuals to evaluate the same body of evidence but arrive at different belief conclusions? Why or why not?
- How does the field of epistemology approach the notion of mapping belief strictly to evidence versus allowing other factors?
- Are there examples in science or academia where beliefs persisted despite mounting contradictory evidence?
- Should standards for mapping belief to evidence be different for individuals versus institutions or government policy?
- How can education better teach the skills of evidence evaluation and belief calibration?
- Are there ways to incentivize or reinforce the behavior of continuously updating beliefs based on new information?
- What role do arrogance, stubbornness or the desire to be consistent play in distorting rational belief formation?
- Is aiming for perfectly mapping belief to evidence an ideal or is some degree of irrational belief unavoidable for humans?