We have previously reported on a study that found sitting less is independently "associated with excellent health and excellent quality of life." Apparently sitting less is not just good for the body; it is also good for the mind, according to a study published in Social Psychological and Personality Science. In the study, researchers from Washington University’s Olin Business School found that standing during business meetings led to greater collaboration and more openness to the ideas of others among participants. The report notes, “in addition to the physiological benefits of non-sedentary work designs, getting people out of their chairs at work may increase their capacity for collaborative knowledge work.” According to the authors, the reasons for the findings are twofold: increased arousal of the sympathetic nervous system (which prepares a person to act on her environment) and reduced territoriality.
The authors acknowledge the study’s limitations: the findings have not been replicated, and the meeting length in the study was capped at 30 minutes (which represents the average meeting length for 75% of organizations). Nevertheless, one of the lead authors, Andrew Knight, PhD, assistant professor of organizational behavior at Olin Business School, recommends that “organizations should design office spaces that facilitate nonsedentary work.” In addition to fighting the negative health effects of being sedentary, “Removing chairs and adding whiteboards are low-cost options that encourage brainstorming and collaboration.” In a knowledge economy, any move to encourage brainstorming and collaboration should provide tangible benefits to an organization’s bottom line. That a minor tweak to the working environment can also decrease the amount of time workers spend sitting is a double bonus.
The trend in medical research suggests that modifying work spaces to limit the amount of time we spend sitting can have significant effects on the physical health of workers. It only makes sense that the same effects would translate to workers’ cognitive health and abilities. From a claims perspective, these findings represent an opportunity to work with employers to encourage workplace designs that foster both physical health and mental acuity (and hopefully fewer claims).
Medical News Today reports on a study published in the Journal of Bone and Joint Surgery (subscription required) which found that patients whose opioid use was increasing prior to spine surgery had worse outcomes than those whose opioid use was not. As Medical News Today notes, studies have shown that opioid use prior to spine surgery frequently leads to worse outcomes, but "the studies did not account for differences in opioid consumption among patients." In this new study, the authors concluded that "increased preoperative opioid use was a significant predictor of worse health outcomes at 3 and 12 months following surgical treatment..." While this news is not particularly surprising to those in the medico-legal world, it does offer an opportunity to ask IME physicians a targeted question about the appropriateness of spine surgery in claimants with a demonstrated history of opioid dose escalation. Doing so helps ensure that the physician's opinion explicitly relies on evidence-based medicine and is therefore more credible.
Returning to our discussion of strategies to eliminate cognitive biases and improve strategic decision making, we arrive at Brewer’s third strategy: discriminate between observation and inference, between established fact and subsequent conjecture. The last post in this series touched on this issue, but it is worth revisiting in greater detail. One of the things that plagues strategic decision making is our frequent tendency not to discriminate between observation and inference and between established fact and subsequent conjecture. This tendency is normal and virtually everyone exhibits it to some degree. However, when making strategic decisions, we want our judgments to be based on observation and fact to the maximum extent possible. When making inferences, we want observation and established fact to support our inferences. We want our inferences to be likely, not merely conjecture or possibility. But how do we do that?
The first step is to train oneself to identify when an inference or conjecture is being made. One way to do this (among many) is to ask whether the information is the product of a sense impression. Do we have the information because we saw it, heard it, felt it, touched it, smelled it? To return to a first report of injury, the existence of a first report with writing that states the employee reported the injury on Y date is an observation because we saw the report. When we see the report and hold the report and examine the report, it becomes an established fact. Whether the employee actually reported the injury on Y date is not a fact. Instead, if we posit that the employee actually reported the injury on Y date, we are making an inference based on a variety of facts and assumptions (such as the employer is reliable in reporting injuries, has never had an employee dispute the date the injury was reported, etc.). It is important to recognize that the fact of the first report of injury is different from the state of affairs it purports to represent, which is an inference, however strong.
This distinction even arises in diagnostic imaging studies, which we typically think of as “objective” evidence of injury or the lack thereof, conflating “objective” with “fact.” The image is a fact; what it signifies is an inference that an interpreting physician makes. For example, a person complains to a treating orthopedist of a knee injury that suggests a meniscus tear. The treating orthopedist orders an MRI which does not appear to demonstrate a meniscus tear. When we evaluate the medical records in the claim, we frequently conclude that if an MRI (or more properly the radiologist’s report interpreting the images) does not show pathology then none exists. This is an assumption. The only fact is the images the MRI scan generates. The simple fact that a radiologist concluded that the images do not show the presence of a meniscus tear does not mean that a meniscus tear is not present. We know for a fact that MRIs do not demonstrate every meniscus tear. However, we assume that an MRI is accurate because we know or have been told that MRIs accurately demonstrate the presence of most meniscus tears. Again, this is an assumption, not a fact. In our example, the treating orthopedist may perform a diagnostic arthroscopy and find that a meniscus tear is present. A physician in an IME report recently summed up the problem of conflating what an MRI scan actually demonstrates (observation) with the inference of pathology or lack thereof:
I would stress to the reader that diagnostically the arthroscopic evaluation of the knee is far more likely to be the gold standard of accuracy versus that of an MRI scan… I would note that there are, of course, instances wherein it can indeed be difficult to differentiate a recurrent tear from a picture of a meniscus that has been previously operated on. Furthermore, this case is a stellar example of how MRI scans can in fact be inaccurate despite expert interpretation.
In our MRI example, another assumption is being made: if an MRI reveals pathology, the pathology must be causing dysfunction. We know this is a questionable assumption based on numerous studies showing that large portions of the population have conditions ranging from rotator cuff tears to “herniated” discs that are present on MRI scans but asymptomatic. Whether the presence of pathology causes dysfunction is a separate question that the physician answers based on many factors, including physical examination, history/mechanism of injury, medical records, and diagnostic imaging studies. The strength of the inference that a particular pathology is causing dysfunction is determined by reviewing all factors. The imaging study alone may be enough to make a strong inference, but often more support is needed before an inference can or should be made.
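The gap between what a scan shows and what is actually present can be made concrete with Bayes' rule. The sketch below is illustrative only: the pre-test probability, sensitivity, and specificity are assumed round numbers, not figures from any study of MRI accuracy.

```python
# Illustrative only: pretest, sensitivity, and specificity below are
# assumed round numbers, not published figures for knee MRI.

def prob_tear_given_negative_mri(pretest, sensitivity, specificity):
    """Posterior probability a tear is present despite a negative MRI (Bayes' rule)."""
    # P(negative | tear) = 1 - sensitivity; P(negative | no tear) = specificity
    false_negative = pretest * (1 - sensitivity)
    true_negative = (1 - pretest) * specificity
    return false_negative / (false_negative + true_negative)

# Suppose exam and history suggest a 50% pre-test probability of a tear,
# and the MRI detects 90% of tears with 90% specificity.
posterior = prob_tear_given_negative_mri(pretest=0.5, sensitivity=0.9, specificity=0.9)
print(round(posterior, 2))  # prints 0.1
```

Under these assumed numbers, a negative scan still leaves a 10% chance that a tear is present: the scan strengthens the inference of no pathology, but it does not make it a fact.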
Another example that arises frequently in both the worker’s compensation and liability settings is the conflicting report of injury. For example, let’s assume that an employee reported to the employer that she did not remember a specific event but had been lifting heavy pipes all day and noticed her shoulder was getting sore. The employee seeks treatment with her primary care physician, who refers her to an orthopedic specialist several weeks after the date of injury because the shoulder condition did not improve. In the initial notes from the orthopedist, the employee is reported to have stated that she was lifting a heavy pipe and noted the immediate onset of shoulder pain. Obviously there is a discrepancy between the records, but what does the discrepancy mean? Does the discrepancy mean that the employee is untruthful or that the condition is less likely to have occurred at work?
The established facts in this scenario are that the first report of injury states the condition arose gradually during the course of a work day and did not follow a specific traumatic event, while the orthopedist’s notes state that the condition arose acutely, following a specific lifting event. These are the only facts we know. Any statement about what the facts mean is an inference and is not a fact. Before drawing any conclusions, I would want to obtain more information. For example, did the doctor’s office press the employee to identify a specific event? It would not be unheard of for a member of a physician’s staff to ask the injured worker something along the lines of, “Well, if you had to guess, what incident would have caused your shoulder pain?” I would also want to know how the injury was reported. Perhaps the employee said something along the lines of, “I lifted a pipe and felt something in my shoulder. I kept lifting heavy pipes all day and it just got worse and worse.” Either piece of information would make the discrepancy in reporting appear less significant. On the other hand, if there is no indication that the first report is inaccurate or that the orthopedist’s office asked the employee to identify a specific traumatic event, then the inferences that A) the employee appears to be unreliable or dishonest and B) the condition may not have arisen out of the employment are stronger. The point is that the discrepancy in the records only reflects a discrepancy in the records. This is our observation and the only established fact. To the extent that we infer from the discrepancy that the employee is dishonest or that the work-relatedness is questionable, we are making an inference that is not fact. When making such an inference, we must be mindful that other information is necessary before we can decide whether the inference is strong or weak.
When evaluating claims, it is critical that we distinguish between observation and inference, between established fact and conjecture. Failing to do so will cause us to estimate the strength and weaknesses of our arguments inaccurately. If we do not accurately estimate our arguments, we cannot effectively administer our claims. One way to help ensure that we are accurate in our assessments is to discriminate between observation and inference, to ensure that our conjecture is supported by established fact and to recognize when we lack support for our inferences and conjectures.
Interesting new research from the University of Manchester finds that current smoking increases the risk of hearing loss by 15.1%. Researchers were not sure whether "toxins in tobacco smoke affect hearing directly, or whether smoking-related cardiovascular disease causes microvascular changes that impact on hearing, or both." Regardless, if the study's results are replicated or otherwise confirmed, the findings could provide employers and insurance carriers with a potential new defense in occupational hearing loss cases involving claimants who smoke or were exposed to passive smoking.
What do we do when we have a conversation? Turns out, we do a lot of anticipating and predicting about what the other person is going to say. This predictive process makes our normal conversations better, or at least more readily intelligible. In an interesting study published in The Journal of Neuroscience, researchers found that “language processing is comprised of an anticipatory stage and a perceptual stage: both speakers and listeners take advantage of predictability by ‘preprocessing’ predictable representations during the anticipatory stage, which subsequently affects how those representations are processed during perception.” This would seem to have implications for the medico-legal world because of the reliance on oral statements, whether recorded or not, formal or informal, in claims administration. Specifically, the quality of the answers one gets in a statement can potentially be manipulated when either party understands the predictive process involved in conversation. For example, when speakers introduce unexpected words or phrases, listeners become more prone to error: “When subsequently confronted with unpredicted words, listeners/readers typically show a prediction error response.” A clever interviewer could use this information to keep the interviewee off guard, which may help elicit information the interviewee had been consciously trying not to reveal. Conversely, a clever interviewee will be conscious of her tendency to answer based on both prediction and cognition and will take steps to limit the effect prediction has on her answers.
One simple technique interviewees can use is to (silently) repeat every question that is asked of them back to themselves before answering. This focuses the interviewee on comprehension and cognition rather than prediction, which will help the interviewee limit her response to what was in fact asked and not on what her predictive mind assumed was asked. This also may be effective because the prediction happens so quickly and over such a short period of time. According to the authors of the study, “[A]nticipation may precede perception by as little as 200 milliseconds…” This is an incredibly short time interval and any device that an interviewee can employ to slow cognition down will allow her to limit the tendency to anticipate where the speaker is going with a question and instead to hear the actual question that is asked.
One of the things that our brains do brilliantly well is to construct order out of the world around us. This predictive aspect of speech is part of that. We are hard-wired to recognize patterns and make connections; hence, we gravitate to coherent narrative versions of events. It is difficult for our brains to process events without linking them together causally. Our conversations reflect this tendency as well. In fact, when people do not conform to the normal way conversation works in this regard it is noticeable, and such speakers often seem odd, idiosyncratic, or eccentric.
The problem with the predictive process of speech and our tendency to turn our conversations into coherent narratives is that it inhibits our ability to ask the right questions and give the best answers. When taking a statement, the interviewer should keep in mind that the process is not a conversation in the ordinary sense of the word. That is why, for example, it is imperative to wait until the interviewee completes her response to each question before moving on to the next one. While normal conversation works better when we allow the predictive aspect of conversation to fulfill its function, in a statement the predictive aspect can lead the interviewer away from valuable areas of inquiry simply by virtue of dovetailing the interviewer’s thoughts about what to ask next with the interviewee’s response. Instead, interviewers should be mindful of the process and ask questions that occasionally interrupt the narrative flow to keep their attention focused on what the interviewee is actually saying. One such strategy could involve periodically interjecting questions about an unrelated topic. For example, during questions about the facts of an accident, the interviewer might want to ask a question about current prescriptions that the interviewee takes. The question will feel strange when asked, but it is surprising how quickly this jars the interviewer back to the kind of focused attention that is necessary to obtain an effective statement. And that, after all, is the goal.
Evidence continues to mount that arthroscopy to treat osteoarthritis of the knee is no better than sham surgery or conservative care. The German Institute for Quality and Efficiency in Health Care (IQWiG) published a final report (executive summary available here) on May 12, 2014 that consisted of a meta-analysis of various studies comparing arthroscopy to various modalities, including sham surgery and strengthening exercises. The report’s authors concluded that:
The benefit of therapeutic arthroscopy (with lavage and possible additional debridement) for the treatment of gonarthrosis is not proven. There was no hint, indication or proof of a benefit of therapeutic arthroscopy for any patient-relevant outcome in comparison with no active comparator intervention. There was also no hint, indication or proof of a benefit of therapeutic arthroscopy for any outcome in the comparisons with lavage, oral administration of NSAIDs, intraarticular hyaluronic acid injection or strengthening exercises under the supervision of a physical therapist.
While this information is not new, it bolsters the conclusion that arthroscopy to treat osteoarthritis of the knee is no more effective than other modalities, including conservative care and doing nothing. The standard of care does appear to be shifting toward the abandonment of arthroscopy to treat osteoarthritis of the knee; however, the procedure is still performed occasionally. In managing claims, it is important to ensure that approval for any arthroscopic knee procedure be based on evidence-based medicine. Insurance carriers should not be expected to bear the cost of procedures the benefit of which “is not proven.” In addition, injured plaintiffs and employees should not be expected to bear the risks of surgical complications and extended recovery periods for procedures the benefit of which “is not proven.”
One of the problems we face in claims administration is that many of our decisions are made in the context of uncertainty. For example, we may know that the plaintiff is credible, but that the mechanism of injury is questionable and the defense has a strong IME report. The claims and legal professionals must determine (among other things) the plaintiff’s likelihood of succeeding on the question of whether an injury occurred based upon the available information. The problem is that this judgment is a guess (though hopefully an educated one) based on experience and the available information. There is no definite or fixed answer. In order to make such decisions effectively, we need to know what is fact, what is inference, what is loose conjecture, and what information is likely to be discoverable or otherwise available that will make the guess more educated. Once we have this information, we can determine what aspects of the claim are uncertain or ambiguous and develop a strategy to deal with them.
This brings us back to Brewer’s strategies for combating cognitive biases and making effective decisions. His second strategy asks us to:
“Be clearly and explicitly aware of gaps in available information.”
We normally live with and tolerate an enormous amount of ambiguity and uncertainty in our lives without paying much attention to it. In fact, imperfect knowledge is the general and pervasive condition of human life. However, when we assess claims, we become acutely aware of ambiguity and uncertainty and recoil from it. Why? We recoil because ambiguity and uncertainty foil our attempts to predict the outcome of claims and hence drive us crazy. Nonetheless, it is critical that we be able to make effective claims decisions against a background of ambiguity and uncertainty. And the key to making effective decisions in the context of ambiguity and uncertainty is to specifically and accurately identify what is known (and hence certain) and what is not known (and hence uncertain). Doing so will help us accurately evaluate the strength of our current position, reveal what we can do to obtain more information, and allow us to make rational decisions without ignoring or being paralyzed by ambiguity and uncertainty.
Once we have asked the “how do we know…” questions, we are in a position to organize what we know. What we know in any claim falls into several categories.
To accurately judge the claim, it is important to understand the gaps in available information and to understand when our conclusions are not supported by factual knowledge. Take the dictum that a delay in reporting an injury increases the likelihood that the injury is fraudulent. To believe this, one must make assumptions that may or may not be supported by actual evidence. It is important when evaluating a new claim that we understand what these assumptions are before we make a judgment regarding the validity of the claim.
First, accepting the dictum as true assumes that there is statistical support for it. If there is not, the dictum is the equivalent of an old wives’ tale. This is not to say that it may not be true, but without statistical support it is equally plausible that the dictum is false. Thus, the dictum should not be taken to demonstrate the strength or weakness of a claim without additional supporting evidence such as the softball tournament example above. Despite the lack of statistical support for the dictum that delayed reporting increases the likelihood that a claim is fraudulent, numerous insurance professionals, companies, and even state agencies continue to hold the dictum out as if it had some sort of predictive significance.
Second, accepting the dictum can actually create a selection bias in which late-reported claims receive a higher level of scrutiny and more intense investigation than claims with contemporaneous reporting. If one believes based on experience that late-reported claims are more frequently bogus than timely reported claims, one must actually investigate her claim handling history and measure the level of scrutiny given to the separate claims to determine if there is any truth to the dictum. To determine whether there is a statistically significant effect in a retrospective investigation, you would, at a minimum, have to compare late-reported claims only with timely reported claims that received the same or a similar level of scrutiny and investigation, to at least attempt to eliminate selection bias. Without making this investigation, the dictum that late-reported claims are more likely to be fraudulent has no basis in fact and is likely to skew results in a way that confirms the dictum.
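The selection-bias mechanism can be shown with simple arithmetic. The numbers below are entirely hypothetical: both groups are assumed to have the identical true fraud rate, and the only difference is how often each group is investigated.

```python
# Hypothetical illustration: both groups share the SAME true fraud rate;
# the only difference is how often each group is investigated.

TRUE_FRAUD_RATE = 0.05  # assumed identical for both groups
INVESTIGATION_RATE = {"timely": 0.20, "late": 0.90}  # assumed scrutiny levels

def observed_fraud_rate(group):
    """Fraud is only *detected* when a claim is investigated, so the
    observed rate is the true rate times the investigation rate."""
    return TRUE_FRAUD_RATE * INVESTIGATION_RATE[group]

for group in ("timely", "late"):
    print(group, round(observed_fraud_rate(group), 3))
# prints:
# timely 0.01
# late 0.045
```

Under these assumptions, detected fraud appears 4.5 times higher among late-reported claims even though the underlying fraud rate is identical, which is exactly how uneven scrutiny manufactures "evidence" that confirms the dictum.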
When managing claims, it is important to consider why a decision is being made and whether the decision is based on factual knowledge, an inference, or an assumption that has been “taken on faith.” Any claim will have ambiguity and uncertainty. This is normal. When the ambiguity and uncertainty are identified, they can be factored into the assessment of the claim and will help generate the strategy for developing the claim (which will be the topic of the next post in this series). When deciding to give a claim heightened scrutiny or making any other tactical decision, the decision will be more effective and will likely yield better results if it is based on factual knowledge than if it is based on an unsupported assumption. The only way to ensure that the decision is based on factual knowledge is to organize what you know. Once the knowledge in a claim has been organized, it is easy to identify if something is being taken on faith rather than fact.
Medical News Today reported on a piece in Neurology (subscription required) in which researchers conducted memory studies on retired French workers who had been exposed to solvents during their working years. The specific solvents included benzene, chlorinated solvents, and petroleum solvents. The retirees had been out of work for an average of 10 years and the average age of study participants was 66. The results demonstrated that only 18% of the persons tested had no memory impairment. This statistic is more troubling in context: only 16% of the persons tested had no exposure to solvents. Another troubling aspect of the study is that it found that persons with high but distant solvent exposure (31-50 years prior to testing) still demonstrated measurable cognitive deficits.
While it would be too early to draw definitive conclusions from the report, it seems likely that the findings will prompt further investigation. If subsequent studies confirm the researchers’ conclusions, they could well prompt claims by those exposed to the offending solvents through their employment. This is significant because chlorinated solvents and petroleum solvents are found in such common items as cleaners, degreasers, and paint. Exposure to these products is regulated, but if new information demonstrates that the level of exposure that causes harm is lower than previously thought, then employees in occupations such as commercial housekeeping and painting who suffer cognitive decline that would previously have been attributed to other factors may connect that decline to solvent exposure on the job. Obviously the effect on worker’s compensation claims would be significant, as would the likely third-party claims against the manufacturers of the solvents.
In the last post, we discussed a paper Jeffrey Brewer wrote regarding strategies for overcoming cognitive biases and emotions. Brewer identified 10 specific strategies to overcome biases and emotion. His first strategy advocates consciously raising the questions: “How do we know…? Why do we believe…? What is the evidence for…?”
But how does this help us? Don’t we already essentially do this when we analyze claims?
Not exactly. First, asking the questions immediately changes one’s state of mind from its natural, emotionally reactive state, to one in which reason is brought to the forefront. Consciously asking the questions forces us to slow down, search for, and contemplate the possible answers. Second, answering the questions quickly demonstrates whether something is an objectively verifiable fact, an inference, hearsay, opinion, or pure conjecture. Once the questions are answered and the information is categorized, the process will have naturally organized the claim in a rational way. Third, knowing what category the information falls into can provide a roadmap for developing the claim. Fourth, asking and answering the questions is likely to result in a more accurate assessment of liability, damages, exposure, and further investigation needed.
How can this strategy be applied to claims? The place to start is at the beginning of the process. When a claim comes in, we are given information and asked to apply that information to a metric for assessing exposure. The formality of the metrics will vary, but the best companies and firms mechanize this process to the greatest extent possible to streamline it and to make it as consistent as possible. This is of course why all case assessment reports, forms, and letters look roughly the same for each entity that generates them regardless of who actually wrote them. This predictability and uniformity is a virtue, not a vice. Nevertheless, individual claims professionals must judge where each piece of information goes and its significance.
The most important part of a case assessment report, form, or letter will generally be the statement of facts or narrative summary. It is from this that the conclusions regarding liability, damages, and exposure will be drawn. In preparing the statement of facts, it can be a useful exercise to distinguish between facts, opinion, hearsay, and assumptions to better understand the support for the claim or its defense. For example, take a claim where an employee X injures his hand on a piece of equipment. In conducting the investigation, the employer obtains a statement from employee Y, who has observed X using the equipment for personal use in the past.
In this example, the only thing that is a fact is that Y observed X using the equipment for personal use in the past. If the statement is used to support the defense that the employee was not performing work for the benefit of the employer at the time of injury, then an inference is being made that X’s behavior at the time of injury was consistent with X’s past behavior. With no additional information or support, the inference is weak at best. In order to strengthen it, one could find out if X used the equipment for personal purposes at certain times of his shift or after certain jobs and whether the injury occurred at a similar time of day or after the same kind of job. In addition, the inference would be stronger if Y observed X using the equipment for personal use regularly or on many occasions, especially if the most recent uses were near in time to the accident. The bottom line is that the fact of the observation only affects the injury at issue if it can be inferred from the observation that the behavior leading to the injury likely conformed to the observed past behavior.
In another example, worker’s compensation investigations often discover a coworker who overheard the injured employee complaining about his job or the company or both. Specifically, assume employee X alleges he hurt his low back lifting a heavy object at work. The investigation discovers that employee Y heard employee X say that he was fed up with his manager and couldn’t take much more. What is fact? The only fact is that on one date X complained about his manager and said he couldn’t take much more. That is it. X’s statement does not mean that X feigned injury or exaggerated its severity. To move from X’s statement to that conclusion is an inference that requires additional information for it to be believable. The inference is that X reached some sort of breaking point and is using the work injury (or feigning injury altogether) as a means of avoiding his manager.
When judging the significance of the statement, several factors must be considered. Obviously if the injury is relatively near in time to the statement, it would appear more likely that they are related. Other factors could make the inference stronger as well, such as similar, repeated comments, a discernible change in performance, a discernible change in attendance, or any overt conflicts with his manager. On the other hand, if X was a generally good employee who was having a bad day and significant time elapsed between the remark and the injury with no further overt evidence of conflict with the manager, then the inference is weak. Likewise, in judging the likelihood that X is avoiding work based on the prior statement, one must consider the benefit to X of being absent (avoiding the manager, not having the responsibilities of the job) with the costs of being absent (wage loss, benefits loss, loss of social contact with coworkers, etc.). In this case, if X only made one statement and the injury involves an extended absence with significant financial consequences, the inference will be weaker.
In order to effectively determine the strengths and weaknesses of any claim, we must be able to ask and answer the right questions. Simply recording a narrative of events without asking whether each component is a fact, an inference, hearsay, or opinion will skew the analysis badly. For every piece of the narrative, we should ask how we know it, why do we believe it, and what evidence supports the belief. Once we take this step, we will understand the extent of our knowledge, whether our knowledge is based in fact, the inferences that can be drawn from our knowledge of the facts, how strong those inferences are, and what additional evidence or information should be obtained to strengthen inferences or eliminate ambiguity and uncertainty. When we know this, we can effectively assess liability, damages, and further claims investigation necessary.
We recently published a couple of posts about the impact of cognitive biases and emotion on decision making. In the posts, we offered some suggestions on how to limit biases and emotions in order to make better decisions. Recently, we came across a paper by Jeffrey W. Brewer, a member of the Risk and Reliability Department at Sandia National Laboratories, that discusses strategies for overcoming cognitive biases and irrational risk perception. Brewer’s specific discussion deals with overcoming biases in the context of explaining the benefits of nuclear power; however, his general discussion offers a number of strategies that can be applied in any business setting.
Brewer reduces the strategies to a simple statement that focuses on thinking carefully, questioning assumptions, and using the best available evidence:
Techniques to counter the undesirable tendencies [of cognitive biases] include a strong commitment to reflect on one’s biases in a specific decision making situation, to make decisions using the most valuable quantitative data available, and to carefully map out what one considers important in the decision making setting.
He then offers ten specific strategies that can be used to overcome our biases when we make critical decisions.
While not every decision in the medico-legal-claims environment requires such careful attention, we do make high-stakes decisions involving significant monetary sums that can have profound impacts on employers, employees, and health care providers. When we are tasked with making such important decisions, we should make an effort to ensure that we are making the best decision possible based on reason and the best available evidence. Following Brewer’s strategies can help us do just that.
Although some of Brewer’s strategies are self-explanatory, some of them are not and all would benefit from a more extended individual treatment. Over the course of the next few posts, we will address Brewer’s strategies in more detail, explaining exactly what each strategy means, why each strategy is important, and how each strategy can be implemented, using practical examples.