Based on evidence or preferences? A guide to assertiveness in decision making

Authors

DOI:

https://doi.org/10.33233/rbfex.v19i5.4415

Abstract

A common concern among science consumers today is that researchers are publishing a great deal, and very fast: to give this a quantitative dimension, scientific production doubles in size approximately every nine years [1]. The growing number of publications in all areas, especially in health, combined with easy access to scientific documents, allows today's professionals to ground their practice in evidence, in the hope of improving the quality of their care. When making decisions, having scientific knowledge as the basis for clinical reasoning provides the health professional with an aura of safety and efficiency.

However, the consumption of a scientific article can hide many traps that appear even before the document is read. The so-called cognitive biases are inherent to human behavior and directly influence the understanding of the literature, distorting conceptual reality. They act as a varnish that legitimizes and strengthens the personal beliefs of readers, who come to believe they are making the best decision based on scientific evidence. Several aspects can hinder the correct interpretation and application of the concepts of evidence-based practice. The purpose of this document is to call attention to some of these cognitive biases, to stress the importance of critical reading, and to present a flowchart to assist readers in making decisions about their practice.

The term cognitive bias was first coined by the researchers Amos Tversky and Daniel Kahneman in 1972. Since then, several researchers have described different types of bias that affect decision-making across a wide range of fields, including health. We can illustrate this tendency of human behavior with the archetype of a person who, in possession of a resource (equipment, technique, method, drug), observes a positive response in two or three people submitted to the intervention. The positive feeling about the intervention (affinity bias) leads them to believe that the applied intervention is the major cause of the change in the outcome. When challenged about the effectiveness of the intervention, this professional tends to look for "evidence", generally supported by people who have already undergone the intervention and benefited from it, and to search the scientific literature, often selectively (literature selection bias), in order to validate the belief (confirmation bias).

This illustration brings us to fundamental points for understanding the real meaning of scientific thought. First, it is necessary to accept that, as professionals, we have preferences and that they can cloud our judgment; recognizing this and striving for impartiality is the starting point for a more appropriate acceptance of scientific evidence. Second, it is necessary to understand that these same biases are also present in the authors of the research. Therefore, intentionally or not, scientific writing carries, to a greater or lesser degree, the author's preferences. It is up to the reader to identify which ideas are or are not supported by compelling data.

In the example above, the absence of a control group prevents us from measuring the benefit objectively. In this situation, the assessment of a subjective outcome, such as quality of life, can lead to a false-positive result due to the placebo effect. Other changes may also occur simply as a function of time. For these reasons, observations from clinical practice carry a considerable risk of bias and misinterpretation, since they take place in an uncontrolled environment.

Still on the biases of human understanding, searching only for positive results for a specific intervention narrows the range of treatment possibilities and harms the final recipient of the health system, our patient. In the scientific literature it is also possible to find data that appear to justify dichotomous thinking. At this point, it is necessary to emphasize the importance of understanding and differentiating each study design, the type of inference it allows, and the biases present in the research, which will determine how much we can trust the result, especially when there is a positive response to the intervention. Flawed comparisons between pieces of evidence can lead to the choice of inappropriate conduct (not supported by substantiated results), influenced by affinity bias.

Regarding the demonstration of results, a special warning is necessary: the significance of an outcome can be shown from both a statistical and a clinical point of view. In the first case, one must pay attention to the statistical test applied, the nature of the comparisons, and, when applicable, the comparability of the groups at the beginning of an intervention study. The choice of statistical test, the construction of the sample size calculation, and changes to the outcome predictor variable can all be used to reach a statistically significant result, which unfortunately is still the most accepted kind of result in scientific journals, and this works against quality scientific thinking.
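To make this point concrete, the minimal sketch below (in Python, using simulated data; the sample sizes, means, and standard deviations are assumptions chosen only for illustration, not values from any real study) shows how the same trivial 1 mmHg difference between groups can cross the conventional p < 0.05 threshold simply by enlarging the sample:

```python
import numpy as np
from scipy import stats

# Hypothetical scenario: a true mean difference of only 1 mmHg (SD = 10 mmHg)
# between an intervention group and a control group.
rng = np.random.default_rng(42)

for n in (20, 2000):  # a small trial versus a very large trial
    intervention = rng.normal(loc=168.0, scale=10.0, size=n)
    control = rng.normal(loc=169.0, scale=10.0, size=n)
    result = stats.ttest_ind(intervention, control)
    diff = intervention.mean() - control.mean()
    print(f"n per group = {n:4d}  mean diff = {diff:+.2f} mmHg  p = {result.pvalue:.4f}")

# With these simulated numbers the tiny difference is usually non-significant
# at n = 20 but "statistically significant" at n = 2000, even though the
# clinical relevance of a 1 mmHg difference is unchanged.
```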

Even in the presence of statistical findings, we need to reflect on the clinical importance of the finding. Imagine a study using an exercise protocol aimed at the systolic blood pressure (SBP) response of hypertensive men. Both groups of hypertensive patients had an SBP of 170 mmHg at the beginning of the study, and after three months the intervention group and the control group had SBP values of 168 mmHg and 169 mmHg, respectively. When checking these data, accompanied by a claim of statistical significance (p < 0.05) for the intervention group, it is necessary to ask whether the assessment compared only the initial and final values within each group (intragroup analysis), compared only the final values of the two groups, or compared the changes (initial - final) between the two groups. After this step, we must look at the numbers carefully and ask ourselves: does a difference of 1 mmHg in favor of the intervention group justify applying the conduct?
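The sketch below illustrates this distinction with simulated data loosely based on the hypothetical SBP numbers above (the sample size, standard deviations, and assumed changes are all illustrative assumptions); it runs the three kinds of comparison side by side:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 100  # hypothetical participants per group

# Simulated baseline SBP around 170 mmHg in both groups.
base_int = rng.normal(170.0, 10.0, n)
base_ctl = rng.normal(170.0, 10.0, n)

# Assumed true changes after 3 months: -2 mmHg (intervention) vs -1 mmHg (control).
final_int = base_int + rng.normal(-2.0, 5.0, n)
final_ctl = base_ctl + rng.normal(-1.0, 5.0, n)

# 1) Intragroup analysis: initial vs final values within the intervention group.
intra = stats.ttest_rel(base_int, final_int)

# 2) Final values only: intervention vs control at the end of the study.
finals = stats.ttest_ind(final_int, final_ctl)

# 3) Change scores: (final - initial) compared between the two groups.
deltas = stats.ttest_ind(final_int - base_int, final_ctl - base_ctl)

print(f"intragroup (intervention)    p = {intra.pvalue:.4f}")
print(f"final values between groups  p = {finals.pvalue:.4f}")
print(f"change scores between groups p = {deltas.pvalue:.4f}")

# With these assumed numbers the intragroup test is typically significant,
# while the between-group comparisons of a roughly 1 mmHg difference often
# are not; how the comparison is framed shapes the "significant" claim.
```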

Even with statistical evidence, and considering a scenario in which the SBP of the control group remains at 170 mmHg, is a difference of 2 mmHg in favor of the intervention a response with clinical implications important enough to reduce the risk of adverse events? When compared with other strategies, is the benefit still large enough to justify its use as a first-line conduct?

Health professionals must guide their decisions rationally, making correct use of the arsenal available in the literature, grounded in the principles of scientific understanding and, above all, keeping the null hypothesis as the starting point. Figure 1 outlines how to structure this reasoning before making a decision.

We reinforce that trust in scientific work should not be granted simply because of the study design, nor should studies be invalidated because of minor flaws; rather, these factors should increase or reduce the level of confidence in answers that depart from the null hypothesis. We invite readers to delve deeper into the investigation of the risks of cognitive bias and to question their own preferences, always opting for the most impartial assessment and decision-making possible, in the light of science.

It is important to note that physiological rationale and practical experience will always be important tools for understanding and questioning the mechanisms behind the evidence, or even for directing the search for new alternatives when direct evidence is lacking. Therefore, they should not be overlooked. The ideal clinical decision rests on the overlap between these two perspectives and the state of the art of health treatment.

Author Biographies

Marvyn de Santana do Sacramento, UNISBA

ACTUS CORDIOS Reabilitação Cardiovascular, Respiratória e Metabólica, Salvador, BA; Centro Universitário Social da Bahia (UNISBA), Salvador, BA; Faculdade do Centro Oeste Paulista, Bauru, SP, Brazil


João Victor Luz de Sousa, UFS

Universidade Federal de Sergipe, Aracaju, SE, Brazil

Antônio Marcos Andrade, UNISBA

Centro Universitário Social da Bahia, Salvador, BA, Brazil

Jefferson Petto, UNISBA

ACTUS CORDIOS Reabilitação Cardiovascular, Respiratória e Metabólica, Salvador, BA; Centro Universitário Social da Bahia (UNISBA), Salvador, BA; Faculdade do Centro Oeste Paulista, Bauru, SP; Escola Bahiana de Medicina e Saúde Pública, Salvador, BA, Brazil

References

Bornmann L, Mutz R. Growth rates of modern science: a bibliometric analysis based on the number of publications and cited references. J Assoc Inf Sci Technol 2015;66:2215-22. doi: 10.1002/asi.23329

Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996;312(7023):71-2.

Kahneman D, Tversky A. Subjective probability: A judgment of representativeness. Cogn Psychol 1972;3(3):430-54.

Larsen PO, von Ins M. The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index. Scientometrics 2010;84(3):575-603. doi: 10.1007/s11192-010-0202-z

Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974;185(4157):1124-31. doi: 10.1126/science.185.4157.1124

Burns PB, Rohrich RJ, Chung KC. The levels of evidence and their role in evidence-based medicine. Plast Reconstr Surg 2011;128(1):305-10. doi: 10.1097/PRS.0b013e318219c171

Ranganathan P, Pramesh CS, Buyse M. Common pitfalls in statistical analysis: Clinical versus statistical significance. Perspect Clin Res 2015;6(3):169-70. doi: 10.4103/2229-3485.159943

Petto J, Oliveira IM, Oliveira AM, Sacramento MS. Safe and effective praxis without scientific evidence: is it possible? Rev Bras Fisiol Exerc 2020;19(3):178-9. doi: 10.33233/rbfe.v19i3.4211

Published

2021-10-15