Criminology Research

Michael McGrath, in Ethical Justice, 2013

Logical Fallacies

It is incumbent on researchers and authors to recognize and avoid logical fallacies. Such fallacies are too numerous to mention, let alone describe, within the scope of this chapter.

Circular reasoning. Using data to prove the very proposition that was used to develop the hypothesis; a proof that essentially restates the claim it is meant to establish. An example would be: "There is no such thing as a false confession, because innocent people do not confess to crimes they did not commit."

Overgeneralization. Making generalizations to a broad population based on insufficient data.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780124045972000048

The Intuitive Traditionalist

Scott Eidelman, Christian S. Crandall, in Advances in Experimental Social Psychology, 2014

7.4 Summary

Automatic thinking favors conservative ideology; it requires less effort, intention, and control to endorse political conservatism. This does not mean that conservatives rely on automatic thinking (concluding this from our data would commit the logical fallacy of affirming the consequent: if A then B, therefore if B then A; Cheng & Holyoak, 1985; Tidman & Kahane, 2003). And mindful of Hume's guillotine, we underscore that our use of the term "automatic" is descriptive and carries no value connotation. Put simply, quick and simple thinking promotes the status quo (including existence and longevity biases), and political conservatism as well. In Section 8, we explore another consequence of status quo preference.
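The invalidity of this inference can be checked mechanically. The sketch below is purely illustrative (not part of the original chapter): it enumerates all truth assignments under material implication and finds the counterexample showing that "if A then B" does not entail "if B then A".

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Enumerate every truth assignment of A and B and keep those where
# "if A then B" holds but the converse "if B then A" fails.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not implies(b, a)
]

# A single counterexample (A false, B true) suffices to show that
# A -> B does not entail B -> A, i.e., affirming the consequent is invalid.
print(counterexamples)  # [(False, True)]
```

The one surviving assignment (A false, B true) is exactly the case a reasoner commits to overlooking when affirming the consequent.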

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780128002841000023

Inferring Offender Characteristics

Brent E. Turvey, in Criminal Profiling (Fourth Edition), 2012

Discussion

The court correctly identified, perhaps for the first time, a complete range of failings with respect to FBI profiler testimony—from the "aura" of FBI expertise, to unvalidated methodology, to misapplied research, to logical fallacies, to lack of peer review. In the end, these considerations rightly add up to a lack of expert reliability. This ruling also provides an excellent template for future courts, given that FBI profiling methodologies do not differ significantly across cases. It also signals the education of the court with respect to the particulars that should be of concern regarding this type of highly speculative testimony.

The only issue not sufficiently covered by this ruling was that of expert qualification. One is left to wonder why the FBI sent a lawyer to testify about behavioral science issues, as opposed to an actual behavioral scientist. Additionally, the witness's claims of casework and training are not consistent with the testimony of other FBI profilers. If the defense had preserved this issue with an objection, the court's Thomas opinion could have been far more exhaustive in its condemnations.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780123852434000174

Introduction to electronic resources in philosophy and religion

Ana Dubnjakovic, Patrick Tomlin, in A Practical Guide to Electronic Resources in the Humanities, 2010

Logic

The Critical Thinking Community

URL: http://www.criticalthinking.org/

Content provider: Foundation for Critical Thinking

Description: Affiliated with Sonoma State University in northern California, this website includes many types of resources pertaining to the study of critical thinking. Users will find a library of scholarly articles, education resources (syllabi, lesson plans and sample assignments), current international news items, and a directory of professional development opportunities, among other items. This is a dense but accessible resource, useful for educators and students alike.

Critical Thinking on the Web

URL: http://austhink.com/critical/

Content provider: Tim van Gelder, University of Melbourne

Description: This directory of online resources provides a diverse range of links to websites – some scholarly, others less serious – concerned with defining and enhancing critical thinking skills. Simple to navigate, its content includes tutorials for argument mapping, tips on evaluating the credibility of websites, links to blogs and electronic texts on the subject, e-mail lists and newsletters, guides to understanding logical fallacies, and a large selection of classic and contemporary readings on the history of critical thinking from such philosophers as John Stuart Mill.

Critical Thinking Web

URL: http://philosophy.hku.hk/think/

Content provider: Joe Lau, University of Hong Kong, and Jonathan Chan, Baptist University of Hong Kong

Description: An educational website focusing on critical thinking, logic and the nature of creativity, the site provides access to more than 100 self-guided tutorials. Resources are divided among 13 modules, including meaning and argument analysis, strategic thinking, scientific reasoning and logic. Of special note for more advanced researchers is the section for further resources, which contains links and citations for additional websites and texts on critical thinking. A bibliography is also provided for each of the modules. A Chinese-language version of the site is also available.

The Fallacy Files

URL: http://www.fallacyfiles.org/index.html

Content provider: Gary Curtis

Description: This interesting website offers a collection of logical fallacies, or examples of bad or faulty reasoning. Although the website has a search engine, some of the more obscurely named fallacies are best found through the alphabetical index, as they can appear under more than one name. For a more systematic analysis of logical fallacies, users may want to seek out the resource's taxonomy of fallacies, which classifies them by broad type.

Logic Tutor

URL: http://www.wwnorton.com/college/phil/logic3/

Content provider: Michael K. Green, W.W. Norton and Co.

Description: Designed to accompany David E. Kelley's The Art of Reasoning, this resource provides more than 1,100 exercises and summaries on the fundamentals of logical reasoning and argument. Each section of the website is organized into three modules: a tutorial with brief quizzes; interactive problem sets; and a self-quiz that allows users to score their exam and e-mail the results to their instructors. Chapters and topics include propositional logic, inductive generalization and statistical reasoning, among many others.

Philosophy Pages: Logic

URL: http://www.philosophypages.com/lg/

Content provider: Garth Kemerling, Newberry College

Description: The site contains concise and thoughtful explications of the theories and formal components of elementary logic, including uses of language, definition and meaning, logical fallacies, categorical propositions, causal reasoning and probability theory, among several others. A brief list of secondary texts is included.

The Reasoning Page

URL: http://pegasus.cc.ucf.edu/~janzb/reasoning/

Content provider: Bruce B. Janz, University of Central Florida

Description: The website is devoted to logical reasoning, and contains many links to external resources focusing on formal logic, rhetoric, critical thinking and informal logic, among other aspects of the philosophy of logic. Examples from law, medicine and computing supplement the many links to philosophy sites; links to websites with teaching resources such as sample syllabi and exams are also compiled.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9781843345978500050

Occupational Neurology

Christopher M. Filley, in Handbook of Clinical Neurology, 2015

Uncertain cause-and-effect relationship

Even if a neurotoxic exposure is well documented, difficulty can still arise as to the relationship between the exposure incident and the presumed effect. Association is not causation, and the fact that a cluster of symptoms follows neurotoxic exposure does not necessarily mean that the symptoms were caused by the exposure. This distinction brings up the logical fallacy of post hoc, ergo propter hoc ("after this, therefore because of this"), a common error that leads to the often vigorous but unsupported contention that an exposure event produced illness (Rosenberg, 1995). Neurotoxicology does not usually have the advantage enjoyed by a field such as infectious disease, where the application of Koch's postulates can lead to definitive and convincing evidence that a pathogen unequivocally produced a disease state and no other explanation is plausible (Rosenberg, 1995). Rather, much uncertainty attends the attribution of a clinical sequela to the presumed neurotoxic exposure.

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780444626271000068

Understanding Bias in Diagnosing, Assessing, and Treating Female Offenders

Ted B. Cunliffe, ... Jason M. Smith, in Understanding Female Offenders, 2021

Summary and Discussion

The scientific method has a long and illustrious history of providing reliable and valid understandings of the world and its phenomena. As such, it forms the basis of modern clinical practice and of philosophical and scientific thought. However, over the years, researchers and clinicians have strayed from the central tenets of the method and fallen prey to a wide range of logical fallacies, biases, and misinformation. Although she acknowledged that not all biased science is conducted intentionally, Šimundić (2013) commented, "It is immoral and unethical to conduct biased research." She broadly defined bias in research as a deviation away from truth in the application of the scientific method and wrote,

authors, journal editors, and reviewers need to be concerned about the quality of the work submitted for publication and ensure that only studies which have been designed, conducted, and reported in a transparent way, honestly and without any deviation from truth get to be published (p. 12).

As the eminent physicist Richard Feynman (2015) stated, "Science is a way to teach how something gets to be known … how to handle doubt and uncertainty, what the rules of evidence are, how to think about things so that judgments can be made … to distinguish truth from fraud" (p. 146). Although Feynman emphasized the self-correcting aspect of science and lauded its focus on doubt and verification of results, this ideal has been difficult to achieve in the social sciences (Ioannidis, 2012). If science were truly self-correcting, we would expect to see evidence in the form of replications of previous findings, including reports of negative findings. However, in their review of the psychological literature, Makel, Plucker, and Hegarty (2012) found that replication of previously reported positive findings is extremely rare (in the range of 1%–5% of published research in the field of psychology since 1900). They concluded that only 1.07% of published psychological studies were replication efforts. Additionally, of the small number of studies they identified, only 18% were direct replications (rather than merely conceptual), and most of the replications (53%) were performed by the original authors. Pashler and Harris (2012) examined original findings reported in a range of major psychology journals, found that 56% of the findings were false positives, and suggested that the absence of any replication attempts indicated that the vast majority were "unchallenged fallacies." Extremely low rates of replication of previous findings have also been reported in other areas of scientific endeavor (Evanschitsky, Baumgarth, Hubbard, & Armstrong, 2007; Hubbard & Armstrong, 1994; Kelly, Vhase, & Tucker, 1979; Nosek & Errington, 2017; Prinz, Schlange, & Asadullah, 2011; Šimundić, 2013).
Although bias enters research through a range of logical fallacies and cognitive biases (e.g., heuristics, personal biases, confirmation bias, illusory correlation, suggestibility, and the misinformation effect), problems also arise from the poor application of the tenets of science and of statistical techniques.

Meta-analysis (Glass, 1972, 1976; Glass & Kliegl, 1983) is a very powerful technique for highlighting effects across studies in research samples with high homogeneity (provided that the individual studies are free of internal validity issues and bias). Haig (1988) suggested that the technique should be considered not as "an integrator of research findings" but rather as a data analytic procedure to be used in theory development (highlighting questions in need of explanatory answers). Eysenck (1984) went further and suggested that meta-analysis is unscientific and constitutes "an abuse of research integration." He complained that researchers were often mixing apples and oranges, and sometimes "apples, lice, and killer whales," yielding meaningless conclusions (Eysenck, 1995, pp. 110–111). Eysenck has not been the only one to criticize meta-analyses (also see Gacono, 2019; Gacono & Smith, 2021; Slavin, 1986, Chapter 3). Greco, Zangrillo, Biondi-Zoccai, and Landoni (2013) also cautioned against meta-analyses, especially with respect to the quality of the research findings included. They wrote, "The conclusions of a meta-analysis depend strongly on the quality of the studies identified to estimate the pooled effect. The internal validity may be affected by errors and incorrect evaluations" (p. 222; also see Gacono, 2019; Gacono & Smith, 2021; Smith, Gacono, Fontan et al., 2020). They also stated, "It is strongly recommended that reviewers use a set of specific rules to assign a quality category, aiming for transparency and reproducibility" (p. 222).

In their work studying and teaching meta-analysis, Hunter and Schmidt (1990, 2004; Schmidt & Hunter, 2014) emphasized the complexity of the procedure and highlighted the importance of ensuring that researchers using the technique carefully analyze and select reliable and valid studies for inclusion. The authors discussed the importance of ensuring that the studies selected were equivalent with respect to standardized effect score, diagnostic conditions, strength or weakness of study design, and each individual study's methodological concerns. They also suggested that meta-analyses be conducted in an open, transparent, and collegial way (involving the authors of the respective studies by asking them to comment on whether their study was adequately represented). Whereas reliability reflects the degree to which variation in a phenomenon may be attributed to true score, validity concerns the meaning of the test scores and findings. Validity forms the basis of Hunter and Schmidt's (2004) comments concerning the importance of ensuring that reliable, valid, and methodologically sound studies are selected for inclusion in a meta-analysis. However, many investigators have pointed out that meta-analytic techniques are often misused or incorrectly applied, and that the guidelines delineated by Hunter, Schmidt, and others (Glass, 1976) are rarely followed (Barnard, Willett, & Ding, 2017; Cunliffe et al., 2012; Haig, 1988; Lecky, Little, & Brennan, 1996; Smith, Gacono, Fontan et al., 2018; Staines & Cleland, 2007).

In their study of bias in meta-analysis (randomization, methodological quality, heterogeneity of effects, unreliability of outcome measures, overestimation biases, small sample bias, failure to weight by sample size, retention bias in psychotherapy studies, and publication bias), Staines and Cleland (2007) emphasized that researchers should eliminate, minimize, and control for bias in their meta-analyses. They suggested that the usefulness of the technique is maximized when these concerns are addressed. In our experience, meta-analytic techniques are often misused and inappropriately applied, and they are rarely conducted in an open and transparent way (e.g., Mihura et al., 2013; Wood et al., 2010). In their discussion of some of the shortcomings of meta-analysis, Barnard et al. (2017) suggested that the peer review process must go beyond simply ensuring that the correct meta-analytic procedure is followed. They suggested that prospective research papers be reviewed by individuals with expertise in both meta-analysis and the subject matter being considered; that authors be required to confirm with the original investigators that their work is accurately represented; that authors share complete methodological details of their meta-analyses so others may attempt replication; and that meta-analyses derived by pooling original primary data be prioritized over those using published summary data. It is essential that good scientific principles be followed in all research and clinical applications.
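Greco et al.'s point about the pooled effect's dependence on study quality can be made concrete with the standard inverse-variance fixed-effect estimator. The sketch below is generic and illustrative; the three studies' effect sizes and variances are hypothetical, not taken from any cited meta-analysis.

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted (fixed-effect) pooled estimate and standard error.

    Each study's weight is 1 / variance, so large, precise studies dominate
    the pooled estimate -- which is why a single biased but precise study
    can strongly skew a meta-analytic conclusion.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical standardized effect sizes and sampling variances for three studies.
effects = [0.40, 0.55, 0.10]
variances = [0.04, 0.09, 0.01]

pooled, se = fixed_effect_pool(effects, variances)
print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}")
```

Note how the third, most precise study pulls the pooled effect well below the unweighted mean of the three effects, illustrating why the selection and quality of the included studies dominate the outcome.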

A specific source of bias is present in the comments and analyses of researchers critiquing the reliability and validity of psychological tests (Lilienfeld et al., 2015; Mihura et al., 2013; Wood et al., 2000, 2003, 2010; Yokota, 2012). The errors in clinical thinking are manifested in a poor understanding of clinical assessment and of how psychological tests work in practice (see Chapter 4; Gacono & Evans, 2008). As we have described in other chapters in this volume and elsewhere, reliable and valid assessment is a progressive and iterative process whereby the clinical interview, review of records and other collateral sources, legal documents, and tests of intellectual ability, adaptive functioning, academic achievement, and psychological functioning are carefully considered, evaluated, and integrated (Chapter 4).

The diagnosis rendered, the treatment recommendations, and the clinical and forensic opinions, then, are the confluence of all these factors. Many of the investigators conducting research on the "reliability and validity" of psychological tests are not clinical psychologists or professionals with extensive experience conducting psychological assessments. As reflected in the works of Wood et al. (2000, 2003, 2010) and Mihura et al. (2013), the tests are viewed solely from a psychometric perspective, with little to no understanding of their applied usage, including their role as one instrument within an assessment battery. In some cases (as in the example of Mihura et al., 2013), variables under consideration are discarded or deemphasized because they measure abilities or phenomena that do not occur frequently (see Gacono & Smith, 2021; Smith, Gacono, Fontan et al., 2018, 2020). This lack of understanding and appreciation of the assessment process creates a skewed perspective and in itself constitutes bias (argument from ignorance, confirmation bias, false analogy, and false dichotomy).

The evaluation of meta-analytic studies is more arduous as it requires applying the above process to individual studies (see Cunliffe et al., 2012; Smith, Gacono, Fontan et al., 2018, 2020). However, by reviewing the study inclusion criteria, a reviewer can ascertain if the researcher was sensitive to the above issues and attempted to control for any obvious confounds and/or internal validity issues within individual studies.

The Scientist-Practitioner Model. From August 20th to September 3rd, 1949, 73 committee members representing academic and applied psychology, medicine, and education convened in Boulder, Colorado, to consider and ratify a training model for clinical psychology (Raimy, 1950; Stricker & Trierweiler, 2006). The scientist-practitioner or Boulder Model, as it is commonly called, outlined a structured training program whereby graduate students would be rigorously trained in the scientific method as well as in clinical techniques, as a means of ensuring that all clinical activities are grounded in science.

Unlike other medical and mental health disciplines, the scientist-practitioner model emphasized the integration of empiricism, scientific training, and clinical research. Although there was an absence of scientist-practitioner role models on university faculties across the country at the time, the training model envisioned a practitioner who was equally grounded in science and clinical practice and who would practice both disciplines equally. It was believed that this would result in clinical services that were effective and supported by research, and that scientist-practitioners would produce a body of applied clinical research conducted by individuals grounded in both disciplines.

Some investigators have questioned whether science and clinical practice are commensurable and whether the integration of the two disciplines is practical (Cascio & Aguinis, 2008; Long & Hollin, 1997; Martin, 1989; Stricker, 1997; Stricker & Trierweiler, 2006). Ioannidis (2016) outlined key features that must be included in clinical research studies to make the findings useful to clinicians in the field: a direct relationship to the problem identified, context placement, information gain, pragmatism, patient centeredness, value for money, feasibility, and transparency. He concluded that most studies in the major journals he reviewed did not fulfill these criteria. Long and Hollin (1997) suggested that science and practice were poorly integrated within research and that research was not being conducted by clinicians. They concluded that evidence from fundamental research to inform training and practice was "sadly lacking."

Martin (1989) found that although there was wide support for the scientist-practitioner model, most clinical psychologists did not participate in research, let alone publish findings. Stricker and Trierweiler (2006) argued for a "local clinical scientist" to bridge the gap between science and practice, one for whom the clinical setting is viewed as analogous to a scientific laboratory, thereby staying true to the scientist-practitioner model. Carroll, Skinner, McCleary, von Mizener, and Bliss (2009) examined author affiliation over an eight-year period in four prominent school psychology journals and found that 90% of the studies published were conducted by individuals affiliated with universities, while only 10% were published by individuals practicing in the field. They suggested that since 85% of school psychologists are practitioners, the contribution of clinicians to the scientific literature was minimal and their lack of participation constituted a bias. Blair (2010) suggested that this lack of involvement of clinicians in research has resulted in a situation in which practicing clinicians rarely question the scientific practice, research methods, or findings of published research, or find the information useful in their professional practice.

In our view, the separation between clinicians and researchers has greatly expanded over the past 20 years, to the point where true scientist-practitioners are becoming increasingly rare (Chapter 1). Many university-based researchers attempt to characterize themselves as clinicians, but this is rarely the case.

We examined workshops offered at the Annual Meeting of the American Psychological Association (APA) held in Chicago, IL, from August 8th to 11th, 2019. None of the presenters of the Thought Leader workshops conducted at the conference listed clinical practice as their primary affiliation (two were not clinicians but were contributors to news networks and popular magazines, not psychology journals or other print media specifically focused on the science or practice of psychology). Only seven of the 56 APA division programs were chaired by individuals who listed their primary affiliation as clinical practice (the others were university affiliated), and in one case (Society of Counseling Psychology) one of the co-chairs was not a psychologist and held a master's degree. An examination of the workshops at the March 2018 Annual Meeting of the Society for Personality Assessment (SPA; the main assessment organization) yielded somewhat better, though still modest, numbers. Of the 27 workshops, 12 (44%) of the primary workshop leaders were in clinical practice (JMS was one), while the majority held faculty positions at a university. Certainly these figures are somewhat skewed, as a clinician may list an adjunct affiliation when presenting at a professional conference. However, many of the biases discussed in this chapter go hand in hand with conceptually poor research designs that suggest an "academic" (lacking an experiential component) perspective.

Training and clinical research have been overtaken by academics, while the participation of active clinicians is minimal. This is a serious problem; the lack of participation of clinicians in research, and even in professional presentations, constitutes a bias (Epilogue). Our affective experience of the state of psychological research evoked the words of Admiral Greer to Jack Ryan in the film Clear and Present Danger: "You think, it's alright, you lived a long time, you had a family that loved ya, a job you thought made a difference, that you thought was honorable, and then … you see this."

As we have delineated throughout this chapter, bias is omnipresent in the psychological field, in both research and clinical applications. Some of these biases are related to inappropriate or poor application of the scientific method and a multitude of logical fallacies; others are directly linked to political correctness, heuristics, confirmation bias, illusory correlation, suggestibility, the misinformation effect, racial bias, and gender bias. We have highlighted how these effects impact research and clinical work with female offenders and female psychopaths in order to give the reader some guidance in using research findings to inform clinical practice. We conclude with guidelines for evaluating the presence of the many biases found in the female offender literature:

1.

Conceptual Premises: Carefully evaluate the literature review.

a.

The literature review should be thorough and complete, free of bias, and scientifically sound.

b.

The literature review should justify the author's hypotheses and conceptual premises.

c.

There should be no bias (blindspots) in the conceptual premises (e.g., stating psychopathy is solely a dimensional construct, then using a self-report measure to assess it, and finally offering conclusions about "psychopathy" as a category when there are no identified PCL-R ≥ 30 participants).

2.

Methodology: Evaluate participants and design (procedures, statistics, & results).

a.

Appropriate procedures must be used for determining the categories to be studied (e.g., if psychopathy is studied, one must use a PCL-R score ≥ 30 to determine group inclusion; PCL:SV or self-report measures are not appropriate).

b.

The participant pool should contain significant numbers of individuals with the syndrome purportedly being studied (e.g., psychopathy cannot be meaningfully studied in college populations, where no psychopaths exist). The number of participants should be adequate for the statistical comparisons conducted.

c.

Appropriate statistical designs and statistics must be utilized (many Rorschach variables form non-normal distributions [J-shaped curves] and are not amenable to comparison with parametric procedures, requiring instead nonparametric procedures and the comparison of relative frequencies rather than means). For some research, a comparison of scores > 1 on a variable to scores of 1 or 0 may be conceptually warranted over the simple presence or absence of the index. PCL-R research requires comparing groups comprised of scores ≥ 30 to those scoring ≤ 24.

d.

Ensure that interrater reliability is reported for all measures and variables studied.

e.

Ensure that all necessary data are presented to assess the validity of the findings and conclusions (e.g., with Rorschach data, means, standard deviations, and frequencies should be listed for all variables studied; Smith, Gacono, Fontan et al., 2018). The range of R and Lambda must also be reported for Rorschach data (Gacono, 2019; Smith, Gacono, Fontan et al., 2018, 2020).

3.

Conclusions and Inferences: Assess the congruence between the methodology, results, and the conclusions.

a.

Does the methodology support the conclusions? For example, definitive conclusions cannot be made about psychopathy (as a category) from a methodology that included no PCL-R ≥ 30 scorers (psychopaths) in the study.

b.

Assess for any bias that skews the meaning of the findings (e.g., political correctness).
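Guideline 2c above can be illustrated with a small sketch. The data below are simulated, not drawn from any Rorschach sample: for a J-shaped, low-base-rate variable, a handful of extreme scorers can inflate one group's mean, while the relative frequency of the variable's presence tells the more representative story.

```python
# Two hypothetical groups (n = 60 each) scored on a low-base-rate variable:
# most participants score 0 and a few score high, so the distribution is
# J-shaped rather than normal.
group_a = [0] * 45 + [1] * 10 + [8, 9, 10, 11, 12]
group_b = [0] * 30 + [1] * 20 + [2] * 6 + [3] * 4

def mean(xs):
    return sum(xs) / len(xs)

def presence_rate(xs):
    # Relative frequency with which the variable is present at all (score > 0).
    return sum(1 for x in xs if x > 0) / len(xs)

mean_a, mean_b = mean(group_a), mean(group_b)
rate_a, rate_b = presence_rate(group_a), presence_rate(group_b)

# The mean is inflated by five extreme scorers in group A, suggesting A > B,
# while the presence rate in group B is double that of group A -- the
# comparison the guideline recommends for such variables.
print(f"means: A={mean_a:.2f}  B={mean_b:.2f}")
print(f"presence rates: A={rate_a:.2f}  B={rate_b:.2f}")
```

Here a parametric comparison of means would point in the opposite direction from the comparison of relative frequencies, which is exactly why the guideline calls for nonparametric procedures and frequency comparisons with such distributions.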

Finally, we remind you of the quote from Sherlock Holmes signifying the importance of an objective, bias-free clinical perspective: "But love is an emotional thing, and whatever is emotional is opposed to that true cold reason which I place above all things. I should never marry myself, lest I bias my judgment" (Conan Doyle, 1930).

Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780128233726000060

Forensic Examination Reports1

W. Jerry Chisum, ... Jodi Freeman, in Crime Reconstruction (Second Edition), 2011

Logical fallacies in crime reconstruction

If it was so, it might be; and if it were so, it would be; but as it isn't, it ain't. That's logic.

–Tweedledee, in Lewis Carroll's Through the Looking Glass, and What Alice Found There, London: Macmillan (1872)

Perhaps the most revealing indicators of the absence of analytical logic and the scientific method in a crime reconstruction are the logical fallacies in forensic examination reports. The fallacious reconstructionist is not necessarily being intentionally deceptive. Rather, some reconstructionists lack the intellectual dexterity to know whether and when their reasoning is flawed. Regardless of motive or intent, logical fallacies are impermissible and can render any subsequent forensic conclusions erroneous.

Forensic practitioners of all disciplines would therefore do well to learn more about fallacies in logic and reasoning in order to avoid them in their own work as well as identify them in the work of others. Common logical fallacies in crime reconstruction, and the forensic sciences in general, include, but are certainly not limited to, the following.

Suppressed Evidence or Card Stacking

This is a one-sided argument that presents only evidence favoring a particular conclusion and ignores or downplays the evidence against it. It may involve distortions, exaggerations, misstatements of facts, or outright lies. It is, in essence, cherry-picking evidence from that which is available to support a conclusion, while ignoring anything that is contrary. This is an act of omission that can only be identified in the peer review process or as part of independent review.

Appeals to Authority

This occurs when someone offers a conclusion based on the stated authority or expertise of themselves or others. This kind of reasoning can be fallacious when the authority lacks the expertise suggested; when the authority is an expert in one subject but not the subject at hand; when the subject is contentious and involves multiple interpretations with good arguments on both sides; when the authority is biased; when the area of expertise is fabricated; when the authority is vague or unidentified; and when the authority is offered as evidence in place of defensible scientific fact.

"I know this is a fact because I've been doing this for 25 years."

"I know I'm right because of my training."

It is common for forensic experts of all kinds to offer their years of experience as evidence of competence. However, experience and competence are not necessarily related. Although knowledge, skill, and ability are potential benefits of age and experience, not everyone acquires them, or the humility to apply them correctly. As explained in Thornton (1997), summoning experience instead of logic and reasoning to support a finding is an admission of lacking both (p. 17):

Experience is neither a liability nor an enemy of the truth; it is a valuable commodity, but it should not be used as a mask to deflect legitimate scientific scrutiny, the sort of scrutiny that customarily is leveled at scientific evidence of all sorts. To do so is professionally bankrupt and devoid of scientific legitimacy, and courts would do well to disallow testimony of this sort. Experience ought to be used to enable the expert to remember the when and the how, why, who, and what. Experience should not make the expert less responsible, but rather more responsible for justifying an opinion with defensible scientific facts.

In other words, the more experience of quality and substance one has, the less one will need to tell people about it in order to gain their trust and confidence—the quality of one's experience is only demonstrated through the inherent quality of one's methods and results.

Appeal to False Authority

This is an appeal to an authority that, in particular, lacks expertise in the relevant subject. It involves either ignorance on the part of the examiner or deliberate misrepresentation. In crime reconstruction, a common example would be arguing or assuming that experience in finding, collecting, and/or packaging evidence (a.k.a. crime scene processing) is related to experience interpreting the meaning of evidence in its context (a.k.a. reconstruction) or that being a law enforcement officer necessarily implies forensic expertise or a scientific disposition. Such faulty assertions and assumptions are common to the admission of "expert" reconstruction testimony by the courts.

"I work in crime scenes all day picking up evidence in the mud and the blood; of course I know how to interpret what it means."

"I'm a cop; of course I know how to read a crime scene."

As explained in O'Hara (1970, p. 667), the role of crime scene investigator and the role of evidence interpretation do not, and should not, intersect:

It is not to be expected that the investigator also play the role of the laboratory expert in relation to the physical evidence found at the scene of the crime…. It suffices that the investigator investigate; it is supererogatory that he should perform refined scientific examinations. Any serious effort to accomplish such a conversion would militate against the investigator's efficiency.

… In general the investigator should know the methods of discovering, "field-testing," preserving, collecting, and transporting evidence. Questions of analysis and comparison should be referred to the laboratory [aka scientific] expert.

In addition, although emphasizing cooperation between crime scene investigators and forensic scientists, Lee (1994) is in agreement with this separation of collection and interpretation duties. Crime reconstruction is outlined as a process of systematic evidence examination based on adherence to the principles of forensic science and the scientific method, something necessarily beyond the short course training of the average crime scene technician.

Appeal to Tradition

This kind of argument reasons that a conclusion is correct simply because it is older, traditional, or "has always been so." It supports a conclusion by appealing to long-standing, institutional, or cultural opinions, as if the past itself is a form of authority.

"One swab per hand, front and back, is the correct method for collecting gunshot residue (GSR) because that's the way I was taught and that's the way it's always been done in this department/agency."

"I've been doing it this way for 25 years."

Argumentum ad Hominem, a.k.a. "Argument to the Man"

This argument attacks an opponent's character rather than the opponent's argument. Because of its effectiveness, it is perhaps the most common logical fallacy. It is important to note that attacks on character, even if true, are not always relevant to the presentation of scientific conclusions, logic, and reasoning.

"He's wrong because he's an arrogant jerk."

"She's just saying that because she's a woman."

Emotional Appeal

This is an attempt to gain favor based on arousing emotions and/or sympathy to subvert rational thought. This is used very commonly to sway juries in cases involving traditionally sympathetic victims, such as an attractive woman or a young child.

"You know in your heart the right thing to do."

"If you work for the defense, then you must hate law enforcement and want to let child killers go free."

Circulus in Probando, a.k.a. Circular Reasoning

This is an argument that assumes as part of its premises the very conclusion that is supposed to be proven.

At a bail hearing, prior to trial: "He's a danger to society because he killed the victim, and therefore should not be granted bail." This violates the presumption of innocence: the very fact to be proven at trial is assumed pretrial.

At a bail hearing, prior to trial: "She shouldn't be granted bail because she has shown no remorse for her actions." This also violates the presumption of innocence—innocent people cannot show remorse for crimes that they did not commit.

Cum Hoc, Ergo Propter Hoc, a.k.a. "With This, Therefore Because of This"

This occurs when one jumps to a conclusion about causation based on a correlation between two events, or types of events, that occur simultaneously. The examiner assumes that things found together must be related.

"We found these knives in the house, so they must be related to the crime that happened in another room—despite the lack of any direct associative evidence."

"We found these condoms at the scene, so they must be related to the rape."

Post Hoc, Ergo Propter Hoc, a.k.a. "After This, Therefore Because of This"

This argument infers a causal connection based solely on the alleged cause preceding its alleged effect.

"She was killed just after he arrived at the house, so obviously he's involved in her death somehow."

Hasty Generalizations

This sort of generalization forms a conclusion based on woefully incomplete information or by examining only a few specific cases that are not representative of all possible cases.

"I don't know all of the facts of the case, and haven't spent more than a few hours examining the evidence, but I can provide a fairly detailed reconstruction of events."

"I've seen a couple of cases just like this before."

Sweeping Generalization

This occurs when one forms a conclusion by examining what occurs in many cases and assuming that it must or will be so in a particular case. This is the opposite of a hasty generalization.

"All cops are crooked."

"All scientists do is work with theories; they don't have real-life experience."

False Precision

False precision occurs when an argument treats information as more precise or reliable than it really is. This happens when conclusions are based on imprecise information that must be taken as precise in order to support the conclusion adequately.

"This method of examination has an error rate of zero."

"I'm 100% certain of my findings."

"The point of origin of the blood drop is 15.78 inches above the floor and 7.852 inches west of the drop."
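As a rough illustration of the remedy, reported precision can be tied to the measurement's uncertainty, so that a value is never stated to more digits than the measurement supports. This is a hypothetical sketch; the function name, the rounding rule, and all numbers are invented for illustration:

```python
import math

def report(value, uncertainty):
    """Round a measurement so its last reported digit matches the
    order of magnitude of its uncertainty (a common rule of thumb)."""
    if uncertainty <= 0:
        raise ValueError("uncertainty must be positive")
    # Decimal places justified by the uncertainty: 0.5 -> 1 place, 5 -> 0 places
    decimals = max(0, -math.floor(math.log10(uncertainty)))
    return f"{round(value, decimals)} ± {round(uncertainty, decimals)}"

# A bloodstain origin measured to within about half an inch should not
# be reported as "15.78432 inches":
print(report(15.78432, 0.5))  # prints "15.8 ± 0.5"
```

The point is not this particular rounding rule but that the digits an examiner reports carry an implicit claim of accuracy, which must be justified.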

It bears mentioning that presenting what appear to be precise statistics or numbers in support of an argument gives the appearance of scientific accuracy when this may not actually be the case. Many people find math and statistics overly impressive and are easily intimidated by those who wield numbers with ease. This is especially true with DNA evidence, whose astronomical statistical probabilities are often presented by those without any background in statistics and without full consideration of the databases from which such probabilities are derived.

With the advent of varying DNA databases, the subsequent statistics of impressive weight being read in court to bedazzled jurors, and the outright fabrication of statistics related to hair comparisons, the cautionary note offered in Kirk and Kingston (1964) is more appropriate now than ever: "Without a firm grasp of the principles involved, the unwary witness can be led into making statements that he cannot properly uphold, especially in the matter of claiming inordinately high probability figures."

A more specific criticism of forensic practices was provided in Moenssens (1993):

Experts use statistics compiled by other experts without any appreciation of whether the database upon which the statistics were formulated fits their own local experience, or how the statistics were compiled. Sometimes these experts, trained in one forensic discipline, have little or no knowledge of the study of probabilities, and never even had a college-level course in statistics.

Those using statistics to support their findings have a responsibility to know where they come from, how they were derived, and what they mean to the case at hand before they form conclusions, write them up, and certainly before they testify in court.
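To make the database dependence concrete, consider the product rule commonly used for DNA random match probabilities: per-locus genotype frequencies (from Hardy-Weinberg proportions) are multiplied across loci. The sketch below is illustrative only; the allele frequencies are invented, and real casework applies corrections (e.g., for population substructure) that are omitted here:

```python
def genotype_freq(p, q):
    """Hardy-Weinberg genotype frequency: p*p for a homozygote (p == q),
    2*p*q for a heterozygote."""
    return p * p if p == q else 2 * p * q

def random_match_probability(loci):
    """Product rule: multiply per-locus genotype frequencies across loci.
    `loci` is a list of (p, q) allele-frequency pairs."""
    rmp = 1.0
    for p, q in loci:
        rmp *= genotype_freq(p, q)
    return rmp

# The same three-locus profile evaluated against two hypothetical
# reference databases with different allele frequencies:
db_a = [(0.10, 0.20), (0.05, 0.05), (0.15, 0.30)]
db_b = [(0.20, 0.25), (0.10, 0.10), (0.25, 0.35)]

print(random_match_probability(db_a))  # ~9.0e-6
print(random_match_probability(db_b))  # ~1.75e-4
```

Even with only three loci, switching between the two hypothetical databases moves the reported figure by more than an order of magnitude, which is exactly why an examiner must know where the underlying frequencies come from.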


URL:

https://www.sciencedirect.com/science/article/pii/B9780123864604000199

Attitudes Towards Science

Bastiaan T. Rutjens, ... Frenk van Harreveld, in Advances in Experimental Social Psychology, 2018

4.2 Concerns About the Morality of Scientists

In addition to the aforementioned work that homed in on the moral concerns that people might have about various types of scientific evidence, we have examined the moral associations that people have with scientists (Rutjens & Heine, 2016). Do people think that scientists are good or bad people? We were inspired to study this by an interesting ambivalence: although scientists belong to one of the most respected occupations (e.g., Fiske & Dupree, 2014; The Harris Poll, 2014), a substantial portion of the general public seems to distrust science. Since there was, to our knowledge, virtually no research on perceptions of scientists, we devised several studies that aimed to provide some initial insight into such perceptions.

A first set of studies exploited the representativeness heuristic (or conjunction fallacy; Tversky & Kahneman, 1983) in order to gauge intuitive associations between scientists and violations of morality. This classic fallacy is a mental shortcut in which people make a judgment on the basis of how stereotypical, rather than how likely, something is. As a (famous) example, participants presented with the "Linda problem" were asked to decide, based on a short personal description, whether it is more likely that Linda is a bank teller, or a bank teller and a feminist. The description of Linda mentioned that she is deeply concerned with issues of social justice and that she has participated in antinuclear demonstrations. The majority of participants in the original study (Tversky & Kahneman, 1983) opted for the feminist bank teller option (which is a subset of the set of bank tellers, and therefore logically less likely), arguably because the description that they were given fit the feminist category so well. More specifically, participants do not commit this logical fallacy because they believe that all feminists are deeply concerned about social justice issues, or have a history of participating in antinuclear demonstrations, but rather because a person to whom this description applies fits the social category of feminists. In our research, we used a variety of descriptions depicting various moral transgressions that were used in previous research on morality (e.g., Gervais, 2014; Haidt, Koller, & Dias, 1993). Consider the following example study: participants read a description about a man named John, who engages in an act of cannibalism. Subsequently, they were asked to indicate which option is more likely: John is a sports fan, or John is a sports fan and a scientist. In the control conditions, the category of scientist was replaced with one of various control targets (e.g., teacher, Muslim).
The categories were manipulated between-subjects, and in the majority of the studies, we also included two more specific scientist categories (i.e., cell biologist, experimental psychologist). An overview of the percentage of participants who committed the fallacy can be found in Fig. 2. When the target category was a scientist, participants were significantly more likely to make the conjunction error, suggesting that descriptions of cannibalism (and also serial murder, incest, and necrobestiality) fit the category of scientists better than a host of control categories. In other words, when reading descriptions about various immoral acts, a substantial percentage of the participants intuitively assumed that the protagonist committing the act was a scientist. Interestingly, we found no association of scientists with scenarios describing violations of care and fairness. We interpreted these results in light of Moral Foundations Theory (e.g., Graham et al., 2009), which maintains that morality can be classified along (at least) five foundations, organized into two broad categories. The category of binding moral foundations concerns intuitions that are centered on the welfare of the group or community, and binds people to roles and duties that promote group order and cohesion. These intuitions are ingroup loyalty, authority, and purity. The category of individualizing moral foundations concerns intuitions pertaining to the welfare of the individual, which function to protect the rights and freedoms of all individuals. These intuitions are fairness and care. Our results show that scientists were associated with violations of the binding moral foundations of authority and—particularly—purity, but not with violations of the individualizing moral foundations of fairness and care.
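The logic the conjunction fallacy violates is simply that a conjunction can never be more probable than either of its conjuncts: P(A and B) ≤ P(A). A minimal simulation (with invented category frequencies) makes the subset relation explicit:

```python
import random

random.seed(0)  # reproducible invented population

# Each simulated person independently is/is not a bank teller and a feminist.
# The base rates (0.05, 0.30) are invented for illustration.
population = [
    {"bank_teller": random.random() < 0.05, "feminist": random.random() < 0.30}
    for _ in range(100_000)
]

tellers = sum(p["bank_teller"] for p in population)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in population)

# The conjunction count is a count over a subset, so it can never exceed
# the count for the single category:
assert feminist_tellers <= tellers
print(tellers, feminist_tellers)
```

However representative a description feels, the "bank teller and feminist" group is contained within the "bank teller" group, so judging the conjunction more likely is always an error.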

Fig. 2. Intuitive associations between various morality violations and scientists. The Y-axis indicates the percentage of participants committing a logical fallacy that reflects this association (Rutjens & Heine, 2016).

Using a different method, we tested this notion in another study. Here, we employed the moral stereotypes method (Graham et al., 2009), in which participants fill out the moral judgments section of the moral foundations questionnaire in the third person. In one condition, they were asked to reply to the statements "as John, who is a scientist" (e.g., John believes that people should not do things that are disgusting, even if no one is harmed). Participants in the scientist condition indicated that John cares less about the binding moral foundations of loyalty, authority, and purity than did participants in the control condition. There were no differences in perceived importance of care and fairness (see Fig. 3). It is worth noting that the associations and stereotypes were found to be largely independent of participants' own religious and political beliefs and moral foundations scores, with the exception that religious participants were somewhat more extreme in their moral stereotypes of scientists than nonreligious participants.

Fig. 3. Moral stereotypes about scientists: scientists are seen as caring less about loyalty, authority, and purity (Rutjens & Heine, 2016).

The above studies suggest that people perceive scientists as caring less about the binding moral foundations than various other categories of people. Given this, what do people believe that scientists do care about? Two additional studies indicated that—compared to various other categories—people believe that scientists place relatively more value on knowledge gain and satisfying their curiosity than on acting morally. Scientists were also seen as potentially dangerous. At the same time, scientists were found to be relatively well-liked and trusted. Thus, we concluded that scientists are perceived as capable of immoral behavior, but not as immoral per se. Potential immoral conduct might be preceded by amoral motives.


URL:

https://www.sciencedirect.com/science/article/pii/S0065260117300345

Evidence, Evidence Functions, and Error Probabilities

Mark L. Taper, Subhash R. Lele, in Philosophy of Statistics, 2011

1 Introduction

In a recent paper Malcolm Forster has stated a common understanding regarding modern statistics:

Contemporary statistics is divided into three camps: classical Neyman-Pearson statistics (see [Mayo, 1996] for a recent defense), Bayesianism (e.g., [Jeffreys, 1961; Savage, 1976; Berger, 1985; Berger and Wolpert, 1988]), and third, but not last, Likelihoodism (e.g., [Hacking, 1965; Edwards, 1987; Royall, 1997]). [Forster, 2006]

We agree with this division of statistics into three camps, but feel that Likelihoodism is only an important special case of what we would like to call evidential statistics. In the sequel, we will try to justify our expansion of evidential statistics beyond the likelihood paradigm and to relate evidentialism to classical epistemology and to classical statistics.

For at least the last three quarters of a century a fierce battle has raged regarding foundations for statistical methods. Statistical methods are epistemological methods, that is, methods for gaining knowledge. What needs to be remembered is that epistemological methods are technological devices — tools. One does not ask if a tool is true or false, or right or wrong. One judges a tool as effective or ineffective for the task to which it will be applied. In this article, we are interested in statistics as a tool for the development of scientific knowledge. We develop our desiderata for knowledge developing tools in science and show how far the evidential statistical paradigm goes towards meeting these objectives. We also relate evidential statistics to the competing paradigms of Bayesian statistics and error statistics.

Richard Royall [1997, 2004] focuses attention on three kinds of questions: "What should I believe?", "What should I do?", and "How should I interpret this body of observations as evidence?" Royall says that these questions "define three distinct problem areas of statistics." But are they the right questions for science? Science and scientists do many things.

Individual scientists have personal beliefs regarding the theories and even the observations of science. And yes, these personal beliefs are critical for progress in science. Without as yet unjustified belief, what scientist would stick his or her neck out to drive a research program past the edge of the known? In a more applied context, scientists are often called to make or advise on decisions large and small. It is likely that this decision-making function pays the bills for the majority of scientists. But perhaps the most important activity that scientists aspire to is augmenting humanity's accumulated store of scientific knowledge. It is in this activity that we believe Royall's third question is critical.

Our thinking regarding the importance and nature of statistical evidence develops from our understanding (however crude) of a number of precepts drawn from the philosophy of science. We share the view, widely held since the eighteenth century, that science is a collective process carried out by vast numbers of researchers over long stretches of time [Nisbet, 1980].

Personally, we hold the view that models carry the meaning in science [Frigg, 2006; Giere, 2004; 2008]. This is, perhaps, a radical view, but an interest in statistical evidence can be motivated by more commonplace beliefs regarding models such as that they represent reality [Cartwright, 1999; Giere, 1988; 1999; 2004; Hughes, 1997; Morgan, 1999; Psillos, 1999; Suppe, 1989; van Fraassen, 1980; 2002] or serve as tools for learning about reality [Giere, 1999; Morgan, 1999].

We are strongly skeptical about the "truth" of any models or theories proposable by scientists [Miller, 2000]. We mean by this that although we believe there is a reality, which we refer to as "truth", no humanly constructed model or theory completely captures it, and thus all models are necessarily false. Nevertheless, some models are better approximations of reality than other models [Lindsay, 2004], and some models are even useful [Box, 1979]. In the light of these basal concepts, we believe that growth in scientific knowledge can be seen as the continual replacement of current models with models that approximate reality more closely. Consequently, the question "what methods to use when selecting amongst models?" is perhaps the most critical one in developing a scientific method.

Undoubtedly the works that most strongly influenced 20th century scientists in their model choices were Karl Popper's 1934 (German) and 1959 (English) versions of his book Logic of Scientific Discovery. Nobel Prize winning scientist Sir Peter Medawar called this book "one of the most important documents of the twentieth century." Popper took the fallacy of affirming the consequent seriously, stating that the fundamental principle of science is that hypotheses and theories can never be proved but only disproved. Hypotheses and theories are compared by comparing deductive consequences with empirical observations. This hypothetico-deductive framework for scientific investigation was popularized in the scientific community by Platt's [1964] article on Strong Inference. Platt's important contribution was his emphasis on multiple competing hypotheses.

Another difficulty with the falsificationist approach is the fact that not only can you not prove hypotheses, you cannot disprove them either. This was recognized by Quine [1951]; his discussion of the under-determination of theory by data concludes that a hypothesis is only testable as a bundle with all of the background statements on which it depends. Another block to disproving hypotheses is the modern realization that the world and our observation of it are awash with stochastic influences, including process variation and measurement error. When random effects are taken into consideration, we frequently find that no data set is impossible under a model, only highly improbable.

Therefore, "truth" is inaccessible to scientists either because the models required to represent "truth" are complex beyond comprehension, or because so many elements are involved in a theory that might represent "truth" fully that an infinite number of experimental manipulations would be required to test such a theory. Finally, even if two full theories could be formulated and probed experimentally, it is not likely that either will be unequivocally excluded because in a stochastic world all outcomes are likely to be possible even if unlikely. What are we as scientists to do? We do not wish to lust after an unattainable goal; we are not so adolescent. Fortunately, there are several substitute goals that may be attainable. First, even if we can't make true statements about reality, it would be nice to be able to make true statements about the state of our knowledge of reality. Second, if our models are only approximations, it would be nice to be able to assess how close to truth they are [Forster, 2002].

Popper [1963] was the first to realize that although all theories are false, some might be more truthlike than others, and he proposed his concept of verisimilitude to measure this property. Popper's exact formulation was quickly discredited [Harris, 1974; Miller, 1974; Tichy, 1974], but the idea of verisimilitude continues to drive much thought in the philosophy of science (see [Niiniluoto, 1998; Zwart, 2001; Oddie, 2007] for reviews). The results of this research have been mixed [Gemes, 2007]. The difficulty for the verisimilitude project is that, philosophically, theories are considered as sets of linguistic propositions. Ranking the overall truthlikeness of different theories on the basis of the truth values and content of their comprised propositions is quite arbitrary. Is theory A, with only one false logical consequence, truer than theory B, with several false consequences? Does it make a difference if the false proposition in A is really important, and the false propositions in B are trivial? Fortunately, as Popper [1976] noted, verisimilitude is possible with numerical models, where the distance of a model to truth can be represented by a single value.

We take evidence to be a three-place relation between data and two alternate models. Evidence quantifies the relative support for one model over the other and is a data-based estimate of the relative distance from each of the models to reality. Under this conception, to speak of evidence for a single model does not make sense. This, then, is what we call the evidential approach: comparing the truthlikeness of numerical models. The statistical evidence measures the differences of models from truth in a single dimension and consequently may flatten some of the richness of a linguistic theory. While statistical evidence is perhaps not as ambitious as Popper's verisimilitude, it is achievable and useful.

We term the quantitative measure of relative distance of models to truth an evidence function [Lele, 2004; Taper & Lele, 2004]. There will be no unique measure of the divergence between models and truth, so a theory of evidence should guide the choice of measures in a useful fashion. To facilitate the use of statistical evidence functions as a tool for the accumulation of scientific knowledge, we believe that a theory of evidence should have the following desiderata:

D1) Evidence should be a data-based estimate of the relative distance between two models and reality.

D2) Evidence should be a continuous function of data. This means that there is no threshold that must be passed before something is counted as evidence.

D3) The reliability of evidential statements should be quantifiable.

D4) Evidence should be public, not private or personal.

D5) Evidence should be portable, that is, it should be transferable from person to person.

D6) Evidence should be accumulable: If two data sets relate the same pair of models, then the evidence should be combinable in some fashion, and any evidence collected should bear on any future inferences regarding the models in question.

D7) Evidence should not depend on the personal idiosyncrasies of model formulation. By this we mean that evidence functions should be both scale and transformation invariant.

We do not claim that inferential methods lacking some of these characteristics cannot be useful. Nor do we claim that evidential statistics is fully formulated. Much work needs to be done, but these are the characteristics that we hope a mature theory of evidence will contain.
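A familiar special case helps fix ideas: in the likelihood paradigm, the log-likelihood ratio acts as an evidence function, and it already exhibits D2 (it is a continuous function of the data) and D6 (evidence from independent data sets combines by simple addition). The sketch below is our illustration, not the authors' code; the two normal models and the data are invented:

```python
import math

def normal_loglik(data, mu, sigma):
    """Sum of log N(x | mu, sigma^2) over the data."""
    return sum(
        -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)
        for x in data
    )

def log_evidence(data, model_a, model_b):
    """Log-likelihood ratio; positive values favor model_a over model_b."""
    return normal_loglik(data, *model_a) - normal_loglik(data, *model_b)

model_a = (0.0, 1.0)  # N(0, 1)
model_b = (1.0, 1.0)  # N(1, 1)

batch1 = [0.2, -0.5, 0.1]
batch2 = [0.4, -0.1]

# D6 (accumulability): evidence from independent batches simply adds.
combined = log_evidence(batch1 + batch2, model_a, model_b)
summed = log_evidence(batch1, model_a, model_b) + log_evidence(batch2, model_a, model_b)
assert abs(combined - summed) < 1e-9

print(combined)  # ≈ 2.4 (positive: the data favor model_a)
```

The output expresses relative, not absolute, support for one model over the other, matching the three-place conception of evidence described earlier in this section.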

Glossed over in Platt is the question of what to do if all of your hypotheses are refuted. Popper acknowledges that even if it is refuted, scientists need to keep their best hypothesis until a superior one is found [Popper, 1963]. Once we recognize that scientists are unwilling to discard all hypotheses [Thompson, 2007] then it is easy to recognize that the falsificationist paradigm is really a paradigm of relative confirmation — the hypothesis least refuted is most confirmed. Thus, the practice of science has been cryptically evidential for at least half a century. We believe that it is important to make this practice more explicit.


URL:

https://www.sciencedirect.com/science/article/pii/B9780444518620500150

Social construction of scientifically grounded climate change discussions

Janet K. Swim, ... John Fraser, in Psychology and Climate Change, 2018

4.4 Improving conversations

Social psychological research, such as that reviewed above, coupled with research in communication, points to strategies to facilitate fact-based and productive conversations. Productive conversations are defined as being engaging, informative, staying on topic, building to action, and spreading to others. Below we describe elements of conversations that can help achieve these outcomes, elements that are connected to the training provided to educators in the intervention we report here. The elements follow a general story arc: an introduction that sets the stage and tone of the conversation, a core story built on simplifying metaphors, and a resolution that describes relatively simple community and collective actions.

4.4.1 Frame messages to engage audiences

Overarching frames—the context in which messages are situated—introduce topics and influence responses to messages (e.g., Corner, Markowitz, & Pidgeon, 2014; Lakoff, 2010). Effective frames can engage listeners and promote or motivate action. In contrast, frames that do not align with an audience's values and moral principles may be met with disengagement or even resistance. Many communicators are in search of effective frames that reach across audiences, are endorsed by most members of the public, or match the values of a particular target audience.

Empirical research can help in the selection of such frames. Examples of effective frames are those that encourage the vast majority of people to think compassionately and empathically about animals and humans, and to think about one's impact on future generations (Pfattheicher, Sassenrath, & Schindler, 2016; Swim & Bloodhart, 2014; Zaval, Markowitz, & Weber, 2015). Research also points to matching frames with audience views of climate change (Maibach, Leiserowitz, Roser-Renouf, & Mertz, 2011). For example, frames that emphasize the harmful impacts of climate change on people are less effective at reaching conservative audiences than messages that emphasize technological solutions that can address the problem, reinforce conservative moral principles such as purity in the form of protecting air and water quality, or build on in-group loyalty in the form of patriotism (Feinberg & Willer, 2013; Wolsko, Ariceaga, & Seiden, 2016).

4.4.2 Provide an even toned conversation

The tone of conversation can determine the success of interpersonal exchanges (Kasperson et al., 1988; Renn, 2011). Expressing a rigid, dogmatic belief that one's beliefs are superior to others' beliefs may increase frustration with conversations and discourage future conversations without increasing the effectiveness of persuading the other (Maki & Raimi, 2017; Regan, 2007). In contrast, those who acknowledge the legitimacy of the other's viewpoint and focus on a give-and-take dialogue create more positive experiences that are also more influential (Maki & Raimi, 2017; Regan, 2007). Such a dialogue could still address misinformation by providing scientifically grounded information, but couch these corrections within active listening and consideration of what others are thinking and feeling. Listening carefully to what a misinformed person is saying could not only help that person be more receptive but also help identify common logical fallacies. In these instances, rather than countering misinformation with accurate information, misinformation is likely best countered by identifying logical fallacies in the misinformation, such as cherry-picking data (Cook & Lewandowsky, 2011). Cook and Lewandowsky also argue that effective communicators follow the identification of a fallacy by circling back to core messages grounded in science in order to mentally replace the misinformation with succinctly articulated accurate information. That is to say, a fallacy requires substitution in conversation with a simplified, scientifically accurate replacement.

4.4.3 Create core science messages

Climate change education does not need to be complicated. Brief statements that provide information about how climate change works have been shown to change attitudes about the topic (Ranney & Clark, 2016). An ideal conversation will contribute to improved climate literacy by moving the public's general mental models of climate change into alignment with expert climate scientists' understanding of the phenomenon. This includes the goals of encouraging the public to understand the causes and evidence of climate change, the consequences of climate change, and the mechanisms that connect the causes with evidence of climate change and its consequences (Volmert, 2014). Attending to causal processes could encourage people to think more in terms of systems, such as the carbon cycle, rather than disconnected attributes such as carbon dioxide or isolated consequences such as melting icebergs. Systems thinking (i.e., thinking holistically about complex systems rather than seeing all parts as disconnected from one another) can encourage people to see people, including themselves, as operating within the climate system. Situated within the system, people can better perceive themselves as causal agents, as those who will experience predicted impacts, and, as part of a group, as having the power to address the causes and impacts. Encouraging systems thinking can be helpful in climate change messaging because it is related to both risk perception and policy support (Lezak & Thibodeau, 2016).

In order to address both actual knowledge and competency-related self-presentation concerns, the content of the core message should be easily understood. Explanatory metaphors and analogies can provide a means of conveying key climate change learning objectives (cf. Sopory & Dillard, 2002). Consistent with aligning expert and public understanding of climate change, an explanatory metaphor provides "a bridge between expert and public (that is, non-scientist) understandings" (Kendall-Taylor & Haydon, 2016, p. 416) that highlights salient features of a complex or abstract concept and maps them onto more concrete and familiar objects, events, or processes. Further, effective analogies should "(1) be factual and not misleading, (2) use a familiar domain to explain the unfamiliar, (3) be novel enough to capture interest, and (4) allow for correct extrapolations based on understanding of the known domain" (Raimi, Stern, & Maki, 2017, p. 3). Ideally, these messages would be tested with one's audience to ensure that specific learning objectives are achieved (e.g., Raimi et al., 2017). One tested, effective metaphor for encouraging systems thinking is describing the "earth as our home" (Thibodeau, Frantz, & Berretta, 2017).

In order to encourage the social construction of climate change, effective messages should be easy to repeat to others. Metaphors and analogies may help achieve this if they make the topic vivid and improve memory for the information (cf. Blondé & Girandola, 2016). As noted above, messages would ideally be tested to ensure that they are easy to recall and repeat.

The success of core climate change messages would be revealed in improved public understanding of the mechanisms that connect the causes of climate change to its evidence, and of how to mitigate the resulting risk. Greater understanding of climate change can overcome self-presentation concerns about appearing incompetent that prevent people from talking about climate change, particularly with audiences they anticipate will disagree with their position on the topic (Geiger et al., 2017; Geiger & Swim, 2016). Further, if the chosen metaphors both teach about climate change and have been shown to be repeated accurately to others, there is a higher likelihood that people will carry those core messages into other conversations.

4.4.4 Increase hope with doable solutions

Climate change messages were described earlier in this chapter as sometimes so emotionally disturbing that people prefer not to talk about the topic. In this sense, "catastrophizing" climate change has been described as counterproductive because the emotions the message produces can lead to disengagement and denial (Foust & O'Shannon Murphy, 2009; O'Neill & Nicholson-Cole, 2009). As a result, some communicators may choose to avoid disturbing images of climate change impacts as a way to avoid controversy. However, understanding changes in the climate that can be attributed to human activity is central to explaining why the issue is of such proximate concern. Describing the human activity that has disrupted the carbon cycle can help a listener understand why life on the planet is at greater risk now than it has been in past centuries, and why that risk is projected to increase over the next century (see IPCC, 2013 for details on projected climate risks).

Rather than avoiding discussion of impacts, research on fear appeals has demonstrated that messages can be improved when impacts are paired with solutions that help people cope with the problem (Tannenbaum et al., 2015). For example, when the source of most carbon dioxide in the atmosphere is described as fossil fuels used for electricity and transportation, it becomes clearer that people can engage in solutions that reduce fossil fuel use. Solutions proposed in climate communications are more likely to lead to action when they are perceived as "doable"; that is, when audience members perceive that they have the capacity to take action personally (known as self-efficacy) and believe that these actions will produce a desired outcome (known as response efficacy; Geiger et al., 2017; Norgaard, 2011). The importance of actions that make a difference is reflected in New York Times journalist and Pulitzer Prize winner Thomas Friedman's 2008 encouragement to "change your leaders not your light bulb" (Starosta, 2008). While energy-efficiency behaviors such as replacing inefficient lightbulbs can play an important part in reducing carbon emissions, the statement highlights that civic behavior is more likely than small-scale personal action to achieve the large-scale impacts necessary to counter the risks that will flow from climate change. People may not believe that they have the power to replace disliked political leaders, but wider public engagement with the challenge can also convince policy makers to commit more resources to solutions.

These findings suggest that effective climate messaging combines literacy expansion with personal actions that hit a "sweet spot": actions small enough that individuals can carry them out, yet large enough to contribute to systemic change. Examples include civic and community responses such as installing community solar panels, supporting government development of public transportation, participating in coordinated neighborhood activities to reduce collective energy use, and talking about climate change as a problem that groups and organizations can help solve by implementing these activities.

Evidence that messages have achieved their goals includes the listener's sense that they have the capacity to make a change within their own sphere of influence, and increased hope about people's ability to address climate change (Chadwick, 2015). Hope can be a particularly relevant emotion to target because it is associated with agency and expansive thinking, which can galvanize action (Cavanaugh, Cutright, Luce, & Bettman, 2011; Ojala, 2012; Snyder, 2002). Thus, messages that instill hope can motivate commitment to action, particularly the kinds of actions recommended in the messages themselves, such as community-level activities.
