Juries: Are They Actually Effective Deliberators?


 Jury Trials and Source Monitoring

Today, researchers are trying to understand the intricacies of jury trials and their effectiveness. With the prevalence of social media, it is becoming increasingly difficult to guarantee that jury members remain impartial (Ruva et al., 2007). Pre-trial exposure to cases is an ongoing problem, and it may not be simple to disregard information seen in the media (Ruva et al., 2007). Judges may instruct jurors to disregard any information heard prior to the case and rely only on what they hear in court (Bornstein & Greene, 2011). However, it may be difficult to know where information came from and even more difficult to simply forget it. Source monitoring is a concept in cognitive psychology that refers to determining where our memories actually came from (Goldstein, 2019).

Another issue pertains to the way judges instruct juries. When a judge asks a jury to deliberate using a standard of 'reasonable doubt' instead of a preponderance of the evidence, jurors tend to be more lenient (Stawiski et al., 2012). Juries are influenced not only by what they have seen in the media but also by the judge's wording and the testimony they hear from experts. Oftentimes, experts are brought in to testify about the problems with eyewitness accounts, but prosecutors now counter this tactic by bringing in other experts to discredit that testimony (Devenport & Cutler, 2004). This is problematic because it undermines the credibility of experts and cultivates doubt about the science of psychology (Devenport & Cutler, 2004). By examining some of the quality research that has been done, we can better understand how people on a jury deliberate and work toward making jury trials more effective.

 

How the Research Compares

 

Participants

A study by Ruva et al. (2007) worked to understand how exposure to court cases through avenues like the media may affect jurors' judgments and, subsequently, their verdicts. The researchers recruited a total of 558 university students to determine whether they could disregard information they had seen prior to a court case and reach an objective verdict. Another study, by Devenport & Cutler (2004), was conducted to determine whether a prosecution witness hired to counter an expert witness for the defense affects a jury's perception of the defense expert's testimony. For their study, they recruited 240 college students and 257 jury-eligible community members. Drawing participants from both a college and the community more accurately represents the people who may serve on a jury, which allows conclusions to be generalized to a larger portion of the population. A study like that of Ruva et al. (2007), which consists only of college students, may not reflect the thought patterns and tendencies of a larger population, so its conclusions may not transfer to an actual jury, and different results are possible with a more representative sample.

 

Methods

In the study by Ruva et al. (2007), participants were randomly assigned to one of two groups: one that did not see any negative articles about a court case and one that did. All participants then completed an unrelated task. Days later, they were shown a videotaped trial and asked to produce a verdict, disregarding any information they had previously seen. Participants were then given a source-monitoring test to determine where their information regarding the case came from. The measures in this experiment required participants to report their own results, and the researchers compared these self-reports to determine the effects of news articles on verdicts. The results indicated that people were unable to accurately determine where their information came from. These methods help us understand the difficulty of disregarding information and show how the media can play an unintentional role in influencing court outcomes. Future research could compare the effects of media that uses biased language with media that offers objective facts without carefully chosen words designed to drive up traffic and improve revenue.

 

Another study, by Stawiski et al. (2012), was conducted to determine whether the power of suggestion influences verdicts. All participants were given transcripts of a court case, but some received instructions from the judge that used the term 'preponderance of the evidence', which essentially means that the prosecution's case is more likely than not, while others received instructions that referred to 'reasonable doubt', which means the jury must feel there is no other reasonable explanation than that the defendant is guilty. Participants then deliberated either in a group or individually and determined a verdict. The results indicated that people were more lenient when instructed to use reasonable doubt than when instructed to convict using a preponderance of the evidence, indicating that the judge's instructions can guide juries toward a verdict. These methods show how the power of suggestion can influence decisions. Future research could study the impact of this power of suggestion on different kinds of cases, such as violent, drug, and civil cases, and could study the impact of suggestion on a general population rather than a group of undergraduate students.

 

Both the Ruva et al. (2007) and Stawiski et al. (2012) studies worked to understand the influence of outside forces on juries' deliberations. Both show how the power of suggestion can shape people's opinions and how people may not recognize that their opinions are being influenced. The measures in Ruva et al.'s (2007) study relied on self-reports to reach conclusions, while Stawiski et al.'s (2012) study used verdicts of guilt to gauge participants' attitudes. Self-report measures may be influenced by participants' desire to respond correctly, while guilty verdicts are influenced only by the variables related to reaching that verdict. The research design of Stawiski et al.'s (2012) study is therefore more reliable than that of Ruva et al.'s (2007), but both studies are valid and give us important insight into how jurors deliberate.

 

Limitations

The limitations of the study conducted by Stawiski et al. (2012) include a sample consisting only of undergraduate students. This is an educated group that is not necessarily representative of a random jury composed of eligible citizens, and results could have been different with a more representative sample of the general population.

 

The limitations of the study conducted by Devenport & Cutler (2004) include the fact that the experiment was performed in an artificial setting that may not replicate a true courtroom. These participants bear no responsibility for the outcomes of their verdicts, so they may be more likely to follow their own judgments rather than the instructions of the court. Results might differ if researchers questioned actual jurors about their thoughts and feelings after reaching a verdict in a real court case.

 

Of these two studies, Devenport & Cutler (2004) used a methodology that better supports causal inference than that of Stawiski et al. (2012), simply because fewer variables were involved. Devenport & Cutler (2004) studied the effects of opposing expert witnesses at trial and created three groups: one with a defense expert witness present, one with an opposing witness countering that expert, and one with no expert witness at all. Participants then rated their perceptions of the expert's credibility. Having three conditions that vary only one factor makes this method more reliable than Stawiski et al.'s (2012) study, which evaluated multiple variables. Along with determining whether the judge's instructions influenced deliberations, Stawiski et al.'s (2012) study also explored implicit bias: participants were placed into groups that varied not only in the judge's instructions but also in whether the defendant was straight or gay. Studying more than one variable at a time can sway results, as it is not entirely possible to determine causation when multiple variables change at once. Also, Stawiski et al.'s (2012) sample consisted solely of undergraduate students, while Devenport & Cutler's (2004) sample, which included both college students and jury-eligible community members, was more representative of an actual jury pool. While the procedures of Devenport & Cutler's (2004) experiment were more reliable than those of Stawiski et al.'s (2012), both studies are valid and, again, give us important insight into jury deliberations.

 

Conclusions

Considering the studies above alongside Bornstein & Greene's (2011) review, it is evident that juries are a highly effective part of our judicial system, but there are areas that can be improved. Continued research on the decision-making processes of jury members will help us understand which areas need the most attention. Understanding where juries may fall short allows us to implement policies and procedures that make an already effective component of our justice system more efficient. As Nunez et al. (2011) point out, much research has examined the cognition of the individual juror, but little has examined cognition at the group level, which is a major part of serving on a jury. We must work to understand how these groups operate just as much as we work to understand how individuals operate independently of the group. By using available research like the studies examined here, we can begin advocating for change and accountability in the media, uniformity in judges' instructions, and a reconsideration of allowing opposing witnesses to discredit the experts who have been hired.


References

 

Bornstein, B. H., & Greene, E. (2011). Jury Decision Making: Implications For and From Psychology. Current Directions in Psychological Science, 20(1), 63-67. https://doi-org.ezproxy.snhu.edu/10.1177/0963721410397282

Devenport, J. L., & Cutler, B. L. (2004). Impact of Defense-Only and Opposing Eyewitness Experts on Juror Judgments. Law and Human Behavior, 28(5), 569-576. https://doi-org.ezproxy.snhu.edu/10.1023/B:LAHU.0000046434.39181.07

Goldstein, E. B. (2019). Cognitive Psychology (5th ed.). Cengage. https://www.cengage.com/

Nunez, N., McCrea, S. M., & Culhane, S. E. (2011). Jury Decision Making Research: Are Researchers Focusing on the Mouse and Not the Elephant in the Room? Behavioral Sciences and the Law, 29(3), 439–451. https://doi-org.ezproxy.snhu.edu/10.1002/bsl.967

Ruva, C., McEvoy, C., & Bryant, J.B. (2007). Effects of Pre-Trial Publicity and Jury Deliberation on Juror Bias and Source Memory Errors. Applied Cognitive Psychology, 21(1), 45–67. https://doi-org.ezproxy.snhu.edu/10.1002/acp.1254

Stawiski, S., Dykema-Engblade, A., & Tindale, R. S. (2012). The Roles of Shared Stereotypes and Shared Processing Goals on Mock Jury Decision Making. Basic & Applied Social Psychology, 34(1), 88–97. https://doi-org.ezproxy.snhu.edu/10.1080/01973533.2011.637467