November 8, 2025

Discussing the Force Science Institute at our Town Hall Meeting
PERF members,

As I wrote in the column previewing our Town Hall Meeting in Denver, I was particularly excited to host a debate on the reliability of the Force Science Institute’s findings. Most of you are probably familiar with Force Science, which conducts research and provides training and expert testimony on police use of force. But in case you aren’t: According to the organization itself, Force Science “consists of three interconnected divisions”: (1) research, which conducts scientific investigations and presents and publishes its findings in academic and professional venues; (2) training, which applies the research division’s work by delivering presentations summarizing the results to a variety of legal and law enforcement professionals; and (3) consulting, which lends the institute’s expertise to entities engaged in examining and determining “the appropriateness of an officer’s response during a force encounter.”

The authors of a recent article in Police Quarterly questioned the reliability of Force Science’s research: “Contrary to [Force Science’s] assertions, our findings show its published materials fail to meet the scientific rigor demanded by the Daubert standard, which governs the admissibility of scientific evidence in U.S. courts. These results highlight the need for caution and critical scrutiny of such evidence, and suggest that reliance on Force Science in legal proceedings, training programs, and policing policies risks introducing unverified concepts into high-stakes decision-making contexts.”
(L–R) Ian Adams, Chuck Wexler, and Lewis Von Kliem at PERF’s Town Hall Meeting in Denver.

In response, Lewis Von Kliem from Force Science noted, “Three of [the] authors frequently testify against law enforcement officers and opposite Force Science experts in high profile litigation” and wrote, “While the article presents itself as a scientific critique, it includes errors that warrant correction.”

To further explore this controversy, I invited the study’s authors and Force Science to send one representative each to speak at our Town Hall Meeting. Those representatives were given six to eight minutes to make an opening statement and show PowerPoint slides. We were pleased to be joined by Ian Adams, an assistant professor at the University of South Carolina who coauthored the Police Quarterly article, and Lewis Von Kliem of Force Science.

After their initial presentations, I invited audience members to ask questions and join the discussion, and there was a lively debate among attendees. People were very engaged in this discussion, and it became a highlight of the Town Hall Meeting.

The opening statements from Ian Adams and Lewis Von Kliem are transcribed below, and you can view video of their comments and the rest of the discussion here. I’ll note that Ian Adams is referring to slides that you can see in the video.
Ian Adams, Assistant Professor of Criminology and Criminal Justice, University of South Carolina

My name is Ian Adams. I’m an assistant professor at the University of South Carolina. Two of my coauthors, Seth Stoughton and Brandon del Pozo, are in the audience as well. Thank you very much. And I’d like to present, very quickly, just some of the results of a study that looked into the scientific reliability of Force Science studies. And the reason I’m here today—and the reason I’m very thankful for Dr. Wexler’s invitation—is these are my principles. My principle is that when a scientist gives scientific advice to police chiefs about decisions they need to make, that scientific advice must be highly reliable. Because it can influence training, policy, outcomes, and the way that we review those outcomes and feed back into the policy loop. And so it’s really important that the scientific advice we give is highly reliable.
Ian Adams.

And the good news is Force Science agrees. Force Science’s mission is to provide high-quality research that ultimately aims to educate law enforcement, courts, and communities. That’s a quote directly from co-founder Bill Lewinski in their book, which collects 24 of their studies and which Dr. Lewinski calls the core of Force Science’s research. So, I set out to study the reliability of that. And the reason I did is that when courts have looked at Force Science, they find that it’s highly unreliable. In fact, Judge Bernal of the U.S. District Court for the Central District of California says, “this is nonscientific gobbledygook.” That “the Force Science Institute is widely regarded as a purveyor of unreliable pseudoscientific analysis, and its studies, virtually all of which are non–peer reviewed and none of which have been published in reputable scientific journals, enjoy little to no acceptance within the relevant scientific community.”

And that’s where I want to keep the focus today. This is a scientific question. This is not a question about people. This is not an attack on people. It’s not about personalities. It’s not about relationships. It’s about scientific reliability. And the question that I set out to answer is Judge Bernal’s question: Is it true that these studies are not scientifically peer reviewed, and is it true that they don’t have scientific acceptance because of their unreliability?

To the first question the judge poses, it’s not true that nothing has ever been published in a peer-reviewed journal. But it is true that 58 percent of what’s been published was in nonscientific journals. And in fact, 40 percent of those studies [are] from a single outlet—a practitioner journal, not a scientific journal, called Law Enforcement Executive Forum. A practitioner journal where Dr. Lewinski, himself a common coauthor of those studies, was the associate editor. This is a journal that is not indexed in the Web of Science. It can’t be. It doesn’t comport with the scientific standards, such as those of the Committee on Publication Ethics, that would be required to join the Web of Science. And the reason this is important—it’s fine, I do it myself: I take high-quality, peer-reviewed scientific studies and I translate them for practitioners. I think that’s extremely important. But what you don’t do is skip that first step. You don’t take articles published in Police Marksman in the early 2000s—a magazine for gun enthusiasts—and then publish them in a practitioner journal and say, “See, there’s our science.”

So, three coders. Myself, I hold both basic and advanced Force Science certifications from my time as a police officer. A second coder, a 23-year serving law enforcement administrator and PhD student, who also holds Force Science basic certification. And a third coder, who has no policing background and no Force Science background, but is an expert in methods and statistical validity. We all set out to use three different tools. Let’s give it a fair test.

The first is the strictest tool: the Maryland Scientific Methods Scale. It’s used across thousands of studies, but we used it here to ask, “Can this study’s stated design support policy recommendations?” And as you can see, even on their stated design, it’s about one and a half points out of five. And when you take in errors in the design, it falls to less than one point out of five. So no, it can’t support policy recommendations.
Second, a little more relaxed tool that’s used on things like descriptive studies and cross-sectional designs. It asks questions like, “Is your sample size justified? Is your sample size good? Did you show statistical power? Did you use random sampling? Did you use something like—you might’ve heard scientists say once in a while, ‘We controlled for blah, blah, blah’—did it do that? Did it use randomization?” Across all that, no. One point out of five available on selection. Less than half a point on comparability. Where they do make up some points is on outcomes, and those questions ask, “Do they provide precise outcomes?” And they do. But it ultimately turns into a sort of false precision, because it can’t make up for the total, in which they score just about 40 percent of the available 10 points on this much more relaxed scale.

And finally, the easiest scale, the Mixed Methods Appraisal Tool. All this is asking is, “Hey, the study said they set out to do a certain design. How did they do on that design, out of five points?” So if they say we’re a descriptive study, five questions about descriptive studies. If they say we’re nonrandomized, fine, here are five questions about nonrandomized designs. Even on this, the most relaxed scale, just 60 percent, about three out of five points.

So yes, Judge Bernal was right. These studies don’t hold acceptance within the relevant scientific communities, because they don’t have the scientific reliability necessary.

In Force Science’s reply, which was authored by Mr. Von Kliem, we find moments of agreement, specifically here. This is their response. These are their words. They say, “Force Science training and expert testimony does not presume the ability to attribute or preclude a specific human performance phenomenon to an individual.” That’s pretty wordy. What is it saying? It’s saying that you, the chiefs, your agencies, your officers should not take Force Science studies and try to generalize to another case. I agree.

And finally, extending an olive branch: I think these are good questions. They’re good questions. They’re questions that are important to policing. They were important to me as a police officer. They are important to you as police leaders. And to the degree that you have good questions, they deserve a good scientific process. To the degree that you seek scientific advice, that scientific advice needs to be highly reliable. And I urge you, do not be satisfied with low-quality, unreliable, pseudoscientific gobbledygook, in the words of Judge Bernal. Thank you.

Lewis Von Kliem, Chief Consulting and Communications Officer, Force Science

How many of you have been to Force Science training? Okay. One of the things that you might be wondering [is] why we’re in this room particularly talking about scientific methodology, and I will assure you of this: This is nonsense. There are independent researchers who called us as soon as this got published and they said, “This is a hit piece.” And they used the wrong evaluative models. They combined different types of research. They tried to do a comprehensive, once-around-the-block review of 22 different studies—some of which were led by authors other than Dr. Lewinski, as you know.
Lewis Von Kliem.

And so you think to yourself, “He left out one important thing.” And I want you to start and end with this: None of these experts are aware of any research that contradicts or discredits the findings of Force Science. None. And they are highly motivated to find it. So why are we sitting in here talking about research that they can find nothing to discredit, except that this is what it sounds like: “We don’t know if you guys are right or wrong, we just don’t feel comfortable with your methodology. We’re going to throw it through some”—I’m not sure, was it AI models? I don’t know if it was AI-generated, some of these evaluative models—“we’re going to throw these through these evaluative models and we’re going to give you a score. And then we’re going to try to convince the industry that because we do not agree with the methodology in this research and we don’t think from a scientific standard what you published in the highest-rated journals”—by the way, Force Science does publish in some of the highest-rated journals. [Law Enforcement Executive Forum] was a peer-reviewed journal, reviewed by some incredibly, highly, highly competent PhDs, one of [whom] was Dr. Bill Lewinski.

So why are we sitting here? I’m a lawyer. I was a police officer for a lot of years. I’ve focused on use of force. I’ve been, like many of you, immersed in violence. Now what I do is I’m an expert consultant. I’m a litigation consultant. I was a special assistant U.S. attorney. I was a prosecutor. I was a legal advisor for police chiefs, at least five different chiefs of police. And when I first came to Force Science, they said, “Force Science is pseudoscience,” says the New York Times in an opinion piece, which you guys cited. We discredited that opinion piece years ago and it got resurfaced again in this article. We attempted to correct the record on that opinion piece and they said, “It doesn’t have to be factual, it’s just an opinion piece.” And so I had to start doing research when I was brought on. I was a military JAG. I worked at the DoD. I specialize in domestic operational law, all the stuff you guys are talking about for National Guard stuff, my time at the Pentagon. My job—I was a senior policy attorney for Lexipol—my job was to look at policy, look at curriculum, and decide which research supports these progressive reform efforts.

And we came up with some—look, if you are going to attack Force Science, here’s what you’re attacking. Force Science is committed to this concept of honest accountability. That’s what you saw. Honest accountability requires two things. First, you cannot have terms that are so ambiguous that an officer cannot predict the lawfulness of their own behavior. When you get into an organization like PERF, which is, I think, self-described as a progressive, activist organization that is trying to elevate policing beyond the Constitutional standard. Fair enough. That’s an incredibly important voice in the discussion. And what started to happen back in the mid-2000s was, they wanted to start holding officers accountable. And when you start holding officers accountable, you take them to trial for their uses of force. And when they would lose those trials because of testimony by someone who was trained by Force Science, all that meant was somebody came in and they said, “We are only going to settle for honest accountability.
If you’re using standards that no officer knows what they mean and can’t predict the lawfulness of their behavior, we’re going to stand in the way.” But more importantly, the second criterion for honest accountability requires that you cannot have expectations on a human beyond human performance capabilities. You cannot expect cops to do things that no other human can do. And when you started to see the progressive police reform agenda—simply, they were throwing every possible word in there, from “minimum force necessary,” to “proportionality,” to “only when justified,” to “only when the community agrees.” And they’re saying, “Okay officer, go do your job and we’ll let you know after the fact whether you got it right.” And then they say, “By the way, you must stop immediately, never make a mistake.”

You guys remember the [Bay Area Rapid Transit] shooting. That’s probably one of the most critical ones, where they were saying this officer intentionally shot this guy. And we said, “We don’t think so. We think that when he yells, ‘Taser, Taser, Taser,’ there’s a lot of evidence he didn’t intend to do it. So help us understand what is the science that explains why somebody intends to draw a Taser but instead draws a gun.” And someone like Dr. Lewinski will testify to that. And then when the officers are found not guilty or they are relieved from civil liability because the expectation was beyond human performance, we have people writing hit pieces about us.

We continue to battle for honest accountability in the courts. And what ends up happening is, they start to attack the research with incorrect methodological models. But more than that—and you heard them say it today, which I think was spot on—they’re trying to attack our policy advice, our legislative advice, and they’re trying to attack our expert testimony in court. Here’s the problem with that. All of the authors—well, most of the authors—are in opposition to us. In almost every national high-profile police case you’re going to find Seth Stoughton. Fair enough. Seth and I sit on opposite sides. Almost every case we go to, we’re going to be on opposite sides, and I think we have some more coming up. Force Science will always be there if they’re going to try to hold officers corruptly accountable, disingenuously accountable, naively accountable, I don’t care what you want to call it. But it’s not honest accountability when you’re expecting things from cops that no human can do. They’ve been weaponizing video. They’ve been weaponizing police tactics. They’ve been weaponizing generally accepted police practices. And every time they try to do that, you’re going to find Force Science standing in the way.

We have thousands of peer-reviewed studies that undergird our curriculum. Do not question our curriculum. We have at least four directors of the top-rated clinical trauma centers in the country. We have some of the foundational leaders in academia who teach for us—Dr. Marc Green, Tim Lee, Richard Smith before he died, Gary Klein. These are the people who undergird our curriculum. If you want to challenge research, what typically happens in the scientific process is you try to replicate it. Tell us where we’re wrong. We do that ourselves. We have an invitation on our website: If you are aware of any research that contradicts or is inconsistent with ours, come let us know. Bring it to our attention. It’s my job to pull it out. Nobody’s done that.
This is an important conversation, but I want you to recognize how it came to you. It didn’t come to you through the scientific process. It didn’t come to you through a peer review process. We keep winning in court. That’s what’s happening. We keep discrediting their experts. That’s what’s happening. Their own experts come into court and can’t agree with each other. That’s what’s happening. And then this article pops up. Now, Ian Adams, he’s new to the expert witness game, I think. Maybe not, maybe he’s been doing it. I’m just getting used to him. I don’t have anything against him. I don’t have any specific examples of things he might’ve said in court that I would disagree with.

Thanks to Ian Adams, Lewis Von Kliem, and the audience members who spoke. I enjoyed hosting this healthy debate about a key issue facing the field, and I encourage you all to watch the full discussion. While this debate was specifically about this study of Force Science, it highlights broader questions about what constitutes quality research. Police officials often seek guidance on critical questions about policy and training. That guidance should be grounded in research that is both relevant to the realities of modern policing and vetted for quality.

For more discussions like this, I hope you save the date for our Annual Meeting on April 15–17 in Los Angeles!

Best,

Chuck