INDIANAPOLIS – Before the start of most seasons, chances are high that athletes have gone through a computerized exam called ImPACT, or Immediate Post-Concussion Assessment and Cognitive Testing. It is a process that has become almost synonymous with preseason conditioning tests and two-a-days.

The ImPACT Test, one of the most widely used of several similar concussion management tools, is a computer-based exam that measures shape recall, reaction time, attention, working memory, and other mental abilities. Individuals take the test at the start of a season to establish a baseline score; those who later suffer a head injury are tested again before being allowed to return to play.

But baseline results may not be as accurate as ImPACT claims.

According to new research from Dr. Amy Peak, Butler University’s Director of Undergraduate Health Science Programs, and former Butler health science student Courtney Raab, individuals are outsmarting the test. Previous studies, including one cited in ImPACT’s “Administration and Interpretation Manual,” report that 89 percent of ‘sandbaggers,’ or individuals purposefully doing poorly on the test, are flagged. But according to Peak and Raab’s research, only half are caught.

“If baseline scores aren’t accurate, that could likely lead to individuals returning to play before they are healed, or individuals returning to normal activity prior to their brain being ready,” Peak says. “This is a very dangerous situation because it is clear that individuals who have had one concussion are at greater risk of having subsequent concussions.”

So why cheat the system?

Many athletes don’t want to miss playing time. In fact, a 2017 study found that nearly a third of athletes didn’t give their best effort on computerized neurocognitive tests such as ImPACT.

The ImPACT Test is key to return-to-play decisions. Though not the only determining factor, comparing post-injury scores against the baseline is something trainers and doctors routinely do to decide whether an individual can return to action or regular activities.

“Athletes get smart about how to take this test and they admit to wanting to return to action as soon as possible,” Peak says. “Some athletes ignore the risks, and just want to play, so if this test can be cheated, they will do it.”

Peak and Raab’s research shows that those who attempted to sandbag often succeeded, as long as they didn’t try to do too poorly on the test.

Seventy-seven volunteers participated in the study: 40 were told to sandbag the test and 37 were told to try their best. None of the 37 volunteers in the control group were flagged for invalid results, but of the 40 sandbaggers, 20 successfully tricked the test.

The key to not getting flagged by the test was to get questions wrong, but not too many of them.

“The group that scored much lower than our control got flagged, but the group that did bad, but not too bad were not caught,” Peak says. “Our research revealed that you can get away with doing poorly, sneak through with a low score, if your score isn’t outrageously low.”

Instead of the 89 percent detection rate that previous research suggests, Peak and Raab’s research revealed just a 50 percent rate. However, Peak says, the takeaway is not to scrap the ImPACT Test entirely; rather, their research points to key aspects of the widely used test that should be reevaluated.

The ImPACT Test’s five built-in invalidity indicators, which are designed to flag results that suggest underperformance, are not working well, she says. Peak and Raab’s research found that only two of those indicators detected more than 15 percent of test takers who tried to trick the test.

“There are some invalidity indicators that are really ineffective. Our research showed us that these indicators are not sensitive enough,” Peak says. “There are many things to consider. Are the indicators even right? Maybe the cutoffs should be higher? These are all important questions. But one thing we do know is that a much greater percentage of individuals can purposefully underperform without detection and we need to delve deeper into how to improve the test.”