In the United States, approximately one in five homicide victims, and nearly 50 percent of all female homicide victims, are killed by an intimate partner. Local governments and law enforcement agencies use danger assessment tools to determine which victims are at the highest risk of being killed by their partner. While the content of danger assessments varies across the country, they usually involve law enforcement officers asking victims a series of yes-or-no questions, such as whether their partner owns a gun or has threatened to commit suicide. Danger assessments also include questions about past instances of strangulation, which can cause loss of consciousness within seconds and is a strong indicator that a victim is at heightened risk of serious injury or death.
At first glance, algorithmic and artificial intelligence (AI) technologies appear to be effective tools for expediting danger assessments. But Spain's use of the predictive algorithm Viogén should give government officials in the United States pause before adopting them. In a sample of domestic violence-related homicides in Spain from 2010 to 2022, Viogén had classified 55 of the 98 victims as being at negligible or low risk of repeat abuse. Although police officers and government officials could override Viogén's risk score based on their own observations, they accepted its result 95 percent of the time. Had these victims been classified at a higher risk level, they would have received more resources and police protection, potentially saving their lives. Viogén is only one of many new technologies used to provide services to domestic violence victims: AI chatbots now help victims with safety planning, and an AI system in Australia identifies dangerous domestic violence offenders. As AI and predictive algorithms become increasingly entwined with government services for domestic violence victims, officials must understand the perils of using this new technology to identify those at risk of domestic violence.
While California and several other states have used predictive algorithms similar to Viogén to assess risk in child abuse cases, these tools are not yet known to be in widespread use for domestic violence cases in the United States. One thing, however, appears certain about the rapid adoption of AI in 2024: it is not a matter of if these new tools will be adopted, but when. Before these technologies infiltrate governments' responses to domestic violence across the country, lawmakers should pass legislation requiring periodic outside audits that test each system's effectiveness and disclose which data points it uses to determine whether a victim is at high risk of death (something the Spanish government refused to do with Viogén).
Even when humans have the ability to override an algorithm's recommendation, we tend to ignore valuable evidence that contradicts the system's conclusion, a concerning phenomenon known as automation bias. These tools can also perpetuate existing patterns of discrimination, and they often lack transparency about how they weigh factors such as race, as seen with the predictive algorithms used in pretrial detention and release decisions across the United States.
It is true that humans also make mistakes in danger assessments, but when government officials rely on automated technologies for these life-or-death decisions, it becomes difficult to hold anyone accountable for failing to protect victims. Government agencies should mandate training for the law enforcement officials and staff who use these technologies, so that they critically evaluate the algorithm's risk assessments instead of blindly following each generated recommendation. These steps would make automated decision-making systems publicly accountable to domestic violence victims, who may not be aware of the extent to which an algorithm determined their danger assessment.
AI and predictive algorithms also face the classic "garbage in, garbage out" computing problem. If the information entered into a danger assessment is poor, the result will not capture the victim's true risk of deadly abuse. Say, for example, a police officer conducts a danger assessment in a hurry and asks the victim whether her partner strangled her. The victim answers "no," but tells the officer that her partner held her against the wall. The officer fails to ask a follow-up question about how the victim was held against the wall. Had the officer asked, the victim would have said that her partner held her by the throat. Without this crucial piece of information, the victim receives a low risk score and the local prosecutor files lesser charges against the alleged abuser.
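To make the stakes of a single missed answer concrete, consider a deliberately simplified, hypothetical point-based assessment. The questions, weights, and thresholds below are invented for illustration only and do not reflect Viogén or any agency's actual tool; the point is that one unrecorded "yes" about strangulation can be enough to drop a victim out of the high-risk tier.

```python
# Hypothetical, simplified point-based danger assessment.
# Questions, weights, and thresholds are invented for illustration;
# they do not represent any real agency's scoring model.

WEIGHTS = {
    "partner_owns_gun": 3,
    "threatened_suicide": 2,
    "prior_strangulation": 4,   # strangulation is a strong lethality indicator
    "recent_separation": 2,
}

def risk_tier(answers: dict[str, bool]) -> str:
    """Sum the weights of 'yes' answers and map the total to a risk tier."""
    score = sum(WEIGHTS[q] for q, yes in answers.items() if yes)
    if score >= 7:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# A rushed interview records "no" for strangulation because the victim
# described being "held against the wall" and no follow-up was asked.
rushed = {
    "partner_owns_gun": True,
    "threatened_suicide": False,
    "prior_strangulation": False,
    "recent_separation": True,
}

# A careful interview surfaces that she was held by the throat.
careful = dict(rushed, prior_strangulation=True)

print(risk_tier(rushed))   # -> "medium"
print(risk_tier(careful))  # -> "high"
```

In this sketch, the same victim is scored "medium" or "high" depending solely on whether the officer asked one follow-up question, which is the garbage-in, garbage-out problem in miniature.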
To ensure that quality information, not garbage, goes into a danger assessment, officers need stronger training in responding to domestic violence incidents and a holistic model for connecting at-risk victims to more resources. One proven method is Maryland's Lethality Assessment Program (LAP), in which a first responder identifies high-risk victims and immediately makes a warm handoff to a service provider, who helps the victim create a safety plan and connects them to essential resources. After Maryland implemented this lethality assessment method, the state saw a roughly 40 percent decrease in the number of female homicide victims killed by a male intimate partner.
The success of a model that pairs a risk assessment with an immediate connection to resources shows that identifying those most in danger of domestic violence is not a purely technical problem. Legislators should be proactive and regulate the new frontier of AI-related domestic violence responses now, rather than react too late and risk future tech-enabled tragedies.