Artificial intelligence (AI) has become a powerful tool in various sectors, including the criminal justice system. However, concerns about algorithmic biases and the lack of human oversight have prompted Virginia lawmakers to take action. During this year’s General Assembly session, Del. Cliff Hayes Jr. (D–Chesapeake) introduced a bill aimed at regulating the use of AI-based tools in criminal justice, ensuring that human judgment remains central to critical decisions.
The Push for Human Oversight in AI-Driven Decisions
Del. Hayes’ bill seeks to address the growing reliance on AI in the criminal justice system by prohibiting AI-generated recommendations from being the sole basis for key decisions. These decisions include pre-trial detention, prosecution, sentencing, parole, and rehabilitation. The legislation also allows for legal challenges or objections to any AI-driven decisions, emphasizing the need for accountability and transparency.
Hayes, a technology management veteran with three decades of experience, acknowledges the benefits of AI but warns of its potential dangers. “AI definitely offers great benefits,” he said. “But there’s another side to that coin. In some cases, we know AI, when it’s not accurate, can be extremely damaging and harmful.” He questions whether the government should experiment with AI in court cases, where decisions can significantly impact individuals’ lives.
“I think we need to continue to have human oversight in those cases, qualified human oversight,” Hayes emphasized. “The people who today are qualified to make those judgments, those decisions, should be the same individuals to make those determinations, and not rely 100% on AI.”
The Problem of Bias in AI Algorithms
The push for regulation comes amid growing evidence of bias in AI algorithms, particularly in facial recognition technology. According to a study by the National Institute of Standards and Technology (NIST), Black and Asian individuals are 10 to 100 times more likely to be misidentified by facial recognition systems than white individuals, depending on the algorithm used.
Steven Keener, an assistant professor of criminology at Christopher Newport University and director of the university’s Center for Crime, Equity, and Justice Research and Policy, highlights the disproportionate impact of these biases. “We’re a system that disproportionately incarcerates people of color, especially Black men,” Keener said. While AI tools are often promoted as a way to reduce bias and racism, Keener notes that many algorithms carry bias inherited from the data used to train them.
“What data set are you using to build the algorithm that determines who is safe and who is unsafe?” Keener asked, pointing to the potential for flawed decision-making in areas like bail eligibility.
The Risks of Over-Reliance on AI
Sanmay Das, a computer science professor at Virginia Tech and associate director of AI for social impact at the Sanghani Center for Artificial Intelligence and Data Analytics, warns against replacing human judgment with AI entirely. “I think the key point over there is accountability, right?” Das said. “If you did not have human oversight, it’s really easy to blame the machine, or the algorithm.”
While AI tools can be helpful in processing large amounts of data quickly, Das cautions that their speed and scale could lead to catastrophic outcomes when applied to decisions affecting thousands of people. “I think AI tools can be enormously helpful in many of these kinds of domains,” he said. “But, I think that we’re going to need to deal with this challenge that people may be tempted to use them and apply them at really grand scales in order to save human time, to save human effort.”
The Broader Implications of AI Regulation
Virginia is not alone in grappling with the ethical implications of AI in law enforcement. At least 26 states allow law enforcement to run facial recognition searches against driver’s license and ID photo databases, and 16 states permit the FBI to use the technology in “virtual lineups.” Over 117 million American adults are included in these facial recognition networks, raising significant privacy and equity concerns.
As the debate over AI regulation continues, Gov. Glenn Youngkin has until March 24 to review, amend, sign, or veto the legislation. If signed into law, the bill could set a precedent for other states to follow, ensuring that human oversight remains a cornerstone of justice in an increasingly AI-driven world.