Blog April 16, 2021

Commercial DNA Technologies, Algorithmic Tools, & Secrecy Threaten Fairness in Trials

The increasingly rapid pace of technological innovation has disrupted many spheres of life, but few with higher stakes than forensic evidence. Yet the private-sector tools that generate this evidence are far from flawless, and commercial secrecy can confound attempts by people accused of crimes to challenge the case against them, with devastating consequences for individuals.

Forensic technologies are proliferating and increasingly form the basis for critical evidence and determinations in court (in addition to their more familiar role in the criminal investigative process). 

For example, reliance on automated tools for analyzing contaminated DNA samples is becoming particularly common in criminal proceedings, posing challenges for fundamental justice. 

The history of traditional (that is, non-automated) forensic science is replete with examples of techniques that were relied upon in court prematurely and without rigorous scientific vetting, leading to numerous wrongful convictions and other miscarriages of justice. This history features criminal prosecutions based on forensic techniques applied inconsistently, a propensity to overstate the accuracy of forensic outcomes in court, and even prosecutions reliant on forensic techniques known to have no scientific basis at all. These problems prompted President Obama’s technology advisory council (PCAST) to conclude in 2016 that “it has become increasingly clear in recent years that lack of rigor in the assessment of the scientific validity of forensic evidence is not just a hypothetical problem but a real and significant weakness in the judicial system.”

Automated forensic technologies create several additional points of failure in this already problematic discipline. Forensic systems automate many of the tasks traditionally carried out by individuals, meaning that the automation process itself can fail even if the underlying science is sound. Complicating matters further, these technologies are often owned by companies that shroud their key operating parameters in commercial secrecy. 

Take DNA comparison analysis as an example. Sufficient standards are now in place to ensure that manual DNA comparison techniques live up to the presumptive infallibility they long enjoyed in U.S. courts. But where multiple DNA samples are mixed together, these manual ‘gold standard’ analysis techniques fail to produce a reliable match.

Police then turn to emerging Probabilistic Genotyping Technologies (PGTs), which use statistical models to estimate an individual’s DNA profile despite the intermingling. As with traditional DNA matching techniques, the mathematical model for interpreting mixed DNA profiles must be rigorously peer reviewed for accuracy. For PGTs, however, peer review of the scientific methodology alone is insufficient. The mathematical techniques PGTs rely on are implemented in software, and if the source code implementing them is flawed, errors will result even if the underlying mathematical foundation is scientifically sound.
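To make the gap between a sound model and its software implementation concrete, here is a deliberately simplified sketch of the likelihood-ratio idea at the heart of probabilistic genotyping. The allele frequencies, the single-locus scenario, and the assumption of exactly two contributors with no dropout or drop-in are hypothetical simplifications for illustration only; real PGT software additionally models peak heights, stutter, and degradation, and it is precisely that added complexity that makes implementation errors both likely and hard to spot.

```python
# Toy illustration of the likelihood-ratio calculation behind probabilistic
# genotyping. All values and modelling assumptions are hypothetical; this is
# not how any commercial PGT actually works.
from itertools import combinations_with_replacement

# Hypothetical allele frequencies at a single locus in a reference population.
ALLELE_FREQS = {"A": 0.10, "B": 0.25, "C": 0.30, "D": 0.35}

def genotypes():
    """Yield every unordered genotype with its population probability,
    assuming Hardy-Weinberg proportions."""
    for a1, a2 in combinations_with_replacement(sorted(ALLELE_FREQS), 2):
        p = ALLELE_FREQS[a1] * ALLELE_FREQS[a2]
        if a1 != a2:
            p *= 2  # heterozygotes arise two ways
        yield (a1, a2), p

def prob_evidence(observed, known_genotype=None):
    """P(observed allele set | two contributors). One contributor's genotype
    may be fixed (the suspect); the other is drawn from the population.
    In this toy model a mixture 'explains' the evidence only if the
    contributors' alleles exactly equal the observed set."""
    total = 0.0
    for g1, p1 in genotypes():
        if known_genotype is not None:
            if set(known_genotype) | set(g1) == observed:
                total += p1
        else:
            for g2, p2 in genotypes():
                if set(g1) | set(g2) == observed:
                    total += p1 * p2
    return total

observed_alleles = {"A", "B", "C"}   # alleles detected in the mixed sample
suspect_genotype = ("A", "B")        # the suspect's reference profile

# Hp: suspect plus one unknown contributor; Hd: two unknown contributors.
p_hp = prob_evidence(observed_alleles, known_genotype=suspect_genotype)
p_hd = prob_evidence(observed_alleles)
print(f"Likelihood ratio (Hp/Hd): {p_hp / p_hd:.1f}")
```

Even in this toy version, a wrong comparison or a mishandled edge case would silently change the reported likelihood ratio while the published mathematical model remained unimpeachable, which is why scrutiny of the source code, and not just the peer-reviewed methodology, matters.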

Many PGT vendors treat their source code as a trade secret, meaning there is no public opportunity to assess that code and ensure its rigour. Some PGT companies have gone so far as to refuse to disclose their source code for vetting in criminal proceedings. And because there are no formal standards for validating PGT source code, there is little assurance that the PGTs adopted by police and presented as evidence in court are error-free.

Encouragingly, recent court decisions in the United States have rejected claims of commercial secrecy. As early as 2015, a court-mandated review of a PGT called ‘STRmix’ identified coding errors that significantly impeded matching reliability. Another PGT, the ‘Forensic Statistical Tool’ or ‘FST’, provided evidence in thousands of criminal trials before a U.S. court ordered production of its source code in 2017. The subsequent adversarial scrutiny revealed that, because of coding problems, FST implemented a different scientific methodology than the one publicly described, leading it to overstate the likelihood of guilt. Use of FST was subsequently discontinued. Earlier this year, a New Jersey appellate court also confirmed that private companies offering forensic services to help secure criminal convictions cannot expect to shield their proprietary software from being vetted by the defence.

Canadian courts have been similarly reluctant to permit secrecy claims where these would undermine fundamental justice, and the Crown must proactively disclose any PGT use in the course of a criminal proceeding. Additionally, judges are obligated to play a ‘gatekeeping’ function when confronted with new forensic techniques in criminal proceedings, and are responsible for subjecting them to special scrutiny to ensure they are reliable and sound. While some PGT techniques have been relied upon by Canadian courts, to date their reliability has not been adversarially challenged.

Continued use of these technologies in Canada suggests they will, in time, have their day in court. At that stage, adversarial vetting of specific PGT products may well reveal some coding errors. But the broader secrecy in which PGTs operate will continue to pose a problem. To date, courts have limited source code disclosure obligations to defence experts while continuing to shield PGT software from peer review by the broader technical community. Yet coding errors are difficult to spot in one-off assessments, and software is not static: new errors can be introduced with each version of each vendor’s PGT.

Court-based adversarial scrutiny of DNA techniques also does little to stem reliance on shoddy tools in other settings. For example, Canadian policing and border control agencies have been known to rely on ancestry mapping websites such as Ancestry.com and FamilyDNA to solve high-profile cold cases and to reject asylum claims. Yet the accuracy of these websites is highly questionable, and the consequences of a mistaken match can be severe.

Nor does the problem stop with DNA techniques. A host of new technologies is being adopted by government agencies. Algorithmic tools analyze swaths of evidence to assess whether an individual is lying or to estimate the risk that they might reoffend if released. Automated systems analyze biometric information such as fingerprints or facial images in grainy photographs to identify perpetrators of crime. Some of these tools are more accurate than others, with wide variation in accuracy and racial bias between different vendors and different product versions.

While there have been preliminary attempts to impose vetting requirements on some emerging technologies, assessment and adoption of these tools are often left to the discretion of individual agencies or even individual officers, without meaningful input from the public or from technical communities. Adopting a more systematic means of assessing the accuracy, reliability, and broader acceptability of these tools at the procurement stage is critical.