IRS Stares Technological Advancement Issues in the Face

Almost every American is used to sharing information with the Internal Revenue Service (IRS), but people were about to become eerily intimate with the agency when it announced a new website requirement: live video identity verification. The agency partnered with the third-party company ID.me to prevent scammers from profiting off the chaos of the pandemic. Although this new security measure was meant to protect citizens, it could have ended up doing more harm than good.

The government has dealt with a higher-than-average number of con artists since the start of the pandemic. In late 2020, intelligence agencies revealed that Russian hackers had been infiltrating U.S. government agencies and planting malware for six months. The IRS was not among the agencies affected, but it relies on much of the same security infrastructure.

There are also cases in other countries where personal information has been exposed through "mission creep." Data from government projects like COVID-19 contact tracing were seized by police and used for unrelated purposes: German police used COVID tracking data to investigate a death at a restaurant, and Australian police officials used it to investigate murders, all without public knowledge or consent.

The IRS's solution to protecting private information was to capture live footage of anyone trying to access certain tax services, such as their personal profile, on the government website. The live image would then be cross-referenced with the photo ID already on file and confirmed. Ari Trachtenberg, CISE faculty affiliate and a researcher in the Center for Reliable Information Systems and Cyber Security at the Hariri Institute, had concerns about the project. "The downside of such a system is that both the government and third parties have a terrible track record of maintaining the privacy of sensitive information," Trachtenberg said.

Not only did the plan itself have flaws, but the tools behind it have also drawn criticism. Facial recognition systems use a combination of two technologies: digital image processing and artificial intelligence (AI), which together measure a person's facial features and then find a matching image in a database. "These technologies typically have very high accuracy, in part because these biometric features tend to be fairly stable over a person's lifetime," Trachtenberg stated. "At the same time, they have also had spectacular and very public failures."
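The matching step described above can be sketched in miniature. The toy below is illustrative only, not ID.me's or the IRS's actual pipeline: it assumes a model has already reduced each face to a numeric feature vector (an "embedding"), and that verification is a nearest-neighbor search against enrolled embeddings with a similarity threshold. The function names, the example vectors, and the 0.9 threshold are all invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the ID of the closest enrolled face, or None if no
    similarity clears the threshold (i.e., verification fails)."""
    best_id, best_score = None, threshold
    for person_id, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy "database" of pre-computed embeddings keyed by a made-up ID.
database = {
    "taxpayer-001": [0.9, 0.1, 0.3],
    "taxpayer-002": [0.2, 0.8, 0.5],
}

# An embedding from a live capture, close to taxpayer-001's vector.
probe = [0.88, 0.12, 0.31]
print(best_match(probe, database))  # prints "taxpayer-001"
```

The threshold embodies the trade-off the critics raise: set it too low and impostors are accepted; set it too high and legitimate users, disproportionately those the model was not well trained on, are locked out.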

Some of these failures center on larger systemic issues with photo databases. Studies suggest that certain algorithms have trouble identifying specific minority groups. AI systems are trained on a database of faces, and because most of those faces belong to white people, the technology may be less likely to correctly identify people of color. Using these technologies can create an additional barrier for certain groups, and companies can be held liable in civil lawsuits regardless of intent.

The systemic flaws in facial recognition software design and implementation become a nationwide problem when this technology is used in government processes. Andrew Sellars, a clinical associate professor of law at Boston University, believes that technologies should go through a rigorous vetting process before any decisions are made. "It puts a spotlight on the growing recognition that there needs to be more of an impact assessment," said Sellars. "What biases could be involved, who is accountable, and do we understand the machine learning process that got us this technology?"

Following much criticism, the IRS dropped the facial recognition project just a few weeks after announcing it. The agency announced that it will use different security measures that do not involve facial recognition.

Trying to balance security, privacy, and equity may seem like a riddle with no answer at times, but Trachtenberg offers some insight into potential fixes. "I don't think that there is a foolproof solution, but one approach that has worked reasonably well in the security community is to allow as much transparency as possible." This means publishing the methods behind photo identification systems and opening them for public comment, or allowing third parties to test the systems and publish their findings.

Sellars advocates for a better screening process before any new systems are implemented. "It calls for human impact to be a larger part of the discussion when adopting technology," he said. "Once something has already been created, it's very hard to take that thing away."

Lawyers and cybersecurity experts are having serious discussions about facial recognition technology and whether it should be used at all. They argue that it has major faults, and that we don't understand machine learning well enough to ethically embed it in our lives this way. Whatever future decisions are made about facial recognition, the issue is making its way to the forefront of both tech talk and research.

"I think that the IRS reached the right result," Sellars concludes. "The best choice would be not to do this right now."