
Techdirt

ID.me Finally Admits It Runs Selfies Against Preexisting Databases As IRS Reconsiders Its Partnership With The Company

2 years 11 months ago

Tech company ID.me has made amazing inroads with government customers over the past several months. Some of this is due to unvetted claims by the company's CEO, Blake Hall, who has asserted (without evidence) that the federal government lost $400 billion to fraudulent COVID-related claims in 2020. He also claimed (without providing evidence) that ID.me's facial recognition tech was sturdy, sound, accurate, and backstopped by human review.

These claims were made after it became apparent the AI was somewhat faulty, resulting in people being locked out of their unemployment benefits in several states. This was a problem, considering ID.me was now being used by 27 states to handle disbursement of various benefits. And it was bound to get worse, if for no other reason than ID.me would be expected to handle an entire nation of beneficiaries, thanks to its contract with the IRS.

The other problem is the CEO's attitude toward reported failures. He has yet to produce anything that backs up his $400 billion fraud claim, and when confronted with mass failures at the state level, he has chosen to blame fraudsters rather than acknowledge that people were simply being denied access to benefits because of imperfect selfies.

Another claim made by Hall has now been walked back, prompted by increased scrutiny of his company's activities. First, the company's AI has never been tested by an outside party, which means any accuracy claims should be given some serious side-eye until they've been independently verified.

But Hall also claimed the company wasn't using any existing databases to match faces, insinuating the company relied on 1:1 matching to verify someone's identity. That couldn't possibly be true for all benefit seekers: some had never previously uploaded a photo to the company's servers, yet were rejected when ID.me claimed it couldn't find a match.

It's obvious the company was using 1:many matching, which carries with it a bigger potential for failure, as well as the inherent flaws of almost all facial recognition tech: the tendency to be less reliable when dealing with women and minorities.
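The distinction between the two modes is worth making concrete. Here's a minimal sketch of 1:1 verification versus 1:many search over face embeddings; the names, vectors, and threshold are all invented for illustration and bear no relation to ID.me's actual system:

```python
import math

# Hypothetical gallery: each enrolled face reduced to a short feature vector.
# All values here are made up for illustration.
ENROLLED = {
    "alice": [0.10, 0.80, 0.30],
    "bob":   [0.90, 0.20, 0.50],
    "carol": [0.40, 0.40, 0.70],
}

THRESHOLD = 0.25  # maximum embedding distance to count as a "match"


def distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def verify_1_to_1(claimed_id, selfie_vec):
    """1:1 verification: compare the selfie against ONE stored template.
    Only works if the claimed identity already has an enrolled photo."""
    template = ENROLLED.get(claimed_id)
    if template is None:
        return False  # nothing on file to compare against
    return distance(selfie_vec, template) <= THRESHOLD


def search_1_to_many(selfie_vec):
    """1:many identification: search the selfie against EVERY stored
    template, returning all enrolled identities within the threshold."""
    return [name for name, vec in ENROLLED.items()
            if distance(selfie_vec, vec) <= THRESHOLD]
```

The key point the sketch makes: 1:1 verification can only compare against a template the user previously supplied, while 1:many search sweeps an entire gallery, which is why rejecting a first-time uploader implies a database lookup, and why error rates compound with gallery size.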

This increased outside scrutiny of ID.me has forced CEO Blake Hall to come clean. And it started with his own employees pointing out how continuing to maintain this line of "1-to-1" bullshit would come back to haunt the company. Internal chats obtained by CyberScoop show employees imploring Hall to be honest about the company's practices before his dishonesty caused it any more damage.

“We could disable the 1:many face search, but then lose a valuable fraud-fighting tool. Or we could change our public stance on using 1:many face search,” an engineer wrote in a message posted to a company Slack channel on Tuesday. “But it seems we can’t keep doing one thing and saying another as that’s bound to land us in hot water.”

The internal messages, obtained by CyberScoop, also imply that the company discussed the use of 1:many with the IRS in a meeting.

Those messages had a direct effect: Blake Hall issued a LinkedIn post that admitted the company used 1:many verification, which indicates the company also relies on outside databases to verify identity.

In the Wednesday LinkedIn post Hall said that 1:many verification is used “once during enrollment” and “is not tied to identity verification.”

“It does not block legitimate users from verifying their identity, nor is it used for any other purpose other than to prevent identity theft,” he writes.

Hall's post hedges things quite a bit by insinuating any failures to access benefits are the result of malicious fraudsters, rather than any flaws in ID.me's tech. But this belated honesty -- along with the company's multiple failures at the state level -- has caused the IRS to reconsider its reliance on ID.me's AI. (Archived link here.)

The Treasury Department is reconsidering the Internal Revenue Service’s reliance on facial recognition software ID.me for access to its website, an official said Friday amid scrutiny of the company’s collection of images of tens of millions of Americans’ faces.

Treasury and the IRS are looking into alternatives to ID.me, the department official said, and the agencies are in the meantime attentive to concerns around the software.

This doesn't mean the IRS has divested itself of ID.me completely. At the moment, it's only doing some shopping around. Filing your taxes online still means subjecting yourself to ID.me's verification software for the time being.

A recent blog post on ID.me's site explains how the company verifies identity and names the algorithms it relies on to match faces, which include Paravision (which has been tested by NIST) and Amazon's Rekognition, a product Amazon took off the law enforcement market in 2020, perhaps sensing the public's reluctance to embrace even more domestic surveillance tech.

This may be too little too late for ID.me. Its refusal to engage honestly and transparently with the public while gobbling up state and federal government contracts has expanded its scrutiny past that of the Extremely Online. Senator Ron Wyden wants to know why the IRS has made ID.me the only option for online filing.

I’m very disturbed that Americans may have to submit to a facial recognition system, wait on hold for hours, or both, to access personal data on the IRS website. While e-filing returns remain unaffected, I’m pushing the IRS for greater transparency on this plan.

But e-filing is affected. As the IRS's spokesperson noted in a statement to Bloomberg, ID.me is still standing between e-filers and e-filing.

[IRS spokesperson Barbara] LaManna noted that any taxpayer who does not want to use ID.me can opt against filing his or her taxes online.

It may be true that people with existing accounts might be able to route around this tech impediment, but new filers are still forced to interact with ID.me to set up accounts for e-filing. If spotty state interactions created national headlines, just wait until a nation of millions starts putting ID.me's tech through its paces.

Tim Cushing

Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did

2 years 11 months ago

Another day, another privacy scandal that likely ends with nothing changing.

Crisis Text Line, one of the nation's largest nonprofit support options for the suicidal, is in some hot water. A Politico report last week highlighted how the nonprofit has been caught collecting and monetizing the data of callers... to create and market customer service software. More specifically, Crisis Text Line says it "anonymizes" some user and interaction data (ranging from the frequency certain words are used, to the type of distress users are experiencing) and sells it to a for-profit partner named Loris.ai. Crisis Text Line has a minority stake in Loris.ai, and gets a cut of its revenues in exchange.

As we've seen in countless privacy scandals before this one, the idea that this data is "anonymized" is once again held up as some kind of get out of jail free card:

"Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable.”"

But as we've noted more times than I can count, "anonymized" is effectively a meaningless term in the privacy realm. Study after study after study has shown that it's relatively trivial to identify a user's "anonymized" footprint when that data is combined with a variety of other datasets. For a long time the press couldn't be bothered to point this out, something that's thankfully starting to change.
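The re-identification trick those studies describe is just a database join on quasi-identifiers. A toy sketch, using entirely invented records, shows how stripping names accomplishes very little when a second dataset shares a few innocuous-looking fields:

```python
# Invented example data. A record with no name attached...
anonymized = [
    {"zip": "02139", "birth_year": 1984, "gender": "F", "distress": "panic"},
    {"zip": "60611", "birth_year": 1990, "gender": "M", "distress": "grief"},
]

# ...joined against a second, public dataset (e.g. a voter roll) that
# carries the same quasi-identifiers plus a name.
voter_roll = [
    {"name": "J. Doe",   "zip": "02139", "birth_year": 1984, "gender": "F"},
    {"name": "A. Smith", "zip": "60622", "birth_year": 1990, "gender": "M"},
]


def reidentify(anon_rows, known_rows, keys=("zip", "birth_year", "gender")):
    """Join two datasets on shared quasi-identifiers. Any record whose
    combination of quasi-identifiers is unique in the known dataset gets
    its name re-attached -- no "identifying details" required."""
    hits = []
    for anon in anon_rows:
        matches = [k for k in known_rows
                   if all(k[key] == anon[key] for key in keys)]
        if len(matches) == 1:  # unique combination -> re-identified
            hits.append((matches[0]["name"], anon["distress"]))
    return hits
```

In this toy case the first "anonymized" record is re-linked to a name because its zip/birth-year/gender combination is unique in the second dataset; real-world studies have found a handful of such fields is enough to single out most of the population.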

Also, just as in most privacy scandals, the organization caught selling access to this data goes out of its way to portray the arrangement as something much different than it actually is. In this case, it's acting as if it's just being super altruistic:

"We view the relationship with Loris.ai as a valuable way to put more empathy into the world, while rigorously upholding our commitment to protecting the safety and anonymity of our texters,” Rodriguez wrote. He added that "sensitive data from conversations is not commercialized, full stop."

Obviously there are layers of dysfunction that have helped normalize this kind of stupidity. One, it's 2022 and we still don't have even a basic privacy law for the internet era that sets out clear guidelines and imposes stiff penalties on negligent companies, nonprofits, and executives. And we don't have a basic law not because it's hard (though writing any decent law certainly isn't easy), but because a parade of large corporations, lobbyists, and revolving door regulators don't want the data monetization party to suffer even a modest drop in revenues from the introduction of modest accountability, transparency, and empowered end users. It's just boring old greed. There's a lot of tap dancing that goes on to pretend that's not the reason, but it doesn't make it any less true.

We also don't adequately fund mental health care in the United States, forcing desperate people to reach out to startups that clearly don't fully understand the scope of their responsibility. We also don't adequately fund and resource our privacy regulators at agencies like the FTC. And even when the FTC does act (which it often can't in the case of nonprofits), the penalties and fines are often pathetic relative to the scale of the money being made.

Even before these problems are considered, you have to factor in that the entire adtech space reaches across industries from big tech to telecom, and is designed specifically to be a convoluted nightmare, making oversight as difficult as possible. The end result is just about what you'd expect: a steady parade of scandals (like the other big scandal last week, in which gay/bi dating and Muslim prayer apps were caught selling user location data) that briefly generate a few headlines and furrowed eyebrows without producing any meaningful change.

Karl Bode

Massachusetts Court Says Breathalyzers Are A-OK Less Than Three Months After Declaring Them Hot Garbage

2 years 11 months ago

Breathalyzers are like drug dogs and field tests: they're considered infallible right up until they're challenged in court. Once challenged, the evidence seems to indicate all of the above are basically coin tosses the government always claims to win. They're good enough for a search or an arrest so long as the only person questioning them is the interested party on the receiving end of warrantless searches and possibly bogus criminal charges. But when the evidentiary standard is a little more rigorous than a roadside stop's, probable cause assertions start falling apart.

Drug dogs are only as good as their handlers. They perform probable cause tricks in exchange for praise and treats. Field drug tests turn bird poop and donut crumbs into probable cause with a little roadside swirling of $2-worth of chemicals. And breathalyzers turn regular driving into impaired driving with devices that see little in the way of calibration or routine maintenance.

Courts have seldom felt compelled to argue against law enforcement expertise and training, even when said expertise/training relies on devices never calibrated or maintained, even when said devices are capable of depriving people of their freedom.

Every so often, courts take notice of these weak assertions of probable cause -- ones almost entirely supported by cop tools that remain untested and unproven. Late last year, a state judge issued an order forbidding the use of breathalyzer results as evidence in impaired driving prosecutions. District court judge Robert Brennan said he had numerous concerns about the accuracy of the tests, the oversight of testing, and the testing of equipment by the Massachusetts Office of Alcohol Testing.

“Breathalyzer results undeniably are among the most incriminating and powerful pieces of evidence in prosecutions involving either alcohol impairment or “per se” blood alcohol percentage as an element. Their improper inclusion in criminal cases not only unfairly impacts individual defendants, but also undermines public confidence in the criminal justice system.”

The pause on using breathalyzer tests as evidence is only the most recent development in a years-long challenge of their accuracy. In 2017, ruling on the reliability of tests taken between 2012 and 2014, Brennan found that while the tests were accurate, the way the state maintained them was not.

A court finally found a reason to push back against assertions of training and expertise, as well as assertions that cop tech should be considered nigh invulnerable. But the pushback is over. The same court is apparently now satisfied that the tech it questioned last November is good enough to make determinations that can deprive people of their property and freedom.

Breathalyzers are back in business in the Bay State after a judge dropped the suspension on breath tests, which cops use to bust and prosecute drunk drivers.

Salem Judge Robert Brennan, who in November ordered the statewide exclusion of breath test results, has tossed out the police Breathalyzer pause.

The Draeger Alcotest 9510 breath tests have come under fire for several years, as a Springfield OUI attorney represents defendants in statewide Breathalyzer litigation. Lead defense attorney Joseph Bernard has been raising concerns about the software problems impacting the scientific reliability of the breath test.

But the Salem judge in the ruling vacating the Breathalyzer suspension said the Draeger Alcotest 9510 "produces scientifically reliable breath test results."

Judge Brennan isn't willing to let the verifiably good be the enemy of the possibly subpar. If you went long on breathalyzers late last year, it's time to cash out. According to Judge Brennan, whatever's determined to be good enough is, well, good enough to deprive people of their liberties. Brennan's decision notes there's no such thing as "perfect source code" or "flawless machines." Therefore, state residents should just resign themselves to the fact their freedom is reliant on Massachusetts' OKest breathalyzers.

"This Court remains satisfied that the public can have full confidence in the results produced by the Alcotest 9510…"

But can they though? Who knows? Certainly not this court. Certification information has been offered, but prior to the November 2021 decision, state prosecutors were voluntarily excluding breathalyzer evidence. That's not exactly a vote of confidence. And this vote against breathalyzers came from entities judged almost solely on their prosecutorial wins -- entities with every incentive to rack up as many easy wins as possible.

Weirdly, the judge says the tests are OK but their oversight isn't. Despite the fact that both facets need to be on the same level to avoid abuse and unjustified arrests, the judge is allowing roadside testing to move forward while criticizing the Office of Alcohol Testing for its "lack of candor and transparency" when dealing with the court and criminal defendants.

In the end, the system prevails. Massachusetts cops can continue to use questionable tech to effect arrests and engage in warrantless searches and detentions. As for its oversight, it's only being threatened with the possibility of further action from this court -- the same court that ended breathalyzer testing in November (citing concerns about equipment and accuracy) only to reverse course three months later.

One imagines the demands placed on the Office of Alcohol Testing will be just as temporary as this court's momentary pause on the use of unproven tech. The desire to be in the police business once again outweighs the public's concern about being on the wrong end of baseless prosecutions. The onus is back on presumably innocent defendants to prove the government isn't using faulty tech to lock them up.

Tim Cushing