
Senate's New EARN IT Bill Will Make Child Exploitation Problem Worse, Not Better, And Still Attacks Encryption


You may recall the terrible and dangerous EARN IT Act from two years ago, a push by Senators Richard Blumenthal and Lindsey Graham to chip away further at Section 230 and to blame tech companies for child sexual abuse material (CSAM). When it was initially introduced, many people noticed that it would undermine both encryption and Section 230 in a single bill. While the bill's supporters insisted that it wouldn't undermine encryption, its structure clearly set things up so that platforms would have to choose between offering encryption and spying on everything their users do. Eventually, the Senators were persuaded to adopt an amendment from Senator Patrick Leahy that more explicitly attempted to exempt encryption from the bill, though it did so in a pretty weak manner. Even so, the bill died.

But, as in 2020, 2022 is an election year, and in an election year some politicians just really want their names in headlines about how they're "protecting the children." Senator Richard Blumenthal loves the fake "protecting the children" limelight more than most of his colleagues. And thus he has reintroduced the EARN IT Act, claiming (falsely) that it will somehow "hold tech companies responsible for their complicity in sexual abuse and exploitation of children." In fact, it will make it more difficult to stop child sexual abuse, but we'll get there. You can read the bill text here; note that it is nearly identical to the version that came out of the 2020 markup process with the Leahy Amendment, with a few very minor tweaks. The bill has a lot of big-name co-sponsors from both parties, suggesting it has a very real chance of becoming law. And that would be dangerous.

If you want to know just how bad the bill is: I found out about its reintroduction -- before it was announced anywhere else -- via a press release sent to me by NCOSE, formerly "Morality in Media," the busybody organization of prudes who believe that all pornography should be banned. NCOSE was also a driving force behind FOSTA, the dangerous law with many similarities to EARN IT that (as we predicted) did nothing to stop sex trafficking and plenty to make the problem worse, while putting women in danger and making it more difficult for police to actually stop trafficking.

Amusingly (?!?), NCOSE's press release tells me both that without EARN IT tech platforms "have no incentive to prevent" CSAM, and that in 2019 tech platforms reported 70 million CSAM images to NCMEC. They use the former to insist that the law is needed, and the latter to suggest that the problem is out of control -- apparently missing that the second number actually shows platforms doing everything they can to stop CSAM (on their own services and others!) by following existing law and reporting it to NCMEC, where it can be put into a hash database and shared and blocked elsewhere.
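That hash-database pipeline is worth pausing on, since it's the mechanism EARN IT's supporters gloss over. Conceptually it works like the minimal sketch below: a platform fingerprints each upload and checks it against a shared list of hashes of known, already-reported images. This is a deliberately simplified illustration, not any platform's actual code: real deployments use perceptual hashes such as Microsoft's PhotoDNA, which survive resizing and re-encoding, rather than the plain cryptographic hash used here, and the database entry shown is a placeholder.

```python
import hashlib

# Hypothetical, simplified sketch of hash-database matching. Real systems
# (e.g., PhotoDNA) use perceptual hashes that tolerate re-encoding; a plain
# SHA-256 only matches byte-identical files. The entry below is a placeholder.
KNOWN_HASHES = {
    "0" * 64,  # placeholder fingerprint of a known, already-reported image
}

def fingerprint(image_bytes: bytes) -> str:
    """Fingerprint an uploaded file."""
    return hashlib.sha256(image_bytes).hexdigest()

def check_upload(image_bytes: bytes) -> bool:
    """True if the upload matches the shared database and should be
    blocked and reported to NCMEC (as 18 U.S.C. 2258A requires)."""
    return fingerprint(image_bytes) in KNOWN_HASHES

print(check_upload(b"example image bytes"))  # False for this toy input
```

The point of the design is that reporting multiplies each detection: one platform's report becomes a hash that every other participating platform can block on.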

But facts are not what's important here. Emotions, headlines, and votes in November are.

Speaking of the lack of facts: alongside the bill, they also released a "myth v. fact" sheet that is chock full of misleading and simply incorrect nonsense. I'll break that down in a separate post, but as one key example, the document leans heavily on the fact that Amazon sends far fewer CSAM reports to NCMEC than Facebook does. If you think about that for more than three seconds (and aren't just grandstanding for headlines), you might notice that Facebook is a social media site and Amazon is not. It's comparing two totally different types of services.

However, for this post I want to focus on the key problems with EARN IT. The very original version of the bill created a committee to study whether exempting CSAM from Section 230 would help stop CSAM. It then shifted to the form it's in now: the committee still exists, but the bill skips the step where the committee has to determine whether chipping away at 230 will help, and simply includes the carve-out as a key part of the bill. The 230 part mimics FOSTA (which, again, has completely failed to do what it claimed and has made the actual problems worse), adding a new exemption that removes Section 230 protections for any claims involving CSAM.

EARN IT will make the CSAM problem much, much worse.

At least in the FOSTA case, supporters could (incorrectly and misleadingly, as it turned out) point to Backpage as an example of a site that had been sued for trafficking and used Section 230 to block the lawsuit. But here... there's nothing. There really aren't examples of websites using Section 230 to try to block claims of child sexual abuse material. So it's not even clear what problem these Senators think they're solving (unless the problem is "not enough headlines during an election year about how I'm protecting the children.")

The best they can say is that companies need the threat of law to report and take down CSAM. Except, again, pretty much every major website that hosts user content already does this. This is why groups like NCOSE can trumpet "70 million CSAM images" being reported to NCMEC: all of the major internet companies actually do what they're supposed to do.

And here's where we get into one of the many reasons this bill is so dangerous. It totally misunderstands how Section 230 works, and in doing so (as with FOSTA) it is likely to make the very real problem of CSAM worse, not better. Section 230 gives companies the flexibility to try different approaches to difficult content moderation challenges, to experiment, and to adjust as they learn what works, without fear of liability for any "failure." Removing Section 230 protections does the opposite: it says that if you do anything, you may face crippling legal liability. That makes companies less willing to seek out, take down, and report CSAM, because admitting there is CSAM on your platform to search for and deal with is exactly what increases your liability.

EARN IT gets the problem exactly backwards. It disincentivizes action, because the vast majority of actions a company could take will increase rather than decrease its liability. As Eric Goldman wrote two years ago, this version of EARN IT doesn't penalize companies for CSAM; it penalizes them for (1) not magically making all CSAM disappear, (2) knowing too much about CSAM (effectively telling them to stop looking for it and taking it down), or (3) not exiting the industry altogether (as we saw a bunch of dating sites do post-FOSTA).

EARN IT is based on the extremely faulty assumption that internet companies don't care about CSAM and need more incentive to act. The real problem is that CSAM has always been huge, and stopping it requires actual law enforcement work focused on the producers of that content. By threatening websites with massive liability if they make a mistake, the bill makes law enforcement's job harder, because companies will be less able to work with investigators. This is not theoretical. We saw exactly this problem with FOSTA: multiple law enforcement agencies have said that FOSTA made their jobs harder because they can no longer find the information they need to stop sex traffickers. EARN IT creates the exact same problem for CSAM.

So the end result is that, by misunderstanding Section 230 and by misunderstanding internet companies' existing willingness to fight CSAM, EARN IT will undoubtedly make the CSAM problem worse: it will be more difficult for companies to track down and report CSAM, and more difficult for law enforcement to track down and arrest those actually responsible for it. It's a very, very bad and dangerous bill -- and that's before we even get to the issue of encryption!

EARN IT is still very dangerous for encryption

EARN IT supporters claim they "fixed" the threat to encryption in the original bill by using text similar to Senator Leahy's amendment to say that using encryption cannot "serve as an independent basis for liability." But, the language still puts encryption very much at risk. As we've seen, the law enforcement/political class is very quick to want to (falsely) blame encryption for CSAM. And by saying that encryption cannot serve as "an independent basis" for liability, that still leaves open the door to using it as one piece of evidence in a case under EARN IT.

Indeed, one of the changes from the 2020 bill is that, immediately after saying encryption can't be an independent basis for liability, it adds a new section that effectively walks back those encryption protections. The new section says: "Nothing in [the part that says encryption isn't a basis for liability] shall be construed to prohibit a court from considering evidence of actions or circumstances described in that subparagraph if the evidence is otherwise admissible." In other words, as long as someone bringing a case under EARN IT can point to something unrelated to encryption, they can then point to the use of encryption as additional evidence of liability for CSAM on the platform.

Again, the end result is drastically increased liability for the use of encryption. While no one will be able to use encryption alone as evidence, as long as they point to one other thing -- such as a single piece of CSAM the platform failed to catch -- they can bring the encryption evidence back in and suggest (incorrectly) some sort of pattern or willful blindness.

And this doesn't even touch on what will come out of the "committee" and its best practices recommendations, which very well might include an attack on end-to-end encryption.

The end result is that (1) EARN IT attacks a problem that doesn't exist (the use of Section 230 to avoid responsibility for CSAM), (2) EARN IT will make the actual problem of CSAM worse by making it much riskier for internet companies to fight CSAM, and (3) EARN IT puts encryption at risk by increasing the potential liability of any company that offers it.

It's a bad and dangerous bill and the many, many Senators supporting it for kicks and headlines should be ashamed of themselves.

Mike Masnick

Daily Deal: The Stellar Utility Software Bundle


The Stellar Utility Software Bundle has what you need to recover data, reinforce security, erase sensitive documents, and organize photos. It features Stellar Data Recovery Standard Windows, Ashampoo Backup Pro 15, Ashampoo WinOptimizer 19, InPixio Photo Editor v9, Nero AI Photo Tagger Manager, and BitRaser File Eraser. It is on sale for $39.95.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

ID.me Finally Admits It Runs Selfies Against Preexisting Databases As IRS Reconsiders Its Partnership With The Company


Tech company ID.me has made amazing inroads with government customers over the past several months. Some of this is due to unvetted claims by the company's CEO, Blake Hall, who has asserted (without evidence) that the federal government lost $400 billion to fraudulent COVID-related claims in 2020. He also claimed (without providing evidence) that ID.me's facial recognition tech was sturdy, sound, accurate, and backstopped by human review.

These claims were made after it became apparent the AI was somewhat faulty, resulting in people being locked out of their unemployment benefits in several states. This was a problem, considering ID.me was by then being used by 27 states to handle the disbursement of various benefits. And it was bound to get worse, if for no other reason than ID.me would be expected to handle an entire nation of beneficiaries, thanks to its contract with the IRS.

The other problem is the CEO's attitude toward reported failures. He has yet to produce anything that backs up his $400 billion fraud claim, and, when confronted with mass failures at the state level, he has chosen to blame fraudsters, rather than acknowledge that people were simply being denied access to benefits because of imperfect selfies.

Increased scrutiny of the company's activities has now prompted a walk-back of another of Hall's claims. First, the company's AI has never been tested by an outside party, which means any accuracy claims should be given some serious side-eye until the tech has been independently verified.

But Hall also claimed the company wasn't running faces against any existing databases, insinuating that it relied solely on 1:1 matching to verify someone's identity. That couldn't possibly be true for all benefit seekers: some people who had never previously uploaded a photo to the company's servers were rejected when ID.me claimed it couldn't find a match.

It's obvious the company was using 1:many matching, which carries with it a bigger potential for failure, as well as the inherent flaws of almost all facial recognition tech: the tendency to be less reliable when dealing with women and minorities.
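The distinction matters technically, not just rhetorically. In 1:1 verification, a fresh selfie is compared against the single identity being claimed; in 1:many identification, it is searched against every face already enrolled, so the odds of a false match grow with the size of the database. Here is a minimal, purely illustrative sketch of the two modes. This is not ID.me's actual pipeline: the embedding model, the threshold, and the data are all hypothetical stand-ins.

```python
from typing import Optional

import numpy as np

# Purely illustrative -- not ID.me's pipeline. Assumes some face-embedding
# model has already mapped each photo to a 128-d vector; the threshold is
# a made-up number.
THRESHOLD = 0.8

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_1_to_1(selfie: np.ndarray, claimed_photo: np.ndarray) -> bool:
    """1:1 verification: is this selfie the one person they claim to be?
    Exactly one comparison, one chance of a false match."""
    return cosine(selfie, claimed_photo) >= THRESHOLD

def search_1_to_many(selfie: np.ndarray, database: np.ndarray) -> Optional[int]:
    """1:many identification: who in the whole database is this?
    N comparisons, so false-match odds grow with the database."""
    scores = database @ selfie / (
        np.linalg.norm(database, axis=1) * np.linalg.norm(selfie)
    )
    best = int(np.argmax(scores))
    return best if scores[best] >= THRESHOLD else None

rng = np.random.default_rng(0)
enrolled = rng.normal(size=(1000, 128))   # 1,000 previously enrolled faces
probe = rng.normal(size=128)              # fresh selfie embedding
print(verify_1_to_1(probe, enrolled[0]))  # one comparison
print(search_1_to_many(probe, enrolled))  # a thousand comparisons
```

Note how the 1:many path has to clear the same threshold a thousand times instead of once. That is why it fails more often, and why it requires a preexisting database of faces in the first place.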

This increased outside scrutiny of ID.me has forced CEO Blake Hall to come clean. And it started with his own employees pointing out how continuing to maintain this line of "1-to-1" bullshit would come back to haunt the company. Internal chats obtained by CyberScoop show employees imploring Hall to be honest about the company's practices before his dishonesty caused it any more damage.

“We could disable the 1:many face search, but then lose a valuable fraud-fighting tool. Or we could change our public stance on using 1:many face search,” an engineer wrote in a message posted to a company Slack channel on Tuesday. “But it seems we can’t keep doing one thing and saying another as that’s bound to land us in hot water.”

The internal messages, obtained by CyberScoop, also imply that the company discussed the use of 1:many with the IRS in a meeting.

Those messages had a direct effect: Blake Hall issued a LinkedIn post that admitted the company used 1:many verification, which indicates the company also relies on outside databases to verify identity.

In the Wednesday LinkedIn post Hall said that 1:many verification is used “once during enrollment” and “is not tied to identity verification.”

“It does not block legitimate users from verifying their identity, nor is it used for any other purpose other than to prevent identity theft,” he writes.

Hall's post hedges things quite a bit by insinuating that any failures to access benefits are the result of malicious fraudsters, rather than flaws in ID.me's tech. But this belated honesty -- along with the company's multiple failures at the state level -- has caused the IRS to reconsider its reliance on ID.me's AI. (Archived link here.)

The Treasury Department is reconsidering the Internal Revenue Service’s reliance on facial recognition software ID.me for access to its website, an official said Friday amid scrutiny of the company’s collection of images of tens of millions of Americans’ faces.

Treasury and the IRS are looking into alternatives to ID.me, the department official said, and the agencies are in the meantime attentive to concerns around the software.

This doesn't mean the IRS has divested itself of ID.me completely. At the moment, it's only doing some shopping around. Filing your taxes online still means subjecting yourself to ID.me's verification software for the time being.

A recent blog post on ID.me's site explains how the company verifies identity and names the algorithms it relies on to match faces. These include Paravision (which has been tested by NIST) and Amazon's Rekognition, a product Amazon pulled from the law enforcement market in 2020, perhaps sensing the public's reluctance to embrace even more domestic surveillance tech.

This may be too little, too late for ID.me. Its refusal to engage honestly and transparently with the public while gobbling up state and federal government contracts has pushed scrutiny of the company well past the Extremely Online. Senator Ron Wyden wants to know why the IRS has made ID.me the only option for online filing.

I’m very disturbed that Americans may have to submit to a facial recognition system, wait on hold for hours, or both, to access personal data on the IRS website. While e-filing returns remain unaffected, I’m pushing the IRS for greater transparency on this plan.

But e-filing is affected. As the IRS's spokesperson noted in a statement to Bloomberg, ID.me is still standing between e-filers and e-filing.

[IRS spokesperson Barbara] LaManna noted that any taxpayer who does not want to use ID.me can opt against filing his or her taxes online.

People with existing accounts may be able to route around this tech impediment, but new filers are still forced to interact with ID.me to set up accounts for e-filing. If spotty state interactions created national headlines, just wait until a nation of millions starts putting ID.me's tech through its paces.

Tim Cushing

Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did


Another day, another privacy scandal that likely ends with nothing changing.

Crisis Text Line, one of the nation's largest nonprofit support options for the suicidal, is in some hot water. A Politico report last week highlighted how the organization has been caught collecting and monetizing the data of the desperate people it serves... to create and market customer service software. More specifically, Crisis Text Line says it "anonymizes" some user and interaction data (ranging from how frequently certain words are used to the type of distress users are experiencing) and sells it to a for-profit partner named Loris.ai. Crisis Text Line holds a minority stake in Loris.ai and gets a cut of its revenues in exchange.

As we've seen in countless privacy scandals before this one, the idea that this data is "anonymized" is once again held up as some kind of get-out-of-jail-free card:

"Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable."

But as we've noted more times than I can count, "anonymized" is effectively a meaningless term in the privacy realm. Study after study after study has shown that it's relatively trivial to identify a user's "anonymized" footprint when that data is combined with a variety of other datasets. For a long time the press couldn't be bothered to point this out, something that's thankfully starting to change.
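If you want the intuition for why "anonymized" means so little, the canonical demonstration is a linkage attack, going back to Latanya Sweeney's work showing that ZIP code, birth date, and sex alone uniquely identify an estimated 87% of Americans. The toy sketch below joins "anonymized" records to a public auxiliary dataset on exactly those quasi-identifiers. Every record and field name here is fabricated for illustration; the point is the join, not the data.

```python
# Toy linkage attack on "anonymized" data. All records are fabricated and
# all field names hypothetical. Stripping names doesn't help when
# quasi-identifiers (ZIP + birth date + sex) survive in the released data.

anonymized_logs = [
    {"zip": "02139", "dob": "1990-04-12", "sex": "F", "topic": "self-harm"},
    {"zip": "60601", "dob": "1985-11-02", "sex": "M", "topic": "anxiety"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "02139", "dob": "1990-04-12", "sex": "F"},
    {"name": "John Roe", "zip": "60601", "dob": "1985-11-02", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def reidentify(anon_rows, aux_rows, keys=QUASI_IDENTIFIERS):
    """Join 'anonymized' rows to an auxiliary dataset on quasi-identifiers."""
    index = {tuple(r[k] for k in keys): r["name"] for r in aux_rows}
    return [
        {**row, "name": index[tuple(row[k] for k in keys)]}
        for row in anon_rows
        if tuple(row[k] for k in keys) in index
    ]

for hit in reidentify(anonymized_logs, public_voter_roll):
    print(hit)  # names reattached to "anonymized" records
```

Strip the names, and one dictionary join puts them right back.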

Also, just like most privacy scandals, the organization caught selling access to this data goes out of its way to portray it as something much different than it actually is. In this case, they're acting as if they're just being super altruistic:

"We view the relationship with Loris.ai as a valuable way to put more empathy into the world, while rigorously upholding our commitment to protecting the safety and anonymity of our texters,” Rodriguez wrote. He added that "sensitive data from conversations is not commercialized, full stop."

Obviously there are layers of dysfunction that have helped normalize this kind of stupidity. One, it's 2022 and we still don't have even a basic privacy law for the internet era, one that sets out clear guidelines and imposes stiff penalties on negligent companies, nonprofits, and executives. And we don't have such a law not because it's hard to write (though writing any decent law certainly isn't easy), but because a parade of large corporations, lobbyists, and revolving door regulators don't want the data monetization party to suffer even a modest drop in revenues from the introduction of modest accountability, transparency, and empowered end users. It's just boring old greed. There's a lot of tap dancing that goes on to pretend otherwise, but that doesn't make it any less true.

We also don't adequately fund mental health care in the United States, forcing desperate people to reach out to startups that clearly don't fully understand the scope of their responsibility. Nor do we adequately fund and resource our privacy regulators at agencies like the FTC. And even when the FTC does act (which it often can't where nonprofits are concerned), the penalties and fines are often pathetic compared to the scale of the money being made.

Even before these problems are considered, you have to factor in that the entire adtech space reaches across industries from big tech to telecom, and is designed specifically to be a convoluted nightmare that makes oversight as difficult as possible. The end result is just about what you'd expect: a steady parade of scandals (like the other big one last week, in which gay/bi dating and Muslim prayer apps were caught selling user location data) that briefly generate a few headlines and furrowed eyebrows without producing any meaningful change.

Karl Bode

Massachusetts Court Says Breathalyzers Are A-OK Less Than Three Months After Declaring Them Hot Garbage


Breathalyzers are like drug dogs and field drug tests: they're considered infallible right up until they're challenged in court. Once challenged, the evidence seems to indicate all of the above are basically coin tosses the government always claims to win. They're good enough for a search or an arrest when the only person contesting them is the interested outsider who's been subjected to warrantless searches and possibly bogus criminal charges. But when the evidentiary standard is a little more rigorous than a roadside stop's, probable cause assertions start falling apart.

Drug dogs are only as good as their handlers. They perform probable cause tricks in exchange for praise and treats. Field drug tests turn bird poop and donut crumbs into probable cause with a little roadside swirling of $2 worth of chemicals. And breathalyzers turn regular driving into impaired driving via devices that see little in the way of calibration or routine maintenance.

Courts have seldom felt compelled to argue against law enforcement expertise and training, even when said expertise/training relies on devices never calibrated or maintained, even when said devices are capable of depriving people of their freedom.

Every so often, though, courts take notice of weak assertions of probable cause -- ones almost entirely supported by cop tools that remain untested and unproven. Late last year, a state judge issued an order forbidding the use of breathalyzer results as evidence in impaired driving prosecutions. District court judge Robert Brennan said he had numerous concerns about the accuracy of the tests and about the oversight of testing equipment by the Massachusetts Office of Alcohol Testing.

“Breathalyzer results undeniably are among the most incriminating and powerful pieces of evidence in prosecutions involving either alcohol impairment or “per se” blood alcohol percentage as an element. Their improper inclusion in criminal cases not only unfairly impacts individual defendants, but also undermines public confidence in the criminal justice system.”

The pause on using breathalyzer tests as evidence was only the most recent development in a years-long challenge to their accuracy. In 2017, ruling on the reliability of tests taken between 2012 and 2014, Brennan found that while the tests themselves were accurate, the state's maintenance of the devices was not.

A court finally found a reason to push back against assertions of training and expertise, as well as the assumption that cop tech is nigh infallible. But the pushback is over. The same court is apparently now satisfied that the tech it questioned last November is good enough to make determinations that can deprive people of their property and freedom.

Breathalyzers are back in business in the Bay State after a judge dropped the suspension on breath tests, which cops use to bust and prosecute drunk drivers.

Salem Judge Robert Brennan, who in November ordered the statewide exclusion of breath test results, has tossed out the police Breathalyzer pause.

The Draeger Alcotest 9510 breath tests have come under fire for several years, as a Springfield OUI attorney represents defendants in statewide Breathalyzer litigation. Lead defense attorney Joseph Bernard has been raising concerns about the software problems impacting the scientific reliability of the breath test.

But the Salem judge in the ruling vacating the Breathalyzer suspension said the Draeger Alcotest 9510 "produces scientifically reliable breath test results."

Judge Brennan isn't willing to let the possibly subpar be the enemy of the verifiable good. If you went long on breathalyzers late last year, it's time to cash out. According to Judge Brennan, whatever's determined to be good enough is, well, good enough to deprive people of their liberties. Brennan's decision notes there's no such thing as "perfect source code" or "flawless machines." Therefore, state residents should just resign themselves to the fact that their freedom relies on Massachusetts' OKest Breathalyzers.

"This Court remains satisfied that the public can have full confidence in the results produced by the Alcotest 9510…"

But can they, though? Who knows? Certainly not this court. Certification information has been offered, but prior to the November 2021 decision, state prosecutors were voluntarily excluding breathalyzer evidence. That's not exactly a vote of confidence. And that vote against breathalyzers came from entities judged almost solely on their prosecutorial wins, which gives them every incentive to rack up as many easy wins as possible.

Weirdly, the judge says the tests are OK but their oversight isn't. Despite the fact that both facets need to be on the same level to avoid abuse and unjustified arrests, the judge is allowing roadside testing to move forward while criticizing the Office of Alcohol Testing for its "lack of candor and transparency" when dealing with the court and criminal defendants.

In the end, the system prevails. Massachusetts cops can continue to use questionable tech to effect arrests and engage in warrantless searches and detentions. As for its oversight, it's only being threatened with the possibility of further action from this court -- the same court that ended breathalyzer testing in November (citing concerns about equipment and accuracy) only to reverse course three months later.

One imagines the demands placed on the Office of Alcohol Testing will be just as temporary as this court's momentary pause on the use of unproven tech. The desire to be in the police business once again outweighs the public's concern about being on the wrong end of baseless prosecutions. The onus is back on presumably innocent defendants to prove the government isn't using faulty tech to lock them up.

Tim Cushing