Techdirt

Can We At Least Make Sure Antitrust Isn't Deliberately Designed To Make Everyone Worse Off?

2 years 10 months ago

For decades here on Techdirt I've argued that competition is the biggest driver of innovation, and so I'm very interested in policies designed to drive more competition. Historically this has been antitrust policy, but over the past decade or so it feels like antitrust policy has become less and less about competition, and more and more about punishing companies that politicians dislike. We can debate whether or not consumer welfare is the right standard for antitrust -- I think there are people on both sides of that debate who make valid points -- but I have significant concerns about any antitrust policy that seems deliberately designed to make consumers worse off.

That's why I'm really perplexed by the recent push to pass the “American Innovation and Choice Online Act” from Amy Klobuchar which, for the most part, doesn't seem to be about increasing competition, innovation, or choice. It seems almost entirely punitive, punishing not just the very small number of companies it targets but everyone who uses those platforms.

There's not much I agree with Michael Bloomberg about, but I think his recent opinion piece on the AICOA bill is exactly correct.

At the heart of the bill is an effort to prevent big tech companies from using a widespread business practice called self-preferencing, which is generally good for both consumers and competition. Think of it this way: An ice-cream parlor makes its own flavors and sells other companies’ flavors, too. Its storefront window carries a large sign advertising its homemade wares. In smaller letters, the sign mentions that Haagen-Dazs and Breyers are available, too. Should Congress force the ice-cream store owners to advertise Haagen-Dazs and Breyers as prominently as their own products?

That’s essentially what this bill would force a handful of the largest tech companies to do. For instance, Google users searching the name of a local business now get, in their search results, the option of clicking a Google-built map. But under the bill’s requirements, the search results would likely have to exclude the Google map. Similarly, Amazon would likely be prevented from promoting its less-expensive generic goods against the biggest brand names.

Lots of businesses offer configurations of products and services in ways that are attractive to customers, often for both price and convenience. Doing this can allow companies to enter — and potentially disrupt — new markets, to the great advantage of customers.

Yet the bill views such standard business conduct as harmful. It would require covered companies — essentially Amazon, Apple, Google, Facebook and TikTok — to prove that any new instance of preferencing would “maintain or enhance the core functionality” of their business. Failure to comply could lead to fines of up to 15% of a company’s total U.S. revenue over the offending period.

Now, I think there's a very legitimate argument that if a dominant company is using its dominant position to preference something in a manner that harms competition and the end user experience, then that can be problematic, and existing antitrust law can take care of that. But this bill seems to assume that any effort to offer your own services is somehow de facto against the law.

And whether or not that harms these companies is beside the point: it will absolutely harm the users and customers of these companies, and why should that be enabled by US competition policy? The goal seems to be "if we force these companies to be worse, maybe it will drive people to competitors," which is a really bizarre way of pushing competition. We should drive competition by encouraging great innovation, not limiting how companies can innovate.

Even if you don't think that the "consumer welfare" standard makes sense for antitrust, I hope most people can at least agree that any such policy should never deliberately be making consumers worse off.

Mike Masnick

Texas Town To Start Issuing Traffic Tickets By Text Message

2 years 10 months ago

Way back in 2014, Oklahoma state senator (and former police officer) Al McAffrey had an idea: what if cops could issue traffic tickets electronically, without ever having to leave the safety and comfort of their patrol cars?

The idea behind it was officer safety. This would keep officers from standing exposed on open roads and/or interacting face-to-face with a possibly dangerous driver. The public's safety was apparently low on the priority list, since this lack of interaction could permit impaired drivers to continue driving or allow actually dangerous people to drive away from a moving violation to do more dangerous things elsewhere.

It also would allow law enforcement agencies to convert drivers to cash more efficiently by speeding up the process and limiting things that might slow down the revenue stream, like having actual conversations with drivers. On the more positive side, it would also have lowered the chance of a traffic stop turning deadly (either for the officer or the driver) by limiting personal interactions that might result in the deployment of excessive or deadly force. And it also would limit the number of pretextual stops by preventing officers from claiming to have smelled something illegal while conducting the stop.

Up to now, this has only been speculative legislation. But it's becoming a reality, thanks to government contractor Trusted Driver. Run by former police officer Val Garcia, the program operates much like the TSA's Trusted Traveler program. Users create accounts and enter personal info and then receive traffic citations via text messages.

The program is debuting in Texas, where drivers who opt in will start being texted by cops when they've violated the law.

It's a concept never done before, and it's about to happen in Bexar County: Getting a traffic ticket sent to your phone without an officer pulling you over. One police department will be the first in the nation to test it.

"It's not a 100% solution, but it's a step forward in the right direction," said Val Garcia, President & CEO of the Trusted Driver Program.

Garcia is one of five former SAPD officers who are part of a 12-member team that created and developed Trusted Driver.

"We're proud to still give back with what we've gained with our experience as a law enforcement officer," said Garcia.

The company claims the program will have several benefits, above and beyond limiting cop-to-driver interactions that have the possibility of escalating into deadly encounters. Some of the benefits aren't immediately discernible, but giving cops more personal information could actually help prevent the senseless injury or killing of drivers who may have medical reasons that would explain their seeming non-compliance. Here's Scott Greenfield highlighting this particular aspect of the Trusted Driver Program.

But this also offers an opportunity that can be critical in police interactions and has led to a great many tragic encounters.

“If you’re deaf, if you have PTSD, autism, a medical condition like diabetes or a physical disability but you’re still allowed to drive,” said Garcia. “It really gives an officer information faster in the field to handle a traffic stop if it does occur and be able to deescalate.”

That police will be aware that a driver is deaf or autistic could be of critical importance in preventing a mistaken shooting, provided the cop reads it and is adequately trained not to kill deaf people because they didn’t comply with commands.

Unfortunately, the cadre of cops behind Trusted Driver seem to feel citizens are looking for even more ways to interact with officers, even if this interaction is limited to text messages.

Through Trusted Driver, police are also able to send positive messages to drivers who are doing a stellar job obeying traffic laws.

Just like cops thinking they're doing a good thing by pulling over drivers who haven't committed a crime to give them a thumbs up or a Thanksgiving turkey, Trusted Driver seems to believe the public will be receptive to text messages from cops telling them they're doing a good job driving, delivered via a number they associate with punishment for criminal acts. And it's not like drivers in the program will be able to select which messages they receive: once you've opted in, you can have your heart rate temporarily increased by the law enforcement equivalent of slacktivism -- one that Trusted Driver believes will somehow build and repair the public's relationship with the law enforcement officers that serve them.

This lies somewhere between the frontier of law enforcement and the inevitability of tech development. It's not that it's an inherently bad idea, but there's a lot in there that's problematic, including officers receiving increased access to drivers' personal info, which will now include their cell phone numbers. Law enforcement officers have a history of abusing access to personal info, and this program gives them the opportunity to do so without ever leaving their patrol cars.

Then there's the unanswered question about enforcement. Will members of this program receive more tickets just because they're easier to ticket? Or will traffic enforcement still be evenly distributed (so to speak) across all drivers? As with other automated traffic enforcement efforts, tickets will be issued to the owner of the vehicle, rather than the actual driver, which is going to cause problems for people who haven't actually committed a moving violation, beginning with increased insurance rates and possibly ending with bench warrants for unpaid tickets that were issued to the wrong person.

Still, it's worth experimenting with. But it needs to be subject to intense scrutiny the entire time it's deployed. There's too much at risk for agencies and the general public to just let it hum along unattended in the background, steadily generating revenue. Unfortunately, if it does that part of the job (deepening the revenue stream), concerns about its use and operation are likely to become background noise easily drowned out by the sound of city coffers being filled.

Tim Cushing

Daily Deal: The 2022 FullStack Web Developer Bundle

2 years 10 months ago

The 2022 FullStack Web Developer Bundle has 11 courses to help you step up your game as a developer. You'll learn frontend and backend web technologies like HTML, CSS, JavaScript, MySQL, and PHP. You'll also learn how to use Git and GitHub, Vuex, Docker, Ramda, and more. The bundle is on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

With Stephen Breyer's Retirement, The Supreme Court Has Lost A Justice Who Was Wary Of Overly Burdensome Copyright

2 years 10 months ago

Whatever the (I'd argue unfortunate) politics behind Stephen Breyer's decision to retire as a Supreme Court Justice at the conclusion of this term, it is notable around here for his views on copyright. Breyer has generally been seen as the one Justice on the court most open to the idea that overly aggressive copyright policy was dangerous and potentially unconstitutional. Perhaps ironically, given that they are often lumped together on the overly simplistic "left/right" spectrum, Justices Breyer and Ginsburg represented somewhat opposite ends of the copyright spectrum. Ginsburg was consistently a voice in favor of expanding copyright law to extreme degrees, while Breyer seemed much more willing to recognize that the rights of users -- including fair use -- were extremely important.

If you want to see that clearly, read Ginsburg's majority opinion in the Eldred case (on whether or not copyright term extension is constitutional) as compared to Breyer's dissent. To this day I believe that 21st century copyright law would have been so much more reasonable and so much more for the benefit of the public if Breyer had been able to convince others on the court to his views. As Breyer notes in his dissent, a copyright law that does not benefit the public should not be able to survive constitutional scrutiny:

Thus, I would find that the statute lacks the constitutionally necessary rational support (1) if the significant benefits that it bestows are private, not public; (2) if it threatens seriously to undermine the expressive values that the Copyright Clause embodies; and (3) if it cannot find justification in any significant Clause-related objective.

(As an aside, the book No Law has a very, very thorough breakdown of how the majority ruling by Justice Ginsburg in that case was just, fundamentally, objectively wrong.)

That said, Breyer wasn't -- as he was sometimes painted -- a copyleft crusader or anything. As Jonathan Band details, Breyer's views on copyright appeared to be extremely balanced -- sometimes ruling for the copyright holder, and sometimes not. Indeed, to this day, I still cannot fathom how he came to write the majority opinion in the Aereo case, which used a "looks like a duck" kind of test. In that case, the company carefully followed the letter of copyright law, yet because its service felt like a different kind of service, the court was fine with declaring that it was one (even though technically it was not). We are still suffering from the impact of that case today.

So, while I didn't always think that Breyer got copyright cases correct, he was -- consistently -- much more thoughtful on copyright issues than any other Justice on today's court, and that perspective will certainly be missed.

Mike Masnick

Congress Introduces New Agricultural 'Right to Repair' Bill With Massive Farmer Support

2 years 10 months ago

Back in 2015, frustration at John Deere's draconian tractor DRM helped birth a grassroots tech movement dubbed "right to repair." The company's crackdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM (and the company's EULA) prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair (which for many owners involved hauling tractors hundreds of miles and shelling out thousands of additional dollars), or toying around with pirated firmware just to ensure the products they owned actually worked.

Seven years later, this movement is only growing. This week Senator Jon Tester said he was introducing new legislation (full text here, pdf) that would require tractor and other agricultural hardware manufacturers to make manuals, spare parts, and software access codes publicly available:

"We’ve got to figure out ways to empower farmers to make sure they can stay on the land. This is one of the ways to do it,” Tester said. “I think that the more we can empower farmers to be able to control their own destiny, which is what this bill does, the safer food chains are going to be."

The legislation comes as John Deere was recently hit with two new lawsuits accusing the company of violating antitrust laws by unlawfully monopolizing the tractor repair market. In 2018 John Deere had promised to make sweeping changes to address farmers' complaints, though by 2021 those changes had yet to materialize. Tester's legislation also comes as a new US PIRG survey shows that a bipartisan mass of farmers overwhelmingly supports reform on this front.

Tester's proposal is just one of several new efforts to rein in attempts to monopolize repair, be it by John Deere or Apple. More than a dozen state-level laws have been proposed, and the Biden administration's recent executive order on competition also urges the FTC to craft tougher rules on repair monopolization efforts. In an era rife with partisan bickering, it's refreshing to see an issue with such broad, bipartisan public support, resulting in an issue that had only niche support a half decade ago rocketing into the mainstream.

Karl Bode

YouTube Dusts Off Granular National Video Blocking To Assist YouTuber Feuding With Toei Animation

2 years 10 months ago

Hopefully, you will recall our discussion about one YouTuber, Totally Not Mark, suddenly getting flooded with 150 copyright claims on his YouTube channel all at once from Toei Animation. Mark's channel is essentially a series of videos that discuss, critique, and review anime. Toei Animation produces anime, including the popular Dragon Ball series. While notable YouTuber PewDiePie weighed in with some heavy criticism over how YouTube protects its community in general from copyright claims, the real problem here was one of location. Mark is in Ireland, while Toei Animation is based out of Japan. Japan has terrible copyright laws when it comes to anything resembling fair use, whereas Ireland is governed by fair dealing laws. In other words, Mark's use was just fine in Ireland, where he lives, but would not be permitted in Japan. Since YouTube is a global site, takedowns have traditionally been global.

Well, Mark has updated the world to note that he was victorious in getting his videos restored and cleared, with a YouTube rep working directly with him on this.

But shortly after, as Fitzpatrick revealed in a new video providing an update on the legal saga, someone “high up at YouTube” who wished to remain anonymous reached out to him via Discord. Fitzpatrick said the contact not only apologized for his situation not being addressed sooner, but divulged a prior conflict between YouTube and Toei regarding his videos' fair use status.

“I’m not going to lie, hearing a human voice that felt both sincerely eager to help and understanding of this impossible situation felt like a weight lifted off my shoulders,” Fitzpatrick said.

Hey, Twitch folks, if you're reading this, this is how it is done. But it isn't the whole story. Before the videos were claimed and blocked, Toei had requested that YouTube manually take Mark's videos offline. YouTube pushed back on Toei, asking for more information on its requested takedowns, specifically asking if the company had considered fair use/fair dealing laws in its request. Alongside that, YouTube also asked Toei to provide more information as to what in Mark's videos was infringing and why. Instead of complying, Toei utilized YouTube's automated tools to simply claim and block those 150 videos.

The following week, a game of phone tag ensued between Toei, the Japanese YouTube team, the American YouTube team, Fitzpatrick’s YouTube contact, and himself to reach “some sort of understanding” regarding his copyright situation. Toei ended up providing a new list of 86 videos of the original 150 or so that the company deemed should not remain on YouTube, a move Fitzpatrick described as “baffling” and “inconsistent.” Toei, he concludes, has no idea of the meaning of fair use or the rules the company wants creators to abide by.

“Contained in this list was frankly the most arbitrary assortment of videos that I had ever seen,” he said. “It honestly appeared as if someone chose videos at random as if chucking darts at a dart board.”

While Mark regained control of his videos thanks to his work alongside the YouTube rep, he was still in danger of Toei filing a lawsuit in Japan that he would almost certainly lose, given that country's laws. Fortunately, YouTube has a method for blocking videos based on copyright claims in certain countries for these types of disputes. The Kotaku post linked above suggests that this method is brand new for YouTube, but it isn't. It's been around for a while but, somewhat amazingly, it appears to have never been used specifically when it comes to copyright laws in specific countries.

YouTube’s new copyright rule allows owners like Toei to have videos removed from, say, Japan’s YouTube site, but said videos will remain up in other territories as long as they fall under the country’s fair use policies. To have videos removed from places with more allowances for fair use, companies would have to argue their cases following the copyright laws of those territories.

And so Mark's review videos remain up everywhere except in Japan. That isn't a perfect solution by any stretch, but it seems to be as happy a middle ground as we're likely to find given the circumstances. Those circumstances chiefly being that Toei Animation for some reason wants to go to war with a somewhat popular YouTuber who, whatever else you might want to say about his content, is certainly driving public interest in Toei's products, for good or bad. This is a YouTuber the company could have collaborated with in one form or another, but instead it is busy burning down bridges.

“Similarly to how video games have embraced the online sphere, I sincerely believe that a collaborative or symbiotic relationship between online creators and copyright owners is not only more than possible but would likely work extremely well for both sides if they are open to it,” Fitzpatrick said.

That Toei Animation is not open to it is the chief problem here.

Timothy Geigner

That's A Wrap On The Public Domain Game Jam! Check Out All This Year's Great Entries

2 years 10 months ago

Last night at midnight, we reached the end of Gaming Like It's 1926, our fourth annual public domain game jam celebrating the new works that entered the public domain this year. At final count, we got 31 entries representing a huge variety of different kinds of digital and analog games!

For the next couple of weeks, we'll be digging into all the games and selecting the winners in our six categories — but there's no need to wait before playing! You can check out all the entries on itch.io:

At first glance (and having poked around in a couple of the early entries) I can already tell it's going to be tough to narrow these down to just six winners — there are lots of games here that do fun and interesting things with public domain works. As in past years, once we've selected and announced the winners we'll discuss each one in detail in a podcast and a series of posts.

Until then, a huge thanks to everyone who participated this year, and also to everyone who takes some time to play the games and give these designers the attention they deserve!

Leigh Beadon

Chicago Cops Love Them Some Facebook Sharing, According To Internal Facial Recognition Presentation

2 years 10 months ago

Somewhere between the calls to end encryption and calls to do literally anything about crime rate spikes at this time of year, at this time of day, in [insert part of the country], localized entirely within [add geofence] lies the reality of law enforcement. While many continue to loudly decry the advent of by-default encryption, the reality of the situation is people are generating more data and content than ever. And most of it is less than a warrant away.

While certain suspect individuals continue to proclaim encryption will result in an apocalypse of criminal activity, others are reaping the benefits of always-on internet interactivity. Clearview, for example, has compiled a database of 10 billion images by doing nothing more than scraping the web, grabbing everything that's been made public by an extremely online world population.

You want facial images free of charge and no Fourth Amendment strings attached? You need look no further than the open web, which has all the faces you want and almost none of the attendant restrictions. "Going dark" is for chumps who don't know how to leverage the public's willingness to share almost anything with the rest of the internet.

The Chicago PD knows who's keeping the internet bread buttered and which side they're on. A report from Business Insider (written by Caroline Haskins) highlights an internal CPD presentation that makes it explicit that cops have gained plenty from the rise of social media platforms, easily outweighing the subjective losses end-to-end encryption may have recently created.

Images posted on social media have become so valuable to police investigations that the Chicago Police Department thanked Facebook, "selfie culture," and "high-definition cameras" on cellphones during a presentation on how to use facial-recognition technology.

"THANK YOU FACEBOOK!" read one slide from the document, which was obtained by Insider through a public-record request.

Thank you, Facebook, indeed. The presentation [PDF] namechecks the most popular social media platform in the United States -- one that has deployed its own facial recognition to tag individuals in photos whether or not said individuals have specifically agreed to be identified by the social network. Hence the rise of the "I'm in this photo and I don't like it" meme.

Facebook (now Meta) had no comment. The Chicago PD provided no comment. But little commentary is necessary. Whatever's sent out into the open ether of the internet is there for the taking. Clearview made it explicit by scraping everything that wasn't nailed down. Facebook's terms of service and privacy policy make it far less explicit, but whatever can be accessed by non-cops roaming the platform can also be accessed by cops.

The presentation at least notes that facial recognition should be viewed as only one investigative tool to be used by investigators. Better, it points out that matching a face to social media detritus is only a small part of the equation. No officer should assume a single match means positive identification of a criminal suspect. Whether or not this part of the training carries over to actual investigations remains to be seen. If cops are assuming matches are positive IDs and acting accordingly, it's only a matter of time before the Chicago PD gets sued for arresting or jailing the wrong person.

The document obtained by Business Insider shows the CPD is using multiple facial recognition vendors in its quest for the highly subjective "truth," ranging from Amazon's no-longer-for-law-enforcement Rekognition to NEC, Cognitec, and Dataworks Plus.

It's difficult to even golf clap for the Chicago PD, given its long history of rights violations and internal corruption, but it would be disingenuous not to acknowledge that this presentation at least tries to steer investigators away from rights violations.

The document says CCTV footage and social media could lead to "suspect identification." But it also notes prospective pitfalls of the technology, saying that facial recognition was a "narrow tool" that couldn't be used to "'confirm' an identification by other means."

Again, the words are only as good as their interpretation by officers utilizing this technology and the wealth of information made accessible by social media platforms. And there's a shit ton of inputs. Millions of images are easily accessible through Facebook. Millions more have been harvested by the Chicago PD, which operates or has access to more than 30,000 surveillance cameras located in the city.

The Chicago PD's relationship with emerging surveillance tech has been no better than its constantly deteriorating relationship with the people it serves. The PD has been an enthusiastic early adopter of unproven tech, blowing tax dollars on ShotSpotter (which is terrible at spotting shots) and Clearview's facial recognition AI (which has been assailed by law enforcement agencies as mostly useless).

We want law enforcement agencies to be good stewards of the money and power they're entrusted with. The Chicago PD has been neither for decades. While this presentation does a good job explaining the pitfalls of utilizing open source images in conjunction with facial recognition tech, the fact is Chicago cops are results-oriented. When that happens, the ends justify the means, even when the ends are ultimately tossed by trial court judges and federal civil rights lawsuits. Officers are on notice that facial recognition tech is highly fallible. But, until we see otherwise, we can probably assume CPD officers are more interested in deploying the tech than ensuring search results are accurate.


Tim Cushing

Techdirt Podcast Episode 309: Remembering The SOPA Fight, With Rep. Zoe Lofgren

2 years 10 months ago

As many of you know, last week we hosted an online event for the latest Techdirt Greenhouse edition, all about looking back on the lessons learned from the 2012 protests against SOPA and PIPA. Our special guest was Rep. Zoe Lofgren, one of the strongest voices in Congress speaking out against the disastrous bills, who provided all kinds of excellent insight into what happened then and what's happening now. In case you missed it, for this week's episode of the podcast (yes, we're finally back with new episodes!) we've got the full conversation and Q&A from the event.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Leigh Beadon

Senators' 'Myths & Facts' About EARN IT Is Mostly Myths, Not Facts

2 years 10 months ago

I already wrote a long post earlier about the very, very real problems with the EARN IT Act -- namely that it would make the problem of child sexual abuse material significantly worse by repeating the failed FOSTA playbook, and that it would attack encryption by making it potential "evidence" in a case against a tech company for any CSAM on its site. But alongside the bill, its sponsors, Senators Richard Blumenthal and Lindsey Graham, released a "Myth v. Fact" document to try to counter the criticisms of EARN IT. Unfortunately, the document presents an awful lot of "myths" as "facts." And that's a real problem.

The document starts out noting, correctly:

The reporting of CSAM and online child sexual exploitation provides law enforcement with vital information on active predators and ongoing cases of rape and abuse.

More reports, and more accurate information within those reports, means more children freed from one of the most horrific and life-changing crimes imaginable.

But it ignores that taking away Section 230 protections doesn't magically make them do more reporting. It does the opposite. Because now making the effort to find and report CSAM actually puts you at risk of greater liability under EARN IT. The bill literally creates less incentive for a website to build systems to find and report CSAM because merely doing so gives you the knowledge (scienter) necessary under the law to face liability. The bill gets this all exactly backwards.

The document then has the following listed as a "myth":

Given that some tech companies report significant amounts of CSAM to the National Center for Missing and Exploited Children (NCMEC) and provide technical resources to address child exploitation, the tech industry is doing enough to address this crime.

To address it, it lists these facts:

For tech companies that are already taking clear steps to report CSAM, little will change under this bill.

Except, that's not true at all. Because now, if those companies make any mistakes -- which they will, because you can't get everything right -- they face potentially crippling liability. The idea that no one will go after them because they report a lot of CSAM is completely divorced from reality. We see companies getting sued all the time in similar circumstances. Under FOSTA, we're now seeing companies like Salesforce and Mailchimp being sued because other companies used their services and then sex traffickers used those other services, and somehow that magically makes Salesforce and Mailchimp liable. The same thing would happen under EARN IT.

According to NCMEC’s 2020 statistics on reports of the online exploitation of children, while Facebook issued over 20 million reports that year, in contrast Amazon (which hosts a significant percentage of global commerce and web infrastructure) reported 2,235 cases.

Maybe that's because Facebook is a user-generated content social media platform and Amazon... is not? I mean, I don't even need to say this is "comparing apples to oranges" here because it's "comparing Facebook to Amazon." The two companies do very, very different things that are simply not comparable.

Of course, the underlying (and pretty fucking scary) suggestion here is that Amazon should be scanning every AWS instance for bad stuff, which raises really serious privacy concerns. It's amazing that the very same Senators pushing this bill -- who are now basically saying that websites should be required to spy on everything -- will turn around next week and argue that these companies are "collecting too much information" and are engaged in "surveillance capitalism."

So which is it, Senators? Should Amazon be spying on everyone, or are they not spying hard enough?

There is a sustained problem of underreporting and neglect of legal obligations by some tech companies. During a Senate Judiciary Committee hearing on the EARN IT Act, NCMEC disclosed that it had reported nearly nine times more cases of CSAM material hosted on Amazon to Amazon, than Amazon had found itself, and that Amazon had not taken legally required action on those cases.

Again, this is taken incredibly out of context, and when put back into context it means that Amazon isn't spying on all of its customers' data. That should be seen as a good thing. Note what this "fact" doesn't say: when Amazon was alerted to CSAM by NCMEC, did it remove it? Did it add hashes to the NCMEC database? Because that's what matters here. Otherwise, these Senators are just admitting that they want more surveillance by private companies and less privacy for the public.

Before introducing the EARN IT Act, a bipartisan group of Senators sent detailed questions to more than thirty of the most prominent tech companies. The responses showed that even startups and small firms were willing and able to build safety into their platform using automated tools. Meanwhile, some large companies like Amazon admitted that they were not even using common and free tools to automatically stop CSAM despite substantial and known abuse of their platforms by predators.

They're really throwing Amazon under the bus here. But this "fact" again demonstrates that most internet companies that host user-generated content are already doing what is appropriate: finding, reporting, and removing CSAM. The only example they have of a company that is not is Amazon, and that's because Amazon is in a totally different business. It's not a platform for user-generated content; it's essentially a giant computer for other services. Those other services, built on top of Amazon, can (and do!) scan their own systems for CSAM.

This whole "fact" list is basically a category error, in which they lump Amazon in with other companies because whoever wrote this can't find any actual problem out there with actual social media companies.

It is clear that many tech companies will only take CSAM seriously when it becomes their financial interest to do so, and the way to make that a reality is by permitting survivors and state law enforcement to take the companies to court for their role in child exploitation and abuse.

Except this document's own fact check said that every company they asked was doing what was necessary. And, it already is very much in every company's "financial interest" to find, report, and remove CSAM because if you don't you already face significant legal consequences, since hosting CSAM is already very, very much illegal.

The next "myth" listed is:

This bill opens up tech companies to new and unimaginable liability that necessitated CDA Section 230’s unqualified immunities two decades ago

Which is... absolutely true. And we don't need to look any further than what happened with FOSTA to see that this is true. But the Senators deny it. Because they're lying.

The EARN IT Act creates a targeted carve out for the specific, illegal act of possession or distribution of child sexual abuse material.

And FOSTA created "a targeted carve out for the specific, illegal act of human trafficking" but in practice has resulted in a series of totally frivolous lawsuits against ancillary services used by a company that was then used by sex traffickers.

Any tech company that is concerned that its services or applications could be used to distribute CSAM has plenty of tools and options available to prevent this crime without hindering their operations or creating significant costs.

The detection, prevention, and reporting of CSAM is one of the most easily addressed abuses and crimes in the digital era. There are readily accessible, and often free, software and cloud services, such as PhotoDNA, to automate the detection of known CSAM material and report it to NCMEC.

The naming of PhotoDNA is interesting here. It's a Microsoft project (big tech!) that is very important in finding/reporting/removing CSAM. But Microsoft actually limits who can use it, and I've heard of multiple websites that were not allowed to use PhotoDNA. I don't think Techdirt would qualify to use PhotoDNA, for example. In the meantime, Cloudflare actually introduced its own tool, which I think came about because Microsoft made it difficult, if not impossible, for many websites to use PhotoDNA.

But the very fact that PhotoDNA and Cloudflare's tool exist and are widely used again suggests that "the problem" here doesn't actually exist. As noted in the first post, we don't see companies being sued for CSAM and using Section 230 as a defense, because that's not the problem.

Also left out of the "fact" is the actual fact that PhotoDNA has very real limitations. That article, published a few months ago, notes that (1) Microsoft and NCMEC seem to go out of their way to avoid letting researchers study PhotoDNA, (2) contrary to Microsoft/NCMEC claims, the algorithm can be reversed (i.e., enabling users to recreate CSAM images from hashes!), (3) it is easily defeated with minor changes to images, and (4) it is subject to false positives. In other words, while PhotoDNA is an important tool for fighting CSAM, it has real problems, and mandating it (as this "fact" suggests is the goal of EARN IT) could create significant (and potentially dangerous) consequences.
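Since the weaknesses of perceptual hashing come up repeatedly here, a toy sketch may help. This is emphatically not PhotoDNA's actual algorithm (which is proprietary); it's a simple "average hash" over an 8x8 grayscale grid, with matching decided by Hamming distance under a made-up threshold. All images and numbers below are fabricated for illustration:

```python
# Toy perceptual hash: NOT PhotoDNA, just a sketch of the general
# technique and its tradeoffs. Everything here is a fabricated example.

def average_hash(pixels):
    """8x8 grid of grayscale values (0-255) -> 64-bit int hash."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

MATCH_THRESHOLD = 5  # hypothetical: distances <= 5 count as a match

# Synthetic "image": left half dark, right half light.
img = [[30] * 4 + [220] * 4 for _ in range(8)]

# One brightened pixel: the hash barely moves, so it still matches.
tweaked = [row[:] for row in img]
tweaked[0][0] = 200

# Brighten one whole column: the hash shifts past the threshold,
# showing how modest edits can defeat this kind of matching.
evasive = [[200] + row[1:] for row in img]

h = average_hash(img)
print(hamming(h, average_hash(tweaked)))  # 1 -> match
print(hamming(h, average_hash(evasive)))  # 8 -> no match
```

The tension is structural: a loose threshold catches more edited copies but raises the false-positive risk the researchers flagged, while a tight one is easier to evade with minor edits. That tradeoff is inherent to this whole class of tools, which is exactly why mandating one by law is so fraught.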

The next "myth" is... just weird.

Requiring companies to be on the lookout for child abuse will harm startups and nascent businesses.

No one has made that argument. The actual argument is that adding very serious liability for anyone making any mistake as they're on the lookout for child abuse will do tremendous harm to startups and nascent businesses.

No other type of business in the country is provided such blanket and unqualified immunity for sexual crimes against children.

Except... tech companies aren't given a "blanket and unqualified immunity for sexual crimes against children." This "fact" is just wrong. What Section 230 does is provide immunity for third-party speech -- but not if federal crimes are involved, which is certainly the case with CSAM. The whole attempt to blame Section 230 here is just weird. And wrong.

Startups and small businesses have a critical role in the fight against online CSAM. Smaller social media sites and messaging applications, such as Kik Messenger, are routinely used by abusers. The EARN IT Act will ensure that abusers do not flock to small platforms to evade the protections and accountability put in place on larger platforms.

So, now we're blaming Kik? Okay. Except the timing on this is interesting, as just a few days ago the DOJ literally announced that it had arrested a woman for distributing CSAM on Kik, showing again that when law enforcement actually bothers to do so, it can find and arrest those responsible.

Moreover, there are simple, readily accessible, and often free, software and cloud services, such as PhotoDNA, that can be used by any tech company to automate the detection of known CSAM material and report it to NCMEC.

Again, PhotoDNA involves a "qualification" process, and has significant problems. If the point of this bill is to force every website to use PhotoDNA, write that into the law and deal with the fact that a mandated filter raises other Constitutional concerns. Instead, these Senators are basically saying "every website must use PhotoDNA, but we can't legally say that, so wink, wink."

Indeed, it's pretty funny that right after more or less admitting that they're demanding mandatory filters, they claim this is the next "myth":

The EARN IT Act violates the First Amendment.

The "fact" they used to reply to this kinda gives away the game:

Child sexual abuse is not protected speech. Possession of child pornography is a criminal violation and there is no defensible claim that the First Amendment protects child sexual abuse material.

That's correct, but... misleading. No one is concerned about taking down CSAM (again, pretty much every major internet platform already does this as it's already required by law). The concern is that by mandating filters that are not publicly reviewable, you end up taking down other speech. And that other speech may be protected. Again, look at the link above regarding research into PhotoDNA, which suggests that the "false positive" problem with PhotoDNA is very, very real.

And then we get to the encryption stuff with the next "myth":

The EARN IT Act is simply an attempt to ban encryption.

Actually, it seems to be at least partially an attempt to ban encryption. And the Senators' "facts" on this are, once again, the part that is actually mythical:

The EARN IT Act does not target, limit, or create liability for encryption or privacy services. In fact, in order to ensure the EARN IT Act would not be misconstrued as limiting encryption, specific protections were included in the bill to explicitly state that a court should not consider offering encryption or privacy services as an independent basis for legal liability.

Weasel words. Again, see what I wrote in the last post about the encryption section. It says it can't be the "independent basis" for liability, but it explicitly states that the use of encryption can still be used as evidence against a website under this law. So it very much increases the legal liability for any website that uses encryption, because it will be used against them in court.

Stopping the abuse of children is not at odds with preserving online privacy. Some online platforms have been using automated tools to check images and videos against CSAM databases for more than a decade without endangering privacy or creating consumer concerns. As Facebook has testified to the Senate Judiciary Committee, tech companies can readily implement tools to detect child sexual abuse while offering strong encryption tools.

This is correct, but does not address the point. Of course you can fight CSAM while preserving privacy, but this bill makes that much more difficult by adding liability risk for anyone who uses encryption (and for anyone who tries to go above and beyond in fighting CSAM but lets some slip through).

Then there's a "myth" that's actually a fact. Section 230 exempts federal crimes, and CSAM is a federal crime -- and the real issue is that law enforcement tends not to spend much time and resources on fighting the actual creators and distributors of CSAM:

Since CDA 230 already exempts federal crimes, the solution to this problem is increasing resources for law enforcement and hiring more federal prosecutors.

The "facts" the Senators present in response are incredibly misleading.

We support increasing resources for law enforcement officials fighting sex crimes against children. But no amount of money can compensate for the disengagement of the online platforms actually hosting this material.

That second sentence is a non sequitur, since (again...) EARN IT doesn't do anything to stop sites from hosting CSAM. It just opens them up to being sued for trying to stop it!

Hiring more federal investigators cannot replace having companies committed to the fight against child abuse, especially when it comes to monitoring the content posted on online platforms and checking closed groups for abuse.

Again, companies are committed to fighting child abuse, and EARN IT makes it more risky for them to "monitor" the content posted online!

By requiring that only the Department of Justice can bring criminal cases for child sexual exploitation crimes, CDA Section 230 drastically limits the number and types of cases that are brought.

No, it avoids bogus, wasteful lawsuits like the ones that were brought against Salesforce and Mailchimp under FOSTA.

States and survivors have a well-established role in holding offenders accountable, especially with respect to child sexual exploitation, for a reason: under enforcement of child protection laws fails victims and fosters more abuse.

Yes, and they can already hold "offenders accountable" for sexual exploitation. The problem is that this bill distracts from going after actual offenders, and instead blames random internet services that the offenders used for not magically knowing they were being used by offenders.

The EARN Act would ensure that there is more than one cop on the beat by enabling states and civil litigants to seek justice against those who enable child sexual exploitation.

No. It would allow just about anyone to go after just about any website over incidental usage by an actual offender, rather than going after the offenders themselves.

This bill is bad and dangerous. It will make the very real problem of CSAM worse and undermine encryption at the same time. This "myth v. fact" sheet reverses the myths and facts in the service of getting bogus "for the children" headlines for Senators desperate to look like they're doing something about a real problem, while they're actually moving to make that problem much, much worse.

Mike Masnick

Senate's New EARN IT Bill Will Make Child Exploitation Problem Worse, Not Better, And Still Attacks Encryption

2 years 10 months ago

You may recall the terrible and dangerous EARN IT Act from two years ago, which was a push by Senators Richard Blumenthal and Lindsey Graham to chip away more at Section 230 and to blame tech companies for child sexual abuse material (CSAM). When it was initially introduced, many people noticed that it would undermine both encryption and Section 230 in a single bill. While the supporters of the bill insisted that it wouldn't undermine encryption, the nature of the bill clearly set things up so that you either needed to abandon encryption or to spy on everything. Eventually, the Senators were persuaded to adopt an amendment from Senator Patrick Leahy to more explicitly attempt to exempt encryption from the bill, but it was done in a pretty weak manner. That said, the bill still died.

But, as with 2020, 2022 is an election year, and in an election year some politicians just really want to get their name in headlines about how they're "protecting the children," and Senator Richard Blumenthal loves the fake "protecting the children" limelight more than most other Senators. And thus he has reintroduced the EARN IT Act, claiming (falsely) that it will somehow "hold tech companies responsible for their complicity in sexual abuse and exploitation of children." This is false. It will actually make it more difficult to stop child sexual abuse, but we'll get there. You can read the bill text here, and note that it is nearly identical to the version that came out of the 2020 markup process with the Leahy Amendment, with a few very minor tweaks. The bill has a lot of big name Senators as co-sponsors, and that's from both parties, suggesting that this bill has a very real chance of becoming law. And that would be dangerous.

If you want to know just how bad the bill is, I found out about the re-introduction of the bill -- before it was announced anywhere else -- via a press release sent to me by NCOSE, formerly "morality in media," the busybody organization of prudes who believe that all pornography should be banned. NCOSE was also a driving force behind FOSTA -- the dangerous law with many similarities to EARN IT that (as we predicted) did nothing to stop sex trafficking, and plenty of things to increase the problem of sex trafficking, while putting women in danger and making it more difficult for the police to actually stop trafficking.

Amusingly (?!?) NCOSE's press release tells me both that without EARN IT tech platforms "have no incentive to prevent" CSAM, and that in 2019 tech platforms reported 70 million CSAM images to NCMEC. They use the former to insist that the law is needed, and the latter to suggest that the problem is obviously out of control -- apparently missing the fact that the latter actually shows how the platforms are doing everything they can to stop CSAM on their platforms (and others!) by following existing laws and reporting it to NCMEC where it can be put into a hash database and shared and blocked elsewhere.

But facts are not what's important here. Emotions, headlines, and votes in November are.

Speaking of the lack of facts necessary, with the bill, they also have a "myth v. fact" sheet which is just chock full of misleading and simply incorrect nonsense. I'll break that down in a separate post, but just as one key example, the document really leans heavily on the fact that Amazon sends a lot fewer reports of CSAM to NCMEC than Facebook does. But, if you think for more than 3 seconds about it (and aren't just grandstanding for headlines) you might notice that Facebook is a social media site and Amazon is not. It's comparing two totally different types of services.

However, for this post I want to focus on the key problems of EARN IT. In the very original version of EARN IT, the bill created a committee to study whether exempting CSAM from Section 230 would help stop CSAM. Then it shifted to the form it's in now: the committee still exists, but the bill skips the part where the committee has to determine whether chipping away at 230 will help, and just includes that as a key part of the bill. The 230 part mimics FOSTA (which, again, has completely failed to do what its supporters claimed and has made the actual problems worse), in that it adds a new exemption removing any CSAM claims from Section 230's protections.

EARN IT will make the CSAM problem much, much worse.

At least in the FOSTA case, supporters could (incorrectly and misleadingly, as it turned out) point to Backpage as an example of a site that had been sued for trafficking and used Section 230 to block the lawsuit. But here... there's nothing. There really aren't examples of websites using Section 230 to try to block claims of child sexual abuse material. So it's not even clear what problem these Senators think they're solving (unless the problem is "not enough headlines during an election year about how I'm protecting the children.")

The best they can say is that companies need the threat of law to report and take down CSAM. Except, again, pretty much every major website that hosts user content already does this. This is why groups like NCOSE can trumpet "70 million CSAM images" being reported to NCMEC: all of the major internet companies actually do what they're supposed to do.

And here's where we get into one of the many reasons this bill is so dangerous. It totally misunderstands how Section 230 works, and in doing so (as with FOSTA) it is likely to make the very real problem of CSAM worse, not better. Section 230 gives companies the flexibility to try different approaches to dealing with various content moderation challenges. It allows for greater and greater experimentation and adjustments as they learn what works -- without fear of liability for any "failure." Removing Section 230 protections does the opposite. It says if you do anything, you may face crippling legal liability. This actually makes companies less willing to do anything that involves trying to seek out, take down, and report CSAM because of the greatly increased liability that comes with admitting that there is CSAM on your platform to search for and deal with.

EARN IT gets the problem exactly backwards. It disincentivizes action by companies, because the vast majority of actions will actually increase rather than decrease liability. As Eric Goldman wrote two years ago, this version of EARN IT doesn't penalize companies for CSAM; it penalizes them for (1) not magically making all CSAM disappear, (2) knowing too much about CSAM (i.e., it tells them to stop looking for it and taking it down), or (3) not exiting the industry altogether (as we saw a bunch of dating sites do post-FOSTA).

EARN IT is based on the extremely faulty assumption that internet companies don't care about CSAM and need more incentive to act, rather than addressing the real problem: CSAM has always been a huge problem, and stopping it requires actual law enforcement work focused on the producers of that content. By threatening internet websites with massive liability if they make a mistake, the bill actually makes law enforcement's job harder, because websites will be less able to work with law enforcement. This is not theoretical. We already saw exactly this problem with FOSTA, after which multiple law enforcement agencies said the law made their job harder because they could no longer find the information they needed to stop sex traffickers. EARN IT creates the exact same problem for CSAM.

So the end result is that, by misunderstanding Section 230 and by misunderstanding internet companies' existing willingness to fight CSAM, EARN IT will undoubtedly make the CSAM problem worse: it will be more difficult for companies to track CSAM down and report it, and more difficult for law enforcement to track down and arrest those actually responsible for it. It's a very, very bad and dangerous bill -- and that's before we even get to the issue of encryption!

EARN IT is still very dangerous for encryption

EARN IT supporters claim they "fixed" the threat to encryption in the original bill by using text similar to Senator Leahy's amendment to say that using encryption cannot "serve as an independent basis for liability." But, the language still puts encryption very much at risk. As we've seen, the law enforcement/political class is very quick to want to (falsely) blame encryption for CSAM. And by saying that encryption cannot serve as "an independent basis" for liability, that still leaves open the door to using it as one piece of evidence in a case under EARN IT.

Indeed, one of the changes from the 2020 bill is that, immediately after saying encryption can't be an independent basis for liability, it adds a new section that effectively walks back the encryption-protecting language. The new section says: "Nothing in [the part that says encryption isn't a basis for liability] shall be construed to prohibit a court from considering evidence of actions or circumstances described in that subparagraph if the evidence is otherwise admissible." In other words, as long as anyone bringing a case under EARN IT can point to something that is not related to encryption, they can point to the use of encryption as additional evidence of liability for CSAM on the platform.

Again, the end result is drastically increasing liability for the use of encryption. While no one will be able to use the encryption alone as evidence, as long as they point to one other thing -- such as a failure to find a single piece of CSAM -- then they can bring the encryption evidence back in and suggest (incorrectly) some sort of pattern or willful blindness.

And this doesn't even touch on what will come out of the "committee" and its best practices recommendations, which very well might include an attack on end-to-end encryption.

The end result is that (1) EARN IT attacks a problem that doesn't exist (the use of Section 230 to avoid responsibility for CSAM), (2) EARN IT will make the actual problem of CSAM worse by making it much more risky for internet companies to fight CSAM, and (3) EARN IT puts encryption at risk by increasing the liability exposure of any company that offers encryption.

It's a bad and dangerous bill and the many, many Senators supporting it for kicks and headlines should be ashamed of themselves.

Mike Masnick

Daily Deal: The Stellar Utility Software Bundle

2 years 10 months ago

The Stellar Utility Software Bundle has what you need to recover data, reinforce security, erase sensitive documents, and organize photos. It features Stellar Data Recovery Standard Windows, Ashampoo Backup Pro 15, Ashampoo WinOptimizer 19, InPixio Photo Editor v9, Nero AI Photo Tagger Manager, and BitRaser File Eraser. It is on sale for $39.95.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

ID.me Finally Admits It Runs Selfies Against Preexisting Databases As IRS Reconsiders Its Partnership With The Company

2 years 10 months ago

Tech company ID.me has made amazing inroads with government customers over the past several months. Some of this is due to unvetted claims by the company's CEO, Blake Hall, who has asserted (without evidence) that the federal government lost $400 billion to fraudulent COVID-related claims in 2020. He also claimed (without providing evidence) that ID.me's facial recognition tech was sturdy, sound, accurate, and backstopped by human review.

These claims were made after it became apparent the AI was somewhat faulty, resulting in people being locked out of their unemployment benefits in several states. This was a problem, considering ID.me was now being used by 27 states to handle the disbursal of various benefits. And it was bound to get worse, if for no other reason than that ID.me would be expected to handle an entire nation of beneficiaries, thanks to its contract with the IRS.

The other problem is the CEO's attitude toward reported failures. He has yet to produce anything that backs up his $400 billion fraud claim, and when confronted with mass failures at the state level, he has chosen to blame them on the actions of fraudsters, rather than on people simply being denied access to benefits due to imperfect selfies.

Another of Hall's claims has now been walked back, prompted by increased scrutiny of his company's activities. First, the company's AI has never been tested by an outside party, which means any accuracy claims should be given some serious side-eye until they've been independently verified.

But Hall also claimed the company wasn't using any existing databases to match faces, insinuating that the company relied on 1:1 matching to verify someone's identity. That couldn't possibly be true for all benefit seekers: some had never previously uploaded a photo to the company's servers, yet were rejected when ID.me claimed it couldn't find a match.

It's obvious the company was using 1:many matching, which carries with it a bigger potential for failure, as well as the inherent flaws of almost all facial recognition tech: the tendency to be less reliable when dealing with women and minorities.
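To make the 1:1 vs. 1:many distinction concrete, here's a hedged sketch using cosine similarity over face-embedding vectors. The vectors, names, and threshold below are all made up, and real systems use learned embeddings with hundreds of dimensions; the structural point is that 1:1 makes one comparison while 1:many makes one per enrolled person, each a fresh chance at a false match:

```python
# Sketch of 1:1 verification vs. 1:many identification over face
# embeddings. All vectors, names, and the threshold are hypothetical.
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

THRESHOLD = 0.9  # made-up decision threshold

def verify_1_to_1(probe, enrolled):
    """One comparison: is this person who they claim to be?"""
    return cosine(probe, enrolled) >= THRESHOLD

def search_1_to_many(probe, database):
    """N comparisons: who in the database looks like this probe?
    Every extra comparison is another chance at a false positive."""
    return [name for name, emb in database.items()
            if cosine(probe, emb) >= THRESHOLD]

db = {"alice": [0.9, 0.1, 0.3], "bob": [0.2, 0.8, 0.5]}
probe = [0.88, 0.15, 0.33]
print(verify_1_to_1(probe, db["alice"]))  # True
print(search_1_to_many(probe, db))        # ['alice']
```

This is why the distinction Hall fudged matters: 1:1 needs a previously enrolled reference photo and answers a narrow question, while 1:many scans a whole gallery, and its error rate grows with the size of the database being searched.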

This increased outside scrutiny of ID.me has forced CEO Blake Hall to come clean. And it started with his own employees pointing out how continuing to maintain this line of "1-to-1" bullshit would come back to haunt the company. Internal chats obtained by CyberScoop show employees imploring Hall to be honest about the company's practices before his dishonesty caused it any more damage.

“We could disable the 1:many face search, but then lose a valuable fraud-fighting tool. Or we could change our public stance on using 1:many face search,” an engineer wrote in a message posted to a company Slack channel on Tuesday. “But it seems we can’t keep doing one thing and saying another as that’s bound to land us in hot water.”

The internal messages, obtained by CyberScoop, also imply that the company discussed the use of 1:many with the IRS in a meeting.

Those messages had a direct effect: Blake Hall issued a LinkedIn post that admitted the company used 1:many verification, which indicates the company also relies on outside databases to verify identity.

In the Wednesday LinkedIn post Hall said that 1:many verification is used “once during enrollment” and “is not tied to identity verification.”

“It does not block legitimate users from verifying their identity, nor is it used for any other purpose other than to prevent identity theft,” he writes.

Hall's post hedges things quite a bit by insinuating that any failures to access benefits are the result of malicious fraudsters, rather than of any flaws in ID.me's tech. But this belated honesty -- along with the company's multiple failures at the state level -- has caused the IRS to reconsider its reliance on ID.me's AI. (Archived link here.)

The Treasury Department is reconsidering the Internal Revenue Service’s reliance on facial recognition software ID.me for access to its website, an official said Friday amid scrutiny of the company’s collection of images of tens of millions of Americans’ faces.

Treasury and the IRS are looking into alternatives to ID.me, the department official said, and the agencies are in the meantime attentive to concerns around the software.

This doesn't mean the IRS has divested itself of ID.me completely. At the moment, it's only doing some shopping around. Filing your taxes online still means subjecting yourself to ID.me's verification software for the time being.

A recent blog post on ID.me's site explains how the company verifies identity and names the algorithms it relies on to match faces, which include Paravision (which has been tested by NIST) and Amazon's Rekognition, a product Amazon pulled from the law enforcement market in 2020, perhaps sensing the public's reluctance to embrace even more domestic surveillance tech.

This may be too little too late for ID.me. Its refusal to engage honestly and transparently with the public while gobbling up state and federal government contracts has expanded its scrutiny past that of the Extremely Online. Senator Ron Wyden wants to know why the IRS has made ID.me the only option for online filing.

I’m very disturbed that Americans may have to submit to a facial recognition system, wait on hold for hours, or both, to access personal data on the IRS website. While e-filing returns remain unaffected, I’m pushing the IRS for greater transparency on this plan.

But e-filing is affected. As the IRS's spokesperson noted in a statement to Bloomberg, ID.me is still standing between e-filers and e-filing.

[IRS spokesperson Barbara] LaManna noted that any taxpayer who does not want to use ID.me can opt against filing his or her taxes online.

It may be true that people with existing accounts might be able to route around this tech impediment, but new filers are still forced to interact with ID.me to set up accounts for e-filing. If spotty state interactions created national headlines, just wait until a nation of millions starts putting ID.me's tech through its paces.

Tim Cushing

Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did

2 years 10 months ago

Another day, another privacy scandal that likely ends with nothing changing.

Crisis Text Line, one of the nation's largest nonprofit support options for the suicidal, is in some hot water. A Politico report last week highlighted how the company has been caught collecting and monetizing the data of callers... to create and market customer service software. More specifically, Crisis Text Line says it "anonymizes" some user and interaction data (ranging from the frequency certain words are used, to the type of distress users are experiencing) and sells it to a for-profit partner named Loris.ai. Crisis Text Line has a minority stake in Loris.ai, and gets a cut of their revenues in exchange.

As we've seen in countless privacy scandals before this one, the idea that this data is "anonymized" is once again held up as some kind of get out of jail free card:

"Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable."

But as we've noted more times than I can count, "anonymized" is effectively a meaningless term in the privacy realm. Study after study after study has shown that it's relatively trivial to identify a user's "anonymized" footprint when that data is combined with a variety of other datasets. For a long time the press couldn't be bothered to point this out, something that's thankfully starting to change.
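The studies referenced above typically rely on what's called a linkage attack: joining an "anonymized" dataset with some auxiliary dataset (a voter roll, a data broker's file) on shared quasi-identifiers like ZIP code, birth year, and gender. As a minimal sketch of the idea, using entirely invented records and names (nothing here comes from Crisis Text Line's actual data):

```python
# Toy illustration of a linkage attack: re-identifying "anonymized" records
# by joining them with public auxiliary data on quasi-identifiers.
# All records and names below are hypothetical, for demonstration only.

# "Anonymized" records: direct identifiers stripped, but quasi-identifiers
# (ZIP code, birth year, gender) retained alongside sensitive attributes.
anonymized = [
    {"zip": "02139", "birth_year": 1987, "gender": "F", "topic": "self-harm"},
    {"zip": "02139", "birth_year": 1990, "gender": "M", "topic": "anxiety"},
]

# Public auxiliary data (e.g. a voter roll) containing the same
# quasi-identifiers, but with real names attached.
voter_roll = [
    {"name": "Alice Example", "zip": "02139", "birth_year": 1987, "gender": "F"},
    {"name": "Bob Example",   "zip": "02139", "birth_year": 1990, "gender": "M"},
]

def reidentify(anon_rows, aux_rows, keys=("zip", "birth_year", "gender")):
    """Match anonymized rows to auxiliary rows on quasi-identifier keys."""
    matches = []
    for row in anon_rows:
        candidates = [aux for aux in aux_rows
                      if all(aux[k] == row[k] for k in keys)]
        if len(candidates) == 1:  # a unique match re-identifies the person
            matches.append((candidates[0]["name"], row["topic"]))
    return matches

# Each uniquely matching record links a real name to sensitive data.
print(reidentify(anonymized, voter_roll))
```

In real studies, a handful of quasi-identifiers (famously, ZIP code plus birth date plus gender) is enough to uniquely identify a large fraction of the population, which is why stripping names alone buys very little.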

Also, just like most privacy scandals, the organization caught selling access to this data goes out of its way to portray it as something much different than it actually is. In this case, they're acting as if they're just being super altruistic:

"We view the relationship with Loris.ai as a valuable way to put more empathy into the world, while rigorously upholding our commitment to protecting the safety and anonymity of our texters,” Rodriguez wrote. He added that "sensitive data from conversations is not commercialized, full stop."

Obviously there are layers of dysfunction that have helped normalize this kind of stupidity. One, it's 2021 and we still don't have even a basic privacy law for the internet era that sets out clear guidelines and imposes stiff penalties on negligent companies, nonprofits, and executives. And we don't have a basic law not because it's hard (though writing any decent law certainly isn't easy), but because a parade of large corporations, lobbyists, and revolving door regulators don't want the data monetization party to suffer even a modest drop in revenues from the introduction of modest accountability, transparency, and empowered end users. It's just boring old greed. There's a lot of tap dancing that goes on to pretend that's not the reason, but it doesn't make it any less true.

We also don't adequately fund mental health care in the states, forcing desperate people to reach out to startups that clearly don't fully understand the scope of their responsibility. We also don't adequately fund and resource our privacy regulators at agencies like the FTC. And even when the FTC does act (which it often can't in the case of nonprofits), the penalties and fines are often pathetic relative to the scale of the money being made.

Even before these problems are considered, you have to factor in that the entire adtech space reaches across industries from big tech to telecom, and is designed specifically to be a convoluted nightmare that makes oversight as difficult as possible. The end result is just about what you'd expect: a steady parade of scandals (like the other big scandal last week, in which gay/bi dating and Muslim prayer apps were caught selling user location data) that briefly generate a few headlines and furrowed eyebrows without producing any meaningful change.

Karl Bode

Massachusetts Court Says Breathalyzers Are A-OK Less Than Three Months After Declaring Them Hot Garbage

2 years 10 months ago

Breathalyzers are like drug dogs and field tests: they are considered infallible right up until they're challenged in court. Once challenged, the evidence seems to indicate all of the above are basically coin tosses the government always claims to win. They're good enough to justify a search or an arrest when the only person examining them is an interested outsider: someone who's been subjected to warrantless searches and possibly bogus criminal charges. But when the evidentiary standard is a little more rigorous than that of a roadside stop, probable cause assertions start falling apart.

Drug dogs are only as good as their handlers. They perform probable cause tricks in exchange for praise and treats. Field drug tests turn bird poop and donut crumbs into probable cause with a little roadside swirling of $2-worth of chemicals. And breathalyzers turn regular driving into impaired driving with devices that see little in the way of calibration or routine maintenance.

Courts have seldom felt compelled to argue against law enforcement expertise and training, even when said expertise/training relies on devices never calibrated or maintained, even when said devices are capable of depriving people of their freedom.

Every so often, courts take notice of the weak assertions of probable cause -- ones almost entirely supported by cop tools that remain untested and unproven. Late last year, a state judge issued an order forbidding the use of breathalyzer results as evidence in impaired driving prosecutions. District court judge Robert Brennan said he had numerous concerns about the accuracy of the tests, the oversight of testing, and the testing of test equipment by the Massachusetts Office of Alcohol Testing.

“Breathalyzer results undeniably are among the most incriminating and powerful pieces of evidence in prosecutions involving either alcohol impairment or “per se” blood alcohol percentage as an element. Their improper inclusion in criminal cases not only unfairly impacts individual defendants, but also undermines public confidence in the criminal justice system.”

The pause on using breathalyzer tests as evidence is only the most recent development in a years-long challenge to their accuracy. In 2017, ruling on the reliability of tests taken between 2012 and 2014, Brennan found that while the tests were accurate, the way the state maintained them was not.

A court finally found a reason to push back against assertions of training and expertise, as well as assertions that cop tech should be considered nigh invulnerable. But the pushback is over. The same court is apparently now satisfied that the tech it questioned last November is good enough to make determinations that can deprive people of their property and freedom.

Breathalyzers are back in business in the Bay State after a judge dropped the suspension on breath tests, which cops use to bust and prosecute drunk drivers.

Salem Judge Robert Brennan, who in November ordered the statewide exclusion of breath test results, has tossed out the police Breathalyzer pause.

The Draeger Alcotest 9510 breath tests have come under fire for several years, as a Springfield OUI attorney represents defendants in statewide Breathalyzer litigation. Lead defense attorney Joseph Bernard has been raising concerns about the software problems impacting the scientific reliability of the breath test.

But the Salem judge in the ruling vacating the Breathalyzer suspension said the Draeger Alcotest 9510 "produces scientifically reliable breath test results."

Judge Brennan isn't willing to let the perfect be the enemy of the possibly subpar. If you went long on breathalyzers late last year, it's time to cash out. According to Judge Brennan, whatever's determined to be good enough is, well, good enough to deprive people of their liberties. Brennan's decision notes there's no such thing as "perfect source code" or "flawless machines." Therefore, state residents should just resign themselves to the fact that their freedom is reliant on Massachusetts' OKest breathalyzers.

"This Court remains satisfied that the public can have full confidence in the results produced by the Alcotest 9510…"

But can they though? Who knows? Certainly not this court. Certification information has been offered, but prior to the November 2021 decision, state prosecutors were voluntarily excluding breathalyzer evidence. That's not exactly a vote of confidence. And that vote against breathalyzers came from entities judged almost solely on their prosecutorial wins, entities with every incentive to rack up as many easy wins as possible.

Weirdly, the judge says the tests are OK but their oversight isn't. Despite the fact that both facets need to be on the same level to avoid abuse and unjustified arrests, the judge is allowing roadside testing to move forward while criticizing the Office of Alcohol Testing for its "lack of candor and transparency" when dealing with the court and criminal defendants.

In the end, the system prevails. Massachusetts cops can continue to use questionable tech to effect arrests and engage in warrantless searches and detentions. As for its oversight, it's only being threatened with the possibility of further action from this court -- the same court that ended breathalyzer testing in November (citing concerns about equipment and accuracy) only to reverse course three months later.

One imagines the demands placed on the Office of Alcohol Testing will be just as temporary as this court's momentary pause on the use of unproven tech. The desire to be in the police business once again outweighs the public's concern about being on the wrong end of baseless prosecutions. The onus is back on presumably innocent defendants to prove the government isn't using faulty tech to lock them up.

Tim Cushing