
Daily Deal: GoSafe S780 Dash Cam with Sony Image Sensor

2 years 9 months ago

Looking for a great dash cam that records well in low light? Check out the GoSafe S780. With its revolutionary Sony Starvis sensor, the S780 delivers remarkable performance in those tricky dusk driving situations. Plus, thanks to its dual-channel system, you can record both the front and rear of your vehicle at the same time. It's on sale for $200.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

Can You Solve The Miserable Being Miserable Online By Regulating Tech?

2 years 9 months ago

Over the last few months, I've been asking a general question which I don't know the answer to, but which I think needs a lot more research. It gets back to the issue of how much of the "bad" that many people insist is caused by social media (and Facebook in particular) is actually caused by social media, and how much of it is social media just shining a light on what was always there. I've suggested that it would be useful to have a more nuanced account of this, because it's become all too common for people to insist that anything bad they see talked about on social media was magically caused by social media (oddly, traditional media, including cable news, rarely gets this kind of treatment). The reality, of course, is likely that there's a mix of things happening, and they're not easily teased apart. So what I'd like to see is a more nuanced accounting of how much of the "bad stuff" we see online is (1) just social media reflecting back bad things that have always been there, but which we were less aware of, as opposed to (2) bad stuff enabled by social media connecting and amplifying the people spreading it. On top of that, I think we should similarly be looking at how much social media has connected tons of people for good purposes as well -- and see how much of that happens as compared to the bad.

I'm not holding my breath for anyone to actually produce this research, but I did find a recent Charlie Warzel piece very interesting, and worth reading, in which he suggests (with some interesting citations) that social media disproportionately encourages the miserable to connect with each other and egg each other on. It's a very nuanced piece that does a good job highlighting the competing incentives at play, and notes that part of the reason there's so much garbage online is that there's tremendous demand for it:

But online garbage (whether political and scientific misinformation or racist memes) is also created because there’s an audience for it. The internet, after all, is populated by people—billions of them. Their thoughts and impulses and diatribes are grist for the algorithmic content mills. When we talk about engagement, we are talking about them. They—or rather, we—are the ones clicking. We are often the ones telling the platforms, “More of this, please.”

This is a disquieting realization. As the author Richard Seymour writes in his book The Twittering Machine, if social media “confronts us with a string of calamities—addiction, depression, ‘fake news,’ trolls, online mobs, alt-right subcultures—it is only exploiting and magnifying problems that are already socially pervasive.” He goes on, “If we’ve found ourselves addicted to social media, in spite or because of its frequent nastiness … then there is something in us that’s waiting to be addicted.”

In other words, at least some of this shouldn't be laid at the feet of the technology, but at our own feet, as humans, and what we want out of the technology. It's potentially a sad statement on human psychology that we'd rather seek out the garbage than the other stuff, but it also kind of suggests that the "solution" is not so much in attacking the technology, but maybe in figuring out solutions that have more to do with our own societal and psychological outlook on the world.

However, as Warzel notes, if social media is preternaturally good at linking up the miserable, and encouraging them to be more miserable together, then you could argue that it does deserve some of the blame.

Misery is a powerful grouping force. In a famous 1950s study, the social psychologist Stanley Schachter found that when research subjects were told that an upcoming electrical-shock test would be painful, most wished to wait for their test in groups, but most of those who thought the shock would be painless wanted to wait alone. “Misery doesn’t just love any kind of company,” Schachter memorably argued. “It loves only miserable company.”

The internet gives groups the ability not just to express and bond over misery but to inflict it on others—in effect, to transfer their own misery onto those they resent. The most extreme examples come in the form of racist or misogynist harassment campaigns—many led by young white men—such as Gamergate or the hashtag campaigns against Black feminists.

Misery trickles down in subtler ways too. Though the field is still young, studies on social media suggest that emotions are highly contagious on the web. In a review of the science, Harvard’s Amit Goldenberg and Stanford’s James J. Gross note that people “share their personal emotions online in a way that affects not only their own well-being, but also the well-being of others who are connected to them.” Some studies found that positive posts could drive engagement as much as, if not more than, negative ones, but of all the emotions expressed, anger seems to spread furthest and fastest. It tends to “cascade to more users by shares and retweets, enabling quicker distribution to a larger audience.”

This part is fascinating to me in that it actually does try to tease out some of the differences between what anger does to us at an emotional level as compared to happiness. It also reminds me of the (misleadingly reported) Washington Post story regarding how Facebook kept adjusting the "weighting" of the various emoji responses it added, especially focused on how to weight the "anger" emoji.

Anger certainly feels like the kind of emotion that will lead something to spread quickly -- we've all had that moment of anger over something, and spreading the news feels like at least some kind of outlet when you feel powerless over something awful that has happened. But I'm still not clear on how to break down the different aspects of how all of this interacts with social media, as compared to how much it's shining a light on deeper, more underlying societal problems that need solving at their core.

Warzel argues that the connecting of the miserable is something different, and perhaps leads to a more combustible world:

But it also means that miserable people, who were previously alienated and isolated, can find one another, says Kevin Munger, an assistant professor at Penn State who studies how platforms shape political and cultural opinions. This may offer them some short-term succor, but it’s not at all clear that weak online connections provide much meaningful emotional support. At the same time, those miserable people can reach the rest of us too. As a result, the average internet user, Munger told me in a recent interview, has more exposure than previous generations to people who, for any number of reasons, are hurting. Are they bringing all of us down?

Some of the other research he highlights suggests something similar:

“Our data show that social-media platforms do not merely reflect what is happening in society,” Molly Crockett said recently. She is one of the authors of a Yale study of almost 13 million tweets that found that users who expressed outrage were rewarded with engagement, which made them express yet more outrage. Surprisingly, the study found that politically moderate users were the most susceptible to this feedback loop. “Platforms create incentives that change how users react to political events over time,” Crockett said.
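To make the feedback loop Crockett describes concrete, here's a minimal toy simulation -- my own illustration with made-up parameters, not the Yale study's model or data. Outraged posts get engaged with more often, and each engagement nudges that user's propensity for outrage upward:

```python
import random

def simulate_outrage_feedback(n_users=1000, n_rounds=50, seed=42):
    """Toy model of engagement rewarding outrage. All parameters and the
    update rule are illustrative assumptions, not the study's findings."""
    rng = random.Random(seed)
    # Everyone starts with a low-ish, varied propensity to post outrage.
    propensity = [rng.uniform(0.05, 0.30) for _ in range(n_users)]
    initial_mean = sum(propensity) / n_users
    for _ in range(n_rounds):
        for i in range(n_users):
            outraged = rng.random() < propensity[i]
            # Assumption: outraged posts are engaged with twice as often.
            engaged = rng.random() < (0.6 if outraged else 0.3)
            if outraged and engaged:
                # The reward loop: engagement after outrage raises propensity.
                propensity[i] = min(1.0, propensity[i] + 0.02)
    return initial_mean, sum(propensity) / n_users

if __name__ == "__main__":
    before, after = simulate_outrage_feedback()
    print(f"mean outrage propensity: {before:.2f} -> {after:.2f}")
```

Even with mild numbers, the average propensity only drifts one way, which is the "incentives change how users react over time" dynamic the quote points at.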

But in the end, he notes that, well, this is all interconnected and way more complicated than most people proposing solutions would like to admit. Destroying Facebook doesn't solve this. Removing Section 230 doesn't solve this (and would almost certainly make things much, much worse).

But the technology is only part of the battle. Think of it in terms of supply and demand. The platforms provide the supply (of fighting, trolling, conspiracies, and junk news), but the people—the lost and the miserable and the left-behind—provide the demand. We can reform Facebook and Twitter while also reckoning with what they reveal about the nation’s mental health. We should examine more urgently the deeper forces—inequality, a weak social safety net, a lack of accountability for unchecked corporate power—that have led us here. And we should interrogate how our broken politics drive people to seek out easy, conspiratorial answers. This is a bigger ask than merely regulating technology platforms, because it implicates our entire country.

I think his suggestion is correct. We need to be looking across the board at how we build a better society -- and in doing so, we're doing everyone a disservice if we just think that "regulating tech" somehow will solve any of the underlying societal problems. But, as the article makes clear, there are so many different factors at play that it's not easy to tease them apart.

Mike Masnick

New FCC Broadband 'Nutrition Label' Will More Clearly Inform You You're Being Ripped Off

2 years 9 months ago

For years we've noted how broadband providers impose all manner of bullshit fees on your bill to drive up the cost of service post sale. They've also historically had a hard time being transparent about what kind of broadband connection you're buying. As was evident back when Comcast thought it would be a good idea to throttle all upstream BitTorrent traffic (without telling anybody), or AT&T decided to cap and throttle the usage of its "unlimited" wireless users (without telling anybody), or Verizon decided to modify user packets to track its customers around the internet (without telling anybody).

Maybe you see where I'm going with this.

Back in 2016 the FCC floated a voluntary program under which broadband providers would provide a sort of "nutrition label" for broadband. The idea was that this label would clearly disclose speeds, throttling, limitations, sneaky fees, and all the stuff big predatory ISPs like to bury in their fine print (if they disclose it at all). The FCC circulated an example label image at the time.
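For a sense of what such a label would standardize, here's a hypothetical sketch of the disclosure as structured data. The field names are my own assumptions, not the FCC's actual label format:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class BroadbandLabel:
    """Hypothetical broadband 'nutrition label' record; the fields are
    illustrative assumptions, not the FCC's actual specification."""
    provider: str
    plan_name: str
    monthly_price_usd: float
    typical_download_mbps: float
    typical_upload_mbps: float
    data_cap_gb: Optional[float]  # None means no data cap
    extra_fees: Dict[str, float] = field(default_factory=dict)

    def true_monthly_cost(self) -> float:
        # The advertised price plus every disclosed recurring fee.
        return self.monthly_price_usd + sum(self.extra_fees.values())

if __name__ == "__main__":
    label = BroadbandLabel(
        provider="ExampleISP", plan_name="Gig Plan",
        monthly_price_usd=79.99, typical_download_mbps=940,
        typical_upload_mbps=35, data_cap_gb=None,
        extra_fees={"modem rental": 14.00, "broadcast TV fee": 19.45},
    )
    print(f"advertised: ${label.monthly_price_usd:.2f}")    # $79.99
    print(f"actual:     ${label.true_monthly_cost():.2f}")  # $113.44
```

The gap between the advertised number and the post-fee number is exactly the stuff a label like this is meant to surface.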

While the idea was scuttled by the Trump administration, Congress demanded the FCC revisit it as part of the recent infrastructure bill. So last week the Rosenworcel FCC, as instructed, voted 4-0 to begin exploring new rules:

We’ve got nutrition labels on foods. They make it easy to compare products. It’s time to have the same simple nutrition labels on broadband. Everyone should be able to compare service, price and data. No more hiding fees in fine print. https://t.co/Jdc3fj4HgP

— Jessica Rosenworcel (@JRosenworcel) January 27, 2022

A final vote on approved rules will come after the Biden FCC finally has a voting majority, likely this summer. And unlike the first effort, this time the requirements will be mandatory, so ISPs will have to comply.

This is all well-intentioned, and to be clear, it's a good thing Comcast and AT&T will now need to be more transparent in the ways they're ripping you off. In fact, when AT&T recently announced it would be providing faster 2 and 5 Gbps fiber to some users, it stated it would be getting rid of hidden fees and caps entirely on those tiers. AT&T announced this as if it had come up with the idea, when in reality it was just getting out ahead of a requirement it knew was looming anyway. So stuff like this does matter.

The problem, of course, is that forcing ISPs to be transparent about how they're ripping you off doesn't stop them from ripping you off. Big broadband providers are able to nickel-and-dime the hell out of users thanks to two things: regional monopolization limiting competition, and the state and federal corruption that protects it. U.S. policymakers and lawmakers can't (and often won't) tackle that real problem, so instead we get these layers of band-aids that only treat the symptoms of a broken U.S. telecom market, not the underlying disease.

Karl Bode

Moar Consolidation: Sony Acquires Bungie, But Appears To Be More Hands Off Than Microsoft

2 years 9 months ago

A couple of weeks back we asked the question: is the video game industry experiencing an age of hyper-consolidation? The answer increasingly looks to be "yes". That post was built off a pair of Microsoft acquisitions: Zenimax for $7.5 billion, and then a bonkers deal for Activision Blizzard King at roughly $69 billion. While consolidation in an industry is a somewhat regular thing, what caused my eyes to narrow was all of the confused communications coming out of Microsoft as to how the company would handle these properties when it came to exclusivity on Microsoft platforms. It all went from vague suggestions that the status quo would be the path forward to, eventually, the announcement that some (many?) titles would in fact be Microsoft exclusives.

So, back to my saying that consolidation does seem to be the order of the day: Sony recently announced it had acquired game studio Bungie for $3.6 billion.

Sony Interactive Entertainment today announced a deal to acquire Bungie for $3.6 billion, the latest in a string of big-ticket consolidation deals in the games industry.

After the deal closes, Bungie will be "an independent subsidiary" of SIE run by a board of directors consisting of current CEO and chairman Pete Parsons and the rest of the studio's current management team.

This is starkly different from the Microsoft acquisitions in a couple of ways. Chief among them is that Bungie will continue to operate with much more independence than the studios acquired by Microsoft. While Sony obviously wants to recoup its investment in Bungie, the focus appears to be on continuing to make great games using existing IP, building new IP, and creating content for that IP that expands far beyond just the video game publishing space.

What does not appear to be part of the plan are PlayStation exclusives, as explicitly stated in this interview with both Sony Interactive Entertainment CEO Jim Ryan and Bungie's CEO Pete Parsons.

In an interview with GamesIndustry.biz, Sony Interactive Entertainment CEO Jim Ryan says that Destiny 2 and future Bungie games will continue to be published on other platforms, including rival consoles. The advantages Bungie offers Sony is in its ability to make huge, multiplatform, live-service online games, which is something the wider organisation is eager to learn from.

"The first thing to say unequivocally is that Bungie will stay an independent, multiplatform studio and publisher. Pete [Parsons, CEO] and I have spoken about many things over recent months, and this was one of the first, and actually easiest and most straightforward, conclusions we reached together. Everybody wants the extremely large Destiny 2 community, whatever platform they're on, to be able to continue to enjoy their Destiny 2 experiences. And that approach will apply to future Bungie releases. That is unequivocal."

That's about as firm a stance as you're going to get in this industry. And it is a welcome sign in a few ways. Primarily, Bungie fans will be pleased to know the acquisition doesn't mean they'll lose out on game releases if they don't own a PlayStation. But perhaps just as important is that this demonstrates another route big gaming companies can go with these acquisitions.

As I stated in previous posts on the Microsoft acquisitions: consolidation doesn't have to be a bad thing, but when it results in less customer choice, that's not great. That Sony is doing this differently is a good sign.

Timothy Geigner

Spying Begins At Home: Israel's Government Used NSO Group Malware To Surveil Its Own Citizens

2 years 9 months ago

Israeli malware purveyor NSO Group may want to consider changing its company motto to "No News Is Good News." The problem is there's always more news.

The latest report from Calcalist shows NSO is aiding and abetting domestic abuse. No, we're not talking about the ruler of Dubai deploying NSO's Pegasus spyware to keep tabs on his ex-wife and her lawyer. This is all about how the government of Israel uses NSO's phone hacking tools. And that use appears to be, in two words, extremely irresponsible.

Israel police uses NSO’s Pegasus spyware to remotely hack phones of Israeli citizens, control them and extract information from them, Calcalist has revealed. Among those who had their phones broken into by police are mayors, leaders of political protests against former Prime Minister Benjamin Netanyahu, former governmental employees, and a person close to a senior politician.

Not exactly the terrorists and dangerous criminals NSO claims its customers target. Instead, the targets appear to be more of the same non-terrorists and non-criminals NSO customers have targeted with alarming frequency: political opponents, activists, etc.

That already looks pretty terrible (but extremely on-brand for NSO customers). But it gets a lot worse. The government didn't even bother trying to fake up any justification for this spying.

Calcalist learned that the hacking wasn’t done under court supervision, and police didn’t request a search or bugging warrant to conduct the surveillance.

Is it a "rogue state" when the entire state has decided the rules don't apply to them? Asking for people I would never consider friends.

Perhaps this abuse could have been contained, curtailed, or averted entirely. But the upper layers of the Israeli government cake couldn't be bothered.

There is also no supervision on the data being collected, the way police use it, and how it distributes it to other investigative agencies, like the Israel Securities Authority and the Tax Authority.

"Fuck it," said multiple levels of the Israeli government. It would be a shame to let these powerful hacking tools go to waste -- not when there are anti-government activists out doing activism. Israeli law enforcement decided -- not incorrectly, it appears -- it was a law unto itself, and issued its own paperwork to target protesters demonstrating against the former Prime Minister and COVID restrictions handed down by the Israeli government.

At least some of these malware attacks were targeted. In other cases, law enforcement engaged in almost-literal fishing expeditions to find more targets for NSO's Pegasus spyware.

NSO’s spyware was also used by police for phishing purposes: attempts to phish for information in an intelligence target’s phone without knowing in advance that the target committed any crime. Pegasus was installed in a cellphone of a person close to a senior politician in order to try and find evidence relating to a corruption investigation.

If you like your damning reports to be breathtaking in their depiction of government audacity, click through to read more. The further you scroll down, the worse it gets. Evidence obtained with illicit malware deployments was laundered via parallel construction. Employees of government contractors were targeted without consultation with any level of oversight. A town's mayor was hacked -- allegedly because the Israeli government suspected corruption -- but no evidence of corruption was obtained. However, all data and communications harvested from the compromised phone still remain in the hands of the government. In one case, cops used NSO malware -- again without court permission -- to identify a phone thief suspected of publishing "intimate images" from the stolen phone online.

In only a few cases was the malware used to investigate serious crimes. But even in those cases, no legal approval was obtained and the malware was deployed furtively to fly under the oversight radar.

NSO's response to this report is more of the same: Hey, we just sell the stuff. We can't control how it's used, even when it's being purchased by our own government.

The Israeli police statement is far more defensive:

“The claims included in your request are untrue. Israel Police acts according to the authority granted to it by law and when necessary according to court orders and within the rules and regulations set by the responsible bodies. The police’s activity in this sector is under constant supervision and inspection of the Attorney General of Israel and additional external legal entities…”

Well, then I assume the paperwork containing signatures and explicit approval of all relevant authorities is being swiftly couriered to Calcalist HQ to provide evidence refuting the claims made in its article. Otherwise, this just sounds like the bitter muttering of an angry government spokesperson willing to do nothing more than allude to the Emperor's New Court Orders. Given the routine abuse of NSO Group malware by governments around the world, it comes as absolutely no surprise it's being abused at home as well. And the non-denials by governments are starting to wear as thin as NSO's "hey, we're only an enabler of abuse" statements.

Tim Cushing

Hollywood, Media, And Telecom Giants Are Clearly Terrified Gigi Sohn Will Do Her Job At The FCC

2 years 9 months ago

Media and telecom giants have been desperately trying to stall the nomination of Gigi Sohn to the FCC. Both want to keep the Biden FCC gridlocked at 2-2 Commissioners thanks to the rushed late-2020 Trump appointment of Nathan Simington to the Commission. Neither industry wants the Biden FCC to do popular things like restore the FCC's consumer protection authority, net neutrality, or media consolidation rules. But because Sohn is so popular, they've had a hell of a time coming up with criticisms that make any coherent sense.

One desperate claim being spoon fed to GOP lawmakers is that Sohn wants to "censor conservatives," despite the opposite being true: Sohn has considerable support from conservatives for protecting speech and fostering competition and diversity in media (even if she disagrees with them). Another lobbying talking point being circulated is that because Sohn briefly served on the board of the now defunct Locast, she's somehow incapable of regulating things like retransmission disputes objectively. Despite the claim being a stretch, Sohn has agreed to recuse herself from such issues for the first three years of her term.

Hoping to seize on the opportunity, former FCC boss turned top cable lobbyist Mike Powell is now trying to claim that because Sohn has experience working on consumer protection issues at both Public Knowledge and the FCC (she helped craft net neutrality rules under Tom Wheeler), she should also be recused from anything having to do with telecom companies. It's a dumb Hail Mary from a revolving door lobbyist whose only interest is in preventing competent oversight of clients like Comcast:

"He said it is not clear why those would be the only issues from which she would recuse herself, “given the breadth of issues in which Public Knowledge was involved” under Sohn. He said the recusal should ”logically extend“ to all the matters she advocated for at Public Knowledge, or none.

Second, he said: “Next, in the more recent years since her service at the Commission during the Obama administration, Ms. Sohn has been publicly involved on matters of direct interest to our membership. There is no logical basis for treating these matters differently from the retransmission and copyright issues for purposes of recusal."

Facebook, Amazon, and Google all tried similar acts of desperation to thwart FTC boss Lina Khan, suggesting that because she had opined on antitrust matters as an influential academic, she was utterly incapable of regulating these companies objectively. But both Khan and Sohn have a deep understanding of the sectors they're tasked with regulating. Both are also the opposite of revolving door policymakers with financial conflicts of interest -- something you'll note none of these critics have the slightest issue with.

Of course telecom and big broadcasters aren't the only industries terrified of competent, popular women in positions of authority. Hollywood (and the politicians paid to love them) are also clearly terrified of someone competent at the FCC. The Directors Guild of America is also urging the Senate Commerce Committee to kill Sohn's nomination. Their justification for their opposition? Sohn once attempted to (gasp) bring competition to the cable box:

"Hollander pointed to one of the proposals that Sohn championed when she served as counselor to FCC Chairman Tom Wheeler during Barack Obama’s second term. Wheeler and Sohn saw the proposal, introduced in 2016, as a way to free cable and satellite subscribers from having to pay monthly rental fees for their set top box. The proposal would have required that pay TV providers offer a free app to access the channels, but ran into objections from the MPAA, which said it would be akin to a “compulsory copyright license.” It’s unlikely that the proposal would come up again in that form, as it was sidelined when Jessica Rosenworcel, who now is chairwoman of the FCC, declined to support it."

You might recall the 2016 proposal in question tried to force open the cable industry's dated monopoly over cable boxes by requiring cable companies provide their existing services in app form (it wasn't "free"). You might also recall that the plan failed in part because big copyright, with the help of the Copyright Office, falsely claimed the proposal was an attack on the foundations of copyright. It wasn't. But the claims, hand in hand with all kinds of other bizarre and false claims from media and cable (including the false claim the proposal would harm minorities), killed it before it really could take its first steps.

I had my doubts about the proposal. Streaming competition will inevitably kill the cable box if we wait long enough, so it would seemingly make sense to focus the FCC's limited resources on more pressing issues, like regional broadband monopolization and the resulting dearth of competition. But the FCC's doomed cable box proposal absolutely was not an "attack on copyright." Companies just didn't want a cash cow killed (cable boxes generate about $20 billion in fee revenue annually), and the usual suspects were just absolutely terrified of disruption, competition, and change.

The Senate Commerce Committee was supposed to vote Sohn's nomination forward on Wednesday, but that vote was delayed because Senator Ben Ray Luján suffered a stroke (he's expected to make a full recovery). Industry opponents of Sohn's nomination then exploited that absence to convince Senator Maria Cantwell to postpone the vote and hold yet another hearing -- one they can use either to scuttle the nomination with bogus controversies spoon-fed to select lawmakers, or simply to delay the vote even further.

We're now a year into Biden's first term and his FCC still doesn't have a voting majority. If you're a telecom or shitty media giant (looking at you, Rupert), that gridlock is intentional; it prevents the agency from reversing any of the unpopular favors doled out during the Trump era, be it the neutering of the FCC's consumer protection authority or the gutting of decades-old media consolidation rules crafted with bipartisan support. It's once again a shining example of how U.S. gridlock and dysfunction are a lobbyist-demanded feature, not a bug or some inherent, unavoidable part of the American DNA.

These companies, organizations, and politicians aren't trying to thwart Sohn's nomination because they have meaningful, good faith concerns. Guys like Mike Powell couldn't give any less of a shit about ethics or what's appropriate. They're trying to thwart Sohn's nomination because she knows what she's doing, values competition and consumer welfare, and threatens them with the most terrifying of possibilities if you're a monopoly or bully: competent, intelligent oversight.

Karl Bode

Can We At Least Make Sure Antitrust Isn't Deliberately Designed To Make Everyone Worse Off?

2 years 9 months ago

For decades here on Techdirt I've argued that competition is the biggest driver of innovation, and so I'm very interested in policies designed to drive more competition. Historically that has meant antitrust policy, but over the past decade or so antitrust policy has come to feel less and less about competition, and more and more about punishing companies that politicians dislike. We can debate whether or not consumer welfare is the right standard for antitrust -- I think there are people on both sides of that debate who make valid points -- but I have significant concerns about any antitrust policy that seems deliberately designed to make consumers worse off.

That's why I'm really perplexed by the recent rush to push through the “American Innovation and Choice Online Act” from Amy Klobuchar which, for the most part, doesn't seem to be about increasing competition, innovation, or choice. It seems almost entirely punitive, punishing not just the very small number of companies it targets, but everyone who uses those platforms.

There's not much I agree with Michael Bloomberg about, but I think his recent opinion piece on the AICOA bill is exactly correct.

At the heart of the bill is an effort to prevent big tech companies from using a widespread business practice called self-preferencing, which is generally good for both consumers and competition. Think of it this way: An ice-cream parlor makes its own flavors and sells other companies’ flavors, too. Its storefront window carries a large sign advertising its homemade wares. In smaller letters, the sign mentions that Haagen-Dazs and Breyers are available, too. Should Congress force the ice-cream store owners to advertise Haagen-Dazs and Breyers as prominently as their own products?

That’s essentially what this bill would force a handful of the largest tech companies to do. For instance, Google users searching the name of a local business now get, in their search results, the option of clicking a Google-built map. But under the bill’s requirements, the search results would likely have to exclude the Google map. Similarly, Amazon would likely be prevented from promoting its less-expensive generic goods against the biggest brand names.

Lots of businesses offer configurations of products and services in ways that are attractive to customers, often for both price and convenience. Doing this can allow companies to enter — and potentially disrupt — new markets, to the great advantage of customers.

Yet the bill views such standard business conduct as harmful. It would require covered companies — essentially Amazon, Apple, Google, Facebook and TikTok — to prove that any new instance of preferencing would “maintain or enhance the core functionality” of their business. Failure to comply could lead to fines of up to 15% of a company’s total U.S. revenue over the offending period.

Now, I think there's a very legitimate argument that if a dominant company is using its dominant position to preference something in a manner that harms competition and the end user experience, then that can be problematic, and existing antitrust law can take care of that. But this bill seems to assume that any effort to offer your own services is somehow de facto against the law.

And whether or not that harms these companies is beside the point: it will absolutely harm the users and customers of these companies, and why should that be enabled by US competition policy? The goal seems to be "if we force these companies to be worse, maybe it will drive people to competitors," which is a really bizarre way of pushing competition. We should drive competition by encouraging great innovation, not limiting how companies can innovate.

Even if you don't think that the "consumer welfare" standard makes sense for antitrust, I hope most people can at least agree that any such policy should never deliberately be making consumers worse off.

Mike Masnick

Texas Town To Start Issuing Traffic Tickets By Text Message

2 years 9 months ago

Way back in 2014, Oklahoma state senator (and former police officer) Al McAffrey had an idea: what if cops could issue traffic tickets electronically, without ever having to leave the safety and comfort of their patrol cars?

The idea behind it was officer safety. This would keep officers from standing exposed on open roads and/or interacting face-to-face with a possibly dangerous driver. The public's safety was apparently low on the priority list, since this lack of interaction could permit impaired drivers to continue driving or allow actually dangerous people to drive away from a moving violation to do more dangerous things elsewhere.

It also would allow law enforcement agencies to convert drivers to cash more efficiently by speeding up the process and limiting things that might slow down the revenue stream, like having actual conversations with drivers. On the more positive side, it would also have lowered the chance of a traffic stop turning deadly (either for the officer or the driver) by limiting personal interactions that might result in the deployment of excessive or deadly force. And it also would limit the number of pretextual stops by preventing officers from claiming to have smelled something illegal while conducting the stop.

Up to now, this has only been speculative legislation. But it's becoming a reality, thanks to government contractor Trusted Driver. Run by former police officer Val Garcia, the program operates much like the TSA's Trusted Traveler program. Users create accounts and enter personal info and then receive traffic citations via text messages.
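As described, the mechanics amount to a registry lookup keyed on the license plate. Here's a hypothetical sketch of that flow; the names and fields are my assumptions, not Trusted Driver's actual system:

```python
# Hypothetical sketch of an opt-in citation-by-text flow. All names and
# fields are illustrative assumptions, not Trusted Driver's actual API.

REGISTRY = {
    # plate -> contact info for the enrolled *owner* of the vehicle
    "ABC1234": {"name": "J. Driver", "phone": "+12105550100"},
}

def issue_citation(plate, violation, send_sms):
    """Text the registered owner if the plate is enrolled; otherwise fall
    back to a traditional stop. Note the gap discussed further below: the
    ticket goes to the owner, who may not have been the one driving."""
    account = REGISTRY.get(plate)
    if account is None:
        return "not enrolled: conduct a traditional traffic stop"
    send_sms(account["phone"], f"Citation for {account['name']}: {violation}")
    return "citation sent by text"

if __name__ == "__main__":
    outbox = []
    result = issue_citation("ABC1234", "speeding, 45 in a 35",
                            lambda phone, msg: outbox.append((phone, msg)))
    print(result)  # citation sent by text
    print(outbox)
```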

The program is debuting in Texas, where drivers who opt in will start being texted by cops when they've violated the law.

It's a concept never done before, and it's about to happen in Bexar County: Getting a traffic ticket sent to your phone without an officer pulling you over. One police department will be the first in the nation to test it.

"It's not a 100% solution, but it's a step forward in the right direction," said Val Garcia, President & CEO of the Trusted Driver Program.

Garcia is one of five former SAPD officers who are part of a 12-member team that created and developed Trusted Driver.

"We're proud to still give back with what we've gained with our experience as a law enforcement officer," said Garcia.

The company claims the program will have several benefits, above and beyond limiting cop-to-driver interactions that have the possibility of escalating into deadly encounters. Some of the benefits aren't immediately discernible, but giving cops more personal information could actually help prevent the senseless injury or killing of drivers who may have medical reasons that would explain their seeming non-compliance. Here's Scott Greenfield highlighting this particular aspect of the Trusted Driver Program.

But this also offers an opportunity that can be critical in police interactions and has led to a great many tragic encounters.

“If you’re deaf, if you have PTSD, autism, a medical condition like diabetes or a physical disability but you’re still allowed to drive,” said Garcia. “It really gives an officer information faster in the field to handle a traffic stop if it does occur and be able to deescalate.”

That police will be aware that a driver is deaf or autistic could be of critical importance in preventing a mistaken shooting, provided the cop reads it and is adequately trained not to kill deaf people because they didn’t comply with commands.

Unfortunately, the cadre of cops behind Trusted Driver seem to feel citizens are looking for even more ways to interact with officers, even if this interaction is limited to text messages.

Through Trusted Driver, police are also able to send positive messages to drivers who are doing a stellar job obeying traffic laws.

Just like cops thinking they're doing a good thing by pulling over drivers who haven't committed a crime to give them a thumbs up or a Thanksgiving turkey, Trusted Driver seems to believe the public will be receptive to text messages from cops telling them they're doing a good job driving, delivered to them via a number they associate with punishment for criminal acts. And it's not like drivers in the program will be able to select which messages they receive: once you've opted in, you can have your heart rate temporarily increased by the law enforcement equivalent of slacktivism -- one that Trusted Driver believes will somehow build and repair the public's relationship with the law enforcement officers that serve them.

This lies somewhere between the frontier of law enforcement and the inevitability of tech development. It's not that it's an inherently bad idea, but there's a lot in there that's problematic, including officers receiving increased access to drivers' personal info, which will now include their cell phone numbers. Law enforcement officers have a history of abusing access to personal info, and this program gives them the opportunity to do so without ever leaving their patrol cars.

Then there's the unanswered question about enforcement. Will members of this program receive more tickets just because they're easier to ticket? Or will traffic enforcement still be evenly distributed (so to speak) across all drivers? Like other automated traffic enforcement efforts, tickets will be issued to the owner of the vehicle, rather than the actual driver, which is going to cause problems for people who haven't actually committed a moving violation, beginning with increased insurance rates and possibly ending with bench warrants for unpaid tickets that were issued to the wrong person.

Still, it's worth experimenting with. But it needs to be subject to intense scrutiny the entire time it's deployed. There's too much at risk for agencies and the general public to just let it hum along unattended in the background, steadily generating revenue. Unfortunately, if it does that part of the job (deepening the revenue stream), concerns about its use and operation are likely to become background noise easily drowned out by the sound of city coffers being filled.

Tim Cushing

Daily Deal: The 2022 FullStack Web Developer Bundle

2 years 9 months ago

The 2022 FullStack Web Developer Bundle has 11 courses to help you step up your game as a developer. You'll learn frontend and backend web technologies like HTML, CSS, JavaScript, MySQL, and PHP. You'll also learn how to use Git and GitHub, Vuex, Docker, Ramda, and more. The bundle is on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

With Stephen Breyer's Retirement, The Supreme Court Has Lost A Justice Who Was Wary Of Overly Burdensome Copyright

2 years 9 months ago

Whatever the (I'd argue unfortunate) politics behind Stephen Breyer's decision to retire as a Supreme Court Justice at the conclusion of this term, it is notable around here for his views on copyright. Breyer has generally been seen as the one Justice on the court most open to the idea that overly aggressive copyright policy was dangerous and potentially unconstitutional. Perhaps ironically, given that they are often lumped together on the overly simplistic "left/right" spectrum, Justices Breyer and Ginsburg represented somewhat opposite ends of the copyright spectrum. Ginsburg was consistently a voice in favor of expanding copyright law to extreme degrees, while Breyer seemed much more willing to recognize that the rights of users -- including fair use -- were extremely important.

If you want to see that clearly, read Ginsburg's majority opinion in the Eldred case (on whether or not copyright term extension is constitutional) as compared to Breyer's dissent. To this day I believe that 21st century copyright law would have been so much more reasonable and so much more for the benefit of the public if Breyer had been able to convince others on the court to his views. As Breyer notes in his dissent, a copyright law that does not benefit the public should not be able to survive constitutional scrutiny:

Thus, I would find that the statute lacks the constitutionally necessary rational support (1) if the significant benefits that it bestows are private, not public; (2) if it threatens seriously to undermine the expressive values that the Copyright Clause embodies; and (3) if it cannot find justification in any significant Clause-related objective.

(As an aside, the book No Law has a very, very thorough breakdown of how the majority ruling by Justice Ginsburg in that case was just, fundamentally, objectively wrong.)

That said, Breyer wasn't -- as he was sometimes painted -- a copyleft crusader or anything. As Jonathan Band details, Breyer's views on copyright appeared to be extremely balanced -- sometimes ruling for the copyright holder, and sometimes not. Indeed, to this day, I still cannot fathom how he came to write the majority opinion in the Aereo case, which used a "looks like a duck" kind of test. In that case, the company carefully followed the letter of copyright law, but because the end result felt like a different kind of service, the court was fine with declaring that it was one (even though technically it was not) -- playing within the lines didn't save it. We are still suffering from the impact of that case today.

So, while I didn't always think that Breyer got copyright cases correct, he was -- consistently -- much more thoughtful on copyright issues than any other Justice on today's court, and that perspective will certainly be missed.

Mike Masnick

Congress Introduces New Agricultural 'Right to Repair' Bill With Massive Farmer Support

2 years 9 months ago

Back in 2015, frustration at John Deere's draconian tractor DRM helped birth a grassroots tech movement dubbed "right to repair." The company's crackdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM (and the company's EULA) prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair (which for many owners involved hauling tractors hundreds of miles and shelling out thousands of additional dollars), or toying around with pirated firmware just to ensure the products they owned actually worked.

Seven years later, this movement is only growing. This week Senator Jon Tester said he was introducing new legislation (full text here, pdf) that would require tractor and other agricultural hardware manufacturers to make manuals, spare parts, and software access codes publicly available:

"We’ve got to figure out ways to empower farmers to make sure they can stay on the land. This is one of the ways to do it,” Tester said. “I think that the more we can empower farmers to be able to control their own destiny, which is what this bill does, the safer food chains are going to be."

The legislation comes as John Deere was recently hit with two new lawsuits accusing the company of violating antitrust laws by unlawfully monopolizing the tractor repair market. In 2018 John Deere had promised to make sweeping changes to address farmers' complaints, though by 2021 those changes had yet to materialize. Tester's legislation also comes as a new US PIRG survey shows that a bipartisan mass of farmers overwhelmingly support reform on this front.

Tester's proposal is just one of several new efforts to rein in attempts to monopolize repair, be it by John Deere or Apple. More than a dozen state-level laws have been proposed, and the Biden administration's recent executive order on competition also urges the FTC to craft tougher rules on repair monopolization efforts. In an era rife with partisan bickering, it's refreshing to see an issue with such broad, bipartisan public support -- one that had only niche support half a decade ago now rocketing into the mainstream.

Karl Bode

YouTube Dusts Off Granular National Video Blocking To Assist YouTuber Feuding With Toei Animation

2 years 9 months ago

Hopefully, you will recall our discussion about one YouTuber, Totally Not Mark, suddenly getting flooded with 150 copyright claims on his YouTube channel, all at once, from Toei Animation. Mark's channel is essentially a series of videos that discuss, critique, and review anime. Toei Animation produces anime, including the popular Dragon Ball series. While notable YouTuber PewDiePie weighed in with some heavy criticism over how YouTube protects its community from copyright claims in general, the real problem here was one of location. Mark is in Ireland, while Toei Animation is based out of Japan. Japan has terrible copyright laws when it comes to anything resembling fair use, whereas Ireland is governed by fair dealing laws. In other words, Mark's use was just fine in Ireland, where he lives, but would not be permitted in Japan. And since YouTube is a global site, takedowns have traditionally been global.

Well, Mark has updated the world to note that he was victorious in getting his videos restored and cleared, with a YouTube rep working directly with him on this.

But shortly after, as Fitzpatrick revealed in a new video providing an update on the legal saga, someone “high up at YouTube” who wished to remain anonymous, reached out to him via Discord. Fitzpatrick said the contact not only apologized for his situation not being addressed sooner, but divulged a prior conflict between YouTube and Toei regarding his videos’ fair use status.

“I’m not going to lie, hearing a human voice that felt both sincerely eager to help and understanding of this impossible situation felt like a weight lifted off my shoulders,” Fitzpatrick said.

Hey, Twitch folks, if you're reading this: this is how it's done. But that isn't the whole story. Before the videos were claimed and blocked, Toei had requested that YouTube manually take Mark's videos offline. YouTube pushed back on Toei, asking for more information on its requested takedowns, specifically asking whether the company had considered fair use/fair dealing laws in its request. Alongside that, YouTube also asked Toei to provide more information as to which of Mark's videos were infringing and why. Instead of complying, Toei utilized YouTube's automated tools to simply claim and block those 150 videos.

The following week, a game of phone tag ensued between Toei, the Japanese YouTube team, the American YouTube team, Fitzpatrick’s YouTube contact, and himself to reach “some sort of understanding” regarding his copyright situation. Toei ended up providing a new list of 86 videos of the original 150 or so that the company deemed should not remain on YouTube, a move Fitzpatrick described as “baffling” and “inconsistent.” Toei, he concludes, has no idea of the meaning of fair use or the rules the company wants creators to abide by.

“Contained in this list was frankly the most arbitrary assortment of videos that I had ever seen,” he said. “It honestly appeared as if someone chose videos at random as if chucking darts at a dart board.”

While Mark regained control of his videos thanks to his work alongside the YouTube rep, he was still in danger of Toei filing a lawsuit in Japan, one he would almost certainly lose given that country's laws. Fortunately, YouTube has a method for blocking videos in certain countries based on copyright claims, for exactly these types of disputes. The Kotaku post linked above suggests that this method is brand new for YouTube, but it isn't. It's been around for a while, though, somewhat amazingly, it appears to have never been used specifically to deal with differing copyright laws in specific countries.

YouTube’s new copyright rule allows owners like Toei to have videos removed from, say, Japan’s YouTube site, but said videos will remain up in other territories as long as they fall under the country’s fair use policies. To have videos removed from places with more allowances for fair use, companies would have to argue their cases following the copyright laws of those territories.
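Mechanically, this amounts to a per-country visibility check rather than a global takedown. A minimal sketch of the idea -- my own illustration, since YouTube's actual implementation isn't public:

```python
# Minimal sketch of per-country copyright blocking, as opposed to a
# global takedown. All names here are illustrative assumptions.

def build_block_list(claims):
    """Map each claimed video to the set of countries where the claim applies."""
    blocked = {}
    for claim in claims:
        blocked.setdefault(claim["video_id"], set()).update(claim["countries"])
    return blocked

def is_viewable(video_id, viewer_country, blocked):
    """A video stays up everywhere except countries with an honored claim."""
    return viewer_country not in blocked.get(video_id, set())

if __name__ == "__main__":
    # Toei's claims are honored under Japanese law only, so the review
    # stays up for viewers in Ireland and everywhere else.
    claims = [{"video_id": "dragon-ball-review", "countries": ["JP"]}]
    blocked = build_block_list(claims)
    print(is_viewable("dragon-ball-review", "JP", blocked))  # False
    print(is_viewable("dragon-ball-review", "IE", blocked))  # True
```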

And so Mark's review videos remain up everywhere except in Japan. That isn't a perfect solution by any stretch, but it seems to be as happy a middle ground as we're likely to find given the circumstances. Those circumstances chiefly being that Toei Animation for some reason wants to go to war with a somewhat popular YouTuber who, whatever else you might want to say about his content, is certainly driving public interest in Toei's products, for good or bad. This is a YouTuber the company could have collaborated with in one form or another, but instead it is busy burning down bridges.

“Similarly to how video games have embraced the online sphere, I sincerely believe that a collaborative or symbiotic relationship between online creators and copyright owners is not only more than possible but would likely work extremely well for both sides if they are open to it,” Fitzpatrick said.

That Toei Animation is not open to it is the chief problem here.

Timothy Geigner

That's A Wrap On The Public Domain Game Jam! Check Out All This Year's Great Entries

2 years 9 months ago

Last night at midnight, we reached the end of Gaming Like It's 1926, our fourth annual public domain game jam celebrating the new works that entered the public domain this year. At final count, we got 31 entries representing a huge variety of different kinds of digital and analog games!

For the next couple of weeks, we'll be digging into all the games and selecting the winners in our six categories — but there's no need to wait before playing! You can check out all the entries on itch.io:

At first glance (and having poked around in a couple of the early entries) I can already tell it's going to be tough to narrow these down to just six winners — there are lots of games here that do fun and interesting things with public domain works. As in past years, once we've selected and announced the winners we'll discuss each one in detail in a podcast and a series of posts.

Until then, a huge thanks to everyone who participated this year, and also to everyone who takes some time to play the games and give these designers the attention they deserve!

Leigh Beadon

Chicago Cops Love Them Some Facebook Sharing, According To Internal Facial Recognition Presentation

2 years 9 months ago

Somewhere between the calls to end encryption and calls to do literally anything about crime rate spikes at this time of year, at this time of day, in [insert part of the country], localized entirely within [add geofence] lies the reality of law enforcement. While many continue to loudly decry the advent of by-default encryption, the reality of the situation is people are generating more data and content than ever. And most of it is less than a warrant away.

While certain suspect individuals continue to proclaim encryption will result in an apocalypse of criminal activity, others are reaping the benefits of always-on internet interactivity. Clearview, for example, has compiled a database of 10 billion images by doing nothing more than scraping the web, grabbing everything that's been made public by an extremely online world population.

You want facial images free of charge and no Fourth Amendment strings attached? You need look no further than the open web, which has all the faces you want and almost none of the attendant restrictions. "Going dark" is for chumps who don't know how to leverage the public's willingness to share almost anything with the rest of the internet.

The Chicago PD knows who's keeping the internet bread buttered and which side they're on. A report from Business Insider (written by Caroline Haskins) highlights an internal CPD presentation making it explicit that cops have gained plenty from the rise of social media platforms, easily outweighing whatever subjective losses end-to-end encryption may have recently created.

Images posted on social media have become so valuable to police investigations that the Chicago Police Department thanked Facebook, "selfie culture," and "high-definition cameras" on cellphones during a presentation on how to use facial-recognition technology.

"THANK YOU FACEBOOK!" read one slide from the document, which was obtained by Insider through a public-record request.

Thank you, Facebook, indeed. The presentation [PDF] namechecks the most popular social media platform in the United States -- one that has deployed its own facial recognition to tag individuals in photos whether or not said individuals have specifically agreed to be identified by the social network. Hence the rise of the "I'm in this photo and I don't like it" meme.

Facebook (now Meta) had no comment. The Chicago PD provided no comment. But little commentary is necessary. Whatever's sent out into the open ether of the internet is there for the taking. Clearview made it explicit by scraping everything that wasn't nailed down. Facebook's terms of service and privacy policy make it far less explicit, but whatever can be accessed by non-cops roaming the platform can also be accessed by cops.

The presentation at least notes that facial recognition should be viewed as only one investigative tool to be used by investigators. Better, it points out that matching a face to social media detritus is only a small part of the equation. No officer should assume a single match means positive identification of a criminal suspect. Whether or not this part of the training carries over to actual investigations remains to be seen. If cops are assuming matches are positive IDs and acting accordingly, it's only a matter of time before the Chicago PD gets sued for arresting or jailing the wrong person.
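The reason a single match should never be treated as a positive ID is, at bottom, a base-rate problem: even an accurate matcher searched against a huge gallery produces false positives that can swamp the true hit. A back-of-the-envelope sketch with assumed numbers, not CPD's or any vendor's actual accuracy figures:

```python
def expected_false_positives(gallery_size, false_match_rate):
    """Expected number of wrong 'matches' when one probe face is compared
    against every face in a gallery."""
    return gallery_size * false_match_rate

if __name__ == "__main__":
    # Assumption: a seemingly tiny 0.01% false-match rate, run against a
    # 10-million-face gallery of scraped social media photos.
    fp = expected_false_positives(10_000_000, 0.0001)
    print(f"expected false positives per search: {fp:.0f}")  # ~1000
```

Even if the actual suspect is in the gallery, they're one candidate among roughly a thousand lookalike hits, which is why a match alone can't "confirm" anything.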

The document obtained by Business Insider shows the CPD is using multiple facial recognition vendors in its quest for the highly subjective "truth," ranging from Amazon's no-longer-for-law-enforcement Rekognition to NEC, Cognitec, and Dataworks Plus.

It's difficult to even golf clap for the Chicago PD, given its long history of rights violations and internal corruption, but it would be disingenuous not to acknowledge that this presentation at least tries to steer investigators away from rights violations.

The document says CCTV footage and social media could lead to "suspect identification." But it also notes prospective pitfalls of the technology, saying that facial recognition was a "narrow tool" that couldn't be used to "'confirm' an identification by other means."

Again, the words are only as good as their interpretation by officers utilizing this technology and the wealth of information made accessible by social media platforms. And there's a shit ton of inputs. Millions of images are easily accessible through Facebook. Millions more have been harvested by the Chicago PD, which operates or has access to more than 30,000 surveillance cameras located in the city.

The Chicago PD's relationship with emerging surveillance tech has been no better than its constantly deteriorating relationship with the people it serves. The PD has been an enthusiastic early adopter of unproven tech, blowing tax dollars on ShotSpotter (which is terrible at spotting shots) and Clearview's facial recognition AI (which has been assailed by law enforcement agencies as mostly useless).

We want law enforcement agencies to be good stewards of the money and power they're entrusted with. The Chicago PD has been neither for decades. While this presentation does a good job explaining the pitfalls of utilizing open source images in conjunction with facial recognition tech, the fact is Chicago cops are results-oriented. When that happens, the ends justify the means, even when the ends are ultimately tossed by trial court judges and federal civil rights lawsuits. Officers are on notice that facial recognition tech is highly fallible. But, until we see otherwise, we can probably assume CPD officers are more interested in deploying the tech than ensuring search results are accurate.


Tim Cushing

Techdirt Podcast Episode 309: Remembering The SOPA Fight, With Rep. Zoe Lofgren

2 years 9 months ago

As many of you know, last week we hosted an online event for the latest Techdirt Greenhouse edition, all about looking back on the lessons learned from the 2012 protests against SOPA and PIPA. Our special guest was Rep. Zoe Lofgren, one of the strongest voices in Congress speaking out against those disastrous bills, who provided all kinds of excellent insight into what happened then and what's happening now. In case you missed it, for this week's episode of the podcast (yes, we're finally back with new episodes!) we've got the full conversation and Q&A from the event.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Leigh Beadon

Senators' 'Myths & Facts' About EARN IT Is Mostly Myths, Not Facts

2 years 9 months ago

I already wrote a long post earlier about the very, very real problems with the EARN IT Act -- namely that it would make the problem of child sexual abuse material significantly worse by repeating the failed FOSTA playbook, and that it would attack encryption by making it potential "evidence" in a case against a tech company for any CSAM on its site. But alongside the bill, its sponsors, Senators Richard Blumenthal and Lindsey Graham, released a "Myth v. Fact" document to try to counter the criticisms of EARN IT. Unfortunately, the document presents an awful lot of "myths" as "facts." And that's a real problem.

The document starts out noting, correctly:

The reporting of CSAM and online child sexual exploitation provides law enforcement with vital information on active predators and ongoing cases of rape and abuse.

More reports, and more accurate information within those reports, means more children freed from one of the most horrific and life-changing crimes imaginable.

But it ignores that taking away Section 230 protections doesn't magically make companies do more reporting. It does the opposite. Because now, making the effort to find and report CSAM actually puts you at risk of greater liability under EARN IT. The bill literally creates less incentive for a website to build systems to find and report CSAM, because merely doing so gives you the knowledge (scienter) necessary under the law to face liability. The bill gets this all exactly backwards.

The document then has the following listed as a "myth":

Given that some tech companies report significant amounts of CSAM to the National Center for Missing and Exploited Children (NCMEC) and provide technical resources to address child exploitation, the tech industry is doing enough to address this crime.

To address it, the document lists these "facts":

For tech companies that are already taking clear steps to report CSAM, little will change under this bill.

Except, that's not true at all. Because now, if those companies make any mistakes -- which they will, because you can't get everything right -- they face potentially crippling liability. The idea that no one will go after them because they report a lot of CSAM is completely divorced from reality. We see companies getting sued all the time in similar circumstances. Under FOSTA, we're already seeing companies like Salesforce and Mailchimp sued because other companies used their services, sex traffickers then used those other companies' services, and somehow that magically makes Salesforce and Mailchimp liable. The same thing would happen under EARN IT.

According to NCMEC’s 2020 statistics on reports of the online exploitation of children, while Facebook issued over 20 million reports that year, in contrast Amazon (which hosts a significant percentage of global commerce and web infrastructure) reported 2,235 cases.

Maybe that's because Facebook is a user-generated content social media platform and Amazon... is not? I mean, I don't even need to say this is "comparing apples to oranges" here because it's "comparing Facebook to Amazon." The two companies do very, very different things that are simply not comparable.

Of course, the underlying (and pretty fucking scary) suggestion here is that Amazon should be scanning every AWS instance for bad stuff, which raises really serious privacy concerns. It's amazing that the very same Senators pushing this bill -- who are now basically saying it should require websites to spy on everything -- will turn around next week and argue that these companies are "collecting too much information" and are engaged in "surveillance capitalism."

So which is it, Senators? Should Amazon be spying on everyone, or are they not spying hard enough?

There is a sustained problem of underreporting and neglect of legal obligations by some tech companies. During a Senate Judiciary Committee hearing on the EARN IT Act, NCMEC disclosed that it had reported nearly nine times more cases of CSAM material hosted on Amazon to Amazon, than Amazon had found itself, and that Amazon had not taken legally required action on those cases.

Again, this is taken incredibly out of context; put back into context, it means that Amazon isn't spying on all of its customers' data. That should be seen as a good thing? Note what this "fact" doesn't say: when Amazon was alerted to CSAM by NCMEC, did it remove it? Did it add hashes to the NCMEC database? Because that's what matters here. Otherwise, these Senators are just admitting that they want more surveillance by private companies and less privacy for the public.

Before introducing the EARN IT Act, a bipartisan group of Senators sent detailed questions to more than thirty of the most prominent tech companies. The responses showed that even startups and small firms were willing and able to build safety into their platform using automated tools. Meanwhile, some large companies like Amazon admitted that they were not even using common and free tools to automatically stop CSAM despite substantial and known abuse of their platforms by predators.

They're really throwing Amazon under the bus here. But this "fact" again demonstrates that most internet companies that host user-generated content are already doing what is appropriate: finding, reporting, and removing CSAM. The only example they have of a company that is not is Amazon, and that's because Amazon is in a totally different business. They're not a platform for user-generated content; they're just a giant computer for other services. Those other services, built on top of Amazon, can (and do!) scan their own systems for CSAM.

This whole "fact" list is basically a category error, in which they lump Amazon in with other companies because whoever wrote this can't find any actual problem out there with actual social media companies.

It is clear that many tech companies will only take CSAM seriously when it becomes their financial interest to do so, and the way to make that a reality is by permitting survivors and state law enforcement to take the companies to court for their role in child exploitation and abuse.

Except this document's own fact check said that every company they asked -- other than Amazon, which is in a different business entirely -- was doing what was necessary. And it already is very much in every company's "financial interest" to find, report, and remove CSAM, because if you don't, you already face significant legal consequences, since hosting CSAM is already very, very much illegal.

The next "myth" listed is:

This bill opens up tech companies to new and unimaginable liability that necessitated CDA Section 230’s unqualified immunities two decades ago

Which is... absolutely true. We don't need to look any further than what happened with FOSTA to see it. But the Senators deny it. Because they're lying.

The EARN IT Act creates a targeted carve out for the specific, illegal act of possession or distribution of child sexual abuse material.

And FOSTA created "a targeted carve out for the specific, illegal act of human trafficking," but in practice it has resulted in a series of totally frivolous lawsuits against ancillary services merely because their customers' services were, in turn, used by sex traffickers.

Any tech company that is concerned that its services or applications could be used to distribute CSAM has plenty of tools and options available to prevent this crime without hindering their operations or creating significant costs.

The detection, prevention, and reporting of CSAM is one of the most easily addressed abuses and crimes in the digital era. There are readily accessible, and often free, software and cloud services, such as PhotoDNA, to automate the detection of known CSAM material and report it to NCMEC.

The naming of PhotoDNA is interesting here. It's a Microsoft project (big tech!) that is very important in finding/reporting/removing CSAM. But Microsoft actually limits who can use it, and I've heard of multiple websites that were not allowed to use PhotoDNA. I don't think Techdirt would qualify to use PhotoDNA, for example. In the meantime, Cloudflare actually introduced its own tool, which I think came about because Microsoft made it difficult, if not impossible, for many websites to use PhotoDNA.

But the very fact that PhotoDNA and Cloudflare's solution exist and are being used suggests, once again, that "the problem" here doesn't actually exist. As noted in the first post, we don't see companies being sued for CSAM and using Section 230 as a defense, because that's not the problem.

Also left out of the "fact" is the actual "fact" that PhotoDNA has very real limitations. That article, published a few months ago, notes that (1) Microsoft and NCMEC seem to go out of their way to avoid allowing researchers to study PhotoDNA, (2) contrary to Microsoft/NCMEC claims, the algorithm can be reversed (i.e., users can recreate CSAM images from hashes!), (3) it is easily defeated with minor changes to images, and (4) it is subject to false positives. In other words, while PhotoDNA is an important tool for fighting CSAM, it has real problems, and mandating it (as this "fact" suggests is the goal of EARN IT) could create significant (and potentially dangerous) consequences.
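To make points (3) and (4) concrete: PhotoDNA's actual algorithm isn't public, but a toy "average hash" (a well-known perceptual hashing technique, used here purely as a stand-in, not as PhotoDNA itself) shows how small image edits move the hash and why loose match thresholds invite false positives. Everything below -- array sizes, seed, function names -- is an illustrative assumption.

```python
# Toy "average hash" -- a simple perceptual hash used here as a
# stand-in for PhotoDNA, whose real algorithm is not public.
import numpy as np

def average_hash(img: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Downscale to hash_size x hash_size block means, threshold on the mean."""
    h, w = img.shape
    img = img[: h - h % hash_size, : w - w % hash_size]  # crop to a multiple
    blocks = img.reshape(hash_size, img.shape[0] // hash_size,
                         hash_size, img.shape[1] // hash_size)
    small = blocks.mean(axis=(1, 3))          # crude box downscaling
    return (small > small.mean()).flatten()   # 64-bit boolean fingerprint

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(42)
original = rng.integers(0, 256, (64, 64)).astype(float)

# A minor edit (a gentle brightness gradient) flips some bits,
# which is how small changes can slip past a tight match threshold.
edited = original + np.linspace(0, 25, 64)[None, :]

# An unrelated image lands roughly 32 bits away on average for a
# 64-bit hash; loosen the threshold enough to catch edits and you
# start matching unrelated images -- the false-positive risk.
unrelated = rng.integers(0, 256, (64, 64)).astype(float)

h0, h1, h2 = average_hash(original), average_hash(edited), average_hash(unrelated)
print("original vs edited:   ", hamming(h0, h1))
print("original vs unrelated:", hamming(h0, h2))
```

That threshold tension is the whole problem in miniature: tighten it and trivial edits evade detection; loosen it and unrelated (possibly lawful) images start matching.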

The next "myth" is... just weird.

Requiring companies to be on the lookout for child abuse will harm startups and nascent businesses.

No one has made that argument. The actual argument is that adding very serious liability for anyone making any mistake as they're on the lookout for child abuse will do tremendous harm to startups and nascent businesses.

No other type of business in the country is provided such blanket and unqualified immunity for sexual crimes against children.

Except... tech companies aren't given a "blanket and unqualified immunity for sexual crimes against children." This "fact" is just wrong. What Section 230 does is provide immunity for third-party speech -- but not if federal crimes are involved, which is certainly the case with CSAM. The whole attempt to blame Section 230 here is just weird. And wrong.

Startups and small businesses have a critical role in the fight against online CSAM. Smaller social media sites and messaging applications, such as Kik Messenger, are routinely used by abusers. The EARN IT Act will ensure that abusers do not flock to small platforms to evade the protections and accountability put in place on larger platforms.

So, now we're blaming Kik? Okay. Except the timing on this is interesting, as just a few days ago the DOJ literally announced that it had arrested a woman for distributing CSAM on Kik, showing again that when law enforcement actually bothers to do so, it can find and arrest those responsible.

Moreover, there are simple, readily accessible, and often free, software and cloud services, such as PhotoDNA, that can be used by any tech company to automate the detection of known CSAM material and report it to NCMEC.

Again, PhotoDNA involves a "qualification" process, and has significant problems. If the point of this bill is to force every website to use PhotoDNA, write that into the law and deal with the fact that a mandated filter raises other Constitutional concerns. Instead, these Senators are basically saying "every website must use PhotoDNA, but we can't legally say that, so wink, wink."

Indeed, it's pretty funny that right after more or less admitting that they're demanding mandatory filters, they claim this is the next "myth":

The EARN IT Act violates the First Amendment.

The "fact" they used to reply to this kinda gives away the game:

Child sexual abuse is not protected speech. Possession of child pornography is a criminal violation and there is no defensible claim that the First Amendment protects child sexual abuse material.

That's correct, but... misleading. No one is concerned about taking down CSAM (again, pretty much every major internet platform already does this as it's already required by law). The concern is that by mandating filters that are not publicly reviewable, you end up taking down other speech. And that other speech may be protected. Again, look at the link above regarding research into PhotoDNA, which suggests that the "false positive" problem with PhotoDNA is very, very real.

And then we get to the encryption stuff with the next "myth":

The EARN IT Act is simply an attempt to ban encryption.

Actually, it seems to be only partially an attempt to ban encryption. The Senators' "facts" on this are just... again, the part that is actually mythical:

The EARN IT Act does not target, limit, or create liability for encryption or privacy services. In fact, in order to ensure the EARN IT Act would not be misconstrued as limiting encryption, specific protections were included in the bill to explicitly state that a court should not consider offering encryption or privacy services as an independent basis for legal liability.

Weasel words. Again, see what I wrote in the last post about the encryption section. The bill says encryption can't be the "independent basis" for liability, but it explicitly states that the use of encryption can still be used as evidence against a website under this law. So it very much increases the legal liability for any website that uses encryption, because encryption will be used against them in court.

Stopping the abuse of children is not at odds with preserving online privacy. Some online platforms have been using automated tools to check images and videos against CSAM databases for more than a decade without endangering privacy or creating consumer concerns. As Facebook has testified to the Senate Judiciary Committee, tech companies can readily implement tools to detect child sexual abuse while offering strong encryption tools.

This is correct, but does not address the point. Of course you can fight CSAM while preserving privacy, but this bill makes that much more difficult by adding liability risk for anyone who uses encryption (and anyone who tries to go above and beyond in fighting CSAM but lets some slip through).

Then there's a "myth" that's actually a fact. Section 230 exempts federal crimes, and CSAM is a federal crime -- and the real issue is that law enforcement tends not to spend much time and resources on fighting the actual creators and distributors of CSAM:

Since CDA 230 already exempts federal crimes, the solution to this problem is increasing resources for law enforcement and hiring more federal prosecutors.

The "facts" the Senators present in response are incredibly misleading.

We support increasing resources for law enforcement officials fighting sex crimes against children. But no amount of money can compensate for the disengagement of the online platforms actually hosting this material.

That second sentence is a non sequitur, since (again...) EARN IT doesn't do anything to stop sites from hosting CSAM. It just opens them up to being sued for trying to stop it!

Hiring more federal investigators cannot replace having companies committed to the fight against child abuse, especially when it comes to monitoring the content posted on online platforms and checking closed groups for abuse.

Again, companies are committed to fighting child abuse, and EARN IT makes it more risky for them to "monitor" the content posted online!

By requiring that only the Department of Justice can bring criminal cases for child sexual exploitation crimes, CDA Section 230 drastically limits the number and types of cases that are brought.

No, it avoids bogus, wasteful lawsuits like the ones that were brought against Salesforce and Mailchimp under FOSTA.

States and survivors have a well-established role in holding offenders accountable, especially with respect to child sexual exploitation, for a reason: under enforcement of child protection laws fails victims and fosters more abuse.

Yes, and they can already hold "offenders accountable" for sexual exploitation. The problem is that this bill distracts from going after actual offenders, and instead blames random internet services that the offenders used for not magically knowing they were being used by offenders.

The EARN Act would ensure that there is more than one cop on the beat by enabling states and civil litigants to seek justice against those who enable child sexual exploitation.

No. It would allow just about anyone to go after just about any website over an actual offender's incidental use of that site, rather than going after the offenders themselves.

This bill is bad and dangerous. It will make the very real problem of CSAM worse and undermine encryption at the same time. This "myth v. fact" sheet reverses the myths and facts in the service of getting bogus "for the children" headlines for Senators desperate to look like they're doing something about a real problem, while they're really moving to make the problem much, much worse.

Mike Masnick

Senate's New EARN IT Bill Will Make Child Exploitation Problem Worse, Not Better, And Still Attacks Encryption

2 years 9 months ago

You may recall the terrible and dangerous EARN IT Act from two years ago, which was a push by Senators Richard Blumenthal and Lindsey Graham to chip away more at Section 230 and to blame tech companies for child sexual abuse material (CSAM). When it was initially introduced, many people noticed that it would undermine both encryption and Section 230 in a single bill. While the supporters of the bill insisted that it wouldn't undermine encryption, the nature of the bill clearly set things up so that you either needed to encrypt everything or to spy on everything. Eventually, the Senators were persuaded to adopt an amendment from Senator Patrick Leahy to more explicitly attempt to exempt encryption from the bill, but it was done in a pretty weak manner. That said, the bill still died.

But, as with 2020, 2022 is an election year, and in an election year some politicians just really want to get their name in headlines about how they're "protecting the children" -- and Senator Richard Blumenthal loves the fake "protecting the children" limelight more than most other Senators. And thus he has reintroduced the EARN IT Act, claiming (falsely) that it will somehow "hold tech companies responsible for their complicity in sexual abuse and exploitation of children." This is false. It will actually make it more difficult to stop child sexual abuse, but we'll get there. You can read the bill text here, and note that it is nearly identical to the version that came out of the 2020 markup process with the Leahy Amendment, with a few very minor tweaks. The bill has a lot of big-name Senators from both parties as co-sponsors, suggesting that it has a very real chance of becoming law. And that would be dangerous.

If you want to know just how bad the bill is: I found out about the reintroduction of the bill -- before it was announced anywhere else -- via a press release sent to me by NCOSE, formerly "Morality in Media," the busybody organization of prudes who believe that all pornography should be banned. NCOSE was also a driving force behind FOSTA -- the dangerous law with many similarities to EARN IT that (as we predicted) did nothing to stop sex trafficking, plenty to increase the problem of sex trafficking, and put women in danger while making it more difficult for the police to actually stop trafficking.

Amusingly (?!?) NCOSE's press release tells me both that without EARN IT tech platforms "have no incentive to prevent" CSAM, and that in 2019 tech platforms reported 70 million CSAM images to NCMEC. They use the former to insist that the law is needed, and the latter to suggest that the problem is obviously out of control -- apparently missing the fact that the latter actually shows how the platforms are doing everything they can to stop CSAM on their platforms (and others!) by following existing laws and reporting it to NCMEC where it can be put into a hash database and shared and blocked elsewhere.
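That hash-database loop is mechanically simple, which is part of why the major platforms already participate. Below is a minimal sketch of the flow, assuming a plain SHA-256 hash list for simplicity; real deployments use perceptual hashes like PhotoDNA so that edited copies still match, and report_to_ncmec() is a hypothetical stand-in for the legally required report, not a real API.

```python
# Hedged sketch of the hash-matching/reporting flow described above.
import hashlib

# In practice this set is populated from a vetted hash list shared
# via NCMEC/industry programs, not hardcoded placeholder entries.
KNOWN_HASHES = {"0" * 64}  # placeholder entry

def report_to_ncmec(digest: str) -> None:
    # Stand-in for the report US providers must file (18 U.S.C. § 2258A).
    print(f"report filed for hash {digest[:12]}...")

def check_upload(data: bytes) -> bool:
    """Hash the upload; on a match against known material, report and block."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_HASHES:
        report_to_ncmec(digest)
        return True   # block the upload
    return False      # allow: no known-image match

print(check_upload(b"some user upload"))  # False: not in the hash list
```

The reports then feed back into the shared database, which is exactly the virtuous cycle those 70 million reports represent.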

But facts are not what's important here. Emotions, headlines, and votes in November are.

Speaking of the lack of facts: alongside the bill, the sponsors also released a "myth v. fact" sheet that is just chock full of misleading and simply incorrect nonsense. I'll break that down in a separate post, but just as one key example, the document really leans heavily on the fact that Amazon sends a lot fewer reports of CSAM to NCMEC than Facebook does. But if you think for more than three seconds about it (and aren't just grandstanding for headlines) you might notice that Facebook is a social media site and Amazon is not. It's comparing two totally different types of services.

However, for this post I want to focus on the key problems of EARN IT. The very original version of EARN IT created a committee to study whether exempting CSAM from Section 230 would help stop CSAM. Then the bill shifted to the form it's in now: the committee still exists, but the bill skips the part where the committee has to determine whether chipping away at 230 will help, and simply includes the carve-out as a key part of the bill. The 230 part mimics FOSTA (which, again, has completely failed to do what its supporters claimed and has made the actual problems worse) by adding a new exemption to Section 230 for any CSAM.

EARN IT will make the CSAM problem much, much worse.

At least in the FOSTA case, supporters could (incorrectly and misleadingly, as it turned out) point to Backpage as an example of a site that had been sued for trafficking and used Section 230 to block the lawsuit. But here... there's nothing. There really aren't examples of websites using Section 230 to try to block claims of child sexual abuse material. So it's not even clear what problem these Senators think they're solving (unless the problem is "not enough headlines during an election year about how I'm protecting the children.")

The best they can say is that companies need the threat of law to report and take down CSAM. Except, again, pretty much every major website that hosts user content already does this. This is why groups like NCOSE can trumpet "70 million CSAM images" being reported to NCMEC. Because all of the major internet companies actually do what they're supposed to do.

And here's where we get into one of the many reasons this bill is so dangerous. It totally misunderstands how Section 230 works, and in doing so (as with FOSTA) it is likely to make the very real problem of CSAM worse, not better. Section 230 gives companies the flexibility to try different approaches to dealing with various content moderation challenges. It allows for greater and greater experimentation and adjustments as they learn what works -- without fear of liability for any "failure." Removing Section 230 protections does the opposite. It says if you do anything, you may face crippling legal liability. This actually makes companies less willing to do anything that involves trying to seek out, take down, and report CSAM because of the greatly increased liability that comes with admitting that there is CSAM on your platform to search for and deal with.

EARN IT gets the problem exactly backwards. It disincentivizes action by companies, because the vast majority of actions will actually increase rather than decrease liability. As Eric Goldman wrote two years ago, this version of EARN IT doesn't penalize companies for CSAM; it penalizes them for (1) not magically making all CSAM disappear, (2) knowing too much about CSAM (i.e., it tells them to stop looking for it and taking it down), or (3) not exiting the industry altogether (as we saw a bunch of dating sites do post-FOSTA).

EARN IT is based on the extremely faulty assumption that internet companies don't care about CSAM and need more incentive to care, rather than on the real problem: CSAM has always been a huge problem, and stopping it requires actual law enforcement work focused on the producers of that content. By threatening websites with massive liability if they make a mistake, the bill actually makes law enforcement's job harder, because companies will be less able to work with law enforcement. This is not theoretical. We already saw exactly this problem with FOSTA, where multiple law enforcement agencies have said that FOSTA made their job harder because they can no longer find the information they need to stop sex traffickers. EARN IT creates the exact same problem for CSAM.

So the end result is that, by misunderstanding Section 230 and misunderstanding internet companies' existing willingness to fight CSAM, EARN IT will undoubtedly make the CSAM problem worse: it will be more difficult for companies to track CSAM down and report it, and more difficult for law enforcement to track down and arrest those actually responsible for it. It's a very, very bad and dangerous bill -- and that's before we even get to the issue of encryption!

EARN IT is still very dangerous for encryption

EARN IT supporters claim they "fixed" the threat to encryption in the original bill by using text similar to Senator Leahy's amendment to say that using encryption cannot "serve as an independent basis for liability." But, the language still puts encryption very much at risk. As we've seen, the law enforcement/political class is very quick to want to (falsely) blame encryption for CSAM. And by saying that encryption cannot serve as "an independent basis" for liability, that still leaves open the door to using it as one piece of evidence in a case under EARN IT.

Indeed, one of the changes from the 2020 bill is that, immediately after saying encryption can't be an independent basis for liability, it adds a new section that effectively walks back the encryption protections. The new section says: "Nothing in [the part that says encryption isn't a basis for liability] shall be construed to prohibit a court from considering evidence of actions or circumstances described in that subparagraph if the evidence is otherwise admissible." In other words, as long as anyone bringing a case under EARN IT can point to something that is not related to encryption, they can point to the use of encryption as additional evidence of liability for CSAM on the platform.

Again, the end result is drastically increased liability for the use of encryption. While no one will be able to use encryption alone as evidence, as long as they point to one other thing -- such as a failure to find a single piece of CSAM -- they can bring the encryption evidence back in and suggest (incorrectly) some sort of pattern or willful blindness.

And this doesn't even touch on what will come out of the "committee" and its best practices recommendations, which very well might include an attack on end-to-end encryption.

The end result is that (1) EARN IT attacks a problem that doesn't exist (the use of Section 230 to avoid responsibility for CSAM), (2) EARN IT will make the actual problem of CSAM worse by making it much more risky for internet companies to fight CSAM, and (3) EARN IT puts encryption at risk by increasing the potential liability of any company that offers encryption.

It's a bad and dangerous bill and the many, many Senators supporting it for kicks and headlines should be ashamed of themselves.

Mike Masnick

Daily Deal: The Stellar Utility Software Bundle

2 years 9 months ago

The Stellar Utility Software Bundle has what you need to recover data, reinforce security, erase sensitive documents, and organize photos. It features Stellar Data Recovery Standard Windows, Ashampoo Backup Pro 15, Ashampoo WinOptimizer 19, InPixio Photo Editor v9, Nero AI Photo Tagger Manager, and BitRaser File Eraser. It is on sale for $39.95.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

ID.me Finally Admits It Runs Selfies Against Preexisting Databases As IRS Reconsiders Its Partnership With The Company

2 years 9 months ago

Tech company ID.me has made amazing inroads with government customers over the past several months. Some of this is due to unvetted claims by the company's CEO, Blake Hall, who has asserted (without evidence) that the federal government lost $400 billion to fraudulent COVID-related claims in 2020. He also claimed (without providing evidence) that ID.me's facial recognition tech was sturdy, sound, accurate, and backstopped by human review.

These claims were made after it became apparent the AI was somewhat faulty, resulting in people being locked out of their unemployment benefits in several states. This was a problem, considering ID.me was by then being used by 27 states to handle disbursement of various benefits. And it was bound to get worse, if for no other reason than that ID.me would be expected to handle an entire nation of beneficiaries, thanks to its contract with the IRS.

The other problem is the CEO's attitude towards reported failures. He has yet to produce anything that backs up his $400 billion fraud claim, and when confronted with mass failures at the state level, he has chosen to blame them on the actions of fraudsters, rather than on people simply being denied access to benefits due to imperfect selfies.

Another of Hall's claims has now been walked back, prompted by increased scrutiny of his company's activities. First, some context: the company's AI has never been tested by an outside party, which means any accuracy claims should be given some serious side-eye until they've been independently verified.

But Hall also claimed the company wasn't using any existing databases to match faces, insinuating that it relied on 1:1 matching to verify someone's identity. This couldn't possibly be true for all benefit seekers: some had never previously uploaded a photo to the company's servers, yet were rejected when ID.me claimed not to find a match.

It's obvious the company was using 1:many matching, which carries with it a bigger potential for failure, as well as the inherent flaws of almost all facial recognition tech: the tendency to be less reliable when dealing with women and minorities.
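The 1:1/1:many distinction matters mechanically, not just rhetorically. Here's a minimal sketch of the two operations over toy face embeddings; ID.me's actual pipeline is proprietary, so the threshold, names, and database below are illustrative assumptions.

```python
# Hedged sketch of 1:1 verification vs. 1:many identification using
# toy "embeddings" (random vectors here; real systems derive them
# from a face-recognition model).
import numpy as np

rng = np.random.default_rng(0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1:1 verification: compare the new selfie against ONE enrolled photo
# of the claimed identity. One comparison, one chance of error.
def verify(selfie: np.ndarray, enrolled: np.ndarray, thresh: float = 0.8) -> bool:
    return cosine(selfie, enrolled) >= thresh

# 1:many identification: search the selfie against EVERY face in a
# database, returning the best match above the threshold (or None).
def identify(selfie: np.ndarray, database: dict, thresh: float = 0.8):
    scores = {name: cosine(selfie, emb) for name, emb in database.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= thresh else None

dim = 128
database = {f"person_{i}": rng.normal(size=dim) for i in range(10_000)}
selfie = rng.normal(size=dim)

print(verify(selfie, database["person_0"]))   # one comparison
print(identify(selfie, database))             # 10,000 comparisons
```

At any fixed per-comparison false-match rate, 1:many multiplies the chances of a wrong "hit" by the size of the database, which is why it carries the bigger potential for failure.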

This increased outside scrutiny of ID.me has forced CEO Blake Hall to come clean. And it started with his own employees pointing out how continuing to maintain this line of "1-to-1" bullshit would come back to haunt the company. Internal chats obtained by CyberScoop show employees imploring Hall to be honest about the company's practices before his dishonesty caused it any more damage.

“We could disable the 1:many face search, but then lose a valuable fraud-fighting tool. Or we could change our public stance on using 1:many face search,” an engineer wrote in a message posted to a company Slack channel on Tuesday. “But it seems we can’t keep doing one thing and saying another as that’s bound to land us in hot water.”

The internal messages also imply that the company discussed the use of 1:many with the IRS in a meeting.

Those messages had a direct effect: Blake Hall issued a LinkedIn post that admitted the company used 1:many verification, which indicates the company also relies on outside databases to verify identity.

In the Wednesday LinkedIn post Hall said that 1:many verification is used “once during enrollment” and “is not tied to identity verification.”

“It does not block legitimate users from verifying their identity, nor is it used for any other purpose other than to prevent identity theft,” he writes.

Hall's post hedges things quite a bit by insinuating that any failures to access benefits are the result of malicious fraudsters, rather than of any flaws in ID.me's tech. But this belated honesty -- along with the company's multiple failures at the state level -- has caused the IRS to reconsider its reliance on ID.me's AI. (Archived link here.)

The Treasury Department is reconsidering the Internal Revenue Service’s reliance on facial recognition software ID.me for access to its website, an official said Friday amid scrutiny of the company’s collection of images of tens of millions of Americans’ faces.

Treasury and the IRS are looking into alternatives to ID.me, the department official said, and the agencies are in the meantime attentive to concerns around the software.

This doesn't mean the IRS has divested itself of ID.me completely. At the moment, it's only doing some shopping around. Filing your taxes online still means subjecting yourself to ID.me's verification software for the time being.

A recent blog post on ID.me's site explains how the company verifies identity and names the algorithms it relies on to match faces, which include Paravision (which has been tested by NIST) and Amazon's Rekognition, a product Amazon took off the law enforcement market in 2020, perhaps sensing the public's reluctance to embrace even more domestic surveillance tech.

This may be too little too late for ID.me. Its refusal to engage honestly and transparently with the public while gobbling up state and federal government contracts has expanded the scrutiny it faces well past the Extremely Online. Senator Ron Wyden wants to know why the IRS has made ID.me the only option for online filing.

I’m very disturbed that Americans may have to submit to a facial recognition system, wait on hold for hours, or both, to access personal data on the IRS website. While e-filing returns remain unaffected, I’m pushing the IRS for greater transparency on this plan.

But e-filing is affected. As the IRS's spokesperson noted in a statement to Bloomberg, ID.me is still standing between e-filers and e-filing.

[IRS spokesperson Barbara] LaManna noted that any taxpayer who does not want to use ID.me can opt against filing his or her taxes online.

It may be true that people with existing accounts might be able to route around this tech impediment, but new filers are still forced to interact with ID.me to set up accounts for e-filing. If spotty state interactions created national headlines, just wait until a nation of millions starts putting ID.me's tech through its paces.

Tim Cushing

Suicide Hotline Collected, Monetized The Data Of Desperate People, Because Of Course It Did

2 years 9 months ago

Another day, another privacy scandal that likely ends with nothing changing.

Crisis Text Line, one of the nation's largest nonprofit support options for the suicidal, is in some hot water. A Politico report last week highlighted how the organization has been caught collecting and monetizing the data of callers... to create and market customer service software. More specifically, Crisis Text Line says it "anonymizes" some user and interaction data (ranging from the frequency certain words are used, to the type of distress users are experiencing) and sells it to a for-profit partner named Loris.ai. Crisis Text Line has a minority stake in Loris.ai, and gets a cut of its revenues in exchange.

As we've seen in countless privacy scandals before this one, the idea that this data is "anonymized" is once again held up as some kind of get out of jail free card:

"Crisis Text Line says any data it shares with that company, Loris.ai, has been wholly “anonymized,” stripped of any details that could be used to identify people who contacted the helpline in distress. Both entities say their goal is to improve the world — in Loris’ case, by making “customer support more human, empathetic, and scalable."

But as we've noted more times than I can count, "anonymized" is effectively a meaningless term in the privacy realm. Study after study after study has shown that it's relatively trivial to identify a user's "anonymized" footprint when that data is combined with a variety of other datasets. For a long time the press couldn't be bothered to point this out, something that's thankfully starting to change.
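For the unfamiliar, the attack those studies describe is usually just a join. Here's a minimal sketch of a "linkage attack," assuming invented records: strip the names from a sensitive dataset, and a public dataset (voter rolls, data-broker files) that shares the surviving quasi-identifiers puts them right back.

```python
# Hedged sketch of a re-identification ("linkage") attack. All
# records here are invented for illustration.
import pandas as pd

# "Anonymized" crisis-line-style records: no names, but quasi-
# identifiers survive the scrubbing.
anonymized = pd.DataFrame({
    "zip":        ["60601", "60601", "10001"],
    "birth_year": [1990,     1985,    1990],
    "sex":        ["F",      "M",     "F"],
    "distress":   ["self-harm", "anxiety", "panic"],
})

# Public/commercial dataset carrying the same quasi-identifiers
# alongside names.
public = pd.DataFrame({
    "name":       ["A. Jones", "B. Smith", "C. Lee"],
    "zip":        ["60601",    "60601",    "10001"],
    "birth_year": [1990,        1985,       1990],
    "sex":        ["F",         "M",        "F"],
})

# A plain join on the quasi-identifiers re-attaches names to the
# "anonymized" sensitive records.
reidentified = anonymized.merge(public, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "distress"]])
```

The more columns the "anonymized" data keeps, the fewer people share any given combination, and the more often that join returns exactly one name.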

Also, just like most privacy scandals, the organization caught selling access to this data goes out of its way to portray it as something much different than it actually is. In this case, they're acting as if they're just being super altruistic:

"We view the relationship with Loris.ai as a valuable way to put more empathy into the world, while rigorously upholding our commitment to protecting the safety and anonymity of our texters,” Rodriguez wrote. He added that "sensitive data from conversations is not commercialized, full stop."

Obviously there are layers of dysfunction that have helped normalize this kind of stupidity. One, it's 2022 and we still don't have even a basic privacy law for the internet era that sets out clear guidelines and imposes stiff penalties on negligent companies, nonprofits, and executives. And we don't have a basic law not because it's hard (though writing any decent law certainly isn't easy), but because a parade of large corporations, lobbyists, and revolving door regulators don't want the data monetization party to suffer even a modest drop in revenues from the introduction of modest accountability, transparency, and empowered end users. It's just boring old greed. There's a lot of tap dancing that goes on to pretend that's not the reason, but it doesn't make it any less true.

We also don't adequately fund mental health care in the United States, forcing desperate people to reach out to startups that clearly don't fully understand the scope of their responsibility. We also don't adequately fund and resource our privacy regulators at agencies like the FTC. And even when the FTC does act (which it often can't when it comes to nonprofits), the penalties and fines are often pathetic relative to the scale of the money being made.

Even before these problems are considered, you have to factor in that the entire adtech space reaches across industries from big tech to telecom, and is designed specifically to be a convoluted nightmare that makes oversight as difficult as possible. The end result is just about what you'd expect: a steady parade of scandals (like the other big scandal last week, in which gay/bi dating and Muslim prayer apps were caught selling user location data) that briefly generate a few headlines and furrowed eyebrows without any meaningful change.

Karl Bode