
Techdirt

Daily Deal: The Complete 2022 Microsoft Office Master Class Bundle


The Complete 2022 Microsoft Office Master Class Bundle has 14 courses to help you learn all you need to know about MS Office products to help boost your productivity. Courses cover SharePoint, Word, Excel, Access, Outlook, Teams, and more. The bundle is on sale for $75.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.


Penguin Random House Demands Removal Of Maus From Digital Library Because The Book Is Popular Again


We've said it over and over again, if libraries did not exist today, there is no way publishers would allow them to come into existence. We know this, in part, because of their attempts to stop libraries from lending ebooks, and to price ebooks at ridiculous markups to discourage libraries, and their outright claims that libraries are unfair competition. And we won't even touch on their lawsuit over digital libraries.

Anyway, in other book news, you may have heard recently about how a Tennessee school board banned Art Spiegelman's classic graphic novel about the Holocaust, Maus, from being taught in an eighth-grade English class. Some people called this a ban; others noted the book is still available, so it's not a "ban." To my mind, school boards are not the teachers, and teachers should be able to come up with their own curriculum, as they know best what will educate their students. Also, Maus is a fantastic book, and the claim that it was banned over "rough, objectionable language" and nudity is utter nonsense.

Either way, Maus is now back atop various best seller lists, as the controversy has driven sales. Spiegelman is giving fun interviews again where he says things like "well, who's the snowflake now?" And we see op-eds about how the best way to get kids not to read a book... is to assign it in English class.

But, also, we have publishers getting into the banning business themselves... by trying to capitalize on the sudden new interest in Maus.

Penguin Random House doesn't want this new interest in Maus to lead to... people taking it out of the library rather than buying a copy. They're now abusing copyright law to demand the book be removed from the Internet Archive's lending library, and they flat out admit that they're doing so for their own bottom line:

A few days ago, Penguin Random House, the publisher of Maus, Art Spiegelman's Pulitzer Prize-winning graphic novel about the Holocaust, demanded that the Internet Archive remove the book from our lending library. Why? Because, in their words, "consumer interest in 'Maus' has soared" as the result of a Tennessee school board's decision to ban teaching the book. By its own admission, to maximize profits, a Goliath of the publishing industry is forbidding our non-profit library from lending a banned book to our patrons: a real live digital book-burning.

This is just blatant greed laid bare. As the article notes, whatever problems US copyright law has, it has enshrined the concept of libraries, and the right to lend out books as a key element of the public interest. And the publishers -- such as giants like Penguin Random House -- would do anything possible to stamp that right out.

Mike Masnick

Unknown American VC Firm Apparently Looking To Acquire NSO Group, Limit It To Selling To Five Eyes Countries


NSO Group -- the embattled, extremely controversial Israeli phone malware developer -- finally has some good news to report. It may have a white knight riding to its rescue -- a somewhat unknown American venture capital firm that could help it pay its bills and possibly even rehabilitate its image.

Integrity Partners, which according to its website deals with investments in the fields of mobility and digital infrastructure, is managed by partners Chris Gaertner, Elad Yoran, Pat Wilkinson and Thomas Morgan.

According to the document of intentions, they will establish a company called Integrity Labs that would acquire control of NSO. It would also stream $300 million to the firm, in order to rebuild the company.

It's not all good news, at least not at the outset. The VC firm has pledged to lobby the US government on NSO's behalf to get the recent blacklist lifted, which means NSO would once again be able to purchase US tech solely for the purpose of developing exploits to use against that tech. If Integrity Partners has any interest in remaining true to its name, it should probably backburner this effort until it has engaged in some reputation rehabilitation.

Fortunately, it appears the VC firm is also interested in getting NSO back on the right track. Following neverending reports of NSO exploits being used to target journalists, political opponents, ex-wives, dissidents, and religious leaders, the government of Israel drastically reduced the number of countries NSO could sell to.

Integrity Labs aims to limit that list even further.

Instead of the current 37 clients, the company will reduce its sales to only five clients: the Five Eyes Anglosphere intelligence alliance of New Zealand, the United States, Australia, Great Britain and Canada. The company would initially focus on defensive cyber products as part of its rebranding effort.

With these restrictions in place -- and the United States on the preferred customer list -- it should be pretty easy to get the blacklist lifted. It's not that none of these countries would ever abuse malware to engage in domestic surveillance, but it's a far better list of potential clients than the one NSO had compiled over the last several years, which included a number of known habitual human rights abusers.

But there are still reasons to be concerned. Much of what happens to NSO after this acquisition occurs will still be shrouded in secrecy. There may be a claimed focus on defensive tech, but offensive exploits have always been NSO's main money makers and it will be much more difficult to remain profitable without this revenue stream.

Then there's the chance NSO will enter into a partnership with a different company that may not have the same altruistic goals, which means the malware developer will be able to continue limping along as the poster child for irresponsible sales and marketing. And the market for powerful malware will continue to exist. It will just end up being handled by companies that have remained mostly off the world press radar.

Also, there's the fact that there's very little information about who "Integrity Partners" actually is. While the firm's website lists its partners -- all of whom mention their military experience -- there is no evidence of a portfolio, or any evidence of previous investments. While the firm is listed in Crunchbase (the main database tracking VCs and startups), it shows no investments, and only mentions a single fund the firm has raised... for $350,000. It seems unlikely that that's enough to buy NSO Group.

For now, NSO's financial well-being and reputation are in tatters. The company cannot meet its debt obligations without outside help, and its ruinous months-long streak of negative press presents challenges even a timely influx of cash may not be able to reverse. But if it can rebrand and retool to provide defensive tech to a very short list of customers, it may be able to survive its precipitous plunge into the "Tech's Most Hated" pool.

Tim Cushing

Minneapolis Police Officers Demanded No-Knock Warrant, Killed Innocent Gunowner Nine Seconds After Entering Residence


The city of Minneapolis, Minnesota is temporarily ending the use of no-knock warrants following the killing of 22-year-old Amir Locke by Minneapolis police officers. The city's mayor, Jacob Frey, has placed a moratorium on these warrants until the policy can be reviewed by Professor Pete Kraska of Eastern Kentucky University and anti-police violence activist DeRay McKesson.

This comes as too little too late for Locke and his surviving family. The entire raid was caught on body cam and it shows Amir Locke picking up a gun (but not pointing it at officers) after he was awakened by police officers swarming into the residence.

Locke, who was not a target of the investigation, was sleeping in the downtown Minneapolis apartment of a relative when members of a Minneapolis police SWAT team burst in shortly before 7 a.m. Wednesday. Footage from one of the officers' body cameras showed police quietly unlocking the apartment door with a key before barging inside, yelling "Search warrant!" as Locke lay under a blanket on the couch. An officer kicked the couch, Locke stirred and was shot by officer Mark Hanneman within seconds as Locke held a firearm in his right hand.

Locke was shot once in the wrist and twice in the chest. He died thirteen minutes after the shooting. As you may have noticed from the preceding paragraph, Locke was not a suspected criminal. And for those who may argue simply being within reach of a firearm is justification for shooting, Locke's handgun was legal and he had a concealed carry permit. His justifiable reaction to people barging into an apartment unannounced is somehow considered less justifiable than the officers' decision to kill him.

In most cases, that's just the way it goes, which -- assuming the warrant dotted all i's and crossed all t's -- means the Second Amendment is subservient to other constitutional amendments, like the Fourth. Here's how Scott Greenfield explains this omnipresent friction in a nation where the right to bear arms is respected… but only up to a point:

The Second Amendment issue is clear. Locke had a legal gun and, upon being awoken in the night, grabbed it. He didn’t point it at anyone or put his finger on the trigger, but it was in his hand. A cop might explain that it would only take a fraction of a second for that to change, if he was inclined to point it at an officer, put his finger on the trigger and shoot. But he didn’t.

This conundrum has been noted and argued before, that if there is a fundamental personal right to keep and bear arms, and that’s what the Supreme Court informs us is our right, then the exercise of that constitutional right cannot automatically give right to police to execute you for it. The Reasonably Scared Cop Rule cannot co-exist with the Right to Keep and Bear Arms.

"Cannot co-exist." This means that, in most cases, the citizen bearing arms generally ceases to exist (along with this right) when confronted by a law enforcement officer who believes they are reasonably afraid.

There's another point to Greenfield's post that's worth reading, but one we won't discuss further in this post: the NRA's utter unwillingness to express outrage when the right to bear arms is converted to the right to remain permanently silent by police officers who have deliberately put themselves in a situation that maximizes their fears, no matter how unreasonable those fears might ultimately turn out to be.

But this is a situation that could have been avoided. A knock-and-announce warrant would have informed Locke (who was sleeping at a relative's house) that law enforcement was outside. Since Locke legally owned his gun and held a concealed carry permit, it's highly unlikely this announcement would have resulted in him opening fire on officers.

It didn't have to be this way, but the Minneapolis Police Department insisted this couldn't be handled any other way.

A law enforcement source, who spoke on the condition of anonymity because of the sensitive nature of the case, said that St. Paul police filed standard applications for search warrant affidavits for three separate apartments at the Bolero Flats Apartment Homes, at 1117 S. Marquette Av., earlier this week.

But Minneapolis police demanded that, if their officers were to execute the search within its jurisdiction, St. Paul police first secure "no-knock" warrants instead. MPD would not have agreed to execute the search otherwise, according to the law enforcement source.

If it had been handled the St. Paul way, Locke might still be alive. There's no evidence here indicating deployment of a knock-and-announce warrant would have made things more dangerous for the officers. If this sort of heightened risk presented itself frequently, the St. Paul PD would respond accordingly when seeking warrants.

St. Paul police very rarely execute no-knock warrants because they are considered high-risk. St. Paul police have not served such a warrant since 2016, said department spokesman Steve Linders.

Contrast that with the Minneapolis PD, which appears to feel a majority of warrant service should be performed without niceties like knocking or announcing their presence.

A Star Tribune review of available court records found that MPD personnel have filed for, and obtained, at least 13 applications for no-knock or nighttime warrants since the start of the year — more than the 12 standard search warrants sought in that same span.

This is likely an undercount, the Star Tribune notes. Many warrants are filed under seal and are still inaccessible. But it does track with the MPD's deployment stats. According to records, the MPD carries out an average of 139 no-knock warrants a year.

This happens despite Minneapolis PD policy specifically stating that officers must identify themselves as police and announce their purpose (i.e., "search warrant") before entering. That rule applies even when officers have secured a no-knock warrant. To bypass the announcement requirement, officers need more than a judge's permission; they also need direct authorization from the Chief of Police or their designee, because no-knock warrants were severely restricted by police reforms passed in 2020. But it appears those reforms have done little to change the way the MPD handles its warrant business.

We'll see if the mayor's moratorium is more effective than the tepid reforms enacted following the killing of George Floyd by Officer Derek Chauvin. The undetectable change in tactics following the 2020 reforms doesn't exactly inspire confidence that a citywide moratorium will keep MPD officers from showing up unannounced and killing people during the ensuing confusion. It only took nine seconds for officers to end Amir Locke's life. Given what's been observed here, it will apparently take several years (and several lives) before the Minneapolis PD is willing to alter its culture and its day-to-day practices.

Tim Cushing

The Top Ten Mistakes Senators Made During Today's EARN IT Markup


Today, the Senate Judiciary Committee unanimously approved the EARN IT Act and sent that legislation to the Senate floor. As drafted, the bill will be a disaster. Only by monitoring what users communicate could tech services avoid vast new liability, and only by abandoning, or compromising, end-to-end encryption, could they implement such monitoring. Thus, the bill poses a dire threat to the privacy, security and safety of law-abiding Internet users around the world, especially those whose lives depend on having messaging tools that governments cannot crack. Aiding such dissidents is precisely why it was the U.S. government that initially funded the development of the end-to-end encryption (E2EE) now found in Signal, Whatsapp and other such tools. Even worse, the bill will do the opposite of what it claims: instead of helping law enforcement crack down on child sexual abuse material (CSAM), the bill will actually help the most odious criminals walk free.

As with the July 2020 markup of the last Congress’s version of this bill, the vote was unanimous. This time, no amendments were adopted; indeed, none were even put up for a vote. We knew there wouldn’t be much time for discussion because Sen. Dick Durbin kicked off the discussion by noting that Sen. Lindsey Graham would have to leave soon for a floor vote. 

The Committee didn’t bother holding a hearing on the bill before rushing it to markup. The one and only hearing on the bill occurred just six days after its introduction back in March 2020. The Committee thereafter made major (but largely cosmetic) changes to the bill, leaving its Members more confused than ever about what the bill actually does. Today’s markup was a singular low-point in the history of what is supposed to be one of the most serious bodies in Congress. It showed that there is nothing remotely judicious about the Judiciary Committee; that most of its members have little understanding of the Internet and even less of how the, ahem, judiciary actually works; and, saddest of all, that they simply do not care.

Here are the top ten legal and technical mistakes the Committee made today.

Mistake #1: “Encryption Is Not Threatened by This Bill”

Strong encryption is essential to online life today. It protects our commerce and our communications from the prying eyes of criminals, hostile authoritarian regimes and other malicious actors.

Sen. Richard Blumenthal called encryption a “red herring,” relying on his work with Sen. Leahy’s office to implement language from Leahy's 2020 amendment to the previous version of EARN IT (even as he admitted to a reporter that encryption was a target). Leahy’s 2020 amendment aimed to preserve companies’ ability to offer secure encryption in their products by providing that a company could not be found in violation of the law because it utilizes secure encryption, lacks the ability to decrypt communications, or declines to undermine the security of its encryption (for example, by building in a backdoor for use by law enforcement).

But while the 2022 EARN IT Act contains the same list of protected activities, the authors snuck in new language that undermines that very protection. This version of the bill says that those activities can’t be an independent basis of liability, but that courts can consider them as evidence while proving the civil and criminal claims permitted by the bill’s provisions. That’s a big deal. EARN IT opens the door to liability under an enormous number of state civil and criminal laws, some of which require (or could require, if state legislatures so choose) a showing that a company was only reckless in its actions—a far lower showing than federal law’s requirement that a defendant have acted “knowingly.” If a court can consider the use of encryption, or failure to create security flaws in that encryption, as evidence that a company was “reckless,” it is effectively the same as imposing liability for encryption itself. No sane company would take the chance of being found liable for transmitting CSAM; they’ll just stop offering strong encryption instead. 

Mistake #2: The Bill’s Sponsors Readily Conceded that EARN IT Would Coerce Monitoring for CSAM

EARN IT’s sponsors repeatedly complained that tech companies aren’t doing enough to monitor for CSAM—and that their goal was to force them to do more. As Sen. Blumenthal noted, free software (PhotoDNA) makes it easy to detect CSAM, and it’s simply outrageous that some sites aren’t even using it. He didn’t get specific but we will: both Parler and Gettr, the alternative social networks favored by the MAGA right, have refused to use PhotoDNA. When asked about it, Parler’s COO told The Washington Post: “I don’t look for that content, so why should I know it exists?" The Stanford Internet Observatory’s David Thiel responded:

This, frankly, is just reckless. You cannot run a social media site, particularly one targeted to include content forbidden from mainstream platforms, solely with voluntary flagging. Implementing PhotoDNA to prevent CEI is the bare minimum for a site allowing image uploads. 9/10

— David Thiel (@elegant_wallaby) August 12, 2021

We agree completely—morally. So why, as Berin asked when EARN IT was first introduced, doesn’t Congress just directly mandate the use of such easy filtering tools? The answer lies in understanding why Parler and Gettr can get away with this today. Back in 2008, Congress required tech companies that become aware of CSAM to report it immediately to NCMEC, the quasi-governmental clearinghouse that administers the database of CSAM hashes used by PhotoDNA to identify known CSAM. Instead of requiring companies to monitor for CSAM, Congress said exactly the opposite: nothing in 18 U.S.C. § 2258A “shall be construed to require a provider to monitor [for CSAM].”
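The matching flow the preceding paragraphs describe is straightforward in outline. PhotoDNA itself is a proprietary *perceptual* hash that tolerates resizing and re-encoding; as a rough, simplified sketch of the same check-against-a-known-hash-database idea, here is a toy Python version using exact SHA-256 matching against a hypothetical hash set (the set contents and function names are illustrative, not any real API):

```python
import hashlib

# Hypothetical hash database standing in for NCMEC's list of known-image
# hashes. Real PhotoDNA uses a perceptual hash that survives resizing and
# re-encoding; plain SHA-256 only matches byte-identical files, so this is
# a deliberately simplified sketch of the matching flow, not the real thing.
KNOWN_HASHES = {
    # SHA-256 of the bytes b"test", used as a stand-in "known" entry
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_digest(data: bytes) -> str:
    """Hash an uploaded file's bytes to a hex digest."""
    return hashlib.sha256(data).hexdigest()

def matches_known_image(data: bytes) -> bool:
    """Flag an upload if its digest appears in the known-hash database."""
    return file_digest(data) in KNOWN_HASHES

print(matches_known_image(b"test"))  # True: sha256(b"test") is in the set
```

The point of this architecture is that a service never needs to hold the underlying images, only the hash list, and scanning is cheap enough that refusing to run it (as Parler and Gettr have) is a policy choice, not a technical limitation.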

Why? Was Congress soft on child predators back then? Obviously not. Just the opposite: they understood that requiring tech companies to conduct searches for CSAM would make them state actors subject to the Fourth Amendment’s warrant requirement—and they didn’t want to jeopardize criminal prosecutions. 

Conceding that the purpose of the EARN IT Act is to coerce searches for CSAM is a mistake, a colossal one, because it invites courts to rule that such searches weren't voluntary.

Mistake #3: The Leahy Amendment Alone Won’t Protect Privacy & Security, or Avoid Triggering the Fourth Amendment

While Sen. Leahy’s 2020 amendment was a positive step towards protecting the privacy and security of online communications, and Lee’s proposal today to revive it is welcome, it was always an incomplete solution. While it protected companies against liability for offering encryption or failing to undermine the security of their encryption, it did not protect the refusal to conduct monitoring of user communications. A company offering E2EE products might still be coerced into compromising the security of its devices by scanning user communications “client-side” (i.e., on the device) prior to encrypting sent communications or after decrypting received communications. 

Apple recently proposed such a client-side scanning technology, raising concerns from privacy advocates and civil society groups. For its part, Apple assured that safeguards would limit use of the system to known CSAM and prevent the capability from being abused by foreign governments or rogue actors. But the capacity to conduct such surveillance presents an inherent risk of exploitation by malicious actors. Some companies may be able to successfully safeguard such surveillance architecture from misuse; resources and approaches will vary across companies, however, and it is a virtual certainty that not all of them will succeed. And if such scanning is done under coercion, there is a risk that it will be ruled state action requiring a warrant under the Fourth Amendment.

Our letter to the Committee proposes an easy way to expand the Leahy amendment to ensure that companies won’t be held liable for not monitoring user content: borrow language directly from Section 2258A(f).

Mistake #4: EARN IT’s Sponsors Just Don’t Understand the Fourth Amendment Problem

Sen. Blumenthal insisted, repeatedly, that EARN IT contained no explicit requirement not to use encryption. The original version of the bill would, indeed, have allowed a commission to develop “best practices” that would be “required” as conditions of “earning” back the Section 230 immunity tech companies need to operate—hence the bill’s name. But dropping that concept didn’t really make the bill less coercive because the commission and its recommendations were always a sideshow. The bill has always coerced monitoring of user communications—and, to do that, the abandonment or bypassing of strong encryption—indirectly, through the threat of vast legal liability for not doing enough to stop the spread of CSAM. 

Blumenthal simply misunderstands how the courts assess whether a company is conducting unconstitutional warrantless searches as a “government actor.” “Even when a search is not required by law, … if a statute or regulation so strongly encourages a private party to conduct a search that the search is not ‘primarily the result of private initiative,’ then the Fourth Amendment applies.” U.S. v. Stevenson, 727 F.3d 826, 829 (8th Cir. 2013) (quoting Skinner v. Railway Labor Executives' Assn, 489 U.S. 602, 615 (1989)). In that case, the court found that AOL was not a government actor because it “began using the filtering process for business reasons: to detect files that threaten the operation of AOL's network, like malware and spam, as well as files containing what the affidavit describes as “reputational” threats, like images depicting child pornography.” AOL insisted that it “operate[d] its file-scanning program independently of any government program designed to identify either sex-offenders or images of child pornography, and the government never asked AOL to scan Stevenson's e-mail.” Id. By contrast, every time EARN IT’s supporters explain their bill, they make clear that they intend to force companies to search user communications in ways they’re not doing today.

Mistake #2 Again: EARN IT’s Sponsors Make Clear that Coercion Is the Point

In his opening remarks today, Sen. Graham didn’t hide the ball:

"Our goal is to tell the social media companies 'get involved and stop this crap. And if you don't take responsibility for what's on your platform, then Section 230 will not be there for you.' And it's never going to end until we change the game."

Sen. Chris Coons added that he is “hopeful that this will send a strong signal that technology companies … need to do more.” And so on and so forth.

If they had any idea what they were doing, if they understood the Fourth Amendment issue, these Senators would never admit that they’re using liability as a cudgel to force companies to take affirmative steps to combat CSAM. By making their intentions unmistakable, they’ve given the most vile criminals exactly what they need to challenge the admissibility of CSAM evidence resulting from companies “getting involved” and “doing more.” Though some companies, concerned with negative publicity, may tell courts that they conducted searches of user communications for “business reasons,” we know what defendants will argue: the companies’ “business reason” is avoiding the wide, loose liability that EARN IT subjected them to. EARN IT’s sponsors said so.

Mistake #5: EARN IT’s Sponsors Misunderstand How Liability Would Work

Except for Sen. Mike Lee, no one on the Committee seemed to understand what kind of liability rolling back Section 230 immunity, as EARN IT does, would create. Sen. Blumenthal repeatedly claimed that the bill requires actual knowledge. One of the bill’s amendments (the new Section 230(e)(6)(A)) would, indeed, require actual knowledge by enabling civil claims under 18 U.S.C. § 2255 “if the conduct underlying the claim constitutes a violation of section 2252 or section 2252A,” both of which contain knowledge requirements. This amendment is certainly an improvement over the original version of EARN IT, which would have explicitly allowed 2255 claims under a recklessness standard. 

But the two other changes to Section 230 clearly don’t require knowledge. As Sen. Lee pointed out today, a church could be sued, or even prosecuted, simply because someone posted CSAM on its bulletin board. Multiple existing state laws already create liability based on something less than actual knowledge of CSAM. As Lee noted, a state could pass a law creating strict liability for hosting CSAM. Allowing states to hold websites liable for recklessness (or even less) while claiming that the bill requires actual knowledge is simply dishonest. All these less-than-knowledge standards will have the same result: coercing sites into monitoring user communications, and into abandoning strong encryption as an obstacle to such monitoring. 

Blumenthal made it clear that this is precisely what he intends, saying: “Other states may wish to follow [those using the “recklessness” standard]. As Justice Brandeis said, states are the laboratories of democracy … and as a former state attorney general I welcome states using that flexibility. I would be loath to straightjacket them in their adoption of different standards.”

Mistake #6: “This Is a Criminal Statute, This Is Not Civil Liability”

So said Sen. Lindsey Graham, apparently forgetting what his own bill says. Sen. Dianne Feinstein added her own misunderstanding, saying that she “didn’t know that there was a blanket immunity in this area of the law.” But if either of those statements were true, the EARN IT Act wouldn’t really do much at all. Section 230 has always explicitly carved out federal criminal law from its immunities; companies can already be charged for knowing distribution of child sexual abuse material (CSAM) or child sexual exploitation (CSE) under federal criminal statutes. Indeed, Backpage and its founders were criminally prosecuted even without SESTA’s 2017 changes to Section 230. If the federal government needs assistance in enforcing those laws, it could adopt Sen. Mike Lee’s amendment to permit state criminal prosecutions when the conduct would constitute a violation of federal law. Better yet, the Attorney General could use an existing federal law (28 U.S.C. § 543) to deputize state, local, and tribal prosecutors as “special attorneys” empowered to prosecute violations of federal law. Why no AG has bothered to do so yet is unclear.

What is clear is that EARN IT isn’t just about criminal law. EARN IT expressly carves out civil claims under certain federal statutes, and also under whatever state laws arguably relate to “the advertisement, promotion, presentation, distribution, or solicitation of child sexual abuse material” as defined by federal law. Those laws can and do vary, not only with respect to the substance of what is prohibited, but also the mental state required for liability. This expansive breadth of potential civil liability is part of what makes this bill so dangerous in the first place.

Mistake #7: “If They Can Censor Conservatives, They Can Stop CSAM!”

As at the 2020 markup, Sen. Lee seemed to understand most clearly how EARN IT would work, the Fourth Amendment problems it raises, and how to fix at least some of them. A former Supreme Court Clerk, Lee has a sharp legal mind, but he seems to misunderstand much of how the bill would work in practice, and how content moderation works more generally.

Lee complained that, if Big Tech companies can be so aggressive in “censoring” speech they don’t like, surely they can do the same for CSAM. He’s mixing apples and oranges in two ways. First, CSAM is the digital equivalent of radioactive waste: if a platform gains knowledge of it, it must take it down immediately and report it to NCMEC, and it faces stiff criminal penalties if it doesn’t. And while “free speech” platforms like Parler and Gettr refuse to proactively monitor for CSAM (as discussed above), every mainstream service goes out of its way to stamp out CSAM on its unencrypted services. Like AOL in the Stevenson case, they do so for business and reputational reasons.

By contrast, no website even tries to block all “conservative” speech; rather, mainstream platforms must make difficult judgment calls about politically charged content, such as suspending Trump’s account (and only after he incited an insurrection in an attempted coup) or removing misinformation claiming the 2020 election was stolen. Republicans are mad about where tech companies draw such lines.

Second, social media platforms can only moderate content that they can monitor. Signal can’t moderate user content, and that is precisely the point: end-to-end encryption means that no one other than the parties to a communication can see it. Unlike with ordinary communications, which may be protected by lesser forms of “encryption,” the provider isn’t standing in the middle of the communication and doesn’t hold the keys to unlock the messages it passes back and forth. Yes, some users will abuse E2EE to share CSAM, but the alternative is to ban it for everyone. There simply isn’t a middle ground.
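The "provider has no keys" property is worth making concrete. Below is a toy Python sketch, not any real messenger's protocol: textbook Diffie-Hellman over a deliberately tiny (insecure, illustration-only) group, plus a toy keystream cipher. Real systems like Signal use X25519 and a double ratchet, but the structural point is the same: the relay sees only public values and ciphertext, and only the endpoints can derive the message key.

```python
import hashlib
import secrets

# Toy sketch of why an E2EE provider can't read what it relays.
# Textbook Diffie-Hellman with a tiny prime: ILLUSTRATION ONLY, not secure.
P, G = 23, 5  # toy group parameters (5 is a primitive root mod 23)

# Each party keeps a private exponent; only the public values cross the wire.
alice_priv = secrets.randbelow(P - 3) + 2
bob_priv = secrets.randbelow(P - 3) + 2
alice_pub = pow(G, alice_priv, P)  # the provider relays these values...
bob_pub = pow(G, bob_priv, P)      # ...but can't recover the exponents

# Both endpoints derive the same key; the relay, holding only the
# public values, cannot (it would have to solve a discrete log).
alice_key = hashlib.sha256(str(pow(bob_pub, alice_priv, P)).encode()).digest()
bob_key = hashlib.sha256(str(pow(alice_pub, bob_priv, P)).encode()).digest()
assert alice_key == bob_key

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher (NOT secure) to show symmetric use of the key."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

ciphertext = xor_stream(alice_key, b"meet at noon")  # all the provider sees
plaintext = xor_stream(bob_key, ciphertext)          # Bob decrypts end-to-end
print(plaintext)  # round-trips back to b'meet at noon'
```

Breaking this property requires either weakening the math or moving the scan onto the endpoint before encryption or after decryption, which is exactly the client-side scanning pressure discussed under Mistake #3.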

There may indeed be more that some tech companies could do about content they can see—both public content like social media posts and private content like messages (protected by something less than E2EE). But their being aggressive about, say, misinformation about COVID or the 2020 election has nothing whatsoever to do with the cold, hard reality that they can’t moderate content protected by strong encryption.

It’s hard to tell whether Lee understands these distinctions. Maybe not. Maybe he’s just looking to wave the bloody shirt of “censorship” again. Maybe he’s saying the same thing everyone else is saying, essentially: “Ah, yes, but if only Facebook, Apple and Google didn’t use end-to-end encryption for their messaging services, then they could monitor those for CSAM just like they monitor and moderate other content!” Proposing to amend the bill to require actual knowledge under both state and federal law suggests he doesn’t want this result, but who knows?

Mistake #8: Assuming the Fourth Amendment Won’t Require Warrants If It Applies

Visibility to the provider relates to one important legal distinction not discussed at all today—but that may well explain why the bill’s sponsors don’t seem to care about Fourth Amendment concerns. It’s an argument Senate staffers have used to defend the bill since its introduction. Even if compulsion through vast legal liability did make tech companies government actors, the Fourth Amendment requires a warrant only for searches of material for which users have a reasonable expectation of privacy. Kyllo v. United States, 533 U.S. 27, 33 (2001); see Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring). Courts long held that users had no such expectations for digital messages like email held by third parties. 

But that began to change in 2010. If searches of emails trigger the Fourth Amendment—and U.S. v. Warshak, 631 F.3d 266 (6th Cir. 2010) said they do—searches of private messaging certainly would. The entire purpose of E2EE is to give users rock-solid expectations of privacy in their communications. More recently, the Supreme Court has said that, “given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user's claim to Fourth Amendment protection.” Carpenter v. United States, 138 S. Ct. 2206, 2217 (2018). These cases draw the line Sen. Lee is missing: no, of course users don’t have reasonable expectations of privacy in public social media posts—which is what he’s talking about when he points to “censorship” of conservative speech. EARN IT could avoid the Fourth Amendment by focusing on content providers can see, but it doesn’t, because it’s intended to force companies to be able to see all user communications.

Mistake #9: What They Didn’t Discuss: Anonymous Speech

The Committee didn’t discuss how EARN IT would affect speech protected by the First Amendment. No, of course CSAM isn’t protected speech, but the bill would affect lawful speech by law-abiding citizens—primarily by restricting anonymous speech. Critically, EARN IT doesn’t just create liability for trafficking in CSAM. The bill also creates liability for failing to stop communications that “solicit” or “promote” CSAM. Software like PhotoDNA can flag known CSAM (by matching perceptual hashes against NCMEC’s database of known images), but identifying “solicitation” or “promotion” is infinitely more complicated. Every flirtatious conversation between two adult users could be “solicitation” of CSAM—or it might be two adults doing adult things. (Adults sext each other—a lot. Get over it!) But “on the Internet, nobody knows you’re a dog”—and there’s no sure way to distinguish between adults and children.
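The gap between the two kinds of detection can be sketched in a few lines. This is a hedged simplification: real PhotoDNA computes a *perceptual* hash that survives resizing and re-encoding, while `hashlib.sha256` below only matches byte-identical files, and the blocklist contents are hypothetical:

```python
# Simplified sketch of hash-matching known images vs. judging free-form text.
import hashlib

def image_hash(data: bytes) -> str:
    # Stand-in for a perceptual hash like PhotoDNA's.
    return hashlib.sha256(data).hexdigest()

# In production this set would be derived from NCMEC's database of known CSAM.
known_bad_hashes = {image_hash(b"hypothetical-known-bad-image-bytes")}

def is_known_csam(upload: bytes) -> bool:
    """Matching a hash flags an exact known image -- a mechanical check."""
    return image_hash(upload) in known_bad_hashes

assert is_known_csam(b"hypothetical-known-bad-image-bytes") is True
assert is_known_csam(b"innocent vacation photo") is False

# There is no equivalent lookup table for "solicitation" or "promotion":
# that requires interpreting ambiguous human conversation, which is why
# basing liability on it sweeps in lawful adult speech.
```

Hash matching answers "is this a known image?"; the bill's "solicit"/"promote" standard asks "what did these people mean?", and no set-membership test answers that.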

The federal government tried to do just that in the Communications Decency Act (CDA) of 1996 (nearly all of which, except Section 230, was struck down) and the Child Online Protection Act (COPA) of 1998. Both laws were found to infringe the First Amendment right to access lawful content anonymously. EARN IT accomplishes much the same thing indirectly, the same way it attacks encryption: basing liability on anything less than actual knowledge means you can be sued for not actively monitoring, or for not age-verifying users, especially when the risks are particularly high (such as when you “should have known” you were dealing with minor users).

Indeed, EARN IT is even more constitutionally suspect. At least COPA focused on content deemed “harmful to minors.” Instead of requiring age-gating for sites that offered porn and sex-related content (e.g., LGBTQ teen health), EARN IT would affect all users of private communications services, regardless of the nature of the content they access or exchange. Again, the point of E2EE is that the service provider has no way of knowing whether messages are innocent chatter or CSAM. 

EARN IT could raise other novel First Amendment problems. Companies could be held liable not only for failing to age-verify all users—a clear First Amendment violation—but also for failing to bar minors from using E2EE services so that their communications can be monitored, for failing to use client-side monitoring on minors’ devices, and even for failing to segregate adults from minors so they can’t communicate with each other.

Without the Lee Amendment, EARN IT leaves states free to base liability on explicitly requiring age-verification or limits on what minors can do. 

Mistake #10: Claiming the Bill Is “Narrowly Crafted”

If you’ve read this far, Sen. Blumenthal’s stubborn insistence that this bill is a “narrowly targeted approach” should make you laugh—or sigh. If he truly believes that, either he hasn’t adequately thought about what this bill really does or he’s so confident in his own genius that he can simply ignore the chorus of protest from civil liberties groups, privacy advocates, human rights activists, minority groups, and civil society—all of whom are saying that this bill is bad policy.

If he doesn’t truly believe what he’s saying, well… that’s another problem entirely.

Bonus Mistake!: A Postscript About the Real CSAM problem

Lee never mentioned that the only significant social media services that don’t take basic measures to identify and block CSAM are Parler, Gettr and other fringe sites celebrated by Republicans as “neutral public fora” for “free speech.” Has any Congressional Republican sent letters to these sites asking why they refuse to use PhotoDNA? 

Instead, Lee did join Rep. Ken Buck in March 2021 to interrogate Apple about its decision to take down the Parler app. The answer: Parler hadn’t bothered setting up any meaningful content moderation system. Only after Parler agreed to start doing some moderation of what appeared in its Apple app (but not on its website) did Apple reinstate the app.

Berin Szoka and Ari Cohn

Court (For Now) Says NY Times Can Publish Project Veritas Documents

2 years 10 months ago

We've talked about the hypocrite grifters who run Project Veritas, who, even when they have legitimate concerns about attacks on their own free speech, ran to court to try to silence the NY Times. Bizarrely, a NY judge granted Project Veritas' demand for prior restraint against the NY Times, wrongly ruling that attorney-client material could not be published.

The NY Times appealed that ruling and now a court has... not overturned the original ruling, but for now said that the NY Times can publish the documents, saying that it will not enforce the original ruling until an appeal can be heard. This is... better than nothing, but fully overturning the original ridiculous ruling would have been much better. Because it was clearly prior restraint. But, at least for now, the prior restraint will not be enforced.

Still, the response from Project Veritas deserves separate comment, because it's just naively stupid:

In a phone interview on Thursday, Mr. O’Keefe said: “Defamation is not a First Amendment-protected right; publishing the other litigants’ attorney-client privileged documents is not a protected First Amendment right.”

While it's accurate that defamation is not protected by the 1st Amendment, he's wrong on the second point: publishing attorney-client communications is -- in most cases -- very much protected. He's fuzzing the lines here by basically arguing that because Project Veritas is, separately, suing the NY Times, the NY Times is barred from publishing any attorney-client privileged material it obtains via standard reporting tactics.

But that fuzzing suggests something that just isn't true: that there's some exception to the 1st Amendment for publishing attorney-client materials. That's wrong. The attorney-client privilege concerns whether certain documents must be disclosed to the other party in litigation. If you can successfully show that the documents are privileged, they don't need to be disclosed to the other party. That's the extent of the privilege. It has no bearing whatsoever on whether someone else who obtains those materials through other means has a right to publish them. Of course they do, and the 1st Amendment protects that.

And, I should just note, that considering Project Veritas' main method of operating is trying to obtain private documents, or record secret conversations, it is bizarre beyond belief that Project Veritas is literally claiming that private material has some sort of 1st Amendment protection. Because that seems incredibly likely to come back and bite Project Veritas at a later time. Of course, considering they're hypocritical grifters with no fundamental principles beyond "attack people with views we don't like," I guess it's not surprising that their viewpoint on free speech and the 1st Amendment shifts depending on who it's protecting.

Mike Masnick

Yet Another Israeli Malware Manufacturer Found Selling To Human Rights Abusers, Targeting iPhones

2 years 10 months ago

Exploit developer NSO Group may be swallowing up the negative limelight these days, but let's not forget the company has plenty of competitors. The US government's blacklisting of NSO arrived with a concurrent blacklisting of malware purveyor, Candiru -- another Israeli firm with a long list of questionable customers, including Uzbekistan, Saudi Arabia, United Arab Emirates, and Singapore.

Now there's another name to add to the list of NSO-alikes. And (perhaps not oddly enough) this company also calls Israel home. Reuters was the first to report on this NSO competitor's ability to stay competitive in the international malware race.

A flaw in Apple's software exploited by Israeli surveillance firm NSO Group to break into iPhones in 2021 was simultaneously abused by a competing company, according to five people familiar with the matter.

QuaDream, the sources said, is a smaller and lower profile Israeli firm that also develops smartphone hacking tools intended for government clients.

Like NSO, QuaDream sold a "zero-click" exploit that could completely compromise a target's phone. We're using the past tense not because QuaDream no longer exists, but because this particular exploit (the basis for NSO's FORCEDENTRY) has been patched into uselessness by Apple.

But, like other NSO competitors (looking at you, Candiru), QuaDream has no interest in providing statements, a friendly public face for inquiries from journalists, or even a public-facing website. Its Tel Aviv office seemingly has no occupants and email inquiries made by Reuters have gone ignored.

QuaDream doesn't have much of a web presence. But that's changing, due to this report, which builds on earlier reporting on the company by Haaretz and Middle East Eye. But even the earlier reporting doesn't go back all that far: June 2021. That report shows the company selling a hacking tool called "Reign" to the Saudi government. But that sale wasn't accomplished directly, apparently in a move designed to further distance QuaDream from both the product being sold and the government it sold it to.

According to Haaretz, Reign is being sold by InReach Technologies, Quadream's sister company based in Cyprus, while Quadream runs its research and development operations from an office in the Ramat Gan district in Tel Aviv.

[...]

InReach Technologies, its sales front in Cyprus, according to Haaretz, may be being used in order to fly under the radar of Israel’s defence export regulator.

Reign is apparently the equivalent of NSO's Pegasus, another powerful zero-click exploit that appears to still be able to hack most iPhone models. But it's not a true equivalent. According to this report, the tool can be rendered useless by a single system software update and, perhaps more importantly, cannot be remotely terminated by the entity deploying it, should the infection be discovered by the target. This means targeted users have the opportunity to learn a great deal about the exploit, its deployment, and possibly where it originated.

That being said, it's not cheap:

One QuaDream system, which would have given customers the ability to launch 50 smartphone break-ins per year, was being offered for $2.2 million exclusive of maintenance costs, according to the 2019 brochure. Two people familiar with the software's sales said the price for REIGN was typically higher.

With more firms in the mix -- and more scrutiny from entities like Citizen Lab -- it's only a matter of time before information linking NSO competitors to human rights abuses and indiscriminate targeting of political enemies threatens to make QuaDream and Candiru household names. And, once again, it's time to point out this all could have been avoided by refusing to sell powerful hacking tools to human rights abusers who were obviously going to use the spyware to target critics, dissidents, journalists, ex-wives, etc. That QuaDream chose to sell to countries like Saudi Arabia, Singapore, and Mexico pretty much guarantees reports of abusive deployment will surface in the future.

Tim Cushing

Surprise: U.S. Cost Of Ripping Out And Replacing Huawei Gear Jumps From $1.8 To $5.6 Billion

2 years 10 months ago

So we've noted that a lot of the U.S. politician accusations that Huawei uses its network hardware to spy on Americans on behalf of the Chinese government are lacking in the evidence department. The company's been on the receiving end of a sustained U.S. government ban based on accusations that have never actually been proven publicly, levied by a country (the United States) with a long, long history of doing exactly what it accuses Huawei of doing.

To be clear, Huawei is a terrible company. It has been happy to provide IT and telecom support to the Chinese government as it wages genocide against ethnic minorities. It has also been caught helping some African governments spy on the press and political opponents. And it may very well have helped the Chinese government spy on Americans. So it's hard to feel too bad about the company.

At the same time, if you're going to levy accusations (like "Huawei clearly spies on Americans") you need to provide public evidence. And we haven't. Eighteen months of investigations found nothing. That didn't really matter much to the FCC (under Trump and Biden) or Congress, which ordered that U.S. ISPs and network operators rip out all Huawei gear and replace it, at an estimated cost of $1.8 billion. Yet just a few years later, the actual cost to replace this gear has already ballooned to $5.6 billion and is likely to go higher:

"The FCC has told Congress that applications to The Secure and Trusted Communications Networks Reimbursement Program have generated requests totaling about $5.6 billion – far more than the allocated funding. The program was established to reimburse providers with 10 million or fewer customers who must remove Huawei Technologies Company and ZTE equipment."

That's quite a windfall for companies not named Huawei, don't you think?

My problem with these efforts has always been a nuanced one. I have no interest in defending a shitty global telecom gear maker with an atrocious human rights record, one that may very well be proven to be a surveillance lackey for the Chinese government. Yet at the same time, domestic companies like Cisco have, for much of the last decade, leaned on unsubstantiated allegations of spying to shift market share in their favor. DC is flooded with lobbyists who can easily exploit both xenophobia and intelligence worries to their tactical advantage, then bury the need for evidence under ambiguous claims of national security:

"What happens is you get competitors who are able to gin up lawmakers who are already wound up about China,” said one Hill staffer who was not authorized to speak publicly about the matter. “What they do is pull the string and see where the top spins.”

But some experts say these concerns are exaggerated. These experts note that much of Cisco’s own technology is manufactured in China."

So my problem here isn't necessarily that Huawei doesn't deserve what's happening to it. My problem here is generally a lack of transparency in a process that's heavily dictated by lobbyists, who can hide any need for evidence behind national security claims. This creates an environment where decisions are made on a "noble and patriotic basis" that wind up being beyond common sense, reproach, and oversight. That's a nice breeding ground for fraud.

My other problem is the hypocrisy of a country that doesn't believe in limitations on spying, complaining endlessly about spying, without modifying any of its own, very similar behaviors. AT&T has been proven to be directly tethered to the NSA to the point where it's literally impossible to determine where one ends and the other begins. Yet were another country to ban AT&T from doing business there, the heads of the very same folks breathlessly concerned about surveillance ethics would explode. What makes us beyond reproach here? Our ethical track record?

And my third problem is that the focus on Huawei has been so massive, and so myopic, that we've failed to take on numerous other privacy and security issues with anywhere close to the same level of urgency, whether that's the lack of a meaningful federal privacy law, the rampant security and privacy issues inherent in the Internet of Things space (where Chinese-made hardware is everywhere), or election security. These are all equally important issues, all exploited by Chinese intelligence, yet they see a small fraction of the hand-wringing and action reserved for issues like Huawei.

Again, none of this is to defend Huawei or deny it's a shitty company with dubious ethics. But the lack of transparency or skepticism creates an environment ripe for fraud and myopia by policymakers who act as if the entirety of their efforts is driven by the noblest and most patriotic of intentions. And, were I a betting man, I'd wager this whole rip and replace effort makes headlines for all the wrong reasons several years down the road.

Karl Bode

Daily Deal: The Complete GameGuru Unlimited Bundle

2 years 10 months ago

GameGuru is a non-technical and fun game maker that offers an easy, enjoyable and comprehensive game creation process that is designed specifically for those who are not programmers or designers/artists. It allows you to build your own game world with easy to use tools. Populate your game by placing down characters, weapons, and other game items, then press one button to build your game, and it's ready to play and share. GameGuru is built using DirectX 11 and supports full PBR rendering, meaning your games can look great and take full advantage of the latest graphics technology. The bundle includes hundreds of royalty-free 3D assets. It's on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

Senator Blumenthal, After Years Of Denial, Admits He's Targeting Encryption With EARN IT

2 years 10 months ago

After two years of denials, Senator Richard Blumenthal has now just out and said it: EARN IT is targeting encryption.

Since the very beginning many of us have pointed out that the EARN IT Act will undermine encryption (as well as other parts of the internet). Senator Richard Blumenthal, the lead sponsor on the bill, has insisted over and over again that the bill has nothing to do with encryption. Right after the original bill came out, when people called this out, Blumenthal flat out said "this bill says nothing about encryption" and later claimed that "Big Tech is using encryption as a subterfuge to oppose this bill."

That's been his line ever since -- insisting the bill has nothing to do with encryption. And to "show" that it wasn't about encryption, back in 2020, he agreed to a very weak amendment from Senator Leahy that had some language about encryption, even though as we pointed out at the time, that amendment still created a problem for encryption.

The newest version of EARN IT replaced Leahy's already weak amendment with one that is a more direct attack on encryption. But it has allowed slimy "anti-porn" groups like NCOSE to falsely claim that it has "dealt with the concerns about encryption." Except, as we detailed, the language of the bill now makes encryption a liability for any web service, as it explicitly says that use of encryption can be used as evidence that a website does not properly deal with child sexual abuse material.

But still, through it all, Blumenthal kept lying through his teeth, insisting that the bill wasn't targeting encryption. Until yesterday, when he finally admitted it straight up to Washington Post reporter Cat Zakrzewski. In her larger story about EARN IT, I'm not sure why Zakrzewski buried this point all the way down near the bottom, because this is the story. Blumenthal is asked about the encryption bit and he admits that the bill is targeting encryption:

Blumenthal said in an interview that lawmakers incorporated these concerns into revisions, which prevent the implementation of encryption from being the sole evidence of a company’s liability for child porn. But he said lawmakers wouldn’t offer a blanket exemption to using encryption as evidence arguing companies might use it as a “get-out-of-jail-free card.”

In other words, he knows that the bill targets encryption despite two whole years of blatant denials. To go from "this bill makes no mention of encryption" to "we don't want companies using encryption as a 'get-out-of-jail-free card'" is an admission that this bill is absolutely about encryption. And if that's the case, why have there been no hearings about the impact this would have on encryption and national security? That seems like a key point that should be discussed, especially with Blumenthal admitting the very thing he denied for two whole years.

During today's markup, Blumenthal also made some nonsense comments about encryption:

The treatment of encryption in this statute is the result of hours, days, of consultation involving the very wise and significant counsel from Sen. Leahy who offered the original encryption amendment and said at the time that his amendment would not protect tech companies for being held liable for doing anything that would give rise to liability today for using encryption to further illegal activity. That's the key distinction here. Doesn't prohibit the use of encryption, doesn't create liability for using encryption, but the misuse of encryption to further illegal activity is what gives rise to liability here.

This is, beyond being nonsense word salad, just utterly ridiculous. No one ever said the bill "prohibited" encryption, but that it would make it a massive liability. And he's absolutely wrong that it "doesn't create liability for using encryption" because it literally does exactly that in saying that encryption can be used as evidence of liability.

The claim that it's only the "misuse of encryption" shows that Senator Blumenthal (1) has no clue what he's talking about and (2) needs to hire staffers who actually do understand this stuff, because that's not how this works. Once you say it's the "misuse of encryption" you've sunk encryption. Because now every lawsuit will just claim that any use of encryption is misuse and the end result is that you need to go through a massive litigation process to determine if your use of encryption is okay or not.

That's the whole reason why things like Section 230 are important: they avoid making every company spend over a million dollars to prove that the technical decisions they made were okay and not a "misuse." But if companies have to spend a million dollars every time someone sues them over their use of encryption, it becomes ridiculously costly -- and risky -- to use encryption.

So, Blumenthal is either too stupid to understand how all of this actually works, or as he seems to have admitted to the reporter despite two years of denial, he doesn't believe companies should be allowed to use encryption.

EARN IT is an attack on encryption, full stop. Senator Blumenthal has finally admitted that, and anyone who believes in basic privacy and security should take notice.

Oh, and as a side note, remember back in 2020 when Blumenthal flipped out at Zoom for not offering full end-to-end encryption? Under this bill, Zoom would be at risk either way. Blumenthal is threatening them if they use encryption and if they don't. It's almost as if Richard Blumenthal doesn't know what he's talking about regarding encryption.

Mike Masnick

Yes, It Really Was Nintendo That Slammed GilvaSunner YouTube Channel With Copyright Strikes

2 years 10 months ago

Well, for a story that was already over, this became somewhat fascinating. We have followed the Nintendo vs. GilvaSunner war for several years now. The GilvaSunner YouTube channel has long been dedicated to uploading and appreciating a variety of video game music, largely from Nintendo games. Roughly once a year for the past few years, Nintendo would lob copyright strikes at a swath of GilvaSunner "videos": 100 videos in 2019, a bit less than that in 2020, take 2021 off, then suddenly slam the channel with 1,300 strikes in 2022. With that last copyright MOAB, the GilvaSunner channel has been shuttered voluntarily, with the operator indicating that it's all too much hassle.

Well, on the internet, and in our comments on that last post, there began to be speculation as to whether or not it was actually Nintendo behind all of these copyright strikes... or an imposter. Those sleuthing around found little tidbits, such as the name used on the strike not matching up to the names displayed in the past when Nintendo has acted against YouTube videos.

It was... strange. Why? Well, because it looked like many people were going out and trying to find a reason to believe that Nintendo wasn't behaving exactly as anyone who had witnessed Nintendo's past behavior would expect. If someone was impersonating Nintendo with these actions, the impersonation was utterly indistinguishable from how Nintendo would normally behave. Guys, they do this shit all the time.

And this time too, as it turns out. You can hear it straight from YouTube's mouth.

Jumping in – we can confirm that the claims on @GilvaSunner's channel are from Nintendo. These are all valid and in full compliance with copyright rules. If the creator believes the claims were made in error, they can dispute with these steps: https://t.co/ivyjVNwLVu

— TeamYouTube (@TeamYouTube) February 5, 2022

This is where I will stipulate for the zillionth time that Nintendo is within its rights to take these actions. But we should also stipulate that the company doesn't have to go this route, and the fact that it prioritizes the strictest possible control of its IP over letting its fans enjoy some video game music should tell you everything you need to know.

In the meantime, to the internet sleuths: I appreciate your dedication to either Nintendo or to simply digging into these kinds of details for funsies or whatever. That being said, as the old saying goes, if you hear the sound of hooves, assume it's a horse and not a zebra.

Timothy Geigner

Even Officials In The Intelligence Community Are Recognizing The Dangers Of Over-Classification

2 years 10 months ago

The federal government has a problem with secrecy. Well, actually it doesn't have a problem with secrecy, per se. That's often considered a feature, not a bug. But federal law says the government shouldn't have so much secrecy, what with the FOIA being in operation. And yet, the government feels compelled to keep secrets from its biggest employer: the US taxpayers.

Over-classification remains a problem. It has been a problem ever since long before a government contractor went rogue with a massive stash of NSA documents, showing that many of the government's secrets should have been shared or, at the very least, more widely discussed as the government turned 9/11 into a constitutional bypass on the information superhighway.

Since then, efforts have been made to dial back the government's proclivity for classifying documents that pose no threat to government operations and/or government security. In fact, the argument has been made (rather convincingly) that over-classification is counterproductive. It's more likely to result in the exposure of so-called secrets rather than secure the blanket-exemption-formality that keeps secrets from the general public.

Efforts have been made to counteract this overwhelming desire to keep the public locked out of discussions about government activities. These efforts have mostly failed. And that has mainly been due to vague and frequent invocations of national security concerns, which allow legislators and federal judges to shut off their brains and hammer the [REDACT] button repeatedly.

But ignoring the problem hasn't made the problem go away, no matter how many billions the federal government refuses to throw at the problem. Over-classification still stands between the public and information it should have access to. And it stands between federal agencies and efficient use of tax dollars. The federal government generates petabytes of data every month. And far too often, the agencies generating the data decide it's no one's business but their own.

It's not just legislators noting the widening gap between the government's massive stockpiles of data and the public's ability to access them. It's also those generating the most massive stashes of bits and bytes, as the Washington Post points out, using the words of an Intelligence Community official.

The U.S. government is drowning in its own secrets. Avril Haines, the director of national intelligence, recently wrote to Sens. Ron Wyden (D-Ore.) and Jerry Moran (R-Kan.) that “deficiencies in the current classification system undermine our national security, as well as critical democratic objectives, by impeding our ability to share information in a timely manner.” The same conclusions have been drawn by the senators and many others for a long time.

As this letter hints at, over-classification doesn't just affect the great unwashed whose power is generally considered to be far too limited to change things. It also affects agencies and the entities that oversee the agencies -- the latter of which are asked to engage in oversight while being locked out of the information they need to perform this task.

If there's any good news here, it's that the Intelligence Community recognizes it's part of the problem. But this is just one person in the IC. It's unlikely every official feels this way.

The government is working towards a solution, but its work is being performed at the speed of government -- something further hampered by the back-and-forth of periodic regime changes and their alternating ideas about how much transparency the government owes to its patrons.

The IC letter writer almost sees a silver lining in the nearly opaque cloud enveloping agencies involved in national security efforts.

So far, Ms. Haines said, current priorities and resources for fixing the classification systems “are simply not sufficient.” The National Security Council is working on a revised presidential executive order governing classified information, and we hope the White House will come up with an ambitious blueprint for modernization.

The silver lining is "so far," and the efforts being made elsewhere to change things. The rest of the non-lining is far less silver: the resources aren't sufficient and the National Security Council is grinding bureaucratic gears by working with the administration to change things. If it doesn't happen soon, changes will be at the discretion of the next administration. And the next administration may no longer feel streamlining declassification is a priority, putting projects that have been in the on-again, off-again works since Snowden's exposés on the back burner yet again.

Our government will likely never feel Americans can be trusted with information about the programs their tax dollars pay for. But perhaps a little more momentum -- this time propelled by something within the Intelligence Community -- will prompt some incremental changes that may eventually snowball into actual transparency and accountability.

Tim Cushing

First Circuit Tears Into Boston PD's Bullshit Gang Database While Overturning A Deportation Decision

2 years 10 months ago

A federal court has delivered a rebuke of police gang databases in, of all things, a review of a deportation hearing.

As we've been made painfully aware, gang databases are just extensions of biased policing efforts. People are placed in gang databases for numerous, incredibly stupid reasons. People are designated gang members simply for living, working, and going to school in areas where gang activity is prevalent. Infants have been added to gang databases because cops can't be bothered to perform any due diligence. There's no way for people to know they've been designated as gang-affiliated and, worse, there's often no way to challenge this designation and get yourself removed from these lists, which tend to result in additional harassment by police officers or "gang enhancements" that lengthen sentences for anyone listed in these dubious databases.

In 2015, Homeland Security Investigations officers performed a sweep in Boston, Massachusetts, rounding up suspected MS-13 gang members for deportation. This sweep snared Cristian Diaz Ortiz, who was 16, had entered the country illegally, and was now living with his uncle.

Ortiz applied for asylum, citing the fear of being subjected to MS-13 gang violence if he was sent back to his home country, El Salvador. From the First Circuit Appeals Court decision [PDF]:

On October 1, 2018, Diaz Ortiz filed an application for asylum, withholding of removal, and CAT protection, basing his request on multiple grounds, including persecution because of his evangelical Christian religion. He also reported that an aunt had been murdered in 2011 by members of MS-13, and he feared that the gang would kill him as well if he returned to El Salvador. In a subsequently filed affidavit, Diaz Ortiz stated that, while he was living in El Salvador, MS-13 had threatened his life "on multiple occasions" because he was a practicing evangelical Christian. He said he repeatedly refused the gang's demands that he join MS-13, but gang members continued to follow him and issue threats. In 2015, the gang physically attacked him and warned "that they would kill [him] and [his] family if [he] did not stop saying [he] was a Christian and living and preaching against the gang way of life."

The Immigration Judge sided with the Department of Homeland Security, a decision based largely on the introduction of a "Gang Assessment Database" entry claiming Ortiz was not a practicing Christian who might fear retaliation if removed from the country, but rather an MS-13 infiltrator. The "gang package" (as the court refers to it) was compiled by the Boston PD. It stated the following:

Cristian Josue DIAZ ORTIZ has been verified as an MS-13 gang member by the Boston Police Department (BPD)/Boston Regional Intelligence Center (BRIC).

Cristian Josue DIAZ ORTIZ has documented associations with MS-13 gang members by the Boston Police Department and Boston School Police Department (BSPD). (See the attached BPD & BSPD incident/field interview reports and gang intelligence bulletins.)

Cristian Josue DIAZ ORTIZ has been documented carrying common MS-13 gang related weapons by the Boston Police Department. (See the attached BPD incident/field interview reports.) [A footnote states that the only "weapon" ever documented by the BPD was a bike chain and a padlock carried in Ortiz's backpack.]

Cristian Josue DIAZ ORTIZ has been documented frequenting areas notorious for MS13 gang activity by the Boston Police Department. These areas are 104 Bennington St. and the East Boston Airport Park/Stadium in East Boston, Massachusetts which are both known for MS-13 gang activity including recent firearms arrests and a homicide.

According to the Boston PD, Ortiz racked up "points" by associating with gang members and being in areas MS-13 members frequented. If enough points are accrued, a person gets placed in the gang database. But the underlying events had nothing to do with gang activity, despite what the summary provided by the DHS said.

The BPD documented nine "interactions" with Ortiz in which it assigned "gang" points to him. Three of those instances involved Ortiz smoking marijuana (a civil infraction in Massachusetts) with students and others the BPD claimed were "known MS-13 members." Four others involved Ortiz "loitering" in a place near "known gang members" or being approached and talked to by "known gang members." And one of the interactions was the time the BPD "discovered" Ortiz carrying a bike lock and chain in his backpack -- something not all that uncommon for bike owners (which Ortiz was).

This "gang package" was critiqued by a law enforcement expert who testified that Ortiz should never have been included in the gang database. The former Boston police officer pointed out Ortiz had never been suspected of criminal activity and was apparently being penalized solely for spending time with people of his same ethnicity. The gang package's claim that Ortiz had a "history" of carrying weapons was clearly undercut by the BPD's documentation of a single incident where an officer recovered something that could be used as a weapon (the bike chain), but was not inherently a tool of unlawful violence.

The immigration judge ignored all of this, finding only the DHS and BPD credible. So did the Board of Immigration Appeals (BIA). Fortunately for Ortiz, the First Circuit isn't as easily impressed by the Boston PD's police work. It has some very harsh words for the two lower levels that blew off their obligations to the asylum seeker.

If the IJ and BIA had performed even a cursory assessment of reliability, they would have discovered a lack of evidence to substantiate the gang package's classification of Diaz Ortiz as a member of MS-13. Most significantly, the record contains no explanation of the basis for the point system employed by the BPD. The record is silent on how the Department determined what point values should attach to what conduct, or what point threshold is reasonable to reliably establish gang membership.

As the appeals court points out, these databases are inherently unreliable because literally anything can be used to imply someone is a gang member. The lower courts were wrong to completely dismiss Ortiz's challenge of the BPD's assessment.

That silence is so consequential because, during the period relevant to this case, the list of "items or activities" that could lead to "verification for entry into the Gang Assessment Database" was shockingly wide-ranging. It included "Prior Validation by a Law Enforcement Agency" (nine points), "Documented Association (BPD Incident Report)" (four points), and the open-ended "Information Not Covered by Other Selection Criteria" (one point). The 2017 form for submitting FIO [Field Interview Operations] reports to the database states that a "Documented Association" includes virtually any interaction with someone identified as a gang member: "[w]alking, eating, recreating, communicating, or otherwise associating with confirmed gang members or associates."

The points are easy to acquire, but there's no consistency in how the Boston PD assigns them, lending more credibility to the assumption that gang databases mainly exist to confirm cops' biases.

Moreover, the point system was applied to Diaz Ortiz in a haphazard manner. He was assigned points for most, but not all, of his documented interactions with purported MS-13 members. When he was assigned points, he was not always assigned the same number per interaction. Although he was assigned two points for "contact" with alleged gang members or associates on most occasions, he was assigned five points for the "Intelligence Report" submitted by the Boston School Police that describes an encounter that appears no different from the other "contacts." Only two items in the Rule 335 list carry five points: "Information from Reliable, Confidential Informant" and "Information Developed During Investigation and/or Surveillance." We thus cannot accept the BIA's implicit conclusion that the gang package's points-driven identification of Diaz-Ortiz as a "VERIFIED and ACTIVE" member of MS-13 was reliable.

Case in point:

The entry for November 28, 2017 -- the report from a Boston school officer -- illustrates several of these issues. The gist of the entry is that two officers made "casual conversation" with a student in a "full face mask" whom they identified as a member of MS-13, and they then saw the student walk over to a group of teenage boys that included Diaz Ortiz. The report identifies no improper conduct by any of the students; it does not say that the mask bore gang colors or symbols; it does not indicate that the masked student spoke directly to Diaz Ortiz. Nor does the report explain the basis for identifying the student as an MS-13 member other than to say that the BRIC labeled the student as a "verified" member. Therefore, we at most can infer from this paltry set of facts that Diaz Ortiz was standing near an individual who was identified as an MS-13 member by the BRIC, with the only basis for that identification the possible use of the same problematic point system that identified Diaz Ortiz as a member. Yet, Diaz Ortiz received five points merely because that student decided to walk over and join a group that included him.

Yes, the BPD decided Ortiz was affiliated with a notorious El Salvadoran gang internationally known for violently [checks gang package] smoking the reefer and conversing in public.
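The arithmetic the court describes is trivially simple, which is exactly the problem: a handful of non-criminal encounters adds up to "verified" gang membership. Here's a minimal sketch using the point values quoted in the opinion; the verification threshold is an assumption on my part, since the court notes the record is silent on what total actually triggers an entry:

```python
# Point values quoted from BPD Rule 335 in the First Circuit opinion.
# The verification threshold is NOT in the record; 10 points here is a
# purely hypothetical figure used for illustration.
RULE_335_POINTS = {
    "prior_validation": 9,        # Prior Validation by a Law Enforcement Agency
    "intelligence_report": 5,     # Reliable informant / investigation or surveillance
    "documented_association": 4,  # Documented Association (BPD Incident Report)
    "contact": 2,                 # "Contact" with alleged gang members or associates
    "misc": 1,                    # Information Not Covered by Other Selection Criteria
}

HYPOTHETICAL_THRESHOLD = 10  # assumed, for illustration only


def tally(interactions):
    """Sum the points for a list of interaction types and check the threshold."""
    total = sum(RULE_335_POINTS[kind] for kind in interactions)
    return total, total >= HYPOTHETICAL_THRESHOLD


# A record like the one described: three "contacts" (smoking marijuana near
# "known members") plus one school-police "intelligence report" (standing
# near a masked student) -- no criminal conduct anywhere.
total, verified = tally(["contact", "contact", "contact", "intelligence_report"])
print(total, verified)
```

Under these assumed numbers, four entirely innocuous encounters clear the bar, which is the court's point: without a documented basis for the values or the threshold, the output is unfalsifiable.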

The whole opinion is worth reading. It ruthlessly picks apart the BPD's gang database, reaching conclusions that apply to every gang database run by any law enforcement agency in America. This vacates the lower courts' decisions, which means Ortiz can again plead his case before the BIA. And this time he'll get a new judge because the First Circuit feels that sending it back to the original immigration judge would just allow that judge to re-engage with their pre-existing biases.

Gang databases are garbage. Even the most cursory examination of the underlying factors common to almost every gang database makes that clear. But the immigration court couldn't be bothered to do this, which almost resulted in someone being sent back to El Salvador where interactions with actual gang members might have resulted in his death, rather than just being an unwilling participant in Boston's "Whose Gang Is It Anyway?," where everything's made up and, unfortunately, the points do matter.

Tim Cushing

Content Moderation Case Study: Russia Slows Down Access To Twitter As New Form Of Censorship (2021)

2 years 10 months ago

Summary:

On March 10, 2021, the Russian Government deliberately slowed down access to Twitter after accusing the platform of repeatedly failing to remove posts about illegal drug use and child pornography, and posts pushing minors towards suicide.

State communications watchdog Roskomnadzor (RKN) claimed that “throttling” the speed of uploading and downloading images and videos on Twitter was to protect its citizens by making its content less accessible. Using Deep Packet Inspection (DPI) technology, RKN essentially filtered internet traffic for Twitter-related domains. As part of Russia’s controversial 2019 Sovereign Internet Law, all Russian Internet Service Providers (ISPs) were required to install this technology, which allows internet traffic to be filtered, rerouted, and blocked with granular rules through a centralized system. In this example, it blocked or slowed down access to specific content (images and videos) rather than the entire service. DPI technology also gives Russian authorities unilateral and automatic access to ISPs’ information systems and access to keys to decrypt user communications. 

Twitter throttling in Russia meme. Translation: “Runet users; Twitter”

Researchers at the University of Michigan reported that connection speeds for Twitter users were reduced by 87 percent on average, and some Russian internet service providers reported a wider slowdown in access. The throttling inadvertently affected every domain containing the substring t.co (Twitter's shortened domain name), including Microsoft.com, Reddit.com, the Russian state-operated news site rt.com, and several other Russian Government websites, including RKN's own.
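The collateral damage is what you'd expect from matching raw substrings instead of domain boundaries. A minimal sketch of the difference (illustrative only; this is not RKN's actual filtering code):

```python
def naive_block(hostname: str) -> bool:
    # Matches any hostname containing "t.co" anywhere -- the kind of rule
    # reportedly behind the collateral damage. "microsoft.com" matches
    # because "...ft.com" contains the substring "t.co".
    return "t.co" in hostname


def suffix_block(hostname: str) -> bool:
    # A correct rule anchors on domain boundaries: match the domain
    # exactly, or as a dot-separated suffix (to catch subdomains).
    return hostname == "t.co" or hostname.endswith(".t.co")


for host in ["t.co", "mobile.t.co", "microsoft.com", "reddit.com", "rt.com"]:
    print(f"{host}: naive={naive_block(host)} suffix={suffix_block(host)}")
```

Run against the domains named in the reporting, the naive rule flags all of them, while the suffix rule only flags Twitter's own domain and its subdomains.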

Although reports suggest that Twitter has a limited user base in Russia, perhaps as low as 3% of the population (out of an overall population of 144 million), it is popular with politicians, journalists, and opposition figures. The ‘throttling’ of access was likely intended as a warning shot to other platforms and a test of Russia’s technical capabilities. Russian parliamentarian Aleksandr Khinshtein, an advocate of the 2019 Sovereign Internet Law, was quoted as saying:

Putting the brakes on Twitter traffic “will force all other social networks and large foreign internet companies to understand Russia won’t silently watch and swallow the flagrant ignoring of our laws.” The companies would have to obey Russian rules on content or “lose the possibility to make money in Russia.” — Aleksandr Khinshtein

The Russian Government has a history of trying to limit and control citizens’ access to and use of social media. In 2018, it tried and ultimately failed to shut down Telegram, a popular messaging app. Telegram, founded by the Russian émigré Pavel Durov, refused to hand over its encryption keys to RKN, despite a court order. Telegram was able to thwart the shutdown attempts by shifting the hosting of its website to Google Cloud and Amazon Web Services through ‘domain fronting’ – which the Russian Government later banned. The Government eventually backed down in the face of technical difficulties and strong public opposition.

Many news outlets suggest that these incidents demonstrate that Russia, where the internet has long been a last bastion of free speech as the government has shuttered independent news organizations and obstructed political opposition, is now tipping towards the more tightly controlled Chinese model and replicating aspects of its famed Great Firewall – including creating home-grown alternatives to Western platforms. They also warn that as Russian tactics become bolder and its censorship technology more sophisticated, both will be easily co-opted and scaled up by other autocratic governments.

Company considerations:

  • To what extent should companies comply with these types of government demands? 
  • Where should companies draw the line between acquiescing to government demands/local laws that are contrary to their values or could result in human rights violations, versus expanding into a market or ensuring that their users have access?
  • To what extent should companies align their response and/or mitigation strategies with those of other (competitor) US companies affected in a similar way by local regulation?
  • Should companies try to circumvent the ‘throttling’ or access restrictions through technical means such as reconfiguring content delivery networks?
  • Should companies alert their users that their government is restricting/throttling access?

Issue considerations:

  • When are government takedown requests too broad and overreaching? Who – companies, governments, civil society, a platform’s users – should decide when that is the case?
  • How transparent should companies be with their users about why certain content is taken down because of government requests and regulation? Are there times when companies should not be too transparent?
  • What can users and advocacy groups do to challenge government restrictions on access to a platform?
  • Should – as the United Nations suggests – access to the internet be seen as part of a suite of digital human rights?

Resolution:

The ‘throttling’ of access to Twitter content initially lasted two months. According to RKN, Twitter complied with 91 percent of its takedown requests after RKN threatened to block the platform entirely if it didn’t comply. Normal speeds for desktop users resumed in May after Twitter complied with RKN’s takedown requests, but reports indicate that throttling will continue for Twitter’s mobile app users until the company complies fully.

Originally posted to the Trust and Safety Foundation website.

Copia Institute

Emails Show The LAPD Cut Ties With The Citizen App After It Started A Vigilante Manhunt Targeting An Innocent Person

2 years 10 months ago

It didn't take long for Citizen -- the app that once wanted to be a cop -- to wear out its law enforcement welcome. The crime reporting app has made several missteps since its inception, beginning with its original branding as "Vigilante."

Having been booted from app stores for encouraging (unsurprisingly) vigilantism, the company rebranded as "Citizen," hooking um… citizens up with live feeds of crime reports from city residents as well as transcriptions of police scanner output. It also paid citizens to show up uninvited at crime scenes to report on developing situations.

But it never forgot its vigilante origins. When wildfires swept across Southern California last year, Citizen's principals decided it was time to put the "crime" back in "crime reporting app." The problem went all the way to the top, with Citizen CEO Andrew Frame dropping into Slack conversations and live streams, imploring employees and app users to "FIND THIS FUCK."

The problem was Citizen had identified the wrong "FUCK." The person the app claimed was responsible for the wildfire wasn't actually the culprit. Law enforcement later tracked down a better suspect, one who had actually generated some evidence implicating them.

After calling an innocent person a "FUCK" and a "devil" in need of finding, Citizen was forced to walk back its vigilantism and rehabilitate its image. Unfortunately for Citizen, this act managed to burn bridges with local law enforcement just as competently as the wildfire it had used to start a vastly ill-conceived manhunt.

As Joseph Cox reports for Motherboard, this act was the last straw for Citizen's relationship with one of the nation's largest law enforcement agencies, the Los Angeles Police Department. Internal communications obtained by Vice show the LAPD decided to cut ties with the app after the company decided its internal Slack channel was capable of taking the law into its own hands.

On May 21, several days after the misguided manhunt, Sergeant II Hector Guzman, a member of the LAPD Public Communications Group, emailed colleagues with a link to some of the coverage around the incident.

“I know the meeting with West LA regarding Citizen was rescheduled (TBD), but here’s a recent article you might want to look at in advance of the meeting, which again highlights some of the serious concerns with Citizen, and the user actions they promote and condone,” Guzman wrote. Motherboard obtained the LAPD emails through a public records request.

Lieutenant Raul Jovel from the LAPD’s Media Relations Division replied “given what is going on with this App, we will not be working with them from our shop.”

Guzman then replied “Copy. I concur.”

Whatever lucrative possibilities Citizen might have envisioned after making early inroads towards law enforcement acceptance were apparently burnt to a crisp by this blunder, which nearly turned a misapprehension into a calamity. Rather than entertain Citizen's masturbatorial fantasies about being the thin app line between good and evil, the LAPD (wisely) chose to kick the upstart to the curb.

The stiff arm continues to this day. The LAPD cut ties and has continued to swipe left on Citizen's extremely online advances. The same Sgt. Guzman referenced in earlier emails has ensured the LAPD operates independently of Citizen. When Citizen asked the LAPD if it would be ok to eavesdrop on radio chatter to send out push notifications to users about possible criminal activity, Guzman made it clear this would probably be a bad idea.

“It’s come up before. Always turned down for several reasons,” Guzman wrote in another email.

And now Citizen goes it alone in Los Angeles. In response to Motherboard's reporting, Citizen offered up word salad about good intentions and adjusting to "real world operational experiences." I guess that's good, in a certain sense. From the statement, it appears Citizen is willing to learn from its mistakes. The problem is its mistakes have been horrific rather than simply inconvenient, and it appears to be somewhat slow on the uptake, which only aggravates problems that may be caused by over-excited execs thinking a few minutes of police scanner copy should result in citizen arrests.

Tim Cushing

Over 60 Human Rights/Public Interest Groups Urge Congress To Drop EARN IT Act

2 years 10 months ago

We've already talked about the many problems with the EARN IT Act, how the defenders of the bill are confused about many basic concepts, how the bill will make children less safe, and how the bill is significantly worse than FOSTA. I'm working on more posts about other problems with the bill, but it really appears that many in the Senate simply don't care.

Tomorrow they'll be doing a markup of the bill where it will almost certainly pass out of the Judiciary Committee, at which point it could be put up for a floor vote at any time. Why the Judiciary Committee is going straight to a markup, rather than holding hearings with actual experts, I cannot explain, but that's the process.

But for now at least over 60 human rights and public interest groups have signed onto a detailed letter from CDT outlining many of the problems in the bill, and asking the Senate to take a step back before rushing through such a dangerous bill.

Looking to the past as prelude to the future, the only time that Congress has limited Section 230 protections was in the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (SESTA/FOSTA). That law purported to protect victims of sex trafficking by eliminating providers’ Section 230 liability shield for “facilitating” sex trafficking by users. According to a 2021 study by the US Government Accountability Office, however, the law has been rarely used to combat sex trafficking.

Instead, it has forced sex workers, whether voluntarily engaging in sex work or forced into sex trafficking against their will, offline and into harm’s way. It has also chilled their online expression generally, including the sharing of health and safety information, and speech wholly unrelated to sex work. Moreover, these burdens fell most heavily on smaller platforms that either served as allies and created spaces for the LGBTQ and sex worker communities or simply could not withstand the legal risks and compliance costs of SESTA/FOSTA. Congress risks repeating this mistake by rushing to pass this misguided legislation, which also limits Section 230 protections.

It also discusses the attacks on encryption hidden deep within the bill.

End-to-end encryption ensures the privacy and security of sensitive communications such that only the sender and receiver can view them. This security is relied upon by journalists, Congress, the military, domestic violence survivors, union organizers, and anyone who seeks to keep their communications secure from malicious hackers. Everyone who communicates with others on the internet should be able to do so privately. But by opening the door to sweeping liability under state laws, the EARN IT Act would strongly disincentivize providers from providing strong encryption. Section 5(7)(A) of EARN IT states that provision of encrypted services shall not “serve as an independent basis for liability of a provider” under the expanded set of state criminal and civil laws for which providers would face liability under EARN IT. Further, Section 5(7)(B) specifies that courts will remain able to consider information about whether and how a provider employs end-to-end encryption as evidence in cases brought under EARN IT. This language, originally proposed in last session’s House companion bill, takes the form of a protection for encryption, but in practice it will do the opposite: courts could consider the offering of end-to-end encrypted services as evidence to prove that a provider is complicit in child exploitation crimes. While prosecutors and plaintiffs could not claim that providing encryption, alone, was enough to constitute a violation of state CSAM laws, they would be able to point to the use of encryption as evidence in support of claims that providers were acting recklessly or negligently. Even the mere threat that use of encryption could be used as evidence against a provider in a criminal prosecution will serve as a strong disincentive to deploying encrypted services in the first place.

Additionally, EARN IT sets up a law enforcement-heavy and Attorney General-led Commission charged with producing a list of voluntary “best practices” that providers should adopt to address CSAM on their services. The Commission is free to, and likely will, recommend against the offering of end-to-end encryption, and recommend providers adopt techniques that ultimately weaken the cybersecurity of their products. While these “best practices” would be voluntary, they could result in reputational harm to providers if they choose not to comply. There is also a risk that refusal to comply could be considered as evidence in support of a provider’s liability, and inform how judges evaluate these cases. States may even amend their laws to mandate the adoption of these supposed best practices. For many companies, the lack of clarity and fear of liability, in addition to potential public shaming, will likely disincentivize them from offering strong encryption, at a time when we should be encouraging the opposite.

There's a lot more in the letter, and the Copia Institute is proud to be one of the dozens of signatories, along with the ACLU, EFF, Wikimedia, Mozilla, Human Rights Campaign, PEN America and many, many more organizations.

Mike Masnick

Terrible Vermont Harassment Law Being Challenged After Cops Use It To Punish A Black Lives Matter Supporter Over Her Facebook Posts

2 years 10 months ago

In June 2020, in Brattleboro, Vermont, something extremely ordinary happened. Two residents of the community interacted on Facebook. It was not a friendly interaction, which made it perhaps even more ordinary.

Here's the ordinariness in all of its mundane detail, as recounted in Brattleboro resident Isabel Vinson's lawsuit [PDF] seeking to have one of the state's laws found unconstitutional.

In June 2020, Christian Antoniello, a Brattleboro resident and the owner of a local business called the Harmony Underground, criticized the Black Lives Matter movement on his personal Facebook page, stating, “How about all lives matter. Not black lives, not white lives. Get over yourself no one’s life is more important than the next. Put your race card away and grow up.”

On June 6, Ms. Vinson posted on her own Facebook page and tagged the Harmony Underground’s business page. Ms. Vinson’s post stated: “Disgusting. The owner of the Harmony Underground here in Brattleboro thinks this is okay and no matter how many people try and tell him it’s wrong he doesn’t seem to care.” In the comments on her post, Ms. Vinson recommended that everyone “leave a review on his page so [Antoniello] can never forget to be honest,” and also tagged a Facebook group called “Exposing Every Racist.”

In response to Ms. Vinson’s Facebook post, a conversation thread ensued among several people, including Ms. Vinson, about her post, Mr. Antoniello, and other complaints about the business.

That's when things stopped being normal, and started becoming increasingly more bizarre.

Several weeks later, Antoniello and his wife reported to the Brattleboro Police Department that they were being harassed on Facebook and that Ms. Vinson’s Facebook activity caused them to fear for their safety.

This is kind of a normal reaction. Kind of. Not everyone subjected to online pitchforks will choose to make it a police matter, but this couple did.

If you're wondering where the criminal activity is, the Brattleboro police department has an answer for you.

On July 7, the Brattleboro Police Department cited Ms. Vinson under § 1027 based on her Facebook activity.

Here's what the state law (Section 1027) says:

A person who, with intent to terrify, intimidate, threaten, harass, or annoy makes contact by means of a telephonic or other electronic communication with another and makes any request, suggestion, or proposal that is obscene, lewd, lascivious, or indecent; threatens to inflict injury or physical harm to the person or property of any person; or disturbs, or attempts to disturb, by repeated telephone calls or other electronic communications, whether or not conversation ensues, the peace, quiet, or right of privacy of any person at the place where the communication or communications are received shall be fined not more than $250.00 or be imprisoned not more than three months, or both.

It's an amazingly broad law that criminalizes all sorts of speech since it can be stretched to fit nearly any speech a complainant doesn't care for. "Harass" is a pretty non-specific term. "Annoy" is even more vague.

That's the law being challenged by Vinson and the ACLU. It's a vague, unconstitutional law. And it's a law the PD obviously didn't sincerely believe applied to Vinson's Facebook post because it ditched everything about this highly questionable case the moment questions started being asked.

Two weeks later -- following an ACLU public records request for all documents related to Vinson's charge and prosecution -- the Brattleboro PD approached Vinson and offered to drop the charges in exchange for her entering a diversion program that could be completed in lieu of criminal charges. Vinson refused to enter the diversion program and said she was seeking legal representation. Here's what happened next:

Two days later, the Brattleboro police informed Ms. Vinson that she would not be charged.

All's well that ends abruptly in the face of the slightest resistance. But the law is still on the books. Even if the Brattleboro cops decide not to take a second swing at Isabel Vinson with this law, law enforcement officers in the state are still free to misuse it to punish people for saying things other people didn't like. And, needless to say, the vague law presents a perfect crime of opportunity for cops if a state resident says something cops don't like. That's why the state is being sued and the Vermont federal court is being asked to declare the law unconstitutional. As it stands, the law presents an existential threat to free speech in the state. And Isabel Vinson's experience in Brattleboro shows what can happen when the threat goes from theoretical to fully-realized.

Tim Cushing

Daily Deal: Certified Refurbished Vivitar VTI Phoenix Foldable Drone

2 years 10 months ago

If capturing a bird's eye view of your favorite places is a fun way for you to unwind when you have some time, then the Vivitar VTI Phoenix Foldable Camera Drone (certified refurbished) is a great choice for updating your hobby's capabilities. All the pieces come secured in the carrying case, which helps protect them from damage and keeps them neatly organized. The two included batteries allow for a combined flight time of over 32 minutes, so you can get the most out of your drone's 1152p video camera. With a range of 2,000 feet, Follow Me technology, GPS location locking, and Wi-Fi transmission capability, this drone has all the bells and whistles you need. It's on sale for $159.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

WarnerMedia Sued For Giving People What They Wanted (The Matrix, Streaming) During An Historic Health Crisis

2 years 10 months ago

AT&T got a lot wrong (and still really can't admit it) with the company's $86 billion acquisition of Time Warner. There were endless layoffs and a steady dismantling of beloved brands (DC's Vertigo imprint, Mad Magazine), only for the company to lose pay TV subscribers in the end.

But the one thing the company did get right, with a little help from COVID, was its attacks on the dated, pointless, and often punitive Hollywood release window. Typically, this has involved a 90-day gap between the time a movie appears in theaters and its streaming or DVD release (in France this window is even more ridiculous at three years). Generally, this is done to protect the "sanctity of the moviegoing experience," as if for thirty years the "sanctity of the moviegoing experience" hasn't involved sticky floors, overpriced popcorn, big crowds, and mass shootings.

During COVID, big streamers like AT&T and Comcast shifted a lot of their tentpole films (like Dune) directly to streaming, which technically saved human lives, but resulted in no end of raised eyebrows and scorn among the "Loews at the mall is a sacred space you can't criticize" segment of Hollywood. You might recall that AMC Theaters was positively apoplectic when Comcast showed that release windows were a dated relic, declaring it would never again show a Comcast NBC Universal picture anywhere in the world if Comcast kept threatening the sacred release window (the threat lasted about a week).

WarnerMedia (in the process of being spun off by AT&T) has faced similar whining from the industry. This week the company was hit with a lawsuit (pdf) by Village Roadshow Films, which claims the company "rushed" the release of The Matrix Resurrections from 2022 to 2021 as part of a (gasp) effort to boost streaming's popularity. All through 2021, AT&T/Time Warner released films simultaneously in theaters and on streaming to boost HBO Max subscriptions. And people liked it.

Unsurprisingly, Village Roadshow Films did not, claiming the effort (dubbed "Project Popcorn") was a "clandestine plan to materially reduce box office and correlated ancillary revenue generated from tent pole films that Village Roadshow and others would be entitled to receive in exchange for driving subscription revenue for the new HBO Max service." HBO Max and AT&T telegraphed this intention, so it seems hard to argue this was somehow clandestine. The suit also accuses WarnerMedia of ignoring the fact that piracy would have hurt the overall profits to be made from the film, though, again, metrics proving clear financial harm appear lacking.

But just as unsurprisingly, Warner Brothers thinks Village Roadshow Films is just annoyed by reality and shifting markets:

"In a statement shared with The Verge, Warner Bros. called the lawsuit “a frivolous attempt by Village Roadshow to avoid their contractual commitment to participate in the arbitration that we commenced against them last week. We have no doubt that this case will be resolved in our favor."

Again, while it's true that AT&T attacked the sacred old release window to goose streaming subscriptions, this was something that happened during an historic plague in which indoor transmission of a deadly virus could kill or disable you. It's also almost an afterthought that, in the era of advanced home theaters and mall shootings, this is something consumers desperately wanted. For all its downsides, COVID had a strong tendency to painfully highlight shortcomings (see: broadband, the U.S. healthcare system) and dated practices (like release windows or a disdain for telecommuting) that no longer served us.

While there's a shrinking sect of Hollywood folks like Spielberg who still think in-person theaters and release windows are sacred and above reproach, COVID laid bare the fact that not that many people agree with them. And while that certainly disadvantaged folks financially dependent on older models (like theater owners and studios heavily vested in release windows), the reality is what it is, and a popular change was accelerated all the same.

Karl Bode

Whistleblower Alleges NSO Offered To 'Drop Off Bags Of Cash' In Exchange For Access To US Cellular Networks

2 years 10 months ago

The endless parade of bad news for Israeli malware merchant NSO Group continues. While it appears someone might be willing to bail out the beleaguered company, it still has to do business as the poster boy for the furtherance of human rights violations around the world. That the Israeli government may have played a significant part in NSO's sales to known human rights violators may ultimately prove mitigating, but for now, NSO is stuck playing defense with each passing news cycle.

Late last month, the New York Times revealed some very interesting things about NSO Group. First, it revealed the company was able to undo its built-in ban on searching US phone numbers… provided it was asked to by a US government agency. The FBI took NSO's powerful Pegasus malware for a spin in 2019, but under an assumed name: Phantom. With the permission of NSO and the Israeli government, the malware was able to target US numbers, albeit ones linked to dummy phones purchased by the FBI.

The report noted the FBI liked what it saw, but found the zero-click exploit provided by NSO's bespoke "Phantom" (Pegasus, but able to target US numbers) might pose constitutional problems the agency couldn't surmount. So, it walked away from NSO. But not before running some attack attempts through US servers -- something that was inadvertently exposed by Facebook and WhatsApp in their lawsuit against NSO over the targeting of WhatsApp users. An exhibit declared NSO was using US servers to deliver malware, something that suggested NSO didn't care about its self-imposed restrictions on US targeting. In reality, it was the FBI and NSO running some tests on local applications of zero-click malware that happened to be caught by Facebook techies.

But there's more. Recent reports building on the NYT article contain statements claiming NSO approached service providers with (well, let's just say it) bribes in exchange for access to targets at a network level deep enough to bypass some of the defensive efforts deployed by Facebook, Google, and Apple.

Here's what's been alleged in newer reports, like this one by Craig Timberg of the Washington Post:

The surveillance company NSO Group offered to give representatives of an American mobile-security firm “bags of cash” in exchange for access to global cellular networks, according to a whistleblower who has described the encounter in confidential disclosures to the Justice Department that have been reviewed by The Washington Post.

The mobile-phone security expert Gary Miller alleges that the offer came during a conference call in August 2017 between NSO Group officials and representatives of his employer at the time, Mobileum, a California-based company that provides security services to cellular companies worldwide. The NSO officials specifically were seeking access to what is called the SS7 network, which helps cellular companies route calls and services as their users roam the world, according to Miller.

Mobileum execs were (understandably) unsure how any of this was supposed to work in the unlikely event they were amenable to a foreign entity's requests for elevated access to US cellular networks. Fortunately, the NSO rep made it extremely clear how this was going to work, according to the whistleblower:

In Miller’s account to the Justice Department, when one of Mobileum’s representatives pointed out that security companies do not ordinarily offer services to surveillance companies and asked how such an arrangement would work, NSO co-founder Omri Lavie allegedly said, “We drop bags of cash at your office."

Simple enough. Except -- to quote C. Montgomery Burns -- at the end of the proposed transaction "the money and the very stupid man were still there." Mobileum execs say no such bribery took place -- not because NSO didn't offer it but because the company refused to accept the generous offer of extremely shady "bags of cash" from the Israeli malware maker.

NSO has its own explanation for these events, which is, basically: "It was a joke, probably."

In a statement through a spokesperson, Lavie said he did not believe he had made the remark. “No business was undertaken with Mobileum,” the statement said. “Mr Lavie has no recollection of using the phrase ‘bags of cash’, and believes he did not do so. However if those words were used they will have been entirely in jest.”

Hahahahahaaaa… here at the home of the zero-click exploit marketed to human rights violators we often joke about bribing tech companies to allow us more access to networks. Oh, our sides ache from the fun we have jesting about subverting networks to compromise targets of evil empires. Ell oh fucking ell.

Mobileum, on the other hand, says it has never done business with NSO and that it reported this proposed cash drop to the FBI in 2017 but never heard anything back from the agency. Two years later, the FBI was experimenting with NSO malware and trying to gauge the political and constitutional fallout of deploying unregulated malware against US citizens.

Even if NSO is to be believed, there's nothing good awaiting it on the US side of things. The Commerce Department has already blacklisted the company, destroying its ability to purchase US tech for the purpose of compromising it. And the Department of Justice has opened its own investigation into NSO, adding to its list of US-related woes.

NSO could have avoided all of this international attention by being more selective about who it sold to, and stripping customers of their licenses at the first hint of malfeasance. It didn't. And the fact that it may have been pressed into service as a malware-laden extension of the Israeli government's Middle East charm offensive isn't going to save it. NSO has to save itself but it lacks the tools to do so. Whatever it claims in defense of every reported allegation is presumed to be suspect, if not completely false. The reputation it has now is mostly earned. It made millions helping sketchy governments inflict further misery on citizens, dissidents, journalists, and political opponents. The company's honor is no longer presumed if, indeed, it ever was.

Tim Cushing