Techdirt

Analog Books Go From Strength To Strength: Helped, Not Hindered, By The Digital World

2 years 2 months ago

Many of the worst ideas in recent copyright laws have been driven by some influential companies’ fear of the transition from analog to digital. Whereas analog formats – vinyl, books, cinematic releases of films – are relatively easy to control, digital ones are not. Once a creation is in a digital form, anyone can make copies and distribute them on the Internet. Traditional copyright industries seem to think that digital versions of everything will be freely available everywhere, and that no one will ever buy analog versions. That hasn’t happened with vinyl records, and a recent post on Publishers Weekly suggests that analog books too, far from dying, are going from strength to strength:

Led by the fiction categories, unit sales of print books rose 8.9% in 2021 over 2020 at outlets that report to NPD BookScan. Units sold were 825.7 million last year, up from 757.9 million in 2020. BookScan captures approximately 85% of all print sales. In 2020, unit sales were up 8.2% over 2019, which saw 693.7 million print units sold.

The young adult fiction segment had the largest increase, with unit sales jumping 30.7%, while adult fiction sales rose 25.5%. Sales in the juvenile fiction category increased 9.6%.

The two years of increased sales are part of a longer-term trend, as this 2015 article from the New York Times indicates:

the digital apocalypse never arrived, or at least not on schedule. While analysts once predicted that e-books would overtake print by 2015, digital sales have instead slowed sharply.

Now, there are signs that some e-book adopters are returning to print, or becoming hybrid readers, who juggle devices and paper. E-book sales fell by 10 percent in the first five months of this year, according to the Association of American Publishers, which collects data from nearly 1,200 publishers. Digital books accounted last year for around 20 percent of the market, roughly the same as they did a few years ago.

Digital formats possess certain advantages over analog ones, notably convenience. Today, you can access tens of millions of tracks online with music streaming services, and carry around thousands of ebooks on your phone. But many people evidently continue to appreciate the physicality of analog books, just as they like and buy vinyl records. The Publishers Weekly article also shows how the digital world is driving analog sales:

Gains in the young adult category were helped by several titles that benefitted from attention drummed up by BookTok, users of the social media platform TikTok who post about their favorite books. They Both Die at the End by Adam Silvera, released in December 2018, was the #1 title in the category, selling nearly 685,000 copies.

As a recent post on Walled Culture noted, if publishing companies were less paranoid about people sharing snippets of the books they love, on BookTok and elsewhere, the already significant analog sales they produce could be even higher. If the copyright industries want to derive the maximum benefit from the online world, they need to be brave, not bullying, as they so often are today.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Originally posted to Walled Culture.

Glyn Moody

Declassified Documents Show The CIA Is Using A 1981 Executive Order To Engage In Domestic Surveillance

2 years 2 months ago

When most people think of the CIA (Central Intelligence Agency), they think of a foreign-facing spy agency with a long history of state-sponsored coup attempts (some successful!), attempted assassinations of foreign leaders, and putting the US in the torture business. What most people don't assume about the CIA is that it's also spying on Americans. After all, we prefer our embarrassments to be foreign-facing -- something that targets (and affects) people we don't really care about and governments we have been told are irredeemable.

An entity with the power to provoke military action halfway around the world has periodically shown an unhealthy interest in domestic affairs, which are supposed to be off-limits for the nation's most morally suspect spies. The CIA (along with the FBI) routinely abuses its powers to perform backdoor searches of foreign surveillance stashes to locate US-based communications. It also has asked the FBI to do its dirty secondhand surveillance work for it in order to bypass restrictions baked into Executive Order 12333 -- an executive order issued by Ronald Reagan that significantly expanded surveillance permissions for US agencies.

Perhaps most significantly -- at least in terms of this report -- the order instructed other government agencies to be more compliant with CIA requests for information. Since its debut in December 1981, the order has been modified twice (by George W. Bush) to give the government more power.

That's the authority the CIA has been using to spy on Americans, as a recent PCLOB (Privacy and Civil Liberties Oversight Board) report shows. The PCLOB performed a "deep dive" into CIA domestic spying at the request of Senators Ron Wyden and Martin Heinrich. After its completion, the senators asked for an unclassified version of the PCLOB's report. That report has arrived. And, according to Ron Wyden's statements, it shows the CIA is utilizing EO 12333 to spy on Americans and bypass the protections (however minimal) the FISA court provides.

“FISA gets all the attention because of the periodic congressional reauthorizations and the release of DOJ, ODNI and FISA Court documents,” said Senators Wyden and Heinrich in response to the newly declassified documents. “But what these documents demonstrate is that many of the same concerns that Americans have about their privacy and civil liberties also apply to how the CIA collects and handles information under executive order and outside the FISA law. In particular, these documents reveal serious problems associated with warrantless backdoor searches of Americans, the same issue that has generated bipartisan concern in the FISA context.”

Wyden and Heinrich called for more transparency from the CIA, including what kind of records were collected and the legal framework for the collection. The PCLOB report noted problems with CIA’s handling and searching of Americans’ information under the program.

Even if the spying isn't direct, the outcome is pretty much identical to direct targeting. With EO 12333, the CIA obtains from other federal agencies the compliance Ronald Reagan envisioned back in 1981 -- years before his administration ran headlong into the CIA-implicating Iran-Contra scandal.

Domestic data is supposed to be "masked" if incidentally acquired by foreign-facing surveillance collections. Sometimes this simply doesn't happen. Sometimes unmasking occurs without proper permission or oversight. The FBI uses this to its advantage. So does the CIA. But the FBI handles domestic terrorism. The CIA does not. That makes the CIA's abuse possibly more egregious than the FBI's numerous violations of the same restrictions placed on domestic surveillance via foreign interception of communications by the NSA.

The PCLOB report [PDF] shows the CIA has obtained bulk financial data from other sources, possibly without proper masking of incidentally-collected US persons data. According to the CIA's response to the report, the only thing separating CIA analysts from US persons' data and communications is a pop-up box warning them that access may be illegal. This is only a warning. It does not (and cannot) prevent analysts from accessing data they shouldn't be able to view without explicit permission.

How extensive this "incidental" collection is remains to be seen. And there's a good chance no one will ever know how often this pop-up was ignored to collect data generated by US citizens and residents. Much of the report is redacted and what was shared with the PCLOB was limited to whatever the CIA felt like sharing. The oversight of programs like these is deliberately limited by the Executive Order -- one that made the assumption some things (like national security) are too important to be done properly or overseen directly.

The report does note that the CIA has internal processes to limit abuse of backdoor searches. But it also points out the CIA has read EO 12333 and its modifications to mean it can do what it wants when it wants without worrying too much about straying outside of the generous lines drawn by this Executive Order.

The limits include a requirement to use the “least intrusive collection techniques feasible within the United States or directed against United States persons abroad.” Annex A implements E.O. 12333’s “least intrusive collection technique” requirement regarding activities outside of the United States involving U.S. persons. Given that the Executive Order’s restriction only applies to activities in the United States or activities directed against U.S. persons abroad, the CIA interprets the language of Annex A to only apply to collections directed against USPs abroad. Annex A does not require [redacted] to apply the least intrusive collection technique to collections covered by this report, which are generally not directed against USPs.

There's the exploitable loophole: the EO only applies to collections "directed" at US persons. Since all information is pulled from foreign-facing surveillance collections that "incidentally" collect US persons data, the resulting collection the CIA has access to is completely legal. Analysts access these collections specifically to find US persons' data, but because no agency deliberately targeted US persons, it's all above board.

This is the exploitation of foreign bulk collections to obtain information about Americans. Some may argue the damage is minimal because the CIA only accesses information (financial records) unlikely to carry an established expectation of privacy: people obviously know their financial institutions track their purchases. But that's not the same thing as people assuming the government should be able to access those records -- which may contain sensitive information -- using nothing more than an Executive Order that was ostensibly written to strengthen foreign surveillance efforts.

And that's only what can be observed from this redacted release. This isn't the CIA's only attempt to hoover up info on US persons via side channels. Wyden's letter hints at FISA reforms, which likely refer to the domestic phone records the NSA used to collect in bulk -- a program that was specifically targeted by Congress following the Snowden revelations. What's contained in this report is a narrow examination of one part of the CIA's exploitation of bulk collections to obtain US persons data. And if the CIA feels this confident about its nearly unrestricted ability to perform these backdoor searches, examinations of other aspects of this program are likely to find other domestic data ending up in the hands of CIA analysts who are supposed to be focused on foreign activities.

Tim Cushing

Can We Compare The Dot-Com Bubble To Today's Web3/Blockchain Craze?

2 years 2 months ago

Recently, I re-read various discussions of the “dot-com bubble.” Surprisingly, they sounded all too familiar. I realized there are many similarities to today's techno-optimism and techno-pessimism around Web3 and Blockchain. We have people hyping up future promises, while others express concerns about a bubble.

The Dot-Com Outspoken Optimism

In the mid-1990s, the dot-com boom was starting to gather steam. The key players in the tech ecosystem had blind faith in the inherent good of computers. Their vision of the future represented the broader Silicon Valley culture and the claim that the digital revolution “would bring an era of transformative abundance and prosperity.” Leading tech commentators celebrated the potential for advancing democracy and empowering people.

Most tech reporting pitted the creative force of technological innovation against established powers trying to tame its disruptive inevitability. Tech companies, in this storyline, represented the young and irreverent, gleefully smashing old traditions and hierarchies. The narrative was around “the mystique of the founders,” recalled Rowan Benecke. It was about “the brashness, the arrogance, but also the brilliance of these executives who were daring to take on established industries to find a better way.”

David Karpf examined “25 years of WIRED predictions” and looked back at how both Web 1.0 and Web 2.0 imagined a future that upended traditional economics: “We were all going to be millionaires, all going to be creators, all going to be collaborators.” However, “The bright future of abundance has, time and again, been waylaid by the present realities of earnings reports, venture investments, and shareholder capitalism. On its way to the many, the new wealth has consistently been diverted up to the few.”

The Dot-Com Outspoken Pessimism

During the dot-com boom, the theme around its predicted burst was actually prominent. “At the time, there were still people who said, ‘Silicon Valley is a bubble; this is all about to burst. None of these apps have a workable business model,’” said Casey Newton. “There was a lot of really negative coverage focused on ‘These businesses are going to collapse.’”

Kara Swisher shared that in the 1990s, a lot of the coverage was, “Look at this new cool thing.” But also, “the initial coverage was ‘this is a Ponzi scheme,’ or ‘this is not going to happen.’ When the Internet came, there was a huge amount of doubt about its efficacy. Way before it was doubt about the economics, it was doubt about whether anyone was going to use it.” Then, “it became clear that there was a lot of money to be made; the ‘gold rush’ mentality was on.”

At the end of 1999, this gold rush was mocked by San Francisco Magazine. “The Greed Issue” featured the headline “Made your Million Yet?” and stated that “Three local renegades have made it easy for all of us to hit it big trading online. Yeah…right.” Soon after came the dot-com implosion.

“In 2000, the coverage became more critical,” explained Nick Wingfield. There was a sense that, “You do have to pay attention to profitability and to create sustainable businesses.” “There was this new economy, where you didn’t need to make profits, you just needed to get a product to market and to grow a market share and to grow eyeballs,” added Rowan Benecke. That mindset was ultimately its downfall in the dot-com crash.

The Blockchain is Partying Like It’s 1999

While VCs are aggressively promoting Web3 - Crypto, NFTs, decentralized finance (DeFi) platforms, and a bunch of other Blockchain stuff - they are also getting more pushback. See, for example, the latest Marc Andreessen Twitter fight with Jack Dorsey, or listen to Box CEO Aaron Levie's conversation with Alex Kantrowitz. The reason the debate is heated is, in part, due to the amount of money being poured into it.

Web3 Outspoken Optimism

Andreessen Horowitz, for example, has just launched a new $2.2 billion cryptocurrency-focused fund. “The size of this fund speaks to the size of the opportunity before us: crypto is not only the future of finance but, as with the internet in the early days, is poised to transform all aspects of our lives,” a16z’s cryptocurrency group announced in a blog post. “We’re going all-in on the talented, visionary founders who are determined to be part of crypto’s next chapter.”

The vision of Web3’s believers is incredibly optimistic: “Developers, investors and early adopters imagine a future in which the technologies that enable Bitcoin and Ethereum will break up the concentrated power today's tech giants wield and usher in a golden age of individual empowerment and entrepreneurial freedom.” It will disrupt concentrations of power in banks, companies and billionaires, and deliver better ways for creators to profit from their work.

Web3 Outspoken Pessimism

Critics of the Web3 movement argue that its technology is hard to use and prone to failure. “Neither venture capital investment nor easy access to risky, highly inflated assets predicts lasting success and impact for a particular company or technology” (Tim O’Reilly).

Other critics attack “the amount of utopian bullshit” and call it a “dangerous get-rich-quick scam” (Matt Stolle) or even “worse than a Ponzi scheme” (Robert McCauley). “At its core, Web3 is a vapid marketing campaign that attempts to reframe the public’s negative associations of crypto assets into a false narrative about disruption of legacy tech company hegemony” (Stephen Diehl). “But you can’t stop a gold rush,” wrote Moxie Marlinspike. Sound familiar?

A “Big Bang of Decentralization” is NOT Coming

In his seminal “Protocols, Not Platforms,” Mike Masnick asserted that “if the token/cryptocurrency approach is shown to work as a method for supporting a successful protocol, it may even be more valuable to build these services as protocols, rather than as centralized, controlled platforms.” At the same time, he made it clear that even decentralized systems based on protocols will still likely end up with huge winners that control most of the market (like email and Google, for example; I recommend reading the whole piece if you haven’t already).

Currently, Web3 enthusiasts are hyping that a “Big Bang of decentralization” is coming. However, as the crypto market evolves, it is “becoming more centralized, with insiders retaining a greater share of the token” (Scott Galloway). And as more people enter Web3, centralized services become ever more likely to dominate. The power shift is already underway. See How OpenSea took over the NFT trade.

However, Mike Masnick also emphasized that decentralization keeps the large players in check. The distributed nature incentivizes the winners to act in the best interest of their users.

Are the new winners of Web3 going to act in their users’ best interests? If you watch Dan Olson’s “Line Goes Up – The Problem With NFTs” you will probably answer, “NO.”

From “Peak of Inflated Expectations” to “Trough of Disillusionment”

In Gartner’s Hype Cycle, it is expected that hyped technologies experience a “correction” in the form of a crash: a “peak of inflated expectations” is followed by a “trough of disillusionment.” In this stage, the technology can still be promoted and developed, but at a slower pace. With regard to Web3, we might be reaching the apex of the "inflated expectations." Unfortunately, there will be a few big winners and a “long tail” of losers in the upcoming “disillusionment.”

Previous evolutions of the web had this "power law distribution". Blogs, for example, were marketed as a megaphone for anyone with a keyboard. It was amazing to have access to distribution and an audience. But when you have more blogs than stars in the sky, only a fraction of them can rise to power. Accordingly, only a few of Web3’s new empowering initiatives will ultimately succeed. Then, “on its way to the many,” the question remains whether the new wealth will once again be “diverted up to the few.” If the history of the web is any guide, in a "winner-take-all" world, the next iteration won't be different.
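To make that winner-take-all dynamic concrete, here is a toy Python simulation (my own illustrative sketch, not something from the original pieces) of a "rich get richer" process: most arriving readers join a blog in proportion to its existing audience, while a small fraction start new blogs. All the constants are arbitrary.

    import random

    random.seed(42)

    ARRIVALS = 200_000      # readers arriving over time
    NEW_BLOG_PROB = 0.01    # the small fraction of arrivals who start a new blog

    # Each entry in `slots` records which blog one reader follows. Picking a
    # random existing slot selects a blog with probability proportional to its
    # current audience -- classic preferential attachment.
    slots = [0]             # blog #0 starts with a single reader
    num_blogs = 1

    for _ in range(ARRIVALS):
        if random.random() < NEW_BLOG_PROB:
            slots.append(num_blogs)             # a brand-new blog gets its first reader
            num_blogs += 1
        else:
            slots.append(random.choice(slots))  # join an already-popular blog

    # Tally audiences and measure concentration at the top.
    counts = [0] * num_blogs
    for blog in slots:
        counts[blog] += 1
    counts.sort(reverse=True)

    top = max(1, num_blogs // 100)
    share = sum(counts[:top]) / len(slots)
    print(f"{num_blogs} blogs; the top 1% hold {share:.0%} of all readers")

Run it a few times: the identity of the winners changes, but the heavy concentration at the top does not.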

From a “Bubble” to a “Balloon”

Going through the dot-com coverage and then the current Web3 debate feels like déjà vu. Nonetheless, just as I argue that tech coverage should reflect neither Techlash (“tech is a threat”) nor Techlust (“tech is our savior”) but rather Tech Realism, I also argue the Web3 debate should land on neither “bubble burst” nor “golden age,” but rather somewhere in the middle.

A useful description of this middle was recently offered by M.G. Siegler, who said the tech bubble is not a bubble but a balloon. Following his line of thought, instead of a bubble, Web3 can be viewed as a “deflating balloon ecosystem”: the overhyped parts of Web3 might burst and affect the whole ecosystem, but most valuations and promises will simply return closer to earth.

That’s where they should be in the first place.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication

Nirit Weiss-Blatt

Cop Trainer Encouraging Cops To Run Facial Recognition Searches On People During Traffic Stops

2 years 2 months ago

Cops are out there giving each other bad advice. An instructor for Street Cop Training -- a New Jersey-based provider of officer training programs -- is telling officers it's ok to run facial recognition searches during routine traffic stops, when he's not encouraging them to go even further with potential rights violations.

In a podcast recently uncovered by Caroline Haskins for Insider, Maryland detective Nick Jerman tells listeners there's nothing wrong with running a facial image against publicly available databases during a traffic stop.

In a July 2021 episode of the Street Cop Podcast with Dennis Benigno, the company's founder, Jerman encouraged using facial recognition software to determine the identity of the person pulled over. The Street Cop Podcast is advertised as "The training that cops deserve" and, along with Street Cop Training's other programs, is marketed to active-duty police.

"Let's say you're on a traffic stop and we have someone in the car that we suspect may be wanted," Benigno asked during the episode. "What do we do in that situation?"

"Well there's a couple of paid programs you can use where you can take their picture, and it'll put it in," Jerman said, referring to facial recognition tools, before recommending "another one called PimEyes you can use." PimEyes is a free, public-facing facial-recognition search engine.

The legality of running searches like this is still up in the air. If there's nothing beyond a suspicion that a vehicle occupant might be a wanted suspect, officers would likely have to develop something a little more reasonable before engaging in searches unrelated to the traffic stop -- like running a facial recognition program. And in some states and cities, it is very definitely illegal, thanks to recent facial recognition tech bans. Just because the cops may not own the tech utilized during these searches doesn't necessarily make actions like these legal.

But that's not the only potential illegality Detective Jerman (who, as Haskins points out, is currently being investigated by his department over some very questionable social media posts) encourages. He notes that in many states officers cannot demand people they stop ID themselves, especially when they're just passengers in a vehicle. He recommends this bit of subterfuge to obtain this information without consent.

"How about, you're in a situation where you can't compel ID and before you even ask you're like there's something not right with this guy and he's gonna lie," Benigno said.

Jerman suggested getting the person's phone number, either by asking the person, or by accusing the person of stealing a phone in the car and asking if they can call the phone in order to exonerate them.

"[Say] 'I see that phone in the car, we've had a lot of thefts of phones,' say 'Is that really your phone?' and then you can call it to see if that's the real phone number," Jerman said. "If you can get the phone number from your target, the world is your oyster."

Once a cop has a phone number, they can use third-party services to discover the phone owner's name and may be able to find any social media accounts associated with that phone number. The request may sound innocuous -- seeking to see if a phone is stolen -- but the end result may be someone unwittingly sharing a great deal about themselves with an officer.

Detective Jerman also provides classes on how to create fake social media accounts using freely accessible tools. He does this despite knowing it's a terms of service violation and appears to believe that since there's no law against it, officers should avail themselves of this subterfuge option. He has also made social media posts mocking Facebook and others for telling cops they're breaking the platform's rules when they do this.

But far more worrisome is something he admitted on another Street Cop Training podcast:

He recounted that at a wedding a few years ago, his friend wanted to approach a woman in a red dress because he "thought she was pretty hot." Jerman said that on the spot, he did a geofence Instagram search for recent posts near the wedding venue. He found a picture with the woman in the red dress, named Marilisa, posted by her friend, Amanda.

"Then you can start gaining intel on Amanda, then you can go back to Marilisa and start talking to her as if you know her friend Amanda," Jerman said.

Even his host, Street Cop Training founder Dennis Benigno, seemed to consider Jerman's actions to be a little creepy. But that appears to be Detective Jerman's MO: the exploitation of any service or platform to obtain information on anyone he runs into, whether it's at a wedding or during a pretextual traffic stop.

Despite Jerman's insistence that none of this breaks any laws, the actual legality of these actions is still up in the air. The lack of courtroom precedent saying otherwise is not synonymous with "lawful." Cases involving tactics like these are bound to result in challenges of arrests or evidence, and it's not immediately clear running unjustified searches clears the (very low) bar for reasonableness during investigative stops.

However, Jerman's big mouth and enthusiasm for exploitation should make it clear what's at stake when cops start asking questions, no matter how innocuous the questions may initially appear. And documents like the one obtained by Insider -- one that lists dozens of publicly accessible search tools and facial recognition AI -- should serve as a warning to anyone stopped by police officers. Imagine the creepiest things a stalker might do to obtain information about you. Now, imagine all of that in the hands of someone with an incredible amount of power, easy access to weapons, and an insular shield of non-accountability surrounding them.

Tim Cushing

Daily Deal: The Complete 2022 Microsoft Office Master Class Bundle

2 years 2 months ago

The Complete 2022 Microsoft Office Master Class Bundle has 14 courses to help you learn all you need to know about MS Office products and boost your productivity. Courses cover SharePoint, Word, Excel, Access, Outlook, Teams, and more. The bundle is on sale for $75.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

Penguin Random House Demands Removal Of Maus From Digital Library Because The Book Is Popular Again

2 years 2 months ago

We've said it over and over again: if libraries did not exist today, there is no way publishers would allow them to come into existence. We know this, in part, because of their attempts to stop libraries from lending ebooks, to price ebooks at ridiculous markups to discourage libraries, and their outright claims that libraries are unfair competition. And we won't even touch on their lawsuit over digital libraries.

Anyway, in other book news, you may have heard recently about how a Tennessee school board banned Art Spiegelman's classic graphic novel about the Holocaust, Maus, from being taught in an eighth-grade English class. Some people called this a ban, while others said the book is still available, so it's not a "ban." To my mind, school boards are not the teachers, and the teachers should be able to come up with their own curriculum, as they know best what will educate their students. Also, Maus is a fantastic book, and the claim that it was banned because of "rough, objectionable language" and nudity is utter nonsense.

Either way, Maus is now back atop various best seller lists, as the controversy has driven sales. Spiegelman is giving fun interviews again where he says things like "well, who's the snowflake now?" And we see op-eds about how the best way to get kids not to read a book... is to assign it in English class.

But, also, we have publishers getting into the banning business themselves... by trying to capitalize on the sudden new interest in Maus.

Penguin Random House doesn't want this new interest in Maus to lead to... people taking it out of the library rather than buying a copy. They're now abusing copyright law to demand the book be removed from the Internet Archive's lending library, and they flat out admit that they're doing so for their own bottom line:

A few days ago, Penguin Random House, the publisher of Maus, Art Spiegelman's Pulitzer Prize-winning graphic novel about the Holocaust, demanded that the Internet Archive remove the book from our lending library. Why? Because, in their words, "consumer interest in 'Maus' has soared" as the result of a Tennessee school board's decision to ban teaching the book. By its own admission, to maximize profits, a Goliath of the publishing industry is forbidding our non-profit library from lending a banned book to our patrons: a real live digital book-burning.

This is just blatant greed laid bare. As the article notes, whatever problems US copyright law has, it has enshrined the concept of libraries, and the right to lend out books as a key element of the public interest. And the publishers -- such as giants like Penguin Random House -- would do anything possible to stamp that right out.

Mike Masnick

Unknown American VC Firm Apparently Looking To Acquire NSO Group, Limit It To Selling To Five Eyes Countries

2 years 2 months ago

NSO Group -- the embattled, extremely controversial Israeli phone malware developer -- finally has some good news to report. It may have a white knight riding to its rescue -- a somewhat unknown American venture capital firm that could help it pay its bills and possibly even rehabilitate its image.

Integrity Partners, which according to its website deals with investments in the fields of mobility and digital infrastructure, is managed by partners Chris Gaertner, Elad Yoran, Pat Wilkinson and Thomas Morgan.

According to the letter of intent, they will establish a company called Integrity Labs that would acquire control of NSO. It would also funnel $300 million into the firm in order to rebuild the company.

It's not all good news, at least not at the outset. The VC firm had pledged to lobby the US government on NSO's behalf to get the recent blacklist lifted, which means NSO would once again be able to purchase US tech solely for the purpose of developing exploits to use against that tech. If Integrity Partners has any interest in remaining true to its name, it should probably backburner this effort until it has engaged in some reputation rehabilitation.

Fortunately, it appears the VC firm is also interested in getting NSO back on the right track. Following neverending reports of NSO exploits being used to target journalists, political opponents, ex-wives, dissidents, and religious leaders, the government of Israel drastically reduced the number of countries NSO could sell to.

Integrity Labs aims to limit that list even further.

Instead of the current 37 clients, the company will reduce its sales to only five clients: the Five Eyes Anglosphere intelligence alliance of New Zealand, the United States, Australia, Great Britain and Canada. The company would initially focus on defensive cyber products as part of its rebranding effort.

With these restrictions in place -- and the United States on the preferred customer list -- it should be pretty easy to get the blacklist lifted. It's not that none of these countries would ever abuse malware to engage in domestic surveillance, but it's a far better list of potential clients than the one NSO had compiled over the last several years, which included a number of known habitual human rights abusers.

But there are still reasons to be concerned. Much of what happens to NSO after this acquisition occurs will still be shrouded in secrecy. There may be a claimed focus on defensive tech, but offensive exploits have always been NSO's main money makers and it will be much more difficult to remain profitable without this revenue stream.

Then there's the chance NSO will enter into a partnership with a different company that may not have the same altruistic goals, which means the malware developer will be able to continue limping along as the poster child for irresponsible sales and marketing. And the market for powerful malware will continue to exist. It will just end up being handled by companies that have remained mostly off the world press radar.

Also, there's the fact that there's very little information about who "Integrity Partners" actually is. While the firm's website lists its partners -- all of whom mention their military experience -- there is no evidence of a portfolio, or any evidence of previous investments. While the firm is listed in Crunchbase (the main database tracking VCs and startups), it shows no investments, and only mentions a single fund the firm has raised... for $350,000. It seems unlikely that that's enough to buy NSO Group.

For now, NSO's financial well-being and reputation are in tatters. The company cannot meet its debt obligations without outside help, and its ruinous months-long streak of negative press presents challenges even a timely influx of cash may not be able to reverse. But if it can rebrand and retool to provide defensive tech to a very short list of customers, it may be able to survive its precipitous plunge into the "Tech's Most Hated" pool.

Tim Cushing

Minneapolis Police Officers Demanded No-Knock Warrant, Killed Innocent Gunowner Nine Seconds After Entering Residence

2 years 2 months ago

The city of Minneapolis, Minnesota is temporarily ending the use of no-knock warrants following the killing of 22-year-old Amir Locke by Minneapolis police officers. The city's mayor, Jacob Frey, has placed a moratorium on these warrants until the policy can be reviewed by Professor Pete Kraska of Eastern Kentucky University and anti-police violence activist DeRay McKesson.

This comes as too little too late for Locke and his surviving family. The entire raid was caught on body cam and it shows Amir Locke picking up a gun (but not pointing it at officers) after he was awakened by police officers swarming into the residence.

Locke, who was not a target of the investigation, was sleeping in the downtown Minneapolis apartment of a relative when members of a Minneapolis police SWAT team burst in shortly before 7 a.m. Wednesday. Footage from one of the officers' body cameras showed police quietly unlocking the apartment door with a key before barging inside, yelling "Search warrant!" as Locke lay under a blanket on the couch. An officer kicked the couch, Locke stirred and was shot by officer Mark Hanneman within seconds as Locke held a firearm in his right hand.

Locke was shot once in the wrist and twice in the chest. He died thirteen minutes after the shooting. As you may have noticed from the preceding paragraph, Locke was not a suspected criminal. And for those who may argue simply being within reach of a firearm is justification for shooting, Locke's handgun was legal and he had a concealed carry permit. His justifiable reaction to people barging into an apartment unannounced is somehow considered less justifiable than the officers' decision to kill him.

In most cases, that's just the way it goes, which -- assuming the warrant dotted all i's and crossed all t's -- means the Second Amendment is subservient to other constitutional amendments, like the Fourth. Here's how Scott Greenfield explains this omnipresent friction in a nation where the right to bear arms is respected… but only up to a point:

The Second Amendment issue is clear. Locke had a legal gun and, upon being awoken in the night, grabbed it. He didn’t point it at anyone or put his finger on the trigger, but it was in his hand. A cop might explain that it would only take a fraction of a second for that to change, if he was inclined to point it at an officer, put his finger on the trigger and shoot. But he didn’t.

This conundrum has been noted and argued before, that if there is a fundamental personal right to keep and bear arms, and that’s what the Supreme Court informs us is our right, then the exercise of that constitutional right cannot automatically give right to police to execute you for it. The Reasonably Scared Cop Rule cannot co-exist with the Right to Keep and Bear Arms.

"Cannot co-exist." This means that, in most cases, the citizen bearing arms generally ceases to exist (along with this right) when confronted by a law enforcement officer who believes they are reasonably afraid.

There's another point to Greenfield's post that's worth reading, but one we won't discuss further in this post: the NRA's utter unwillingness to express outrage when the right to bear arms is converted to the right to remain permanently silent by police officers who have deliberately put themselves in a situation that maximizes their fears, no matter how unreasonable those fears might ultimately turn out to be.

But this is a situation that could have been avoided. A knock-and-announce warrant would have informed Locke (who was sleeping at a relative's house) that law enforcement was outside. Since Locke owned his gun legally and held a concealed carry permit, it's highly unlikely this announcement would have resulted in him opening fire on officers.

It didn't have to be this way, but the Minneapolis Police Department insisted this couldn't be handled any other way.

A law enforcement source, who spoke on the condition of anonymity because of the sensitive nature of the case, said that St. Paul police filed standard applications for search warrant affidavits for three separate apartments at the Bolero Flats Apartment Homes, at 1117 S. Marquette Av., earlier this week.

But Minneapolis police demanded that, if their officers were to execute the search within its jurisdiction, St. Paul police first secure "no-knock" warrants instead. MPD would not have agreed to execute the search otherwise, according to the law enforcement source.

If it had been handled the St. Paul way, Locke might still be alive. There's no evidence here indicating deployment of a knock-and-announce warrant would have made things more dangerous for the officers. If this sort of heightened risk presented itself frequently, the St. Paul PD would respond accordingly when seeking warrants.

St. Paul police very rarely execute no-knock warrants because they are considered high-risk. St. Paul police have not served such a warrant since 2016, said department spokesman Steve Linders.

Contrast that with the Minneapolis PD, which appears to feel a majority of warrant service should be performed without niceties like knocking or announcing their presence.

A Star Tribune review of available court records found that MPD personnel have filed for, and obtained, at least 13 applications for no-knock or nighttime warrants since the start of the year — more than the 12 standard search warrants sought in that same span.

This is likely an undercount, the Star Tribune notes. Many warrants are filed under seal and are still inaccessible. But it does track with the MPD's deployment stats. According to records, the MPD carries out an average of 139 no-knock warrants a year.

This happens despite Minneapolis PD policy specifically stating officers are supposed to identify themselves as police and announce their purpose (i.e., "search warrant") before entering. That rule applies even if officers have secured a no-knock warrant. If officers wish to bypass this announcement requirement, they need more than a judge's permission. They also need direct permission from the Chief of Police or their designee. That's because no-knock warrants were severely restricted by police reforms passed in 2020. But it appears those reforms have done little to change the way the MPD handles its warrant business.

We'll see if the mayor's moratorium is more effective than the tepid reforms enacted following the killing of George Floyd by Officer Derek Chauvin. The undetectable change in tactics following the 2020 reforms doesn't exactly give one confidence a citywide moratorium will keep MPD officers from showing up unannounced and killing people during the ensuing confusion. It only took nine seconds for officers to end Amir Locke's life. Given what's been observed here, it will apparently take several years (and several lives) before the Minneapolis PD will be willing to alter its culture and its day-to-day practices.

Tim Cushing

The Top Ten Mistakes Senators Made During Today's EARN IT Markup

2 years 2 months ago

Today, the Senate Judiciary Committee unanimously approved the EARN IT Act and sent that legislation to the Senate floor. As drafted, the bill will be a disaster. Only by monitoring what users communicate could tech services avoid vast new liability, and only by abandoning, or compromising, end-to-end encryption, could they implement such monitoring. Thus, the bill poses a dire threat to the privacy, security and safety of law-abiding Internet users around the world, especially those whose lives depend on having messaging tools that governments cannot crack. Aiding such dissidents is precisely why it was the U.S. government that initially funded the development of the end-to-end encryption (E2EE) now found in Signal, WhatsApp and other such tools. Even worse, the bill will do the opposite of what it claims: instead of helping law enforcement crack down on child sexual abuse material (CSAM), the bill will actually help the most odious criminals walk free.

As with the July 2020 markup of the last Congress’s version of this bill, the vote was unanimous. This time, no amendments were adopted; indeed, none were even put up for a vote. We knew there wouldn’t be much time for debate because Sen. Dick Durbin kicked off the discussion by noting that Sen. Lindsey Graham would have to leave soon for a floor vote.

The Committee didn’t bother holding a hearing on the bill before rushing it to markup. The one and only hearing on the bill occurred just six days after its introduction back in March 2020. The Committee thereafter made major (but largely cosmetic) changes to the bill, leaving its Members more confused than ever about what the bill actually does. Today’s markup was a singular low point in the history of what is supposed to be one of the most serious bodies in Congress. It showed that there is nothing remotely judicious about the Judiciary Committee; that most of its members have little understanding of the Internet and even less of how the, ahem, judiciary actually works; and, saddest of all, that they simply do not care.

Here are the top ten legal and technical mistakes the Committee made today.

Mistake #1: “Encryption Is Not Threatened by This Bill”

Strong encryption is essential to online life today. It protects our commerce and our communications from the prying eyes of criminals, hostile authoritarian regimes and other malicious actors.

Sen. Richard Blumenthal called encryption a “red herring,” relying on his work with Sen. Leahy’s office to implement language from Leahy’s 2020 amendment to the previous version of EARN IT (even as he admitted to a reporter that encryption was a target). Leahy’s 2020 amendment aimed to preserve companies’ ability to offer secure encryption in their products by providing that a company could not be found in violation of the law because it utilized secure encryption, lacked the ability to decrypt communications, or failed to undermine the security of its encryption (for example, by building in a backdoor for use by law enforcement).

But while the 2022 EARN IT Act contains the same list of protected activities, the authors snuck in new language that undermines that very protection. This version of the bill says that those activities can’t be an independent basis of liability, but that courts can consider them as evidence while proving the civil and criminal claims permitted by the bill’s provisions. That’s a big deal. EARN IT opens the door to liability under an enormous number of state civil and criminal laws, some of which require (or could require, if state legislatures so choose) a showing that a company was only reckless in its actions—a far lower showing than federal law’s requirement that a defendant have acted “knowingly.” If a court can consider the use of encryption, or failure to create security flaws in that encryption, as evidence that a company was “reckless,” it is effectively the same as imposing liability for encryption itself. No sane company would take the chance of being found liable for transmitting CSAM; they’ll just stop offering strong encryption instead. 

Mistake #2: The Bill’s Sponsors Readily Conceded that EARN IT Would Coerce Monitoring for CSAM

EARN IT’s sponsors repeatedly complained that tech companies aren’t doing enough to monitor for CSAM—and that their goal was to force them to do more. As Sen. Blumenthal noted, free software (PhotoDNA) makes it easy to detect CSAM, and it’s simply outrageous that some sites aren’t even using it. He didn’t get specific but we will: both Parler and Gettr, the alternative social networks favored by the MAGA right, have refused to use PhotoDNA. When asked about it, Parler’s COO told The Washington Post: “I don’t look for that content, so why should I know it exists?" The Stanford Internet Observatory’s David Thiel responded:

This, frankly, is just reckless. You cannot run a social media site, particularly one targeted to include content forbidden from mainstream platforms, solely with voluntary flagging. Implementing PhotoDNA to prevent CEI is the bare minimum for a site allowing image uploads. 9/10

— David Thiel (@elegant_wallaby) August 12, 2021

We agree completely—morally. So why, as Berin asked when EARN IT was first introduced, doesn’t Congress just directly mandate the use of such easy filtering tools? The answer lies in understanding why Parler and Gettr can get away with this today. Back in 2008, Congress required tech companies that become aware of CSAM to report it immediately to NCMEC, the quasi-governmental clearinghouse that administers the database of CSAM hashes used by PhotoDNA to identify known CSAM. Instead of requiring companies to monitor for CSAM, Congress said exactly the opposite: nothing in 18 U.S.C. § 2258A “shall be construed to require a provider to monitor [for CSAM].”

Why? Was Congress soft on child predators back then? Obviously not. Just the opposite: they understood that requiring tech companies to conduct searches for CSAM would make them state actors subject to the Fourth Amendment’s warrant requirement—and they didn’t want to jeopardize criminal prosecutions. 
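For readers who want the mechanics: PhotoDNA itself is proprietary, so what follows is only a schematic Python sketch of the hash-matching-and-reporting flow described above. The SHA-256 hash and the stub functions are stand-ins I've invented to keep the sketch self-contained; real systems use perceptual hashes (which survive resizing and re-encoding) and NCMEC's actual CyberTipline interfaces.

    import hashlib

    # Stand-in for the NCMEC-administered database of known-image hashes.
    KNOWN_HASHES = {hashlib.sha256(b"known-bad-example").hexdigest()}

    def report_to_ncmec(digest: str) -> None:
        # Placeholder for the CyberTipline report a provider must file once
        # it becomes aware of the material (18 U.S.C. § 2258A).
        print(f"reporting matched hash {digest[:12]}... to NCMEC")

    def scan_upload(image_bytes: bytes) -> bool:
        """Hash an upload and compare it against the known-hash set."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        if digest in KNOWN_HASHES:
            report_to_ncmec(digest)
            return True
        return False

    scan_upload(b"known-bad-example")  # match: a report is filed
    scan_upload(b"vacation-photo")     # no match: nothing happens

Note where the legal duty attaches: the report is mandatory once a provider becomes aware of a match, but nothing in Section 2258A obliges it to run the scan in the first place; as explained above, that asymmetry is deliberate.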

Conceding that the purpose of the EARN IT Act is to coerce searches for CSAM is a mistake, a colossal one, because it invites courts to rule that searching wasn’t voluntary.

Mistake #3: The Leahy Amendment Alone Won’t Protect Privacy & Security, or Avoid Triggering the Fourth Amendment

While Sen. Leahy’s 2020 amendment was a positive step towards protecting the privacy and security of online communications, and Lee’s proposal today to revive it is welcome, it was always an incomplete solution. While it protected companies against liability for offering encryption or failing to undermine the security of their encryption, it did not protect the refusal to conduct monitoring of user communications. A company offering E2EE products might still be coerced into compromising the security of its devices by scanning user communications “client-side” (i.e., on the device) prior to encrypting sent communications or after decrypting received communications. 
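In code terms, the problem is where the scan runs relative to the encryption. Here is a minimal sketch (mine; the XOR "cipher" and exact-hash check are toy stand-ins for real E2EE and a real perceptual-hash matcher) of a client-side scan running on the plaintext, on the device, before anything is encrypted:

    import hashlib

    FLAGGED_HASHES = {hashlib.sha256(b"known-bad-example").hexdigest()}

    def toy_encrypt(plaintext: bytes) -> bytes:
        # Toy stand-in for end-to-end encryption; not real cryptography.
        return bytes(b ^ 0x42 for b in plaintext)

    def send_message(plaintext: bytes) -> bytes:
        # Client-side scanning: the check runs on the PLAINTEXT, before
        # encryption, so "end-to-end" secrecy no longer holds against
        # whoever controls the flagged-hash list.
        if hashlib.sha256(plaintext).hexdigest() in FLAGGED_HASHES:
            print("on-device match: flagged before the message was ever encrypted")
        return toy_encrypt(plaintext)

    ciphertext = send_message(b"known-bad-example")

The encryption itself is untouched, which is why proponents can claim such a mandate "doesn't ban encryption"; the scan simply happens before the encryption does.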

Apple recently proposed such a client-side scanning technology, raising concerns from privacy advocates and civil society groups. For its part, Apple assured that safeguards would limit use of the system to known CSAM to prevent the capability from being abused by foreign governments or rogue actors. But the capacity to conduct such surveillance presents an inherent risk of being exploited by malicious actors. Some companies may be able to successfully safeguard such surveillance architecture from misuse or exploitation. However, resources and approaches will vary across companies, and it is a virtual certainty that not all of them will be successful. And if such scanning is done under coercion, there is a risk that it will be ruled state action requiring a warrant under the Fourth Amendment.

Our letter to the Committee proposes an easy way to expand the Leahy amendment to ensure that companies won’t be held liable for not monitoring user content: borrow language directly from Section 2258A(f).

Mistake #4: EARN IT’s Sponsors Just Don’t Understand the Fourth Amendment Problem

Sen. Blumenthal insisted, repeatedly, that EARN IT contained no explicit requirement not to use encryption. The original version of the bill would, indeed, have allowed a commission to develop “best practices” that would be “required” as conditions of “earning” back the Section 230 immunity tech companies need to operate—hence the bill’s name. But dropping that concept didn’t really make the bill less coercive because the commission and its recommendations were always a sideshow. The bill has always coerced monitoring of user communications—and, to do that, the abandonment or bypassing of strong encryption—indirectly, through the threat of vast legal liability for not doing enough to stop the spread of CSAM. 

Blumenthal simply misunderstands how the courts assess whether a company is conducting unconstitutional warrantless searches as a “government actor.” “Even when a search is not required by law, … if a statute or regulation so strongly encourages a private party to conduct a search that the search is not ‘primarily the result of private initiative,’ then the Fourth Amendment applies.” U.S. v. Stevenson, 727 F.3d 826, 829 (8th Cir. 2013) (quoting Skinner v. Railway Labor Executives' Assn, 489 U.S. 602, 615 (1989)). In that case, the court found that AOL was not a government actor because it “began using the filtering process for business reasons: to detect files that threaten the operation of AOL's network, like malware and spam, as well as files containing what the affidavit describes as “reputational” threats, like images depicting child pornography.” AOL insisted that it “operate[d] its file-scanning program independently of any government program designed to identify either sex-offenders or images of child pornography, and the government never asked AOL to scan Stevenson's e-mail.” Id. By contrast, every time EARN IT’s supporters explain their bill, they make clear that they intend to force companies to search user communications in ways they’re not doing today.

Mistake #2 Again: EARN IT’s Sponsors Make Clear that Coercion Is the Point

In his opening remarks today, Sen. Graham didn’t hide the ball:

"Our goal is to tell the social media companies 'get involved and stop this crap. And if you don't take responsibility for what's on your platform, then Section 230 will not be there for you.' And it's never going to end until we change the game."

Sen. Chris Coons added that he is “hopeful that this will send a strong signal that technology companies … need to do more.” And so on and so forth.

If they had any idea what they were doing, if they understood the Fourth Amendment issue, these Senators would never admit that they’re using liability as a cudgel to force companies to take affirmative steps to combat CSAM. By making their intentions unmistakable, they’ve given the most vile criminals exactly what they need to challenge the admissibility of CSAM evidence resulting from companies “getting involved” and “doing more.” Though some companies, concerned with negative publicity, may tell courts that they conducted searches of user communications for “business reasons,” we know what defendants will argue: the companies’ “business reason” is avoiding the wide, loose liability that EARN IT subjects them to. EARN IT’s sponsors said so.

Mistake #5: EARN IT’s Sponsors Misunderstand How Liability Would Work

Except for Sen. Mike Lee, no one on the Committee seemed to understand what kind of liability rolling back Section 230 immunity, as EARN IT does, would create. Sen. Blumenthal repeatedly claimed that the bill requires actual knowledge. One of the bill’s amendments (the new Section 230(e)(6)(A)) would, indeed, require actual knowledge by enabling civil claims under 18 U.S.C. § 2255 “if the conduct underlying the claim constitutes a violation of section 2252 or section 2252A,” both of which contain knowledge requirements. This amendment is certainly an improvement over the original version of EARN IT, which would have explicitly allowed 2255 claims under a recklessness standard. 

But the two other changes to Section 230 clearly don’t require knowledge. As Sen. Lee pointed out today, a church could be sued, or even prosecuted, simply because someone posted CSAM on its bulletin board. Multiple existing state laws already create liability based on something less than actual knowledge of CSAM. As Lee noted, a state could pass a law creating strict liability for hosting CSAM. Allowing states to hold websites liable for recklessness (or even less) while claiming that the bill requires actual knowledge is simply dishonest. All these less-than-knowledge standards will have the same result: coercing sites into monitoring user communications, and into abandoning strong encryption as an obstacle to such monitoring. 

Blumenthal made it clear that this is precisely what he intends, saying: “Other states may wish to follow [those using the “recklessness” standard]. As Justice Brandeis said, states are the laboratories of democracy … and as a former state attorney general I welcome states using that flexibility. I would be loath to straightjacket them in their adoption of different standards.”

Mistake #6: “This Is a Criminal Statute, This Is Not Civil Liability”

So said Sen. Lindsey Graham, apparently forgetting what his own bill says. Sen. Dianne Feinstein added her own misunderstanding, saying that she “didn’t know that there was a blanket immunity in this area of the law.” But if either of those statements were true, the EARN IT Act wouldn’t really do much at all. Section 230 has always explicitly carved out federal criminal law from its immunities; companies can already be charged with knowing distribution of child sexual abuse material (CSAM) or child sexual exploitation (CSE) under federal criminal statutes. Indeed, Backpage and its founders were criminally prosecuted even without SESTA’s 2018 changes to Section 230. If the federal government needs assistance in enforcing those laws, Congress could adopt Sen. Mike Lee’s amendment to permit state criminal prosecutions when the conduct would constitute a violation of federal law. Better yet, the Attorney General could use an existing federal law (28 U.S.C. § 543) to deputize state, local, and tribal prosecutors as “special attorneys” empowered to prosecute violations of federal law. Why no AG has bothered to do so yet is unclear.

What is clear is that EARN IT isn’t just about criminal law. EARN IT expressly carves out civil claims under certain federal statutes, and also under whatever state laws arguably relate to “the advertisement, promotion, presentation, distribution, or solicitation of child sexual abuse material” as defined by federal law. Those laws can and do vary, not only with respect to the substance of what is prohibited, but also the mental state required for liability. This expansive breadth of potential civil liability is part of what makes this bill so dangerous in the first place.

Mistake #7: “If They Can Censor Conservatives, They Can Stop CSAM!”

As at the 2020 markup, Sen. Lee seemed to understand most clearly how EARN IT would work, the Fourth Amendment problems it raises, and how to fix at least some of them. A former Supreme Court Clerk, Lee has a sharp legal mind, but he seems to misunderstand much of how the bill would work in practice, and how content moderation works more generally.

Lee complained that, if Big Tech companies can be so aggressive in “censoring” speech they don’t like, surely they can do the same for CSAM. He’s mixing apples and oranges in two ways. First, CSAM is the digital equivalent of radioactive waste: if a platform gains knowledge of it, it must take it down immediately and report it to NCMEC, and faces stiff criminal penalties if it doesn’t. And while “free speech” platforms like Parler and Gettr refuse to proactively monitor for CSAM (as discussed above), every mainstream service goes out of its way to stamp out CSAM on unencrypted services. Like AOL in the Stevenson case, they do so for business and reputational reasons.

By contrast, no website even tries to block all “conservative” speech; rather, mainstream platforms must make difficult judgment calls about taking down politically charged content, such as Trump’s account (removed only after he incited an insurrection in an attempted coup) and misinformation claiming the 2020 election was stolen. Republicans are mad about where tech companies draw such lines.

Second, social media platforms can only moderate content that they can monitor. Signal can’t moderate user content, and that is precisely the point: end-to-end encryption (E2EE) means that no one other than the parties to a communication can see it. Unlike with ordinary communications, which may be protected by lesser forms of “encryption,” the provider isn’t standing in the middle of the communication, and it doesn’t have the keys to unlock the messages it passes back and forth. Yes, some users will abuse E2EE to share CSAM, but the alternative is to ban it for everyone. There simply isn’t a middle ground.
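To make the E2EE point concrete, here’s a minimal sketch using the PyNaCl library. This is our own illustration, not Signal’s actual protocol (which adds ratcheting keys and forward secrecy, among much else), but it shows the core property: the provider relays bytes it has no key to decrypt.

    from nacl.public import PrivateKey, Box

    # Each party generates a keypair; private keys never leave the device.
    alice, bob = PrivateKey.generate(), PrivateKey.generate()

    # Alice encrypts directly to Bob's public key.
    ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

    # This is everything the provider ever sees -- opaque bytes.
    relayed = bytes(ciphertext)

    # Only Bob, holding his private key, can decrypt.
    assert Box(bob, alice.public_key).decrypt(relayed) == b"meet at noon"

There is no hook in that flow where the provider could scan the message: moderating it would require either weakening the encryption or scanning on the user’s device before encryption, which is exactly the fight EARN IT picks.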

There may indeed be more that some tech companies could do about content they can see—both public content like social media posts and private content like messages (protected by something less than E2EE). But their being aggressive about, say, misinformation about COVID or the 2020 election has nothing whatsoever to do with the cold, hard reality that they can’t moderate content protected by strong encryption.

It’s hard to tell whether Lee understands these distinctions. Maybe not. Maybe he’s just looking to wave the bloody shirt of “censorship” again. Maybe he’s saying the same thing everyone else is saying, essentially: “Ah, yes, but if only Facebook, Apple and Google didn’t use end-to-end encryption for their messaging services, then they could monitor those for CSAM just like they monitor and moderate other content!” His proposal to amend the bill to require actual knowledge under both state and federal law suggests he doesn’t want this result, but who knows?

Mistake #8: Assuming the Fourth Amendment Won’t Require Warrants If It Applies

Visibility to the provider relates to one important legal distinction not discussed at all today—but that may well explain why the bill’s sponsors don’t seem to care about Fourth Amendment concerns. It’s an argument Senate staffers have used to defend the bill since its introduction. Even if compulsion through vast legal liability did make tech companies government actors, the Fourth Amendment requires a warrant only for searches of material for which users have a reasonable expectation of privacy. Kyllo v. United States, 533 U.S. 27, 33 (2001); see Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring). Courts long held that users had no such expectations for digital messages like email held by third parties. 

But that began to change in 2010. If searches of emails trigger the Fourth Amendment—and U.S. v. Warshak, 631 F.3d 266 (6th Cir. 2010) said they do—searches of private messaging certainly would. The entire purpose of E2EE is to give users rock-solid expectations of privacy in their communications. More recently, the Supreme Court has said that, “given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user's claim to Fourth Amendment protection.” Carpenter v. United States, 138 S. Ct. 2206, 2217 (2018). These cases draw the line Sen. Lee is missing: no, of course users don’t have reasonable expectations of privacy in public social media posts—which is what he’s talking about when he points to “censorship” of conservative speech. EARN IT could avoid the Fourth Amendment by focusing on content that providers can see, but it doesn’t, because it’s intended to force companies to be able to see all user communications.

Mistake #9: What They Didn’t Discuss: Anonymous Speech

The Committee didn’t discuss how EARN IT would affect speech protected by the First Amendment. No, of course CSAM isn’t protected speech, but the bill would affect lawful speech by law-abiding citizens—primarily by restricting anonymous speech. Critically, EARN IT doesn’t just create liability for trafficking in CSAM. The bill also creates liability for failing to stop communications that “solicit” or “promote” CSAM. Software like PhotoDNA can flag CSAM (by matching perceptual hashes against NCMEC’s database of known images), but identifying “solicitation” or “promotion” is infinitely more complicated. Every flirtatious conversation between two adult users could be “solicitation” of CSAM—or it might be two adults doing adult things. (Adults sext each other—a lot. Get over it!) But “on the Internet, nobody knows you’re a dog”—and there’s no sure way to distinguish between adults and children.
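To see why flagging known CSAM is tractable while flagging “solicitation” isn’t, consider this sketch. It’s a simplification we’re assuming for illustration: PhotoDNA’s actual hashes are proprietary perceptual fingerprints (robust to resizing and re-encoding), not the exact SHA-256 digests used here, and the hash list is a stand-in.

    import hashlib

    # Stand-in for NCMEC's database of hashes of known images.
    KNOWN_IMAGE_HASHES = {
        hashlib.sha256(b"bytes-of-a-known-image").hexdigest(),
    }

    def is_known_image(image_bytes: bytes) -> bool:
        # Flagging known imagery is a simple set lookup.
        return hashlib.sha256(image_bytes).hexdigest() in KNOWN_IMAGE_HASHES

    def is_solicitation(message_text: str) -> bool:
        # No such lookup exists for "solicitation" or "promotion":
        # it requires judging free-form human conversation in context.
        raise NotImplementedError("no reliable automated test exists")

The first function is mechanical; the second is a judgment call no database can answer, which is why liability standards below actual knowledge push services toward sweeping surveillance of user conversations.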

The federal government tried to mandate exactly that kind of adult/child distinction in the Communications Decency Act (CDA) of 1996 (nearly all of which, except Section 230, was struck down) and the Child Online Protection Act (COPA) of 1998. Both laws were struck down as infringing on the First Amendment right to access lawful content anonymously. EARN IT accomplishes much the same thing indirectly, the same way it attacks encryption: basing liability on anything less than knowledge means you can be sued for not actively monitoring, or for not age-verifying users, especially when the risks are particularly high (such as when you “should have known” you were dealing with minor users).

Indeed, EARN IT is even more constitutionally suspect. At least COPA focused on content deemed “harmful to minors.” But instead of requiring age-gating only for sites that offered porn and other sex-related content (a category that swept in things like LGBTQ teen health resources), EARN IT would affect all users of private communications services, regardless of the nature of the content they access or exchange. Again, the point of E2EE is that the service provider has no way of knowing whether messages are innocent chatter or CSAM.

EARN IT could raise other novel First Amendment problems. Companies could be held liable not only for failing to age-verify all users (a clear First Amendment violation), but also for failing to bar minors from using E2EE services so that their communications can be monitored, for failing to use client-side monitoring on minors’ devices, and even for failing to segregate adults from minors so they can’t communicate with each other.

Without the Lee Amendment, EARN IT leaves states free to base liability explicitly on a failure to age-verify users or to limit what minors can do.

Mistake #10: Claiming the Bill Is “Narrowly Crafted”

If you’ve read this far, Sen. Blumenthal’s stubborn insistence that this bill is a “narrowly targeted approach” should make you laugh—or sigh. If he truly believes that, either he hasn’t adequately thought about what this bill really does or he’s so confident in his own genius that he can simply ignore the chorus of protest from civil liberties groups, privacy advocates, human rights activists, minority groups, and civil society—all of whom are saying that this bill is bad policy.

If he doesn’t truly believe what he’s saying, well… that’s another problem entirely.

Bonus Mistake!: A Postscript About the Real CSAM Problem

Lee never mentioned that the only significant social media services that don’t take basic measures to identify and block CSAM are Parler, Gettr and other fringe sites celebrated by Republicans as “neutral public fora” for “free speech.” Has any Congressional Republican sent letters to these sites asking why they refuse to use PhotoDNA? 

Instead, Lee did join Rep. Ken Buck in March 2021 to interrogate Apple about its decision to take down the Parler app. The answer: Parler hadn’t bothered to set up any meaningful content moderation system. Only after Parler agreed to start doing some moderation of what appeared in its Apple app (but not its website) did Apple reinstate the app.

Berin Szoka and Ari Cohn

Court (For Now) Says NY Times Can Publish Project Veritas Documents

2 years 2 months ago

We've talked about the hypocrite grifters who run Project Veritas who, even when they have legitimate concerns about attacks on their own free speech, ran to court to try to silence the NY Times. Bizarrely, a NY judge granted Project Veritas' demand for prior restraint against the NY Times, falsely claiming that attorney-client material could not be published.

The NY Times appealed that ruling, and now a court has... not overturned the original ruling, but said that, for now, the NY Times can publish the documents, because the original ruling will not be enforced until the appeal can be heard. This is... better than nothing, but fully overturning the original ridiculous ruling would have been much better, because it was clearly prior restraint. But, at least for now, the prior restraint will not be enforced.

Still, the response from Project Veritas deserves separate comment, because it's just naively stupid:

In a phone interview on Thursday, Mr. O’Keefe said: “Defamation is not a First Amendment-protected right; publishing the other litigants’ attorney-client privileged documents is not a protected First Amendment right.”

While it's accurate that defamation is not protected by the 1st Amendment, he's wrong on the second point: publishing attorney-client communications is -- in most cases -- very much protected. He's fuzzing the lines here by basically arguing that, because Project Veritas is, separately, suing the NY Times, the NY Times is barred from publishing any attorney-client privileged material it obtains via standard reporting tactics.

But that fuzzing suggests something that just isn't true: that there's some exception to the 1st Amendment for publishing attorney-client materials. That's wrong. The attorney-client privilege is about whether certain documents must be disclosed to another party in litigation. If you can successfully show that the documents are privileged, they don't need to be disclosed to the other party. That's the extent of the privilege. It has no bearing whatsoever on whether or not someone else obtaining those materials through other means has a right to publish them. Of course they do, and the 1st Amendment protects that.

And, I should just note, considering that Project Veritas' main method of operating is trying to obtain private documents, or record secret conversations, it is bizarre beyond belief that Project Veritas is literally claiming that publishing private material falls outside the 1st Amendment's protection. That position seems incredibly likely to come back and bite Project Veritas at a later time. Of course, considering they're hypocritical grifters with no fundamental principles beyond "attack people with views we don't like," I guess it's not surprising that their viewpoint on free speech and the 1st Amendment shifts depending on who it's protecting.

Mike Masnick

Yet Another Israeli Malware Manufacturer Found Selling To Human Rights Abusers, Targeting iPhones

2 years 2 months ago

Exploit developer NSO Group may be swallowing up the negative limelight these days, but let's not forget the company has plenty of competitors. The US government's blacklisting of NSO arrived with a concurrent blacklisting of malware purveyor Candiru -- another Israeli firm with a long list of questionable customers, including Uzbekistan, Saudi Arabia, the United Arab Emirates, and Singapore.

Now there's another name to add to the list of NSO-alikes. And (perhaps not oddly enough) this company also calls Israel home. Reuters was the first to report on this NSO competitor's ability to stay competitive in the international malware race.

A flaw in Apple's software exploited by Israeli surveillance firm NSO Group to break into iPhones in 2021 was simultaneously abused by a competing company, according to five people familiar with the matter.

QuaDream, the sources said, is a smaller and lower profile Israeli firm that also develops smartphone hacking tools intended for government clients.

Like NSO, QuaDream sold a "zero-click" exploit that could completely compromise a target's phone. We're using the past tense not because QuaDream no longer exists, but because this particular exploit (built on the same flaw underlying NSO's FORCEDENTRY) has been patched into uselessness by Apple.

But, like other NSO competitors (looking at you, Candiru), QuaDream has no interest in providing statements, a friendly public face for inquiries from journalists, or even a public-facing website. Its Tel Aviv office seemingly has no occupants, and email inquiries from Reuters have gone unanswered.

QuaDream doesn't have much of a web presence. But that's changing, due to this report, which builds on earlier reporting on the company by Haaretz and Middle East Eye. But even the earlier reporting doesn't go back all that far: June 2021. That report shows the company selling a hacking tool called "Reign" to the Saudi government. But that sale wasn't accomplished directly, apparently in a move designed to further distance QuaDream from both the product being sold and the government it sold it to.

According to Haaretz, Reign is being sold by InReach Technologies, Quadream's sister company based in Cyprus, while Quadream runs its research and development operations from an office in the Ramat Gan district in Tel Aviv.

[...]

InReach Technologies, its sales front in Cyprus, according to Haaretz, may be being used in order to fly under the radar of Israel’s defence export regulator.

Reign is apparently the equivalent of NSO's Pegasus: powerful spyware deployed via zero-click exploits that appears to still be able to hack most iPhone models. But it's not a true equivalent. According to this report, the tool can be rendered useless by a single system software update and, perhaps more importantly, cannot be remotely terminated by the entity deploying it, should the infection be discovered by the target. This means targeted users have the opportunity to learn a great deal about the exploit, its deployment, and possibly where it originated.

That being said, it's not cheap:

One QuaDream system, which would have given customers the ability to launch 50 smartphone break-ins per year, was being offered for $2.2 million exclusive of maintenance costs, according to the 2019 brochure. Two people familiar with the software's sales said the price for REIGN was typically higher.

With more firms in the mix -- and more scrutiny from entities like Citizen Lab -- it's only a matter of time before information linking NSO competitors to human rights abuses and indiscriminate targeting of political enemies threatens to make QuaDream and Candiru household names. And, once again, it's time to point out this all could have been avoided by refusing to sell powerful hacking tools to human rights abusers who were obviously going to use the spyware to target critics, dissidents, journalists, ex-wives, etc. That QuaDream chose to sell to countries like Saudi Arabia, Singapore, and Mexico pretty much guarantees reports of abusive deployment will surface in the future.

Tim Cushing

Surprise: U.S. Cost Of Ripping Out And Replacing Huawei Gear Jumps From $1.8 To $5.6 Billion

2 years 2 months ago

So we've noted that a lot of U.S. politicians' accusations that Huawei uses its network hardware to spy on Americans on behalf of the Chinese government are lacking in the evidence department. The company's been on the receiving end of a sustained U.S. government ban based on accusations that have never actually been proven publicly, levied by a country (the United States) with a long, long history of doing exactly what it accuses Huawei of doing.

To be clear, Huawei is a terrible company. It has been happy to provide IT and telecom support to the Chinese government as it wages genocide against ethnic minorities. It has also been caught helping some African governments spy on the press and political opponents. And it may very well have helped the Chinese government spy on Americans. So it's hard to feel too bad about the company.

At the same time, if you're going to levy accusations (like "Huawei clearly spies on Americans"), you need to provide public evidence. And we haven't. Eighteen months of investigations found nothing. That didn't really matter much to the FCC (under Trump and Biden) or Congress, which ordered that U.S. ISPs and network operators rip out all Huawei gear and replace it at an estimated cost of $1.8 billion. Yet just a few years later, the actual cost to replace this gear has already ballooned to $5.6 billion and is likely to go higher:

"The FCC has told Congress that applications to The Secure and Trusted Communications Networks Reimbursement Program have generated requests totaling about $5.6 billion – far more than the allocated funding. The program was established to reimburse providers with 10 million or fewer customers who must remove Huawei Technologies Company and ZTE equipment."

That's quite a windfall for companies not named Huawei, don't you think?

My problem with these efforts has always been a nuanced one. I have no interest in defending a shitty global telecom gear maker with an atrocious human rights record, one that may very well be proven to be a surveillance lackey for the Chinese government. Yet at the same time, domestic companies like Cisco have, for much of the last decade, leaned on unsubstantiated allegations of spying to shift market share in their favor. DC is flooded with lobbyists who can easily exploit both xenophobia and intelligence worries to their tactical advantage, then bury the need for evidence under ambiguous claims of national security:

"What happens is you get competitors who are able to gin up lawmakers who are already wound up about China,” said one Hill staffer who was not authorized to speak publicly about the matter. “What they do is pull the string and see where the top spins.”

But some experts say these concerns are exaggerated. These experts note that much of Cisco’s own technology is manufactured in China."

So my problem here isn't necessarily that Huawei doesn't deserve what's happening to it. My problem here is generally a lack of transparency in a process that's heavily dictated by lobbyists, who can hide any need for evidence behind national security claims. This creates an environment where decisions are made on a "noble and patriotic basis" that wind up being beyond common sense, reproach, and oversight. That's a nice breeding ground for fraud.

My other problem is the hypocrisy of a country that doesn't believe in limitations on spying, complaining endlessly about spying, without modifying any of its own, very similar behaviors. AT&T has been proven to be directly tethered to the NSA to the point where it's literally impossible to determine where one ends and the other begins. Yet were another country to ban AT&T from doing business there, the heads of the very same folks breathlessly concerned about surveillance ethics would explode. What makes us beyond reproach here? Our ethical track record?

And my third problem is that the almost myopic focus on Huawei has been so massive that we've failed to take on numerous other privacy and security issues with anywhere close to the same level of urgency, whether that's the lack of a meaningful federal privacy law, the rampant security and privacy issues inherent in the Internet of Things space (where Chinese-made hardware is rampant), or election security. These are all equally important issues, all exploited by Chinese intelligence, that see a small fraction of the hand-wringing and action reserved for issues like Huawei.

Again, none of this is to defend Huawei or deny it's a shitty company with dubious ethics. But the lack of transparency or skepticism creates an environment ripe for fraud and myopia by policymakers who act as if the entirety of their efforts is driven by the noblest and most patriotic of intentions. And, were I a betting man, I'd wager this whole rip and replace effort makes headlines for all the wrong reasons several years down the road.

Karl Bode

Daily Deal: The Complete GameGuru Unlimited Bundle

2 years 2 months ago

GameGuru is a non-technical and fun game maker that offers an easy, enjoyable, and comprehensive game creation process designed specifically for those who are not programmers or designers/artists. It allows you to build your own game world with easy-to-use tools. Populate your game by placing down characters, weapons, and other game items, then press one button to build your game, and it's ready to play and share. GameGuru is built using DirectX 11 and supports full PBR rendering, meaning your games can look great and take full advantage of the latest graphics technology. The bundle includes hundreds of royalty-free 3D assets. It's on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

Senator Blumenthal, After Years Of Denial, Admits He's Targeting Encryption With EARN IT

2 years 2 months ago

Senator Richard Blumenthal has now admitted that EARN IT targets encryption, something he denied for two years before just coming out and saying it.

Since the very beginning many of us have pointed out that the EARN IT Act will undermine encryption (as well as other parts of the internet). Senator Richard Blumenthal, the lead sponsor on the bill, has insisted over and over again that the bill has nothing to do with encryption. Right after the original bill came out, when people called this out, Blumenthal flat out said "this bill says nothing about encryption" and later claimed that "Big Tech is using encryption as a subterfuge to oppose this bill."

That's been his line ever since -- insisting the bill has nothing to do with encryption. And to "show" that it wasn't about encryption, back in 2020, he agreed to a very weak amendment from Senator Leahy that had some language about encryption, even though, as we pointed out at the time, that amendment still created a problem for encryption.

The newest version of EARN IT replaced Leahy's already weak amendment with one that is a more direct attack on encryption. But it has allowed slimy "anti-porn" groups like NCOSE to falsely claim that it has "dealt with the concerns about encryption." Except, as we detailed, the language of the bill now makes encryption a liability for any web service, as it explicitly says that use of encryption can be used as evidence that a website does not properly deal with child sexual abuse material.

But still, through it all, Blumenthal kept lying through his teeth, insisting that the bill wasn't targeting encryption. Until yesterday, when he finally admitted it straight up to Washington Post reporter Cat Zakrzewski. I'm not sure why Zakrzewski buried this point near the bottom of her larger story about EARN IT, because this is the story. Blumenthal is asked about the encryption bit and he admits that the bill is targeting encryption:

Blumenthal said in an interview that lawmakers incorporated these concerns into revisions, which prevent the implementation of encryption from being the sole evidence of a company’s liability for child porn. But he said lawmakers wouldn’t offer a blanket exemption to using encryption as evidence arguing companies might use it as a “get-out-of-jail-free card.”

In other words, he knows that the bill targets encryption despite two whole years of blatant denials. To go from "this bill makes no mention of encryption" to "we don't want companies using encryption as a 'get-out-of-jail-free card'" is an admission that this bill is absolutely about encryption. And if that's the case, why have there been no hearings about the impact this would have on encryption and national security? That seems like a key point that should be discussed, especially with Blumenthal admitting the thing he denied for two whole years.

During today's markup, Blumenthal also made some nonsense comments about encryption:

The treatment of encryption in this statute is the result of hours, days, of consultation involving the very wise and significant counsel from Sen. Leahy who offered the original encryption amendment and said at the time that his amendment would not protect tech companies for being held liable for doing anything that would give rise to liability today for using encryption to further illegal activity. That's the key distinction here. Doesn't prohibit the use of encryption, doesn't create liability for using encryption, but the misuse of encryption to further illegal activity is what gives rise to liability here.

This is, beyond being nonsense word salad, just utterly ridiculous. No one ever said the bill "prohibited" encryption, but that it would make it a massive liability. And he's absolutely wrong that it "doesn't create liability for using encryption" because it literally does exactly that in saying that encryption can be used as evidence of liability.

The claim that it's only the "misuse of encryption" shows that Senator Blumenthal (1) has no clue what he's talking about and (2) needs to hire staffers who actually do understand this stuff, because that's not how this works. Once you say it's the "misuse of encryption" you've sunk encryption. Because now every lawsuit will just claim that any use of encryption is misuse and the end result is that you need to go through a massive litigation process to determine if your use of encryption is okay or not.

That's the whole reason why things like Section 230 are important: they keep every company from having to spend over a million dollars to prove that the technical decisions they made were okay and not a "misuse." But if companies have to spend a million dollars every time someone sues them over their use of encryption, then it becomes ridiculously costly -- and risky -- to use encryption.

So, Blumenthal is either too stupid to understand how all of this actually works, or as he seems to have admitted to the reporter despite two years of denial, he doesn't believe companies should be allowed to use encryption.

EARN IT is an attack on encryption, full stop. Senator Blumenthal has finally admitted that, and anyone who believes in basic privacy and security should take notice.

Oh, and as a side note, remember back in 2020 when Blumenthal flipped out at Zoom for not offering full end-to-end encryption? Under this bill, Zoom would be at risk either way. Blumenthal is threatening them if they use encryption and if they don't. It's almost as if Richard Blumenthal doesn't know what he's talking about regarding encryption.

Mike Masnick

Yes, It Really Was Nintendo That Slammed GilvaSunner YouTube Channel With Copyright Strikes

2 years 2 months ago

Well, for a story that was already over, this became somewhat fascinating. We have followed the Nintendo vs. GilvaSunner war for several years now. The GilvaSunner YouTube channel has long been dedicated to uploading and appreciating a variety of video game music, largely from Nintendo games. Roughly once a year for the past few years, Nintendo would lob copyright strikes at a swath of GilvaSunner "videos": 100 videos in 2019, a bit less than that in 2020, nothing in 2021, then suddenly 1,300 strikes in 2022. With that last copyright MOAB, the GilvaSunner channel has been shuttered voluntarily, with the operator indicating that it's all too much hassle.

Well, on the internet, and in our comments on that last post, there began to be speculation as to whether or not it was actually Nintendo behind all of these copyright strikes... or an imposter. Those sleuthing around found little tidbits, such as the name used on the strike not matching up to the names displayed in the past when Nintendo has acted against YouTube videos.

It was... strange. Why? Well, because it looked like many people were going out and trying to find a reason to believe that Nintendo wasn't behaving exactly as anyone who had witnessed Nintendo's past behavior would expect. If this was someone impersonating Nintendo, the actions were utterly indistinguishable from how Nintendo would normally behave. Guys, they do this shit all the time.

And this time too, as it turns out. You can hear it straight from YouTube's mouth.

Jumping in – we can confirm that the claims on @GilvaSunner's channel are from Nintendo. These are all valid and in full compliance with copyright rules. If the creator believes the claims were made in error, they can dispute with these steps: https://t.co/ivyjVNwLVu

— TeamYouTube (@TeamYouTube) February 5, 2022

This is where I will stipulate for the zillionth time that Nintendo is within its rights to take these actions. But we should also stipulate that the company doesn't have to go this route, and the fact that it prioritizes the strictest control of its IP over letting its fans enjoy some video game music should tell you everything you need to know.

In the meantime, to the internet sleuths: I appreciate your dedication to either Nintendo or to simply digging into these kinds of details for funsies or whatever. That being said, as the old saying goes, if you hear the sound of hooves, assume it's a horse and not a zebra.

Timothy Geigner

Even Officials In The Intelligence Community Are Recognizing The Dangers Of Over-Classification

2 years 2 months ago

The federal government has a problem with secrecy. Well, actually it doesn't have a problem with secrecy, per se. That's often considered a feature, not a bug. But federal law says the government shouldn't have so much secrecy, what with the FOIA being in operation. And yet, the government feels compelled to keep secrets from its actual employers: the US taxpayers.

Over-classification remains a problem. It has been a problem since long before a government contractor went rogue with a massive stash of NSA documents, showing that many of the government's secrets should have been shared or, at the very least, more widely discussed as the government turned 9/11 into a constitutional bypass on the information superhighway.

Since then, efforts have been made to dial back the government's proclivity for classifying documents that pose no threat to government operations and/or government security. In fact, the argument has been made (rather convincingly) that over-classification is counterproductive: it's more likely to result in the exposure of so-called secrets than to keep them safely behind the blanket exemptions that wall off the general public.

Efforts have been made to counteract this overwhelming desire to keep the public locked out of discussions about government activities. These efforts have mostly failed. And that has mainly been due to vague and frequent invocations of national security concerns, which allow legislators and federal judges to shut off their brains and hammer the [REDACT] button repeatedly.

But ignoring the problem hasn't made the problem go away, no matter how many billions the federal government refuses to throw at the problem. Over-classification still stands between the public and information it should have access to. And it stands between federal agencies and efficient use of tax dollars. The federal government generates petabytes of data every month. And far too often, the agencies generating the data decide it's no one's business but their own.

It's not just legislators noting the widening gap between the government's massive stockpiles of data and the public's ability to access them. It's also those generating the most massive stashes of bits and bytes, as the Washington Post points out, using the words of an Intelligence Community official.

The U.S. government is drowning in its own secrets. Avril Haines, the director of national intelligence, recently wrote to Sens. Ron Wyden (D-Ore.) and Jerry Moran (R-Kan.) that “deficiencies in the current classification system undermine our national security, as well as critical democratic objectives, by impeding our ability to share information in a timely manner.” The same conclusions have been drawn by the senators and many others for a long time.

As this letter hints at, over-classification doesn't just affect the great unwashed whose power is generally considered to be far too limited to change things. It also affects agencies and the entities that oversee the agencies -- the latter of which are asked to engage in oversight while being locked out of the information they need to perform this task.

If there's any good news here, it's that the Intelligence Community recognizes it's part of the problem. But this is just one person in the IC. It's unlikely every official feels this way.

The government is working towards a solution, but its work is being performed at the speed of government -- something further hampered by the back-and-forth of periodic regime changes and their alternating ideas about how much transparency the government owes to its patrons.

The IC letter writer almost sees a silver lining in the nearly opaque cloud enveloping agencies involved in national security efforts.

So far, Ms. Haines said, current priorities and resources for fixing the classification systems “are simply not sufficient.” The National Security Council is working on a revised presidential executive order governing classified information, and we hope the White House will come up with an ambitious blueprint for modernization.

The silver lining is "so far," and the efforts being made elsewhere to change things. The rest of the non-lining is far less silver: the resources aren't sufficient, and the National Security Council is grinding bureaucratic gears by working with the administration to change things. If it doesn't happen soon, any changes will be at the discretion of the next administration. And the next administration may no longer feel streamlining declassification is a priority, putting projects that have been in the on-again, off-again works since Snowden's exposés on the back burner yet again.

Our government will likely never feel Americans can be trusted with information about the programs their tax dollars pay for. But perhaps a little more momentum -- this time propelled by something within the Intelligence Community -- will prompt some incremental changes that may eventually snowball into actual transparency and accountability.

Tim Cushing

First Circuit Tears Into Boston PD's Bullshit Gang Database While Overturning A Deportation Decision

2 years 2 months ago

A federal court has delivered a rebuke of police gang databases in, of all things, a review of a deportation hearing.

As we've been made painfully aware, gang databases are just extensions of biased policing efforts. People are placed in gang databases for numerous, incredibly stupid reasons. People are designated gang members simply for living, working, and going to school in areas where gang activity is prevalent. Infants have been added to gang databases because cops can't be bothered to perform any due diligence. There's no way for people to know they've been designated as gang-affiliated and, worse, there's often no way to challenge this designation and get yourself removed from these lists, which tend to result in additional harassment by police officers or "gang enhancements" that lengthen sentences for anyone listed in these dubious databases.

In 2015, Homeland Security Investigations officers performed a sweep in Boston, Massachusetts, rounding up suspected MS-13 gang members for deportation. This sweep snared Cristian Diaz Ortiz, who was 16, had entered the country illegally, and was now living with his uncle.

Ortiz applied for asylum, citing the fear of being subjected to MS-13 gang violence if he was sent back to his home country, El Salvador. From the First Circuit Appeals Court decision [PDF]:

On October 1, 2018, Diaz Ortiz filed an application for asylum, withholding of removal, and CAT protection, basing his request on multiple grounds, including persecution because of his evangelical Christian religion. He also reported that an aunt had been murdered in 2011 by members of MS-13, and he feared that the gang would kill him as well if he returned to El Salvador. In a subsequently filed affidavit, Diaz Ortiz stated that, while he was living in El Salvador, MS-13 had threatened his life "on multiple occasions" because he was a practicing evangelical Christian. He said he repeatedly refused the gang's demands that he join MS-13, but gang members continued to follow him and issue threats. In 2015, the gang physically attacked him and warned "that they would kill [him] and [his] family if [he] did not stop saying [he] was a Christian and living and preaching against the gang way of life."

The Immigration Judge sided with the Department of Homeland Security, largely because of the introduction of a "Gang Assessment Database" report that portrayed Ortiz not as a practicing Christian who might fear retaliation if removed from the country, but as an MS-13 member. The "gang package" (as the court refers to it) was compiled by the Boston PD. It stated the following:

Cristian Josue DIAZ ORTIZ has been verified as an MS-13 gang member by the Boston Police Department (BPD)/Boston Regional Intelligence Center (BRIC).

Cristian Josue DIAZ ORTIZ has documented associations with MS-13 gang members by the Boston Police Department and Boston School Police Department (BSPD). (See the attached BPD & BSPD incident/field interview reports and gang intelligence bulletins.)

Cristian Josue DIAZ ORTIZ has been documented carrying common MS-13 gang related weapons by the Boston Police Department. (See the attached BPD incident/field interview reports.) [A footnote states that the only "weapon" ever documented by the BPD was a bike chain and a padlock carried in Ortiz's backpack.]

Cristian Josue DIAZ ORTIZ has been documented frequenting areas notorious for MS13 gang activity by the Boston Police Department. These areas are 104 Bennington St. and the East Boston Airport Park/Stadium in East Boston, Massachusetts which are both known for MS-13 gang activity including recent firearms arrests and a homicide.

According to the Boston PD, Ortiz racked up "points" by associating with gang members and being in areas MS-13 members frequented. If enough points are accrued, a person gets placed in the gang database. But the underlying events had nothing to do with gang activity, despite what the summary provided by the DHS said.

The BPD documented nine "interactions" with Ortiz in which it assigned "gang" points to him. Three of those instances involved Ortiz smoking marijuana (a civil infraction in Massachusetts) with students and others the BPD claimed were "known MS-13 members." Four others involved Ortiz "loitering" in a place near "known gang members" or being approached and talked to by "known gang members." And one of the interactions was the time the BPD "discovered" Ortiz carrying a bike lock and chain in his backpack -- something not at all uncommon for bike owners (which Ortiz was).

This "gang package" was critiqued by a law enforcement expert who testified that Ortiz should never have been included in the gang database. The former Boston police officer pointed out Ortiz had never been suspected of criminal activity and was apparently being penalized solely for spending time with people of his same ethnicity. The gang package's claim that Ortiz had a "history" of carrying weapons was clearly undercut by the BPD's documentation of a single incident where an officer recovered something that could be used as a weapon (the bike chain), but was not inherently a tool of unlawful violence.

The immigration judge ignored all of this, finding only the DHS and BPD credible. So did the Board of Immigration Appeals (BIA). Fortunately for Ortiz, the First Circuit isn't as easily impressed by the Boston PD's police work. It has some very harsh words for the two tribunals below, which blew off their obligations to the asylum seeker.

If the IJ and BIA had performed even a cursory assessment of reliability, they would have discovered a lack of evidence to substantiate the gang package's classification of Diaz Ortiz as a member of MS-13. Most significantly, the record contains no explanation of the basis for the point system employed by the BPD. The record is silent on how the Department determined what point values should attach to what conduct, or what point threshold is reasonable to reliably establish gang membership.

As the appeals court points out, these databases are inherently unreliable because literally anything can be used to imply someone is a gang member. The lower courts were wrong to completely dismiss Ortiz's challenge of the BPD's assessment.

That silence is so consequential because, during the period relevant to this case, the list of "items or activities" that could lead to "verification for entry into the Gang Assessment Database" was shockingly wide-ranging. It included "Prior Validation by a Law Enforcement Agency" (nine points), "Documented Association (BPD Incident Report)" (four points), and the open-ended "Information Not Covered by Other Selection Criteria" (one point). The 2017 form for submitting FIO [Field Interview Operations] reports to the database states that a "Documented Association" includes virtually any interaction with someone identified as a gang member: "[w]alking, eating, recreating, communicating, or otherwise associating with confirmed gang members or associates."

The points are easy to acquire, but there's no consistency in how the Boston PD assigns them, lending more credibility to the assumption that gang databases mainly exist to confirm cops' biases.

Moreover, the point system was applied to Diaz Ortiz in a haphazard manner. He was assigned points for most, but not all, of his documented interactions with purported MS-13 members. When he was assigned points, he was not always assigned the same number per interaction. Although he was assigned two points for "contact" with alleged gang members or associates on most occasions, he was assigned five points for the "Intelligence Report" submitted by the Boston School Police that describes an encounter that appears no different from the other "contacts." Only two items in the Rule 335 list carry five points: "Information from Reliable, Confidential Informant" and "Information Developed During Investigation and/or Surveillance." We thus cannot accept the BIA's implicit conclusion that the gang package's points-driven identification of Diaz-Ortiz as a "VERIFIED and ACTIVE" member of MS-13 was reliable.

Case in point:

The entry for November 28, 2017 -- the report from a Boston school officer -- illustrates several of these issues. The gist of the entry is that two officers made "casual conversation" with a student in a "full face mask" whom they identified as a member of MS-13, and they then saw the student walk over to a group of teenage boys that included Diaz Ortiz. The report identifies no improper conduct by any of the students; it does not say that the mask bore gang colors or symbols;23 it does not indicate that the masked student spoke directly to Diaz Ortiz. Nor does the report explain the basis for identifying the student as an MS-13 member other than to say that the BRIC labeled the student as a "verified" member. Therefore, we at most can infer from this paltry set of facts that Diaz Ortiz was standing near an individual who was identified as an MS-13 member by the BRIC, with the only basis for that identification the possible use of the same problematic point system that identified Diaz Ortiz as a member. Yet, Diaz Ortiz received five points merely because that student decided to walk over and join a group that included him.

Yes, the BPD decided Ortiz was affiliated with a notorious El Salvadoran gang internationally known for violently [checks gang package] smoking the reefer and conversing in public.
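To make the arbitrariness concrete, here's a sketch of the point system as the opinion describes it. The point values come from the court's recitation of the BPD's Rule 335 criteria; the verification threshold is a hypothetical of ours, since, as the court stressed, the record never explains what threshold the BPD used or how it was justified.

    # Point values recited in the First Circuit's opinion (Rule 335).
    POINTS = {
        "prior_validation_by_law_enforcement": 9,
        "reliable_confidential_informant": 5,
        "investigation_or_surveillance_info": 5,
        "documented_association_incident_report": 4,
        "contact_with_alleged_gang_member": 2,
        "info_not_covered_by_other_criteria": 1,
    }

    # Hypothetical threshold: the record never reveals the real one.
    VERIFICATION_THRESHOLD = 10

    def verified_gang_member(interactions: list[str]) -> bool:
        return sum(POINTS[i] for i in interactions) >= VERIFICATION_THRESHOLD

    # Five "contacts" -- standing near someone the database already labels
    # a gang member -- and you're "verified," no criminal conduct required.
    print(verified_gang_member(["contact_with_alleged_gang_member"] * 5))

That the inputs can be as innocuous as walking or eating near a "confirmed" member is what makes the whole scheme circular: the database verifies people by their proximity to people the same database already verified.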

The whole opinion is worth reading. It ruthlessly picks apart the BPD's gang database, reaching conclusions that apply to every gang database run by any law enforcement agency in America. This vacates the lower courts' decisions, which means Ortiz can again plead his case before the BIA. And this time he'll get a new judge because the First Circuit feels that sending it back to the original immigration judge would just allow that judge to re-engage with their pre-existing biases.

Gang databases are garbage. Even the most cursory examination of the underlying factors common to almost every gang database makes that clear. But the immigration court couldn't be bothered to do this, which almost resulted in someone being sent back to El Salvador where interactions with actual gang members might have resulted in his death, rather than just being an unwilling participant in Boston's "Whose Gang Is It Anyway?," where everything's made up and, unfortunately, the points do matter.

Tim Cushing

Content Moderation Case Study: Russia Slows Down Access To Twitter As New Form Of Censorship (2021)

2 years 2 months ago

Summary:

On March 10, 2021, the Russian Government deliberately slowed down access to Twitter after accusing the platform of repeatedly failing to remove posts about illegal drug use and child pornography, and posts pushing minors towards suicide.

State communications watchdog Roskomnadzor (RKN) claimed that “throttling” the speed of uploading and downloading images and videos on Twitter was to protect its citizens by making its content less accessible. Using Deep Packet Inspection (DPI) technology, RKN essentially filtered internet traffic for Twitter-related domains. As part of Russia’s controversial 2019 Sovereign Internet Law, all Russian Internet Service Providers (ISPs) were required to install this technology, which allows internet traffic to be filtered, rerouted, and blocked with granular rules through a centralized system. In this example, it blocked or slowed down access to specific content (images and videos) rather than the entire service. DPI technology also gives Russian authorities unilateral and automatic access to ISPs’ information systems and access to keys to decrypt user communications. 

[Image: Twitter throttling in Russia meme. Translation: “Runet users; Twitter”]

Researchers at the University of Michigan reported that connection speeds to Twitter were reduced by 87 percent on average, and some Russian internet service providers reported a wider slowdown in access. Inadvertently, this throttling affected all website domains containing the substring t.co (Twitter’s shortened domain name), including Microsoft.com, Reddit.com, the Russian state-operated news site rt.com, and several other Russian Government websites, including RKN’s own.
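That collateral damage is exactly what a naive substring filter would produce. Here's a minimal reconstruction, assuming (as the researchers' findings imply) that the DPI rule matched "t.co" anywhere in a hostname rather than as an exact domain:

    TARGET = "t.co"  # Twitter's link-shortener domain

    def throttled_substring(hostname: str) -> bool:
        # Reconstruction of the reported behavior: match anywhere.
        return TARGET in hostname

    def throttled_exact(hostname: str) -> bool:
        # Match only the domain itself or its subdomains.
        return hostname == TARGET or hostname.endswith("." + TARGET)

    for host in ["t.co", "microsoft.com", "reddit.com", "rt.com"]:
        print(host, throttled_substring(host), throttled_exact(host))

    # The substring rule flags all four hosts, since "microsoft.com",
    # "reddit.com", and "rt.com" all contain "t.co" as a substring;
    # the exact-domain rule flags only t.co itself.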

Although reports suggest that Twitter has a limited user base in Russia, perhaps as low as 3% of the population (from an overall population of 144 million), it is popular with politicians, journalists, and opposition figures. The ‘throttling’ of access was likely intended as a warning shot to other platforms and a test of Russia’s technical capabilities. Russian parliamentarian Aleksandr Khinshtein, an advocate of the 2019 Sovereign Internet Law, was quoted as saying that:

Putting the brakes on Twitter traffic “will force all other social networks and large foreign internet companies to understand Russia won’t silently watch and swallow the flagrant ignoring of our laws.” The companies would have to obey Russian rules on content or “lose the possibility to make money in Russia.” — Aleksandr Khinshtein

The Russian Government has a history of trying to limit and control citizens’ access to and use of social media. In 2018, it tried and ultimately failed to shut down Telegram, a popular messaging app. Telegram, founded by the Russian émigré Pavel Durov, refused to hand over its encryption keys to RKN, despite a court order. Telegram was able to thwart the shutdown attempts by shifting the hosting of its website to Google Cloud and Amazon Web Services through ‘domain fronting’ – which the Russian Government later banned. The Government eventually backed down in the face of technical difficulties and strong public opposition.

Many news outlets suggest that these incidents demonstrate that Russia, where the internet has long been a last bastion of free speech as the government has shuttered independent news organizations and obstructed political opposition, is now tipping towards the more tightly controlled Chinese model and replicating aspects of its famed Great Firewall – including creating home-grown alternatives to Western platforms. They also warn that as Russian tactics become bolder and its censorship technology more sophisticated, they will be easily co-opted and scaled up by other autocratic governments.

Company considerations:

  • To what extent should companies comply with these types of government demands?
  • Where should companies draw the line between acquiescing to government demands/local laws that are contrary to their values or could result in human rights violations vs. expanding into a market or ensuring that their users have access?
  • To what extent should companies align their response and/or mitigation strategies with those of other (competitor) US companies affected in a similar way by local regulation?
  • Should companies try to circumvent ‘throttling’ or access restrictions through technical means such as reconfiguring content delivery networks?
  • Should companies alert their users that their government is restricting/throttling access?

Issue considerations:

  • When are government takedown requests too broad and overreaching? Who – companies, governments, civil society, a platform’s users – should decide when that is the case?
  • How transparent should companies be with their users about why certain content is taken down because of government requests and regulation? Would there be times when companies should not be too transparent?
  • What can users and advocacy groups do to challenge government restrictions on access to a platform?
  • Should – as the United Nations suggests – access to the internet be seen as part of a suite of digital human rights?

Resolution:

The ‘throttling’ of access to Twitter content initially lasted two months. According to RKN, Twitter removed 91 percent of the content flagged in its takedown requests after RKN threatened to block the platform if it didn’t comply. Normal speeds for desktop users resumed in May after Twitter complied with RKN’s takedown requests, but reports indicate that throttling will continue for Twitter’s mobile app users until it complies fully.

Originally posted to the Trust and Safety Foundation website.

Copia Institute

Emails Show The LAPD Cut Ties With The Citizen App After It Started A Vigilante Manhunt Targeting An Innocent Person

2 years 2 months ago

It didn't take long for Citizen -- the app that once wanted to be a cop -- to wear out its law enforcement welcome. The crime reporting app has made several missteps since its inception, beginning with its original branding as "Vigilante."

Having been booted from app stores for encouraging (unsurprisingly) vigilantism, the company rebranded as "Citizen," hooking um… citizens up with live feeds of crime reports from city residents as well as transcriptions of police scanner output. It also paid citizens to show up uninvited at crime scenes to report on developing situations.

But it never forgot its vigilante origins. When wildfires swept across Southern California last year, Citizen's principals decided it was time to put the "crime" back in "crime reporting app." The problem went all the way to the top, with Citizen CEO Andrew Frame dropping into Slack conversations and live streams, imploring employees and app users to "FIND THIS FUCK."

The problem was Citizen had identified the wrong "FUCK." The person the app claimed was responsible for the wildfire wasn't actually the culprit. Law enforcement later tracked down a better suspect, one who had actually generated some evidence implicating them.

After calling an innocent person a "FUCK" and a "devil" in need of finding, Citizen was forced to walk back its vigilantism and rehabilitate its image. Unfortunately for Citizen, this act managed to burn bridges with local law enforcement just as competently as the wildfire it had used to start a vastly ill-conceived manhunt.

As Joseph Cox reports for Motherboard, this act was the final straw for the relationship between Citizen and one of the nation's largest law enforcement agencies, the Los Angeles Police Department. Internal communications obtained by Vice show the LAPD decided to cut ties with the app after the company decided its internal Slack channel was capable of taking the law into its own hands.

On May 21, several days after the misguided manhunt, Sergeant II Hector Guzman, a member of the LAPD Public Communications Group, emailed colleagues with a link to some of the coverage around the incident.

“I know the meeting with West LA regarding Citizen was rescheduled (TBD), but here’s a recent article you might want to look at in advance of the meeting, which again highlights some of the serious concerns with Citizen, and the user actions they promote and condone,” Guzman wrote. Motherboard obtained the LAPD emails through a public records request.

Lieutenant Raul Jovel from the LAPD’s Media Relations Division replied “given what is going on with this App, we will not be working with them from our shop.”

Guzman then replied “Copy. I concur.”

Whatever lucrative possibilities Citizen might have envisioned after making early inroads toward law enforcement acceptance were apparently burnt to a crisp by this misguided manhunt, which nearly led to a calamitous misidentification. Rather than entertain Citizen's masturbatory fantasies about being the thin app line between good and evil, the LAPD (wisely) chose to kick the upstart to the curb.

The stiff arm continues to this day. The LAPD cut ties and has continued to swipe left on Citizen's extremely online advances. The same Sgt. Guzman referenced in earlier emails has ensured the LAPD operates independently of Citizen. When Citizen asked the LAPD if it would be ok to eavesdrop on radio chatter to send out push notifications to users about possible criminal activity, Guzman made it clear this would probably be a bad idea.

“It’s come up before. Always turned down for several reasons,” Guzman wrote in another email.

And now Citizen goes it alone in Los Angeles. In response to Motherboard's reporting, Citizen offered up word salad about good intentions and adjusting to "real world operational experiences." I guess that's good, in a certain sense: the statement suggests Citizen is willing to learn from its mistakes. The problem is that its mistakes have been horrific rather than simply inconvenient, and the company appears to be somewhat slow on the uptake, which only aggravates the problems caused by over-excited execs thinking a few minutes of police scanner copy should result in citizen arrests.

Tim Cushing

Over 60 Human Rights/Public Interest Groups Urge Congress To Drop EARN IT Act

2 years 2 months ago

We've already talked about the many problems with the EARN IT Act, how the defenders of the bill are confused about many basic concepts, how the bill will make children less safe, and how the bill is significantly worse than FOSTA. I'm working on more posts about other problems with the bill, but it really appears that many in the Senate simply don't care.

Tomorrow they'll be doing a markup of the bill where it will almost certainly pass out of the Judiciary Committee, at which point it could be put up for a floor vote at any time. Why the Judiciary Committee is going straight to a markup, rather than holding hearings with actual experts, I cannot explain, but that's the process.

But, for now at least, over 60 human rights and public interest groups have signed onto a detailed letter from CDT outlining many of the problems in the bill and asking the Senate to take a step back before rushing through such a dangerous bill.

Looking to the past as prelude to the future, the only time that Congress has limited Section 230 protections was in the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (SESTA/FOSTA). That law purported to protect victims of sex trafficking by eliminating providers’ Section 230 liability shield for “facilitating” sex trafficking by users. According to a 2021 study by the US Government Accountability Office, however, the law has been rarely used to combat sex trafficking.

Instead, it has forced sex workers, whether voluntarily engaging in sex work or forced into sex trafficking against their will, offline and into harm’s way. It has also chilled their online expression generally, including the sharing of health and safety information, and speech wholly unrelated to sex work. Moreover, these burdens fell most heavily on smaller platforms that either served as allies and created spaces for the LGBTQ and sex worker communities or simply could not withstand the legal risks and compliance costs of SESTA/FOSTA. Congress risks repeating this mistake by rushing to pass this misguided legislation, which also limits Section 230 protections.

It also discusses the attacks on encryption hidden deep within the bill.

End-to-end encryption ensures the privacy and security of sensitive communications such that only the sender and receiver can view them. This security is relied upon by journalists, Congress, the military, domestic violence survivors, union organizers, and anyone who seeks to keep their communications secure from malicious hackers. Everyone who communicates with others on the internet should be able to do so privately. But by opening the door to sweeping liability under state laws, the EARN IT Act would strongly disincentivize providers from providing strong encryption. Section 5(7)(A) of EARN IT states that provision of encrypted services shall not “serve as an independent basis for liability of a provider” under the expanded set of state criminal and civil laws for which providers would face liability under EARN IT. Further, Section 5(7)(B) specifies that courts will remain able to consider information about whether and how a provider employs end-to-end encryption as evidence in cases brought under EARN IT. This language, originally proposed in last session’s House companion bill, takes the form of a protection for encryption, but in practice it will do the opposite: courts could consider the offering of end-to-end encrypted services as evidence to prove that a provider is complicit in child exploitation crimes. While prosecutors and plaintiffs could not claim that providing encryption, alone, was enough to constitute a violation of state CSAM laws, they would be able to point to the use of encryption as evidence in support of claims that providers were acting recklessly or negligently. Even the mere threat that use of encryption could be used as evidence against a provider in a criminal prosecution will serve as a strong disincentive to deploying encrypted services in the first place.

Additionally, EARN IT sets up a law enforcement-heavy and Attorney General-led Commission charged with producing a list of voluntary “best practices” that providers should adopt to address CSAM on their services. The Commission is free to, and likely will, recommend against the offering of end-to-end encryption, and recommend providers adopt techniques that ultimately weaken the cybersecurity of their products. While these “best practices” would be voluntary, they could result in reputational harm to providers if they choose not to comply. There is also a risk that refusal to comply could be considered as evidence in support of a provider’s liability, and inform how judges evaluate these cases. States may even amend their laws to mandate the adoption of these supposed best practices. For many companies, the lack of clarity and fear of liability, in addition to potential public shaming, will likely disincentivize them from offering strong encryption, at a time when we should be encouraging the opposite.

There's a lot more in the letter, and the Copia Institute is proud to be one of the dozens of signatories, along with the ACLU, EFF, Wikimedia, Mozilla, Human Rights Campaign, PEN America and many, many more organizations.

Mike Masnick