Law Firm Sues US Citizenship And Immigration Services After It Tries To Claw Back Docs Obtained Legally Thru A FOIA Request

Once someone legally obtains documents from a government entity through a public records request, the government simply cannot demand to have them returned just because it screwed up when it fulfilled the request.

That unalterable fact hasn't stopped government agencies from trying (or even [temporarily] succeeding). The NYPD botched its handling of a public records request twice, handing out information it didn't want to disclose to facial recognition researchers on two separate occasions. Both times, it tried to get a court to help it demand the mistakenly released information be returned. One request was granted (then rescinded). The second time the NYPD screwed up, it didn't even bother to see if a court would oblige it twice.

US Citizenship and Immigration Services (USCIS) is being sued for trying to do exactly this same thing. It fulfilled FOIA requests pertaining to Hoppock Law Firm clients, sending the firm the "alien files" compiled by the agency. (h/t National Security Counselors)

At the time, the USCIS told Hoppock Law Firm it was aware it was over-fulfilling the request. From Hoppock Law's lawsuit [PDF] against USCIS:

In the Determination Letter, defendant USCIS wrote that it was intentionally releasing portions of the records that would otherwise be considered “exempt” under the FOIA statute after discussion between agency personnel and a member of its staff. It purported to release this exempt material “as a matter of administrative discretion.” And it said that the released records may also include other “discretionary releases of exempt information.”

These statements confirmed that even if portions of the records released were subject to any exemption under the FOIA, such release was intentional, not inadvertent.

Seven months later, USCIS had a new Director of FOIA Operations. And Hoppock Law had a new letter from USCIS demanding the "return" of information the agency had already voluntarily released.

The Demand Letter stated that, as for request number NRC2021092780, the USCIS had now decided that it had “inadvertently released personally identifiable information of third parties and/or law enforcement sensitive information.” It did not identify what pages or portions of the release the defendants were now claiming to be exempt or address its previous statements that it was releasing exempt information on purpose, as a matter of administrative discretion.

The Demand Letter did not identify any specific statutory exemption under the FOIA statute that the Defendants believed should apply to these records.

Although the request was for Plaintiffs’ client’s own A-file and the records released included only those found in the client’s A-file, the Demand Letter said that “improper use” of the records Plaintiffs received could “harm” the individuals “whose information was mistakenly sent.”

The demand letter [PDF] also hinted the law firm could be subject to criminal charges if it did not immediately comply. The letter claimed that any "use or disclosure" of information (information not specified in the letter) might "impede or interfere with law enforcement activities." That's called obstruction when the feds are bringing charges. And like any federal charge, it's serious and can result in lengthy sentences.

The letter went on to demand compliance by January 18, 2022, which would be the same day the law firm received the letter. On January 19, the law firm responded, asking USCIS to identify which FOIA request this demand letter covered and noted that it needed other information from the agency if it was even going to begin complying with its demands, including identification of every person and entity the documents might have been shared with.

Despite the implicit threat of criminal charges and the demand for immediate action, the law firm's questions have yet to be answered. To prevent it from being accused of federal crimes or improper dissemination of sensitive information, the law firm is suing USCIS, seeking an order that would strictly define what the law firm is obligated to do in response to this letter, especially given its implication of criminal charges.

And it points out USCIS has no legal right to demand the things it's demanding.

The FOIA statute includes no authority for the responding agency to demand that a requester return records it has disclosed under the FOIA or to furnish information to the agency about who has been provided access to those records.

Even if there were some implied claw-back power in the FOIA statute, it would not apply to records the agency has intentionally disclosed, as it said explicitly here in the June 2021 disposition letter.

Because of that, the law firm is also seeking a declaratory judgment stating that the demand letter violates FOIA law and that the firm is under no obligation to comply with an extremely belated letter that, in effect, orders it to disclose the names of others who've viewed the documents, as well as any attorney-client privileged communications that compliance with the letter might reveal.

The law firm should prevail. FOIA law simply does not work this way. And USCIS's earlier statements that it knew it was providing information normally considered exempt from disclosure show the agency was fully aware of what it was handing over to the law firm. A change at the top of the FOIA org chart doesn't suddenly make all of the agency's past public records actions null and void. And it sure as shit doesn't change how this law works.

Tim Cushing

Funniest/Most Insightful Comments Of The Week At Techdirt

This week, our first place winner on the insightful side is That Anonymous Coward on our post about the ruling that a college can't order a student to stop talking about an instructor, responding to another commenter who decided to go on a bizarre rant questioning the student's disability:

They are called sunglasses, perhaps one you get your head out of your ass you might learn about them.

Oooh and you get to decide if her disability is or is not real.
Are you one of those assholes who demand people parked in handicapped spots prove their disability to you?
You know they faked it to get the permit hanging from the rearview & now they have to answer to you...

On the upside your entire rant really highlights why many people with disabilities often just suck it up because they really don't feel like explaining to petty able bodied tyrants the exact nature of their disability and having to perform like a circus animal to meet your mental requirements for being disabled enough.

Did you read a different letter?
No where was there a request or demand to give bad reviews, merely honest reviews.

I offer you a big hearty fuck off from someone who sometimes is light sensitive, sometimes doesn't need his cane, sometimes needs to park closer to the entrance who is fed up with assholes like you who think unless your missing limbs you aren't disabled & are required to prove it on demand by every fucking Karen who is just pissed that there are a couple spaces they can't use.

In second place, it's That One Guy with a comment about the Minneapolis cops who demanded a no-knock warrant then killed an innocent gun owner nine seconds after entering a residence:

Criminals lying? Perish the thought

No no, gunning someone down on the spot after you broke into their house in the middle of the night and they're disoriented from being woken up is totally reasonable so long as you say 'police' and 'warrant', I mean can you imagine criminals ever doing something like that or police ever shooting an innocent person?

Clearly not, which means that if someone armed breaks into your house yelling about how they're cops they have a right to be there, and so long as you're innocent you have absolutely no reason to be worried or feel the need to defend yourself or even protest in the slightest.

For editor's choice on the insightful side, we start out with another comment from That One Guy, this time in response to corporate opposition to good new regulators:

'You can't nominate them, they might do the job!'

As with Sohn, now with Bedoya: The greatest sign that a person is qualified for the job is when the corrupt come pouring out of the woodworks to try to keep them from it.

Next, it's a comment from someone who we haven't seen in these lists before, under the name... That Other Guy. The comment comes in response to a German court fining a site owner for "sharing user data with Google" by using web fonts:

One of the many terrible things about this decision is that the website owner didn't send the user's IP address to Google; the user's browser did.

Over on the funny side, our first place winner is Bobvious, with a comment about the Boston police department's bullshit gang database:

Are you saying

that the Boston Police Department has been frequenting areas notorious for MS13 gang activity?

In second place, it's Toom1275 on our post about Penguin Random House and Maus in the wake of its controversial removal from Tennessee schools. One commenter complained about calling this a "ban", Mike pointed out that the post discussed the fact that it's a bit more complicated and wondered if the complainer had read it, and Toom had a reply:

It seems unlikely anyone supporting the book ban would be a fan of reading.

For editor's choice on the funny side, we start out with David and one more similar crack about the gang database:

Are you rooting for the criminals?

After all, Ortiz has a record of associating with people who have a record of associating with people.

Finally, it's one more comment from That One Guy, this time in response to Apple opposing the trademark on an indie film, Apple Man:

Apple: Our customers are INCREDIBLY stupid

No, that makes perfect sense, why just last week I went to the grocery store because I heard they were selling apples and to my great surprise I was pointed towards a pile of fruit. How dare the store and it's staff deceive people by telling their customers that they are selling apples when they clearly are not, don't they understand that when someone hears 'apple' the only thing that comes to mind is electronics of various types?

That's all for this week, folks!

Leigh Beadon

This Week In Techdirt History: February 6th - 12th

Five Years Ago

This week in 2017, in the wake of Trump's racist executive order banning people from seven countries from entering the US, pretty much the entire tech industry stood up in opposition. Meanwhile, Ajit Pai was getting quickly to work saying one thing and doing another (not unlike the broadband providers themselves). The FBI was revealed to have even more surveillance powers than we thought, and was also changing its FOIA policies to be even more hostile.

Ten Years Ago

This week in 2012, more dominoes were falling on ACTA: the Romanian Prime Minister admitted he had no idea why Romania signed it, the Czech government suspended ratification, then Latvia did the same, and even Germany got cold feet — and soon the mainstream financial press was writing off ACTA as dead. Meanwhile, we took a look at who was still supporting SOPA and why, while Lamar Smith was defending another terrible internet bill, and the RIAA was just lashing out at everyone.

Fifteen Years Ago

This week in 2007, we looked at the collateral damage from Viacom's wave of YouTube takedowns and a top NBC executive's hatred of the site, while one guy was claiming to own the Electric Slide and issuing DMCA notices on wedding videos. We also got a closer look at how little it takes for the RIAA to fire off a flimsy DMCA notice, while the RIAA was spending its time trying to tell people they should be paying more for CDs. Meanwhile, we took a look at just how completely bogus the MPAA's claims of a Canadian camcording epidemic were.

Leigh Beadon

Danish Court Confirms Insane 'Little Mermaid' Copyright Ruling Against Newspaper Over Cartoon

If you're not a longtime Techdirt reader, you'll probably hear me say that there is a copyright infringement court case in Denmark and immediately wonder, "Yeesh, what did Disney do now?" But this is not a story about Disney. This is a story about the heirs of Edvard Eriksen, creator of a bronze statue of The Little Mermaid, inspired by the classic Hans Christian Andersen fairy tale, and their inability to let anyone in any way depict the statue or anything similar without being accosted with copyright actions. Most of the bullying actions by Eriksen's heirs have been, unbelievably, against other towns throughout the world for creating their own Little Mermaid statues: Greenville, Michigan, and the Danish city of Asaa, for example.

But less known are all the times Eriksen's heirs have gone after publications for showing pictures or other depictions of the statue. I won't pretend to be an expert in Danish copyright law, but if that country's laws are such that a newspaper or magazine cannot show a picture of one of the country's most famous landmarks, then that law is silly and should be changed or amended. Lest you think I must have this wrong, you can see a recent story of, not one, but two courts ruling that a newspaper must compensate Eriksen's heirs for a cartoon that depicted the statue on its pages.

An appeals court in Denmark has increased the compensation a newspaper was ordered to pay for violating the copyright of Copenhagen's The Little Mermaid statue with a cartoon depicting the bronze landmark as a zombie and a photo of it with a facemask.

The Berlingske newspaper published the cartoon in 2019 to illustrate an article about the debate culture in Denmark and used the photo in 2020 to represent a link between the far right and people fearing COVID-19.

For those of us reading this news in America, as well as many other nations, this all looks completely laughable. This is purely free speech stuff, protected in America by the First Amendment. Even getting past that, a cartoon of a statue is not a recreation of that statue, so copyright wouldn't even really apply. Plus it's parody and being used for commentary. Nothing about this makes sense.

And, yet, it must in Denmark because this 2nd court not only affirmed the ruling of the lower court but actually increased the compensation the newspaper was ordered to pay Eriksen's heirs.

Both were found to be infringements of the Danish Copyright Act. Copenhagen’s district court ordered the newspaper in 2020 to pay the heirs of Danish sculptor Edvard Eriksen 285,000 kroner ($44,000) in compensation. The appeals court on Wednesday raised the amount to 300,000 kroner ($46,000).

In a statement, the Eastern High Court in the Danish capital agreed with the lower court that “there was a violation of copyright" in the newspaper's actions. It did not give a reason for increasing the compensation amount but noted that Berlingske is a commercial venture since it wants to sell newspapers.

Again, this is all absurd. If the above rulings truly do comport with Danish copyright law, then all that suggests is that there needs to be an active movement in Denmark to amend the law. And, just to make this all the more frustrating, the copyright protections in Denmark are familiar: 70 years after the death of the author. In this case, that means Eriksen's heirs will only have this ability to bilk others for cash payments for the statue for another seven years.

Timothy Geigner

Analog Books Go From Strength To Strength: Helped, Not Hindered, By The Digital World

Many of the worst ideas in recent copyright laws have been driven by some influential companies’ fear of the transition from analog to digital. Whereas analog formats – vinyl, books, cinematic releases of films – are relatively easy to control, digital ones are not. Once a creation is in a digital form, anyone can make copies and distribute them on the Internet. Traditional copyright industries seem to think that digital versions of everything will be freely available everywhere, and that no one will ever buy analog versions. That’s not the case with vinyl records, and a recent post on Publishers Weekly suggests that analog books too, far from dying, are going from strength to strength:

Led by the fiction categories, unit sales of print books rose 8.9% in 2021 over 2020 at outlets that report to NPD BookScan. Units sold were 825.7 million last year, up from 757.9 million in 2020. BookScan captures approximately 85% of all print sales. In 2020, unit sales were up 8.2% over 2019, which saw 693.7 million print units sold.

The young adult fiction segment had the largest increase, with unit sales jumping 30.7%, while adult fiction sales rose 25.5%. Sales in the juvenile fiction category increased 9.6%.

The two years of increased sales is part of a longer-term trend, as this article from the New York Times in 2015 indicates:

the digital apocalypse never arrived, or at least not on schedule. While analysts once predicted that e-books would overtake print by 2015, digital sales have instead slowed sharply.

Now, there are signs that some e-book adopters are returning to print, or becoming hybrid readers, who juggle devices and paper. E-book sales fell by 10 percent in the first five months of this year, according to the Association of American Publishers, which collects data from nearly 1,200 publishers. Digital books accounted last year for around 20 percent of the market, roughly the same as they did a few years ago.

Digital formats possess certain advantages over analog ones, notably convenience. Today, you can access tens of millions of tracks online with music streaming services, and carry around thousands of ebooks on your phone. But many people evidently continue to appreciate the physicality of analog books, just as they like and buy vinyl records. The Publishers Weekly article also shows how the digital world is driving analog sales:

Gains in the young adult category were helped by several titles that benefitted from attention drummed up by BookTok, users of the social media platform TikTok who post about their favorite books. They Both Die at the End by Adam Silvera, released in December 2018, was the #1 title in the category, selling nearly 685,000 copies.

As a recent post on Walled Culture noted, if publishing companies were less paranoid about people sharing snippets of the books they love, on BookTok and elsewhere, the already significant analog sales they produce could be even higher. If the copyright industries want to derive the maximum benefit from the online world, they need to be brave, not bullying, as they so often are today.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Originally posted to Walled Culture.

Glyn Moody

Declassified Documents Show The CIA Is Using A 1981 Executive Order To Engage In Domestic Surveillance

When most people think of the CIA (Central Intelligence Agency), they think of a foreign-facing spy agency with a long history of state sponsored coup attempts (some successful!), attempted assassinations of foreign leaders, and putting the US in the torture business. What most people don't assume about the CIA is that it's also spying on Americans. After all, we prefer our embarrassments to be foreign-facing -- something that targets (and affects) people we don't really care about and governments we have been told are irredeemable.

An entity with the power to provoke military action halfway around the world has periodically shown an unhealthy interest in domestic affairs, which are supposed to be off-limits for the nation's most morally suspect spies. The CIA (along with the FBI) routinely abuses its powers to perform backdoor searches of foreign surveillance stashes to locate US-based communications. It also has asked the FBI to do its dirty secondhand surveillance work for it in order to bypass restrictions baked into Executive Order 12333 -- an executive order issued by Ronald Reagan that significantly expanded surveillance permissions for US agencies.

Perhaps most significantly -- at least in terms of this report -- the order instructed other government agencies to be more compliant with CIA requests for information. Since its debut in December 1981, the order has been modified twice (by George W. Bush) to give the government more power.

That's the authority the CIA has been using to spy on Americans, as a recent PCLOB (Privacy and Civil Liberties Oversight Board) report shows. The PCLOB performed a "deep dive" into CIA domestic spying at the request of Senators Ron Wyden and Martin Heinrich. After its completion, the senators asked for an unclassified version of the PCLOB's report. That report has arrived. And, according to Ron Wyden's statements, it shows the CIA is utilizing EO 12333 to spy on Americans and bypass the protections (however minimal) the FISA court provides to Americans.

“FISA gets all the attention because of the periodic congressional reauthorizations and the release of DOJ, ODNI and FISA Court documents,” said Senators Wyden and Heinrich in response to the newly declassified documents. “But what these documents demonstrate is that many of the same concerns that Americans have about their privacy and civil liberties also apply to how the CIA collects and handles information under executive order and outside the FISA law. In particular, these documents reveal serious problems associated with warrantless backdoor searches of Americans, the same issue that has generated bipartisan concern in the FISA context.”

Wyden and Heinrich called for more transparency from the CIA, including what kind of records were collected and the legal framework for the collection. The PCLOB report noted problems with CIA’s handling and searching of Americans’ information under the program.

Even if the spying isn't direct, the outcome is pretty much identical to direct targeting. With EO 12333, the CIA obtains from other federal agencies the compliance Ronald Reagan envisioned back in 1981, as his administration ran headlong into the CIA-implicating Iran-Contra scandal.

Domestic data is supposed to be "masked" if incidentally acquired by foreign-facing surveillance collections. Sometimes this simply doesn't happen. Sometimes unmasking occurs without proper permission or oversight. The FBI uses this to its advantage. So does the CIA. But the FBI handles domestic terrorism. The CIA does not. That makes the CIA's abuse possibly more egregious than the FBI's numerous violations of the same restrictions placed on domestic surveillance via foreign interception of communications by the NSA.

The PCLOB report [PDF] shows the CIA has obtained bulk financial data from other sources, possibly without proper masking of incidentally-collected US persons data. According to the CIA's response to the report, the only thing separating CIA analysts from US persons' data and communications is a pop-up box warning them that access may be illegal. This is only a warning. It does not (nor could it) prevent analysts from obtaining data they shouldn't have access to without explicit permission.

How extensive this "incidental" collection is remains to be seen. And there's a good chance no one will ever know how often this pop-up was ignored to collect data generated by US citizens and residents. Much of the report is redacted and what was shared with the PCLOB was limited to whatever the CIA felt like sharing. The oversight of programs like these is deliberately limited by the Executive Order -- one that made the assumption some things (like national security) are too important to be done properly or overseen directly.

The report does note that the CIA has internal processes to limit abuse of backdoor searches. But it also points out the CIA has read EO 12333 and its modifications to mean it can do what it wants when it wants without worrying too much about straying outside of the generous lines drawn by this Executive Order.

The limits include a requirement to use the “least intrusive collection techniques feasible within the United States or directed against United States persons abroad.” Annex A implements E.O. 12333’s “least intrusive collection technique” requirement regarding activities outside of the United States involving U.S. persons. Given that the Executive Order’s restriction only applies to activities in the United States or activities directed against U.S. persons abroad, the CIA interprets the language of Annex A to only apply to collections directed against USPs abroad. Annex A does not require [redacted] to apply the least intrusive collection technique to collections covered by this report, which are generally not directed against USPs.

There's the exploitable loop: the EO only applies to collections "directed" at US persons. Since all information is pulled from foreign-facing surveillance collections that "incidentally" collect US persons data, the resulting collection the CIA has access to is completely legal. Analysts access these collections specifically to find US persons' data, but because no agency deliberately targeted US persons, it's all above board.

This is the exploitation of foreign bulk collections to obtain information about Americans. Some may argue the damage is minimal because the CIA only accesses information (financial records) unlikely to carry an established expectation of privacy. People obviously know their financial institutions track their purchases, but that's not the same thing as assuming the government should be able to access those records -- which may contain sensitive information -- using nothing more than an Executive Order that was ostensibly written to strengthen foreign surveillance efforts.

And that's only what can be observed from this redacted release. This isn't the CIA's only attempt to hoover up info on US persons via side channels. Wyden's letter hints at FISA reforms, which likely refers to domestic phone records the NSA used to collect in bulk -- a program that was specifically targeted by Congress following the Snowden revelations. What's contained in this report is a narrow examination of one part of the CIA's exploitation of bulk collections to obtain US persons data. And if it feels this confident about its nearly unrestricted ability to perform these backdoor searches, examinations of other aspects of this program are likely to find other domestic data is ending up in the hands of CIA analysts who are supposed to be focused on foreign activities.

Tim Cushing

Can We Compare Dot-Com Bubble To Today's Web3/Blockchain Craze?

Recently, I re-read various discussions about the “dot-com bubble.” Surprisingly, they sounded all too familiar. I realized there are many similarities to today's techno-optimism and techno-pessimism around Web3 and Blockchain. We have people hyping up the future promises, while others express concerns about the bubble.

The Dot-Com Outspoken Optimism

In the mid-1990s, the dot-com boom was starting to gather steam. The key players in the tech ecosystem had blind faith in the inherent good of computers. Their vision of the future represented the broader Silicon Valley culture and the claim that the digital revolution “would bring an era of transformative abundance and prosperity.” Leading tech commentators celebrated the potential for advancing democracy and empowering people.

Most tech reporting pitted the creative force of technological innovation against established powers trying to tame its disruptive inevitability. Tech companies, in this storyline, represented the young and irreverent, gleefully smashing old traditions and hierarchies. The narrative was around “the mystique of the founders,” recalled Rowan Benecke. It was about “the brashness, the arrogance, but also the brilliance of these executives who were daring to take on established industries to find a better way.”

David Karpf examined “25 years of WIRED predictions” and looked back at how both Web 1.0 and Web 2.0 imagined a future that upended traditional economics: “We were all going to be millionaires, all going to be creators, all going to be collaborators.” However, “The bright future of abundance has, time and again, been waylaid by the present realities of earnings reports, venture investments, and shareholder capitalism. On its way to the many, the new wealth has consistently been diverted up to the few.”

The Dot-Com Outspoken Pessimism

During the dot-com boom, the theme around its predicted burst was actually prominent. “At the time, there were still people who said, ‘Silicon Valley is a bubble; this is all about to burst. None of these apps have a workable business model,’” said Casey Newton. “There was a lot of really negative coverage focused on ‘These businesses are going to collapse.’”

Kara Swisher shared that in the 1990s, a lot of the coverage was, “Look at this new cool thing.” But also, “the initial coverage was ‘this is a Ponzi scheme,’ or ‘this is not going to happen.’ When the Internet came, there was a huge amount of doubt about its efficacy. Way before it was doubt about the economics, it was doubt about whether anyone was going to use it.” Then, “it became clear that there was a lot of money to be made; the ‘gold rush’ mentality was on.”

At the end of 1999, this gold rush was mocked by San Francisco Magazine. “The Greed Issue” featured the headline “Made your Million Yet?” and stated that “Three local renegades have made it easy for all of us to hit it big trading online. Yeah…right.” Soon after, came the dot-com implosion.

“In 2000, the coverage became more critical,” explained Nick Wingfield. There was a sense that, “You do have to pay attention to profitability and to create sustainable businesses.” “There was this new economy, where you didn’t need to make profits, you just needed to get a product to market and to grow a market share and to grow eyeballs,” added Rowan Benecke. “It was ultimately its downfall at the dot-com crash.”

The Blockchain is Partying Like It’s 1999

While VCs are aggressively promoting Web3 - Crypto, NFTs, decentralized finance (DeFi) platforms, and a bunch of other Blockchain stuff - they are also getting more pushback. See, for example, the latest Marc Andreessen Twitter fight with Jack Dorsey, or listen to Box CEO Aaron Levie's conversation with Alex Kantrowitz. The reason the debate is heated is, in part, due to the amount of money being poured into it.

Web3 Outspoken Optimism

Andreessen Horowitz, for example, has just launched a new $2.2 billion cryptocurrency-focused fund. “The size of this fund speaks to the size of the opportunity before us: crypto is not only the future of finance but, as with the internet in the early days, is poised to transform all aspects of our lives,” a16z’s cryptocurrency group announced in a blog post. “We’re going all-in on the talented, visionary founders who are determined to be part of crypto’s next chapter.”

The vision of Web3’s believers is incredibly optimistic: “Developers, investors and early adopters imagine a future in which the technologies that enable Bitcoin and Ethereum will break up the concentrated power today's tech giants wield and usher in a golden age of individual empowerment and entrepreneurial freedom.” It will disrupt concentrations of power in banks, companies and billionaires, and deliver better ways for creators to profit from their work.

Web3 Outspoken Pessimism

Critics of the Web3 movement argue that its technology is hard to use and prone to failure. “Neither venture capital investment nor easy access to risky, highly inflated assets predicts lasting success and impact for a particular company or technology” (Tim O’Reilly).

Other critics attack “the amount of utopian bullshit” and call it a “dangerous get-rich-quick scam” (Matt Stolle) or even “worse than a Ponzi scheme” (Robert McCauley). “At its core, Web3 is a vapid marketing campaign that attempts to reframe the public’s negative associations of crypto assets into a false narrative about disruption of legacy tech company hegemony” (Stephen Diehl). “But you can’t stop a gold rush,” wrote Moxie Marlinspike. Sound familiar?

A “Big Bang of Decentralization” is NOT Coming

In his seminal “Protocols, Not Platforms,” Mike Masnick asserted that “if the token/cryptocurrency approach is shown to work as a method for supporting a successful protocol, it may even be more valuable to build these services as protocols, rather than as centralized, controlled platforms.” At the same time, he made it clear that even decentralized systems based on protocols will still likely end up with huge winners that control most of the market (like email and Google, for example. I recommend reading the whole piece if you haven’t already).

Currently, Web3 enthusiasts are hyping that a “Big Bang of decentralization” is coming. However, as the crypto market evolves, it is “becoming more centralized, with insiders retaining a greater share of the token” (Scott Galloway). As more people enter Web3, the more likely centralized services will become dominant. The power shift is already underway. See How OpenSea took over the NFT trade.

However, Mike Masnick also emphasized that decentralization keeps the large players in check. The distributed nature incentivizes the winners to act in the best interest of their users.

Are the new winners of Web3 going to act in their users’ best interests? If you watch Dan Olson’s “Line Goes Up – The Problem With NFTs” you will probably answer, “NO.”

From “Peak of Inflated Expectations” to “Trough of Disillusionment”

In Gartner’s Hype Cycle, it is expected that hyped technologies experience “correction” in the form of a crash: A “peak of inflated expectations” is followed by a “trough of disillusionment.” In this stage, the technology can still be promoted and developed, but at a slower pace. With regard to Web3, we might be reaching the apex of the "inflated expectations." Unfortunately, there will be a few big winners and a “long tail” of losers in the upcoming “disillusionment.”

Previous evolutions of the web had this "power law distribution". Blogs, for example, were marketed as a megaphone for anyone with a keyboard. It was amazing to have access to distribution and an audience. But when you have more blogs than stars in the sky, only a fraction of them can rise to power. Accordingly, only a few of Web3’s new empowering initiatives will ultimately succeed. Then, “on its way to the many,” the question remains “would the new wealth be diverted up to the few?” As per the history of the web, in a "winner-take-all" world, the next iteration wouldn't be different. 

From a “Bubble” to a “Balloon”

Going through the dot-com descriptions and then the current Web3 debate feels like déjà vu. Nonetheless, just as I argue that tech coverage should be neither Techlash (“tech is a threat”) nor Techlust (“tech is our savior”) but rather Tech Realism, I also argue the Web3 debate should land on neither “bubble burst” nor “golden age,” but rather somewhere in the middle.

A useful description of this middle was recently offered by M.G. Siegler, who said the tech bubble is not a bubble but a balloon. Following his line of thought, instead of a bubble, Web3 can be viewed as a “deflating balloons ecosystem”: The overhyped parts of Web3 might burst and affect the whole ecosystem, but most valuations and promises will just return closer to earth.

That’s where they should be in the first place.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication

Nirit Weiss-Blatt

Cop Trainer Encouraging Cops To Run Facial Recognition Searches On People During Traffic Stops

Cops are out there giving each other bad advice. An instructor for Street Cop Training -- a New Jersey-based provider of officer training programs -- is telling officers it's ok to run facial recognition searches during routine traffic stops, when he's not encouraging them to go further with their potential rights violations.

In a podcast recently uncovered by Caroline Haskins for Insider, Maryland detective Nick Jerman tells listeners there's nothing wrong with running a facial image against publicly available databases during a traffic stop.

In a July 2021 episode of the Street Cop Podcast with Dennis Benigno, the company's founder, Jerman encouraged using facial recognition software to determine the identity of the person pulled over. The Street Cop Podcast is advertised as "The training that cops deserve" and, along with Street Cop Training's other programs, is marketed to active-duty police.

"Let's say you're on a traffic stop and we have someone in the car that we suspect may be wanted," Benigno asked during the episode. "What do we do in that situation?"

"Well there's a couple of paid programs you can use where you can take their picture, and it'll put it in," Jerman said, referring to facial recognition tools, before recommending "another one called PimEyes you can use." PimEyes is a free, public-facing facial-recognition search engine.

The legality of running searches like this is still up in the air. If there's nothing beyond suspicion a vehicle occupant might be a wanted suspect, officers would likely have to develop something a little more reasonable before engaging in searches -- like utilizing a facial recognition program -- unrelated to the traffic stop. And in some states and cities, it is very definitely illegal, thanks to recent facial recognition tech bans. Just because the cops may not own the tech utilized during these searches doesn't necessarily make actions like these legal.

But that's not the only potential illegality Detective Jerman (who, as Haskins points out, is currently being investigated by his department over some very questionable social media posts) encourages. He notes that in many states officers cannot demand people they stop ID themselves, especially when they're just passengers in a vehicle. He recommends this bit of subterfuge to obtain this information without consent.

"How about, you're in a situation where you can't compel ID and before you even ask you're like there's something not right with this guy and he's gonna lie," Benigno said.

Jerman suggested getting the person's phone number, either by asking the person, or by accusing the person of stealing a phone in the car and asking if they can call the phone in order to exonerate them.

"[Say] 'I see that phone in the car, we've had a lot of thefts of phones,' say 'Is that really your phone?' and then you can call it to see if that's the real phone number," Jerman said. "If you can get the phone number from your target, the world is your oyster."

Once a cop has a phone number, they can use third-party services to discover the phone owner's name and may be able to find any social media accounts associated with that phone number. The request may sound innocuous -- seeking to see if a phone is stolen -- but the end result may be someone unwittingly sharing a great deal about themselves with an officer.

Detective Jerman also provides classes on how to create fake social media accounts using freely accessible tools. He does this despite knowing it's a terms of service violation and appears to believe that since there's no law against it, officers should avail themselves of this subterfuge option. He has also made social media posts mocking Facebook and others for telling cops they're breaking the platform's rules when they do this.

But far more worrisome is something he admitted on another Street Cop Training podcast:

He recounted that at a wedding a few years ago, his friend wanted to approach a woman in a red dress because he "thought she was pretty hot." Jerman said that on the spot, he did a geofence Instagram search for recent posts near the wedding venue. He found a picture with the woman in the red dress, named Marilisa, posted by her friend, Amanda.

"Then you can start gaining intel on Amanda, then you can go back to Marilisa and start talking to her as if you know her friend Amanda," Jerman said.

Even his host, Street Cop Training founder Dennis Benigno, seemed to consider Jerman's actions to be a little creepy. But that appears to be Detective Jerman's MO: the exploitation of any service or platform to obtain information on anyone he runs into, whether it's at a wedding or during a pretextual traffic stop.

Despite Jerman's insistence that none of this breaks any laws, the actual legality of these actions is still up in the air. The lack of courtroom precedent saying otherwise is not synonymous with "lawful." Cases involving tactics like these are bound to result in challenges of arrests or evidence, and it's not immediately clear running unjustified searches clears the (very low) bar for reasonableness during investigative stops.

However, Jerman's big mouth and enthusiasm for exploitation should make it clear what's at stake when cops start asking questions, no matter how innocuous the questions may initially appear. And documents like the one obtained by Insider -- one that lists dozens of publicly accessible search tools and facial recognition AI -- should serve as a warning to anyone stopped by police officers. Imagine the creepiest things a stalker might do to obtain information about you. Now, imagine all of that in the hands of someone with an incredible amount of power, easy access to weapons, and an insular shield of non-accountability surrounding them.

Tim Cushing

Daily Deal: The Complete 2022 Microsoft Office Master Class Bundle

The Complete 2022 Microsoft Office Master Class Bundle has 14 courses to help you learn all you need to know about MS Office products to help boost your productivity. Courses cover SharePoint, Word, Excel, Access, Outlook, Teams, and more. The bundle is on sale for $75.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

Penguin Random House Demands Removal Of Maus From Digital Library Because The Book Is Popular Again

We've said it over and over again: if libraries did not exist today, there is no way publishers would allow them to come into existence. We know this, in part, because of their attempts to stop libraries from lending ebooks, and to price ebooks at ridiculous markups to discourage libraries, and their outright claims that libraries are unfair competition. And we won't even touch on their lawsuit over digital libraries.

Anyway, in other book news, you may have heard recently about how a Tennessee school board banned Art Spiegelman's classic graphic novel about the Holocaust, Maus, from being taught in an eighth-grade English class. Some people called this a ban, while others said the book is still available, so it's not a "ban." To me, school boards are not the teachers, and the teachers should be able to come up with their own curriculum, as they know best what will educate their students. Also, Maus is a fantastic book, and the claim that it was banned because of "rough, objectionable language" and nudity is utter nonsense.

Either way, Maus is now back atop various best seller lists, as the controversy has driven sales. Spiegelman is giving fun interviews again where he says things like "well, who's the snowflake now?" And we see op-eds about how the best way to get kids not to read a book... is to assign it in English class.

But, also, we have publishers getting into the banning business themselves... by trying to capitalize on the sudden new interest in Maus.

Penguin Random House doesn't want this new interest in Maus to lead to... people taking it out of the library rather than buying a copy. They're now abusing copyright law to demand the book be removed from the Internet Archive's lending library, and they flat out admit that they're doing so for their own bottom line:

A few days ago, Penguin Random House, the publisher of Maus, Art Spiegelman's Pulitzer Prize-winning graphic novel about the Holocaust, demanded that the Internet Archive remove the book from our lending library. Why? Because, in their words, "consumer interest in 'Maus' has soared" as the result of a Tennessee school board's decision to ban teaching the book. By its own admission, to maximize profits, a Goliath of the publishing industry is forbidding our non-profit library from lending a banned book to our patrons: a real live digital book-burning.

This is just blatant greed laid bare. As the article notes, whatever problems US copyright law has, it has enshrined the concept of libraries, and the right to lend out books as a key element of the public interest. And the publishers -- such as giants like Penguin Random House -- would do anything possible to stamp that right out.

Mike Masnick

Unknown American VC Firm Apparently Looking To Acquire NSO Group, Limit It To Selling To Five Eyes Countries

NSO Group -- the embattled, extremely controversial Israeli phone malware developer -- finally has some good news to report. It may have a white knight riding to its rescue -- a somewhat unknown American venture capital firm that could help it pay its bills and possibly even rehabilitate its image.

Integrity Partners, which according to its website deals with investments in the fields of mobility and digital infrastructure, is managed by partners Chris Gaertner, Elad Yoran, Pat Wilkinson and Thomas Morgan.

According to the document of intentions, they will establish a company called Integrity Labs that would acquire control of NSO. It would also stream $300 million to the firm, in order to rebuild the company.

It's not all good news, at least not at the outset. The VC firm has pledged to lobby the US government on NSO's behalf to get the recent blacklist lifted, which means NSO would once again be able to purchase US tech solely for the purpose of developing exploits to use against that tech. If Integrity Partners has any interest in remaining true to its name, it should probably backburner this effort until it has engaged in some reputation rehabilitation.

Fortunately, it appears the VC firm is also interested in getting NSO back on the right track. Following neverending reports of NSO exploits being used to target journalists, political opponents, ex-wives, dissidents, and religious leaders, the government of Israel drastically reduced the number of countries NSO could sell to.

Integrity Labs aims to limit that list even further.

Instead of the current 37 clients, the company will reduce its sales to only five clients: the Five Eyes Anglosphere intelligence alliance of New Zealand, the United States, Australia, Great Britain and Canada. The company would initially focus on defensive cyber products as part of its rebranding effort.

With these restrictions in place -- and the United States on the preferred customer list -- it should be pretty easy to get the blacklist lifted. It's not that none of these countries would ever abuse malware to engage in domestic surveillance, but it's a far better list of potential clients than the one NSO had compiled over the last several years, which included a number of known habitual human rights abusers.

But there are still reasons to be concerned. Much of what happens to NSO after this acquisition occurs will still be shrouded in secrecy. There may be a claimed focus on defensive tech, but offensive exploits have always been NSO's main money makers and it will be much more difficult to remain profitable without this revenue stream.

Then there's the chance NSO will enter into a partnership with a different company that may not have the same altruistic goals, which means the malware developer will be able to continue limping along as the poster child for irresponsible sales and marketing. And the market for powerful malware will continue to exist. It will just end up being handled by companies that have remained mostly off the world press radar.

Also, there's the fact that there's very little information about who "Integrity Partners" actually is. While the firm's website lists its partners -- all of whom mention their military experience -- there is no evidence of a portfolio, or any evidence of previous investments. While the firm is listed in Crunchbase (the main database tracking VCs and startups), it shows no investments, and only mentions a single fund the firm has raised... for $350,000. It seems unlikely that that's enough to buy NSO Group.

For now, NSO's financial well-being and reputation are in tatters. The company cannot meet its debt obligations without outside help and its ruinous months-long streak of negative press presents challenges even a timely influx of cash may not be able to reverse. But if it can rebrand and retool to provide defensive tech to a very short list of customers it may be able to survive its precipitous plunge into the "Tech's Most Hated" pool.

Tim Cushing

Minneapolis Police Officers Demanded No-Knock Warrant, Killed Innocent Gunowner Nine Seconds After Entering Residence

The city of Minneapolis, Minnesota is temporarily ending the use of no-knock warrants following the killing of 22-year-old Amir Locke by Minneapolis police officers. The city's mayor, Jacob Frey, has placed a moratorium on these warrants until the policy can be reviewed by Professor Pete Kraska of Eastern Kentucky University and anti-police violence activist DeRay McKesson.

This comes as too little too late for Locke and his surviving family. The entire raid was caught on body cam and it shows Amir Locke picking up a gun (but not pointing it at officers) after he was awakened by police officers swarming into the residence.

Locke, who was not a target of the investigation, was sleeping in the downtown Minneapolis apartment of a relative when members of a Minneapolis police SWAT team burst in shortly before 7 a.m. Wednesday. Footage from one of the officers' body cameras showed police quietly unlocking the apartment door with a key before barging inside, yelling "Search warrant!" as Locke lay under a blanket on the couch. An officer kicked the couch, Locke stirred and was shot by officer Mark Hanneman within seconds as Locke held a firearm in his right hand.

Locke was shot once in the wrist and twice in the chest. He died thirteen minutes after the shooting. As you may have noticed from the preceding paragraph, Locke was not a suspected criminal. And for those who may argue simply being within reach of a firearm is justification for shooting, Locke's handgun was legal and he had a concealed carry permit. His justifiable reaction to people barging into an apartment unannounced is somehow considered less justifiable than the officers' decision to kill him.

In most cases, that's just the way it goes, which -- assuming the warrant dotted all i's and crossed all t's -- means the Second Amendment is subservient to other constitutional amendments, like the Fourth. Here's how Scott Greenfield explains this omnipresent friction in a nation where the right to bear arms is respected… but only up to a point:

The Second Amendment issue is clear. Locke had a legal gun and, upon being awoken in the night, grabbed it. He didn’t point it at anyone or put his finger on the trigger, but it was in his hand. A cop might explain that it would only take a fraction of a second for that to change, if he was inclined to point it at an officer, put his finger on the trigger and shoot. But he didn’t.

This conundrum has been noted and argued before, that if there is a fundamental personal right to keep and bear arms, and that’s what the Supreme Court informs us is our right, then the exercise of that constitutional right cannot automatically give right to police to execute you for it. The Reasonably Scared Cop Rule cannot co-exist with the Right to Keep and Bear Arms.

"Cannot co-exist." This means that, in most cases, the citizen bearing arms generally ceases to exist (along with this right) when confronted by a law enforcement officer who believes they are reasonably afraid.

There's another point to Greenfield's post that's worth reading, but one we won't discuss further in this post: the NRA's utter unwillingness to express outrage when the right to bear arms is converted to the right to remain permanently silent by police officers who have deliberately put themselves in a situation that maximizes their fears, no matter how unreasonable those fears might ultimately turn out to be.

But this is a situation that could have been avoided. A knock-and-announce warrant would have informed Locke (who was sleeping at a relative's house) that law enforcement was outside. As the owner of a legal gun and conceal/carry permit, it's highly unlikely this announcement would have resulted in Locke opening fire on officers.

It didn't have to be this way, but the Minneapolis Police Department insisted this couldn't be handled any other way.

A law enforcement source, who spoke on the condition of anonymity because of the sensitive nature of the case, said that St. Paul police filed standard applications for search warrant affidavits for three separate apartments at the Bolero Flats Apartment Homes, at 1117 S. Marquette Av., earlier this week.

But Minneapolis police demanded that, if their officers were to execute the search within its jurisdiction, St. Paul police first secure "no-knock" warrants instead. MPD would not have agreed to execute the search otherwise, according to the law enforcement source.

If it had been handled the St. Paul way, Locke might still be alive. There's no evidence here indicating deployment of a knock-and-announce warrant would have made things more dangerous for the officers. If this sort of heightened risk presented itself frequently, the St. Paul PD would respond accordingly when seeking warrants.

St. Paul police very rarely execute no-knock warrants because they are considered high-risk. St. Paul police have not served such a warrant since 2016, said department spokesman Steve Linders.

Contrast that with the Minneapolis PD, which appears to feel a majority of warrant service should be performed without niceties like knocking or announcing their presence.

A Star Tribune review of available court records found that MPD personnel have filed for, and obtained, at least 13 applications for no-knock or nighttime warrants since the start of the year — more than the 12 standard search warrants sought in that same span.

This is likely an undercount, the Star Tribune notes. Many warrants are filed under seal and are still inaccessible. But it does track with the MPD's deployment stats. According to records, the MPD carries out an average of 139 no-knock warrants a year.

This happens despite Minneapolis PD policy specifically stating officers are supposed to identify themselves as police and announce their purpose (i.e., "search warrant") before entering. That rule applies even if officers have secured a no-knock warrant. If officers wish to bypass this policy that applies to no-knock warrants, they need more than a judge's permission. They also need direct permission from the Chief of Police or their designee. That's because no-knock warrants were severely restricted by police reforms passed in 2020. But it appears those reforms have done little to change the way the MPD handles its warrant business.

We'll see if the mayor's moratorium is more effective than the tepid reforms enacted following the killing of George Floyd by Officer Derek Chauvin. The undetectable change in tactics following the 2020 reforms doesn't exactly give one confidence a citywide moratorium will keep MPD officers from showing up unannounced and killing people during the ensuing confusion. It only took nine seconds for officers to end Amir Locke's life. Given what's been observed here it will apparently take several years (and several lives) before the Minneapolis PD will be willing to alter its culture and its day-to-day practices.

Tim Cushing

The Top Ten Mistakes Senators Made During Today's EARN IT Markup

Today, the Senate Judiciary Committee unanimously approved the EARN IT Act and sent that legislation to the Senate floor. As drafted, the bill will be a disaster. Only by monitoring what users communicate could tech services avoid vast new liability, and only by abandoning, or compromising, end-to-end encryption, could they implement such monitoring. Thus, the bill poses a dire threat to the privacy, security and safety of law-abiding Internet users around the world, especially those whose lives depend on having messaging tools that governments cannot crack. Aiding such dissidents is precisely why it was the U.S. government that initially funded the development of the end-to-end encryption (E2EE) now found in Signal, Whatsapp and other such tools. Even worse, the bill will do the opposite of what it claims: instead of helping law enforcement crack down on child sexual abuse material (CSAM), the bill will actually help the most odious criminals walk free.

As with the July 2020 markup of the last Congress’s version of this bill, the vote was unanimous. This time, no amendments were adopted; indeed, none were even put up for a vote. We knew there wouldn’t be much time for discussion because Sen. Dick Durbin kicked off the discussion by noting that Sen. Lindsey Graham would have to leave soon for a floor vote. 

The Committee didn’t bother holding a hearing on the bill before rushing it to markup. The one and only hearing on the bill occurred just six days after its introduction back in March 2020. The Committee thereafter made major (but largely cosmetic) changes to the bill, leaving its Members more confused than ever about what the bill actually does. Today’s markup was a singular low point in the history of what is supposed to be one of the most serious bodies in Congress. It showed that there is nothing remotely judicious about the Judiciary Committee; that most of its members have little understanding of the Internet and even less of how the, ahem, judiciary actually works; and, saddest of all, that they simply do not care.

Here are the top ten legal and technical mistakes the Committee made today.

Mistake #1: “Encryption Is Not Threatened by This Bill”

Strong encryption is essential to online life today. It protects our commerce and our communications from the prying eyes of criminals, hostile authoritarian regimes and other malicious actors.

Sen. Richard Blumenthal called encryption a “red herring,” relying on his work with Sen. Leahy’s office to implement language from Leahy’s 2020 amendment to the previous version of EARN IT (even as he admitted to a reporter that encryption was a target). Leahy’s 2020 amendment aimed to preserve companies’ ability to offer secure encryption in their products by providing that a company could not be found in violation of the law because it utilizes secure encryption, doesn’t have the ability to decrypt communications, or fails to undermine the security of its own encryption (for example, by building in a backdoor for use by law enforcement).

But while the 2022 EARN IT Act contains the same list of protected activities, the authors snuck in new language that undermines that very protection. This version of the bill says that those activities can’t be an independent basis of liability, but that courts can consider them as evidence while proving the civil and criminal claims permitted by the bill’s provisions. That’s a big deal. EARN IT opens the door to liability under an enormous number of state civil and criminal laws, some of which require (or could require, if state legislatures so choose) a showing that a company was only reckless in its actions—a far lower showing than federal law’s requirement that a defendant have acted “knowingly.” If a court can consider the use of encryption, or failure to create security flaws in that encryption, as evidence that a company was “reckless,” it is effectively the same as imposing liability for encryption itself. No sane company would take the chance of being found liable for transmitting CSAM; they’ll just stop offering strong encryption instead. 

Mistake #2: The Bill’s Sponsors Readily Conceded that EARN IT Would Coerce Monitoring for CSAM

EARN IT’s sponsors repeatedly complained that tech companies aren’t doing enough to monitor for CSAM—and that their goal was to force them to do more. As Sen. Blumenthal noted, free software (PhotoDNA) makes it easy to detect CSAM, and it’s simply outrageous that some sites aren’t even using it. He didn’t get specific but we will: both Parler and Gettr, the alternative social networks favored by the MAGA right, have refused to use PhotoDNA. When asked about it, Parler’s COO told The Washington Post: “I don’t look for that content, so why should I know it exists?" The Stanford Internet Observatory’s David Thiel responded:

This, frankly, is just reckless. You cannot run a social media site, particularly one targeted to include content forbidden from mainstream platforms, solely with voluntary flagging. Implementing PhotoDNA to prevent CEI is the bare minimum for a site allowing image uploads. 9/10

— David Thiel (@elegant_wallaby) August 12, 2021

We agree completely—morally. So why, as Berin asked when EARN IT was first introduced, doesn’t Congress just directly mandate the use of such easy filtering tools? The answer lies in understanding why Parler and Gettr can get away with this today. Back in 2008, Congress required tech companies that become aware of CSAM to report it immediately to NCMEC, the quasi-governmental clearinghouse that administers the database of CSAM hashes used by PhotoDNA to identify known CSAM. Instead of requiring companies to monitor for CSAM, Congress said exactly the opposite: nothing in 18 U.S.C. § 2258A “shall be construed to require a provider to monitor [for CSAM].”

Why? Was Congress soft on child predators back then? Obviously not. Just the opposite: they understood that requiring tech companies to conduct searches for CSAM would make them state actors subject to the Fourth Amendment’s warrant requirement—and they didn’t want to jeopardize criminal prosecutions. 

Conceding that the purpose of the EARN IT Act is to coerce searches for CSAM is a mistake, a colossal one, because it invites courts to rule that such searches aren't voluntary, and thus constitute government action subject to the Fourth Amendment.
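Since hash matching does a lot of work in this debate, here is a minimal sketch of the underlying idea, with ordinary SHA-256 file digests standing in for PhotoDNA's proprietary perceptual hashes (which, unlike SHA-256, tolerate resizing and re-encoding). The hash list and upload directory below are hypothetical placeholders, not anything NCMEC or Microsoft actually distributes.

import hashlib
from pathlib import Path

# Hypothetical database of digests of known prohibited images. A real
# deployment syncs perceptual hashes from NCMEC's clearinghouse rather
# than hard-coding cryptographic digests like this.
KNOWN_HASHES = {
    # placeholder entry: the SHA-256 digest of an empty file
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_match(path: Path) -> bool:
    """True if the file's digest appears in the known-image database."""
    return file_digest(path) in KNOWN_HASHES

if __name__ == "__main__":
    uploads = Path("uploads")  # hypothetical upload directory
    if uploads.is_dir():
        for upload in uploads.iterdir():
            if upload.is_file() and is_known_match(upload):
                print(f"{upload}: matches a known hash and would be flagged for review")

The key limitation is the same one discussed throughout this piece: matching only catches known, already-hashed images, and it only works on content the provider can actually see.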

Mistake #3: The Leahy Amendment Alone Won’t Protect Privacy & Security, or Avoid Triggering the Fourth Amendment

While Sen. Leahy’s 2020 amendment was a positive step towards protecting the privacy and security of online communications, and Lee’s proposal today to revive it is welcome, it was always an incomplete solution. While it protected companies against liability for offering encryption or failing to undermine the security of their encryption, it did not protect the refusal to conduct monitoring of user communications. A company offering E2EE products might still be coerced into compromising the security of its devices by scanning user communications “client-side” (i.e., on the device) prior to encrypting sent communications or after decrypting received communications. 

Apple recently proposed such a client-side scanning technology, raising concerns from privacy advocates and civil society groups. For its part, Apple insisted that safeguards would limit the system to known CSAM and prevent the capability from being abused by foreign governments or rogue actors. But the capacity to conduct such surveillance presents an inherent risk of being exploited by malicious actors. Some companies may be able to successfully safeguard such surveillance architecture from misuse or exploitation. However, resources and approaches will vary across companies, and it is a virtual certainty that not all of them will be successful. And if such scanning is done under coercion, there is a risk that it will be ruled state action requiring a warrant under the Fourth Amendment.
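To make the mechanics concrete, here is a rough, hypothetical sketch of where client-side scanning sits in a messaging pipeline. This is not Apple's actual design (which used a perceptual "NeuralHash" plus threshold cryptography); it only shows that the scan happens on the device, before encryption, which is why it sidesteps the protections E2EE otherwise provides.

import hashlib

# Hypothetical hash list a provider pushes to the user's device.
ON_DEVICE_HASH_LIST = {"0" * 64}

def matches_known_content(plaintext: bytes) -> bool:
    # The check runs while the content is still readable on the device.
    return hashlib.sha256(plaintext).hexdigest() in ON_DEVICE_HASH_LIST

def send_message(plaintext: bytes) -> None:
    # The scan happens *before* encryption, so the scanning hook sees
    # everything the user sends, no matter how strong the encryption
    # applied afterwards is.
    if matches_known_content(plaintext):
        print("match: content would be flagged and reported before sending")
    # (end-to-end encryption and transport would happen here)

send_message(b"hello")  # no match; the message proceeds as normal

Whether the matching uses plain hashes or something fancier, the structural point is the same: the device is searching the user's own content on someone else's behalf before that content is ever encrypted, and if that search is legally coerced, the Fourth Amendment question above comes right back.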

Our letter to the Committee proposes an easy way to expand the Leahy amendment to ensure that companies won’t be held liable for not monitoring user content: borrow language directly from Section 2258A(f).

Mistake #4: EARN IT’s Sponsors Just Don’t Understand the Fourth Amendment Problem

Sen. Blumenthal insisted, repeatedly, that EARN IT contained no explicit requirement not to use encryption. The original version of the bill would, indeed, have allowed a commission to develop “best practices” that would be “required” as conditions of “earning” back the Section 230 immunity tech companies need to operate—hence the bill’s name. But dropping that concept didn’t really make the bill less coercive because the commission and its recommendations were always a sideshow. The bill has always coerced monitoring of user communications—and, to do that, the abandonment or bypassing of strong encryption—indirectly, through the threat of vast legal liability for not doing enough to stop the spread of CSAM. 

Blumenthal simply misunderstands how the courts assess whether a company is conducting unconstitutional warrantless searches as a “government actor.” “Even when a search is not required by law, … if a statute or regulation so strongly encourages a private party to conduct a search that the search is not ‘primarily the result of private initiative,’ then the Fourth Amendment applies.” U.S. v. Stevenson, 727 F.3d 826, 829 (8th Cir. 2013) (quoting Skinner v. Railway Labor Executives' Assn, 489 U.S. 602, 615 (1989)). In that case, the court found that AOL was not a government actor because it “began using the filtering process for business reasons: to detect files that threaten the operation of AOL's network, like malware and spam, as well as files containing what the affidavit describes as “reputational” threats, like images depicting child pornography.” AOL insisted that it “operate[d] its file-scanning program independently of any government program designed to identify either sex-offenders or images of child pornography, and the government never asked AOL to scan Stevenson's e-mail.” Id. By contrast, every time EARN IT’s supporters explain their bill, they make clear that they intend to force companies to search user communications in ways they’re not doing today.

Mistake #2 Again: EARN IT’s Sponsors Make Clear that Coercion Is the Point

In his opening remarks today, Sen. Graham didn’t hide the ball:

"Our goal is to tell the social media companies 'get involved and stop this crap. And if you don't take responsibility for what's on your platform, then Section 230 will not be there for you.' And it's never going to end until we change the game."

Sen. Chris Coons added that he is “hopeful that this will send a strong signal that technology companies … need to do more.” And so on and so forth.

If they had any idea what they were doing, if they understood the Fourth Amendment issue, these Senators would never admit that they’re using liability as a cudgel to force companies to take affirmative steps to combat CSAM. By making their intentions unmistakable, they’ve given the most vile criminals exactly what they need to challenge the admissibility of CSAM evidence resulting from companies “getting involved” and “doing more.” Though some companies, concerned with negative publicity, may tell courts that they conducted searches of user communications for “business reasons,” we know what defendants will argue: the companies’ “business reason” is avoiding the wide, loose liability that EARN IT subjects them to. EARN IT’s sponsors said so.

Mistake #5: EARN IT’s Sponsors Misunderstand How Liability Would Work

Except for Sen. Mike Lee, no one on the Committee seemed to understand what kind of liability rolling back Section 230 immunity, as EARN IT does, would create. Sen. Blumenthal repeatedly claimed that the bill requires actual knowledge. One of the bill’s amendments (the new Section 230(e)(6)(A)) would, indeed, require actual knowledge by enabling civil claims under 18 U.S.C. § 2255 “if the conduct underlying the claim constitutes a violation of section 2252 or section 2252A,” both of which contain knowledge requirements. This amendment is certainly an improvement over the original version of EARN IT, which would have explicitly allowed 2255 claims under a recklessness standard. 

But the two other changes to Section 230 clearly don’t require knowledge. As Sen. Lee pointed out today, a church could be sued, or even prosecuted, simply because someone posted CSAM on its bulletin board. Multiple existing state laws already create liability based on something less than actual knowledge of CSAM. As Lee noted, a state could pass a law creating strict liability for hosting CSAM. Allowing states to hold websites liable for recklessness (or even less) while claiming that the bill requires actual knowledge is simply dishonest. All these less-than-knowledge standards will have the same result: coercing sites into monitoring user communications, and into abandoning strong encryption as an obstacle to such monitoring. 

Blumenthal made it clear that this is precisely what he intends, saying: “Other states may wish to follow [those using the “recklessness” standard]. As Justice Brandeis said, states are the laboratories of democracy … and as a former state attorney general I welcome states using that flexibility. I would be loath to straightjacket them in their adoption of different standards.”

Mistake #6: “This Is a Criminal Statute, This Is Not Civil Liability”

So said Sen. Lindsey Graham, apparently forgetting what his own bill says. Sen. Dianne Feinstein added her own misunderstanding, saying that she “didn’t know that there was a blanket immunity in this area of the law.” But if either of those statements were true, the EARN IT Act wouldn’t really do much at all. Section 230 has always explicitly carved out federal criminal law from its immunities; companies can already be charged for knowing distribution of child sexual abuse material (CSAM) or child sexual exploitation (CSE) under federal criminal statutes. Indeed, Backpage and its founders were criminally prosecuted even without SESTA’s 2018 changes to Section 230. If the federal government needs assistance in enforcing those laws, it could adopt Sen. Mike Lee’s amendment to permit state criminal prosecutions when the conduct would constitute a violation of federal law. Better yet, the Attorney General could use an existing federal law (28 U.S.C. § 543) to deputize state, local, and tribal prosecutors as “special attorneys” empowered to prosecute violations of federal law. Why no AG has bothered to do so yet is unclear.

What is clear is that EARN IT isn’t just about criminal law. EARN IT expressly carves out civil claims under certain federal statutes, and also under whatever state laws arguably relate to “the advertisement, promotion, presentation, distribution, or solicitation of child sexual abuse material” as defined by federal law. Those laws can and do vary, not only with respect to the substance of what is prohibited, but also the mental state required for liability. This expansive breadth of potential civil liability is part of what makes this bill so dangerous in the first place.

Mistake #7: “If They Can Censor Conservatives, They Can Stop CSAM!”

As at the 2020 markup, Sen. Lee seemed to understand most clearly how EARN IT would work, the Fourth Amendment problems it raises, and how to fix at least some of them. A former Supreme Court Clerk, Lee has a sharp legal mind, but he seems to misunderstand much of how the bill would work in practice, and how content moderation works more generally.

Lee complained that, if Big Tech companies can be so aggressive in “censoring” speech they don’t like, surely they can do the same for CSAM. He’s mixing apples and oranges in two ways. First, CSAM is the digital equivalent of radioactive waste: if a platform gains knowledge of it, it must take it down immediately and report it to NCMEC, and it faces stiff criminal penalties if it doesn’t. And while “free speech” platforms like Parler and Gettr refuse to proactively monitor for CSAM (as discussed below), every mainstream service goes out of its way to stamp out CSAM on its unencrypted services. Like AOL in the Stevenson case, they do so for business and reputational reasons.

By contrast, no website even tries to block all “conservative” speech; rather, mainstream platforms must make difficult judgment calls about taking down politically charged content, such as Trump’s account (removed only after he incited an insurrection in an attempted coup) and misinformation about the 2020 election being stolen. Republicans are mad about where tech companies draw such lines.

Second, social media platforms can only moderate content that they can monitor. Signal can’t moderate user content, and that is precisely the point: end-to-end encryption means that no one other than the parties to a communication can see it. Unlike with ordinary communications, which may be protected by lesser forms of “encryption,” the provider isn’t standing in the middle of the communication and doesn’t have the keys to unlock the messages it is passing back and forth. Yes, some users will abuse E2EE to share CSAM, but the alternative is to ban it for everyone. There simply isn’t a middle ground.
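For readers who want to see why that is, here is a minimal sketch using the PyNaCl library (assumed installed via pip install pynacl). The key pairs exist only on the endpoints; a provider relaying the ciphertext has nothing it could scan or hand over.

# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
alice_secret = PrivateKey.generate()
bob_secret = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
sending_box = Box(alice_secret, bob_secret.public_key)
ciphertext = sending_box.encrypt(b"meet at 6")

# The provider only ever relays `ciphertext`. Without one of the private
# keys, it cannot decrypt the message -- there is no escrow to query.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_secret, alice_secret.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at 6"

Any scheme in which the provider keeps a copy of the keys, or inspects the plaintext before this step, is by definition not end-to-end.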

There may indeed be more that some tech companies could do about content they can see—both public content like social media posts and private content like messages (protected by something less than E2EE). But their being aggressive about, say, misinformation about COVID or the 2020 election has nothing whatsoever to do with the cold, hard reality that they can’t moderate content protected by strong encryption.

It’s hard to tell whether Lee understands these distinctions. Maybe not. Maybe he’s just looking to wave the bloody shirt of “censorship” again. Maybe he’s saying the same thing everyone else is saying, essentially: “Ah, yes, but if only Facebook, Apple and Google didn’t use end-to-end encryption for their messaging services, then they could monitor those for CSAM just like they monitor and moderate other content!” Proposing to amend the bill to require actual knowledge under both state and federal law suggests he doesn’t want this result, but who knows?

Mistake #8: Assuming the Fourth Amendment Won’t Require Warrants If It Applies

Visibility to the provider relates to one important legal distinction not discussed at all today—but that may well explain why the bill’s sponsors don’t seem to care about Fourth Amendment concerns. It’s an argument Senate staffers have used to defend the bill since its introduction. Even if compulsion through vast legal liability did make tech companies government actors, the Fourth Amendment requires a warrant only for searches of material for which users have a reasonable expectation of privacy. Kyllo v. United States, 533 U.S. 27, 33 (2001); see Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring). Courts long held that users had no such expectations for digital messages like email held by third parties. 

But that began to change in 2010. If searches of emails trigger the Fourth Amendment—and U.S. v. Warshak, 631 F.3d 266 (6th Cir. 2010) said they do—searches of private messaging certainly would. The entire purpose of E2EE is to give users rock-solid expectations of privacy in their communications. More recently, the Supreme Court has said that, “given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user's claim to Fourth Amendment protection.” Carpenter v. United States, 138 S. Ct. 2206, 2217 (2018). These cases draw the line Sen. Lee is missing: no, of course users don’t have reasonable expectations of privacy in public social media posts—which is what he’s talking about when he points to “censorship” of conservative speech. EARN IT could avoid the Fourth Amendment by focusing on content providers can see, but it doesn’t, because it’s intended to force companies to be able to see all user communications.

Mistake #9: What They Didn’t Discuss: Anonymous Speech

The Committee didn’t discuss how EARN IT would affect speech protected by the First Amendment. No, of course CSAM isn’t protected speech, but the bill would affect lawful speech by law-abiding citizens—primarily by restricting anonymous speech. Critically, EARN IT doesn’t just create liability for trafficking in CSAM. The bill also creates liability for failing to stop communications that “solicit” or “promote” CSAM. Software like PhotoDNA can flag CSAM (by matching perceptual hashes against NCMEC’s database of known images) but identifying “solicitation” or “promotion” is infinitely more complicated. Every flirtatious conversation between two adult users could be “solicitation” of CSAM—or it might be two adults doing adult things. (Adults sext each other—a lot. Get over it!) But “on the Internet, nobody knows you’re a dog”—and there’s no sure way to distinguish between adults and children.

The federal government tried to do just that in the Communications Decency Act (CDA) of 1996 (nearly all of which, except Section 230, was struck down) and the Child Online Protection Act (COPA) of 1998. Both laws were struck down as infringing on the First Amendment right to access lawful content anonymously. EARN IT accomplishes much the same thing indirectly, the same way it attacks encryption: basing liability on anything less than knowledge means you can be sued for not actively monitoring, or for not age-verifying users, especially when the risks are particularly high (such as when you “should have known” you were dealing with minor users).

Indeed, EARN IT is even more constitutionally suspect. At least COPA focused on content deemed “harmful to minors.” Instead of requiring age-gating for sites that offered porn and sex-related content (e.g., LGBTQ teen health), EARN IT would affect all users of private communications services, regardless of the nature of the content they access or exchange. Again, the point of E2EE is that the service provider has no way of knowing whether messages are innocent chatter or CSAM. 

EARN IT could raise other novel First Amendment problems. Companies could be held liable not only for failing to age-verify all users—a clear First Amendment violation— but also for failing to bar minors from using E2EE services so that their communications can be monitored or failing to use client-side monitoring on minors’ devices, and even failing to segregate adults from minors so they can’t communicate with each other. 

Without the Lee Amendment, EARN IT leaves states free to base liability explicitly on a failure to age-verify users or to limit what minors can do.

Mistake #10: Claiming the Bill Is “Narrowly Crafted”

If you’ve read this far, Sen. Blumenthal’s stubborn insistence that this bill is a “narrowly targeted approach” should make you laugh—or sigh. If he truly believes that, either he hasn’t adequately thought about what this bill really does or he’s so confident in his own genius that he can simply ignore the chorus of protest from civil liberties groups, privacy advocates, human rights activists, minority groups, and civil society—all of whom are saying that this bill is bad policy.

If he doesn’t truly believe what he’s saying, well… that’s another problem entirely.

Bonus Mistake!: A Postscript About the Real CSAM Problem

Lee never mentioned that the only significant social media services that don’t take basic measures to identify and block CSAM are Parler, Gettr and other fringe sites celebrated by Republicans as “neutral public fora” for “free speech.” Has any Congressional Republican sent letters to these sites asking why they refuse to use PhotoDNA? 

Instead, Lee did join Rep. Ken Buck in March 2021 to interrogate Apple about its decision to take down the Parler app. Answer: Parler hadn’t bothered setting any meaningful content moderation system. Only after Parler agreed to start doing some moderation of what appeared in its Apple app (but not its website) did Apple reinstate the app.

Berin Szoka and Ari Cohn

Court (For Now) Says NY Times Can Publish Project Veritas Documents

2 years 3 months ago

We've talked about the hypocrite grifters who run Project Veritas, who, even when they have legitimate concerns about attacks on their own free speech, ran to court to try to silence the NY Times. Bizarrely, a NY judge granted Project Veritas' demand for prior restraint against the NY Times, falsely claiming that attorney-client material could not be published.

The NY Times appealed that ruling and now a court has... not overturned the original ruling, but for now said that the NY Times can publish the documents, saying that it will not enforce the original ruling until an appeal can be heard. This is... better than nothing, but fully overturning the original ridiculous ruling would have been much better. Because it was clearly prior restraint. But, at least for now, the prior restraint will not be enforced.

Still, the response from Project Veritas deserves separate comment, because it's just naively stupid:

In a phone interview on Thursday, Mr. O’Keefe said: “Defamation is not a First Amendment-protected right; publishing the other litigants’ attorney-client privileged documents is not a protected First Amendment right.”

While it's accurate that defamation is not protected by the 1st Amendment, he's wrong about the second claim: publishing attorney-client communications is -- in most cases -- very much protected. He's fuzzing the lines here, by basically arguing that because Project Veritas is, separately, suing the NY Times, the paper is barred from publishing any attorney-client privileged material it obtains via standard reporting tactics.

But that fuzzing suggests something that just isn't true: that there's some exception to the 1st Amendment for publishing attorney-client materials. That's wrong. The attorney-client privilege is about having to disclose certain documents to another party in litigation. If you can successfully show that the documents are privileged, they don't need to be disclosed to the other party. That's the extent of the privilege. It has no bearing whatsoever on whether or not someone else obtaining those materials through other means has a right to publish them. Of course they do, and the 1st Amendment protects that.

And, I should just note, considering that Project Veritas' main method of operating is trying to obtain private documents, or record secret conversations, it is bizarre beyond belief that Project Veritas is literally claiming that private material has some sort of protection from publication under the 1st Amendment. Because that seems incredibly likely to come back and bite Project Veritas at a later time. Of course, considering they're hypocritical grifters with no fundamental principles beyond "attack people with views we don't like," I guess it's not surprising that their viewpoint on free speech and the 1st Amendment shifts depending on who it's protecting.

Mike Masnick

Yet Another Israeli Malware Manufacturer Found Selling To Human Rights Abusers, Targeting iPhones

2 years 3 months ago

Exploit developer NSO Group may be swallowing up the negative limelight these days, but let's not forget the company has plenty of competitors. The US government's blacklisting of NSO arrived with a concurrent blacklisting of malware purveyor Candiru -- another Israeli firm with a long list of questionable customers, including Uzbekistan, Saudi Arabia, United Arab Emirates, and Singapore.

Now there's another name to add to the list of NSO-alikes. And (perhaps not oddly enough) this company also calls Israel home. Reuters was the first to report on this NSO competitor's ability to stay competitive in the international malware race.

A flaw in Apple's software exploited by Israeli surveillance firm NSO Group to break into iPhones in 2021 was simultaneously abused by a competing company, according to five people familiar with the matter.

QuaDream, the sources said, is a smaller and lower profile Israeli firm that also develops smartphone hacking tools intended for government clients.

Like NSO, QuaDream sold a "zero-click" exploit that could completely compromise a target's phone. We're using the past tense not because QuaDream no longer exists, but because this particular exploit (the basis for NSO's FORCEDENTRY) has been patched into uselessness by Apple.

But, like other NSO competitors (looking at you, Candiru), QuaDream has no interest in providing statements, a friendly public face for inquiries from journalists, or even a public-facing website. Its Tel Aviv office seemingly has no occupants, and email inquiries made by Reuters have gone unanswered.

QuaDream doesn't have much of a web presence. But that's changing, due to this report, which builds on earlier reporting on the company by Haaretz and Middle East Eye. But even the earlier reporting doesn't go back all that far: June 2021. That report shows the company selling a hacking tool called "Reign" to the Saudi government. But that sale wasn't accomplished directly, apparently in a move designed to further distance QuaDream from both the product being sold and the government it sold it to.

According to Haaretz, Reign is being sold by InReach Technologies, Quadream's sister company based in Cyprus, while Quadream runs its research and development operations from an office in the Ramat Gan district in Tel Aviv.

[...]

InReach Technologies, its sales front in Cyprus, according to Haaretz, may be being used in order to fly under the radar of Israel’s defence export regulator.

Reign is apparently the equivalent of NSO's Pegasus: powerful zero-click spyware that appears to still be able to hack most iPhone models. But it's not a true equivalent. According to this report, the tool can be rendered useless by a single system software update and, perhaps more importantly, cannot be remotely terminated by the entity deploying it, should the infection be discovered by the target. This means targeted users have the opportunity to learn a great deal about the exploit, its deployment, and possibly where it originated.

That being said, it's not cheap:

One QuaDream system, which would have given customers the ability to launch 50 smartphone break-ins per year, was being offered for $2.2 million exclusive of maintenance costs, according to the 2019 brochure. Two people familiar with the software's sales said the price for REIGN was typically higher.

With more firms in the mix -- and more scrutiny from entities like Citizen Lab -- it's only a matter of time before information linking NSO competitors to human rights abuses and indiscriminate targeting of political enemies threatens to make QuaDream and Candiru household names. And, once again, it's time to point out this all could have been avoided by refusing to sell powerful hacking tools to human rights abusers who were obviously going to use the spyware to target critics, dissidents, journalists, ex-wives, etc. That QuaDream chose to sell to countries like Saudi Arabia, Singapore, and Mexico pretty much guarantees reports of abusive deployment will surface in the future.

Tim Cushing

Surprise: U.S. Cost Of Ripping Out And Replacing Huawei Gear Jumps From $1.8 To $5.6 Billion

2 years 3 months ago

So we've noted that a lot of the U.S. politician accusations that Huawei uses its network hardware to spy on Americans on behalf of the Chinese government are lacking in the evidence department. The company's been on the receiving end of a sustained U.S. government ban based on accusations that have never actually been proven publicly, levied by a country (the United States) with a long, long history of doing exactly what it accuses Huawei of doing.

To be clear, Huawei is a terrible company. It has been happy to provide IT and telecom support to the Chinese government as it wages genocide against ethnic minorities. It has also been caught helping some African governments spy on the press and political opponents. And it may very well have helped the Chinese government spy on Americans. So it's hard to feel too bad about the company.

At the same time, if you're going to levy accusations (like "Huawei clearly spies on Americans") you need to provide public evidence. And we haven't. Eighteen months of investigations found nothing. That didn't really matter much to the FCC (under Trump and Biden) or Congress, which ordered that U.S. ISPs and network operators rip out all Huawei gear and replace it at an estimated cost of $1.8 billion. Yet just a few years later, the actual cost to replace this gear has already ballooned to $5.6 billion and is likely to get higher:

"The FCC has told Congress that applications to The Secure and Trusted Communications Networks Reimbursement Program have generated requests totaling about $5.6 billion – far more than the allocated funding. The program was established to reimburse providers with 10 million or fewer customers who must remove Huawei Technologies Company and ZTE equipment."

That's quite a windfall for companies not named Huawei, don't you think?

My problem with these efforts has always been a nuanced one. I have no interest in defending a shitty global telecom gear maker with an atrocious human rights record, one which may very well prove to be a surveillance lackey for the Chinese government. Yet at the same time, domestic companies like Cisco have, for much of the last decade, leaned on unsubstantiated allegations of spying to shift market share in their favor. DC is flooded with lobbyists who can easily exploit both xenophobia and intelligence worries to their tactical advantage, then bury the need for evidence under ambiguous claims of national security:

"What happens is you get competitors who are able to gin up lawmakers who are already wound up about China,” said one Hill staffer who was not authorized to speak publicly about the matter. “What they do is pull the string and see where the top spins.”

But some experts say these concerns are exaggerated. These experts note that much of Cisco’s own technology is manufactured in China."

So my problem here isn't necessarily that Huawei doesn't deserve what's happening to it. My problem here is generally a lack of transparency in a process that's heavily dictated by lobbyists, who can hide any need for evidence behind national security claims. This creates an environment where decisions are made on a "noble and patriotic basis" that wind up being beyond common sense, reproach, and oversight. That's a nice breeding ground for fraud.

My other problem is the hypocrisy of a country that doesn't believe in limitations on spying, complaining endlessly about spying, without modifying any of its own, very similar behaviors. AT&T has been proven to be directly tethered to the NSA to the point where it's literally impossible to determine where one ends and the other begins. Yet were another country to ban AT&T from doing business there, the heads of the very same folks breathlessly concerned about surveillance ethics would explode. What makes us beyond reproach here? Our ethical track record?

And my third problem is that the almost myopic focus on Huawei has been so massive that we've failed to take on numerous other privacy and security issues with anywhere close to the same level of urgency, whether that's the lack of a meaningful federal privacy law, the rampant security and privacy issues inherent in the Internet of Things space (where Chinese-made hardware is ubiquitous), or election security. These are all equally important issues, all exploited by Chinese intelligence, that see a small fraction of the hand-wringing and action reserved for issues like Huawei.

Again, none of this is to defend Huawei or deny it's a shitty company with dubious ethics. But the lack of transparency or skepticism creates an environment ripe for fraud and myopia by policymakers who act as if the entirety of their efforts is driven by the noblest and most patriotic of intentions. And, were I a betting man, I'd wager this whole rip and replace effort makes headlines for all the wrong reasons several years down the road.

Karl Bode

Daily Deal: The Complete GameGuru Unlimited Bundle

2 years 3 months ago

GameGuru is a non-technical and fun game maker that offers an easy, enjoyable and comprehensive game creation process that is designed specifically for those who are not programmers or designers/artists. It allows you to build your own game world with easy to use tools. Populate your game by placing down characters, weapons, and other game items, then press one button to build your game, and it's ready to play and share. GameGuru is built using DirectX 11 and supports full PBR rendering, meaning your games can look great and take full advantage of the latest graphics technology. The bundle includes hundreds of royalty-free 3D assets. It's on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

Senator Blumenthal, After Years Of Denial, Admits He's Targeting Encryption With EARN IT

2 years 3 months ago

Senator Richard Blumenthal has now admitted that EARN IT is targeting encryption, something he denied for two years before finally just coming out and saying it.

Since the very beginning many of us have pointed out that the EARN IT Act will undermine encryption (as well as other parts of the internet). Senator Richard Blumenthal, the lead sponsor on the bill, has insisted over and over again that the bill has nothing to do with encryption. Right after the original bill came out, when people called this out, Blumenthal flat out said "this bill says nothing about encryption" and later claimed that "Big Tech is using encryption as a subterfuge to oppose this bill."

That's been his line ever since -- insisting the bill has nothing to do with encryption. And to "show" that it wasn't about encryption, back in 2020, he agreed to a very weak amendment from Senator Leahy that had some language about encryption, even though as we pointed out at the time, that amendment still created a problem for encryption.

The newest version of EARN IT replaced Leahy's already weak amendment with one that is a more direct attack on encryption. But it has allowed slimy "anti-porn" groups like NCOSE to falsely claim that it has "dealt with the concerns about encryption." Except, as we detailed, the language of the bill now makes encryption a liability for any web service, as it explicitly says that use of encryption can be used as evidence that a website does not properly deal with child sexual abuse material.

But still, through it all, Blumenthal kept lying through his teeth, insisting that the bill wasn't targeting encryption. Until yesterday, when he finally admitted it straight up to Washington Post reporter Cat Zakrzewski. In her larger story about EARN IT, I'm not sure why Zakrzewski buried this point all the way down near the bottom, because this is the story. Blumenthal is asked about the encryption bit and he admits that the bill is targeting encryption:

Blumenthal said in an interview that lawmakers incorporated these concerns into revisions, which prevent the implementation of encryption from being the sole evidence of a company’s liability for child porn. But he said lawmakers wouldn’t offer a blanket exemption to using encryption as evidence arguing companies might use it as a “get-out-of-jail-free card.”

In other words, he knows that the bill targets encryption despite two whole years of blatant denials. To go from "this bill makes no mention of encryption" to "we don't want companies using encryption as a 'get-out-of-jail-free card'" is an admission that this bill is absolutely about encryption. And if that's the case, why have there been no hearings about the impact this would have on encryption and national security? Because that seems like a key point that should be discussed, especially with Blumenthal admitting the very thing he denied for two whole years.

During today's markup, Blumenthal also made some nonsense comments about encryption:

The treatment of encryption in this statute is the result of hours, days, of consultation involving the very wise and significant counsel from Sen. Leahy who offered the original encryption amendment and said at the time that his amendment would not protect tech companies for being held liable for doing anything that would give rise to liability today for using encryption to further illegal activity. That's the key distinction here. Doesn't prohibit the use of encryption, doesn't create liability for using encryption, but the misuse of encryption to further illegal activity is what gives rise to liability here.

This is, beyond being nonsense word salad, just utterly ridiculous. No one ever said the bill "prohibited" encryption, but that it would make it a massive liability. And he's absolutely wrong that it "doesn't create liability for using encryption" because it literally does exactly that in saying that encryption can be used as evidence of liability.

The claim that it's only the "misuse of encryption" shows that Senator Blumenthal (1) has no clue what he's talking about and (2) needs to hire staffers who actually do understand this stuff, because that's not how this works. Once you say it's the "misuse of encryption" you've sunk encryption. Because now every lawsuit will just claim that any use of encryption is misuse and the end result is that you need to go through a massive litigation process to determine if your use of encryption is okay or not.

That's the whole reason why things like Section 230 are important: they avoid having every company spend over a million dollars to prove that the technical decisions they made were okay and not a "misuse." But if companies now have to spend a million dollars every time someone sues them over their use of encryption, then it becomes ridiculously costly -- and risky -- to use encryption.

So, Blumenthal is either too stupid to understand how all of this actually works, or as he seems to have admitted to the reporter despite two years of denial, he doesn't believe companies should be allowed to use encryption.

EARN IT is an attack on encryption, full stop. Senator Blumenthal has finally admitted that, and anyone who believes in basic privacy and security should take notice.

Oh, and as a side note, remember back in 2020 when Blumenthal flipped out at Zoom for not offering full end-to-end encryption? Under this bill, Zoom would be at risk either way. Blumenthal is threatening them if they use encryption and if they don't. It's almost as if Richard Blumenthal doesn't know what he's talking about regarding encryption.

Mike Masnick

Yes, It Really Was Nintendo That Slammed GilvaSunner YouTube Channel With Copyright Strikes

2 years 3 months ago

Well, for a story that was already over, this became somewhat fascinating. We have followed the Nintendo vs. GilvaSunner war for several years now. The GilvaSunner YouTube channel has long been dedicated to uploading and appreciating a variety of video game music, largely from Nintendo games. Roughly once a year for the past few years, Nintendo would lob copyright strikes at a swath of GilvaSunner "videos": 100 videos in 2019, a bit less than that in 2020, take 2021 off, then suddenly slam the channel with 1,300 strikes in 2022. With that last copyright MOAB, the GilvaSunner channel has been shuttered voluntarily, with the operator indicating that it's all too much hassle.

Well, on the internet, and in our comments on that last post, there began to be speculation as to whether or not it was actually Nintendo behind all of these copyright strikes... or an imposter. Those sleuthing around found little tidbits, such as the name used on the strike not matching up to the names displayed in the past when Nintendo has acted against YouTube videos.

It was... strange. Why? Well, because it looked like many people were going out of their way to find a reason to believe that Nintendo wasn't behaving exactly as anyone who had witnessed Nintendo's past behavior would expect. If this was someone impersonating Nintendo with these actions, it was utterly indistinguishable from how Nintendo would normally behave. Guys, they do this shit all the time.

And this time too, as it turns out. You can hear it straight from YouTube's mouth.

Jumping in – we can confirm that the claims on @GilvaSunner's channel are from Nintendo. These are all valid and in full compliance with copyright rules. If the creator believes the claims were made in error, they can dispute with these steps: https://t.co/ivyjVNwLVu

— TeamYouTube (@TeamYouTube) February 5, 2022

This is where I will stipulate for the zillionth time that Nintendo is within its rights to take these actions. But we should also stipulate that the company doesn't have to go this route, and the fact that it prioritizes control of its IP in the strictest fashion over letting its fans enjoy some video game music should tell you everything you need to know.

In the meantime, to the internet sleuths: I appreciate your dedication to either Nintendo or to simply digging into these kinds of details for funsies or whatever. That being said, as the old saying goes, if you hear the sound of hooves, assume it's a horse and not a zebra.

Timothy Geigner

Even Officials In The Intelligence Community Are Recognizing The Dangers Of Over-Classification

2 years 3 months ago

The federal government has a problem with secrecy. Well, actually it doesn't have a problem with secrecy, per se. That's often considered a feature, not a bug. But federal law says the government shouldn't have so much secrecy, what with the FOIA being in operation. And yet, the government feels compelled to keep secrets from its biggest employer: the US taxpayers.

Over-classification remains a problem. It was a problem long before a government contractor went rogue with a massive stash of NSA documents, showing that many of the government's secrets should have been shared or, at the very least, more widely discussed as the government turned 9/11 into a constitutional bypass on the information superhighway.

Since then, efforts have been made to dial back the government's proclivity for classifying documents that pose no threat to government operations and/or government security. In fact, the argument has been made (rather convincingly) that over-classification is counterproductive. It's more likely to result in the exposure of so-called secrets than to actually secure them, while serving as a blanket formality that keeps information from the general public.

Efforts have been made to counteract this overwhelming desire to keep the public locked out of discussions about government activities. These efforts have mostly failed. And that has mainly been due to vague and frequent invocations of national security concerns, which allow legislators and federal judges to shut off their brains and hammer the [REDACT] button repeatedly.

But ignoring the problem hasn't made it go away, no matter how many billions the federal government refuses to throw at it. Over-classification still stands between the public and information it should have access to. And it stands between federal agencies and efficient use of tax dollars. The federal government generates petabytes of data every month. And far too often, the agencies generating the data decide it's no one's business but their own.

It's not just legislators noting the widening gap between the government's massive stockpiles of data and the public's ability to access them. It's also those generating the most massive stashes of bits and bytes, as the Washington Post points out, using the words of an Intelligence Community official.

The U.S. government is drowning in its own secrets. Avril Haines, the director of national intelligence, recently wrote to Sens. Ron Wyden (D-Ore.) and Jerry Moran (R-Kan.) that “deficiencies in the current classification system undermine our national security, as well as critical democratic objectives, by impeding our ability to share information in a timely manner.” The same conclusions have been drawn by the senators and many others for a long time.

As this letter hints at, over-classification doesn't just affect the great unwashed whose power is generally considered to be far too limited to change things. It also affects agencies and the entities that oversee the agencies -- the latter of which are asked to engage in oversight while being locked out of the information they need to perform this task.

If there's any good news here, it's that the Intelligence Community recognizes it's part of the problem. But this is just one person in the IC. It's unlikely every official feels this way.

The government is working towards a solution, but its work is being performed at the speed of government -- something further hampered by the back-and-forth of periodic regime changes and their alternating ideas about how much transparency the government owes to its patrons.

The IC letter writer almost sees a silver lining in the nearly opaque cloud enveloping agencies involved in national security efforts.

So far, Ms. Haines said, current priorities and resources for fixing the classification systems “are simply not sufficient.” The National Security Council is working on a revised presidential executive order governing classified information, and we hope the White House will come up with an ambitious blueprint for modernization.

The silver lining is "so far," and the efforts being made elsewhere to change things. The rest of the non-lining is far less silver: the resources aren't sufficient, and the National Security Council is grinding bureaucratic gears by working with the administration to change things. If it doesn't happen soon, changes will be at the discretion of the next administration. And the next administration may no longer feel streamlining declassification is a priority, putting projects that have been in the on-again, off-again works since Snowden's exposés on the back burner yet again.

Our government will never likely feel Americans can be trusted with information about the programs their tax dollars pay for. But perhaps a little more momentum -- this time propelled by something within the Intelligence Community -- will prompt some incremental changes that may eventually snowball into actual transparency and accountability.

Tim Cushing