
TechDirt 🕸

Phoenix City Council Says PD Can Have Surveillance Drones Without Any Policy In Place Because Some Officers Recently Got Shot

2 years 8 months ago

The Phoenix Police Department wants drones and it wants them now. And, according to this report by the Phoenix New Times, it's going to get them.

After several hours of debate and spirited public response during the Phoenix City Council meeting this week, local officials agreed to authorize the police department to purchase public safety drones right away.

Late Wednesday night the Phoenix City Council voted 6-3 after a lengthy, and at times heated, discussion.

The request was submitted to the city council at the last minute, fast-tracking the agency’s plans to implement the technology.

Why the rush? Well, according to a letter [PDF] signed by Mayor Kate Gallego and two council members, having a drone in the air would have… not changed anything at all about a recent incident where officers were shot.

In the early morning hours of February 11, our officers were ambushed when responding to a call for service at a two-story home in Southwest Phoenix near 54th Avenue and Broadway. Nine of our police officers were injured but thankfully all of them are recovering.

During this incident [it] was determined [that,] for the safety of our officers, [a] drone would need to be utilized to neutralize the situation. Currently, Phoenix does not own any drones for use by our Police Department, therefore we had to rely on the grace of our neighbor, the City of Glendale, to provide our department with a drone.

News reports about the ambush shooting make no mention of a deployed drone, nor do they describe what difference it made in resolving the deadly situation. But that shooting, which happened to feature a late-arriving borrowed drone, is being used to justify the sudden acquisition of drones by the PD, which will presumably be deployed as soon as they're obtained.

Since it's apparently a matter of life and death, the request made by the council for the police to develop a drone policy and deployment plan before seeking funding and permission to acquire them has been abandoned. It's apparently now far too urgent a problem to be slowed down by accountability and transparency.

The committee agreed to allow Phoenix Fire to go ahead with its drone purchases — so it could roll the tech out by the summer — but asked Phoenix police to come back for approval separately, with a more fleshed-out plan.

This new proposal will circumvent that, instead allowing Phoenix police to go ahead with the drone purchase “as soon as possible,” according to a memo, without presenting a policy first to the council.

That gives the Phoenix PD permission to send eyes into the skies without meaningful restrictions or oversight. Far too much slack is being cut for a police department that is currently being investigated by the Department of Justice following years of abusive behavior by its officers. Here's what the DOJ -- which announced this investigation last August -- will be digging into:

This investigation will assess all types of use of force by PhxPD officers, including deadly force. The investigation will also seek to determine whether PhxPD engages in retaliatory activity against people for conduct protected by the First Amendment; whether PhxPD engages in discriminatory policing; and whether PhxPD unlawfully seizes or disposes of the belongings of individuals experiencing homelessness. In addition, the investigation will assess the City and PhxPD’s systems and practices for responding to people with disabilities. The investigation will include a comprehensive review of PhxPD policies, training, supervision, and force investigations, as well as PhxPD’s systems of accountability, including misconduct complaint intake, investigation, review, disposition, and discipline.

Not exactly the sort of thing that inspires trust. And certainly not the sort of thing that warrants a free pass on surveillance policies until long after new surveillance tech has been deployed. The Phoenix PD may have recently been involved in an unexpected burst of violence (committed, in this case, by someone else against police officers), but that hardly justifies a careless rush into an expansion of the department's surveillance capabilities.

Tim Cushing

We Stand On The Precipice Of World War III, But, Sure, Let's All Talk About The DMCA And 'Standard Technical Measures'

2 years 8 months ago

A whole bunch of people wasted Tuesday talking about technical measures. What technical measures, you might ask? The ones vaguely alluded to in the DMCA. Subsection 512(i) conditions the safe harbors on platforms (more formally called "Online Service Providers," or OSPs, for the purposes of the DMCA) "accommodat[ing] and [...] not interfer[ing] with standard technical measures." The statute goes on to describe them in general terms as "technical measures [...] used by copyright owners to identify and protect copyrighted works" that meet a few other criteria, including that they don’t unduly burden OSPs.

In 1998, when the DMCA was passed, no technical measures met all the criteria. And still, today, none do. So it should have been a very short hearing. But it wasn't. Instead we spent all day (plus lots of time earlier filing comments), all at the instigation of Senators Tillis and Leahy, having some people point out that no currently existing technical measure can meet these statutory criteria and help police for infringement without massive, unacceptable cost to OSPs and the expression -- including copyrightable expression -- they facilitate, and having other people instead stamp their feet, hold their breath, and pretend up is down, left is right, and the world is flat, in order to declare that some measures somehow qualify anyway and that platforms should incur any cost necessary to deploy them.

And as for which technical measures we were talking about… we never really got there. There were references to fingerprinting technologies, like ContentID, the huge, expensive, and perpetually inaccurate system Google uses to identify potentially infringing files. There were references to watermarking systems, which some (like us) noted create significant surveillance concerns as people’s consumption of expression is now especially trackable. And there were references to upload filters as well, like the EU keeps wanting to mandate. But at no point was any specific technology ever identified so we could assess the benefits and harms of even encouraging, much less mandating, its broader use. We just all sort of nodded knowingly at each other, as if we all shared some unspoken knowledge of some technology that could somehow magically work this unprecedented miracle to make all rightsholders perfectly happy while not crushing OSPs’ abilities to continue to facilitate expression, create market opportunities for creators, and connect creators to audiences. Nor outright crush lawful expression itself as so many of these systems are already doing. When, of course, no such technology currently exists, nor is likely to exist any time soon, if ever at all.
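No specific system was ever put on the table, but for a rough intuition about what "fingerprinting" even means here, consider a deliberately toy sketch (purely illustrative, and nothing like the proprietary internals of ContentID): hash overlapping runs of words from a work, then treat any overlap between two works' hash sets as a potential match.

```python
import hashlib

def fingerprint(text: str, k: int = 4) -> set:
    """Hash every overlapping run of k words (a "shingle") in the text."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + k]).encode()).hexdigest()[:16]
        for i in range(max(1, len(words) - k + 1))
    }

original = "to be or not to be that is the question"
quoting = "he asked whether to be or not to be that is the question indeed"
unrelated = "four score and seven years ago our fathers brought forth"

# A work quoting the original shares shingles with it, so it gets flagged
# as a match -- even though quotation is often perfectly lawful fair use.
print(bool(fingerprint(original) & fingerprint(quoting)))    # True
print(bool(fingerprint(original) & fingerprint(unrelated)))  # False
```

Even this toy version exposes the problem the hearing hand-waved past: matching shared content cannot distinguish infringement from quotation, commentary, or parody, which is exactly how real fingerprinting systems end up flagging lawful expression.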

Since the Copia Institute participated in this exercise in futility, we used the opportunity to remind everyone -- and the record -- in our comment and testimony that the entire conversation was happening in the shadow of the Constitution. For instance, while a system of safe harbors for OSPs is not inherently in tension with the First Amendment -- indeed, protecting the platforms that facilitate Internet expression is a critical statutory tool for advancing First Amendment interests online -- recent interpretations of the statutory language of Section 512 have been increasingly putting this safe harbor system at odds with the constitutional proscription against making any law abridging free expression. Any system, be it legal or technical, that causes lawful expression to be removed, or to not be allowed to be expressed at all, deeply offends the First Amendment. Such harm cannot and should not be tolerated in any statute or policy promulgated by the Copyright Office. The regulatory priority therefore ought to be, and must be, to abate this constitutional injury that's already been occurring and keep it from accruing further. And under no circumstances should any provision of Section 512, including and especially the technical measures provision, be amended or interpreted in a way that increases the frequency or severity of this constitutional harm that the statute has already invited.

Such a system also offends the spirit, if not the letter, of the Progress Clause animating copyright law. You cannot foster creative expression by creating a system of censorship that in any way injures the public's ability to express themselves or to consume others' expression. So it is critically important to recognize how any technological measure might do that, because it will only hurt the creative expression copyright law is itself supposed to foster, as well as all the public benefit it's supposed to deliver.

Cathy Gellis

Turns Out It Was Actually The Missouri Governor's Office Who Was Responsible For The Security Vulnerability Exposing Teacher Data

2 years 8 months ago

The story of Missouri's Department of Elementary and Secondary Education (DESE) leaking the Social Security Numbers of hundreds of thousands of current and former teachers and administrators could have been a relatively small story of yet another botched government technology implementation -- there are plenty of those every year. But then Missouri Governor Mike Parson insisted that the reporter who disclosed the flaw was a hacker and demanded he be prosecuted. After a months-long investigation, prosecutors declined to press charges, but Parson doubled down and insisted that he would "protect state data and prevent unauthorized hacks."

You had to figure another shoe was going to drop, and here it is. As Brian Krebs notes, it has now come out that it was actually the Governor's own IT team that was in charge of the website that leaked the data. That is, even though it was the DESE website, it was controlled by the Governor's own IT team. This is from the now-released Missouri Highway Patrol investigation document. As Krebs summarizes:

The Missouri Highway Patrol report includes an interview with Mallory McGowin, the chief communications officer for the state’s Department of Elementary and Secondary Education (DESE). McGowin told police the website weakness actually exposed 576,000 teacher Social Security numbers, and the data would have been publicly exposed for a decade.

McGowin also said the DESE’s website was developed and maintained by the Office of Administration’s Information Technology Services Division (ITSD) — which the governor’s office controls directly.

“I asked Mrs. McGowin if I was correct in saying the website was for DESE but it was maintained by ITSD, and she indicated that was correct,” the Highway Patrol investigator wrote. “I asked her if the ITSD was within the Office of Administration, or if DESE had their own information technology section, and she indicated it was within the Office of Administration. She stated in 2009, policy was changed to move all information technology services to the Office of Administration.”

Now, it's important to note that the massive, mind-bogglingly bad security flaw that exposed all those SSNs in the source code of publicly available websites was coded long before Parson was the governor, but it's still his IT team that was on the hook here. And perhaps that explains his nonsensical reaction to all of this?

For what it's worth, the report also goes into greater detail about just how dumb this vulnerability was:

Ms. Keep and Mr. Durnow told me once on the screen with this specific data about any teacher listed in the DESE system, if a user of the webpage selected to view the Hyper Text Markup Language (HTML) source code, they were allowed to see additional data available to the webpage, but not necessarily displayed to the typical end-user. This HTML source code included data about the selected teacher which was Base64 encoded. There was information about other teachers, who were within the same district as the selected teacher, on this same page; however, the data about these other teachers was encrypted.

Ms. Keep said the data which was encoded should have been encrypted. Ms. Keep told me Mr. Durnow was reworking the web application to encrypt the data prior to putting the web application back online for the public. Ms. Keep told me the DESE application was about 10 years old, and the fact the data was only encoded and not encrypted had never been noticed before.

This explains why Parson kept insisting that it wasn't simply "view source" that was the issue here, and that it was hacking because it was "decoded." But Base64 decoding isn't hacking. If it was, anyone figuring out what this says would be a "hacker."

TWlrZSBQYXJzb24gaXMgYSB2ZXJ5IGJhZCBnb3Zlcm5vciB3aG8gYmVpZXZlcyB0aGF0IGhpcyBvd24gSVQgdGVhbSdzIHZlcnkgYmFkIGNvZGluZyBwcmFjdGljZXMgc2hvdWxkIG5vdCBiZSBibGFtZWQsIGFuZCBpbnN0ZWFkIHRoYXQgaGUgY2FuIGF0dGFjayBqb3VybmFsaXN0cyB3aG8gZXRoaWNhbGx5IGRpc2Nsb3NlZCB0aGUgdnVsbmVyYWJpbGl0eSBhcyAiaGFja2VycyIgcmF0aGVyIHRoYW4gdGFrZSBldmVuIHRoZSBzbGlnaHRlc3QgYml0IG9mIHJlc3BvbnNpYmlsaXR5Lg==

That's not hacking. That's just looking at what's there and knowing how to read it. Not understanding the difference between encoding and encrypting is the kind of thing that is maybe forgivable for a non-techie in a confused moment, but Parson has people around him who could surely explain it -- the same people who clearly explained it to the Highway Patrol investigators. But instead, he still insists it was hacking and is still making journalist Jon Renaud's life a living hell with all this nonsense.
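The distinction is easy to demonstrate. Base64 is an encoding: a keyless, publicly documented transformation that anyone can reverse. Encryption, by contrast, requires a secret key. A minimal Python sketch (with a made-up SSN standing in for the exposed data):

```python
import base64

# "Encoding" data the way the DESE site did: no key, no secret,
# fully reversible by anyone who views the page source.
exposed = b"SSN: 123-45-6789"  # fabricated example value
encoded = base64.b64encode(exposed).decode()
print(encoded)                    # U1NOOiAxMjMtNDUtNjc4OQ==
print(base64.b64decode(encoded))  # b'SSN: 123-45-6789'
```

Reversing that string takes one standard library call. Calling it "hacking" is like calling someone a safecracker for opening an unlocked door.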

The investigation also confirms exactly what we had been saying all along: Renaud and the St. Louis Post-Dispatch did everything in the most ethical way possible. They found the vulnerability, checked to make sure it was real, confirmed it with an expert, then notified DESE about it, including the details of the vulnerability. And while Renaud noted that the newspaper was going to run a story about it, he made it clear that the paper wanted the vulnerability locked down before the story would run.

So, once again, Mike Parson looks incredibly ignorant, and completely unwilling to take responsibility. And the more he does so, the more this story continues to receive attention.

Mike Masnick

Important Announcement: Techdirt Is Migrating To A New Platform

2 years 8 months ago
UPDATE: If you’re reading this, you’re looking at the new Techdirt! If you have an account, you will need to reset your password before logging in. You may experience some bugs and slow performance for the next several hours while we complete the migration. Contact us if you notice any major issues. Almost since its […]
Leigh Beadon

Censr: Alt-Right Twitter Alternative Gettr Bans Posts, Accounts Calling One Of Its Backers A Chinese Spy

2 years 8 months ago

As so-called "conservatives" (a decently large number of them appearing to actually be white supremacists and bigots engaged in harassment) complained Big Tech was slanted against them, a host of new services arrived to meet the sudden demand. Gab, Gettr, etc. hit the marketplace of ideas, promising freedom from the "censorship" of "liberal" social media platforms, ignoring evidence that indicated "conservatives" weren't actually being "censored," but rather extremists calling themselves "conservatives" were being booted for multiple violations of site policies.

New services arrived, promising unabridged speech and a safe space for bigots, transphobes, disgruntled MAGAts, and everyone else who felt oppressed because they frequently went asshole on main. But as soon as these sites debuted, they began moderating all sorts of speech, starting with the clearly illegal and ramping things up to eject trolls and critics.

Moderation at scale remains impossible. And it's not much easier when you're dealing with thousands of users rather than millions or billions. Decisions need to be made. While it was clear the upstarts were unfamiliar with the moderation issues bigger platforms have struggled with for years, it was also clear the upstarts were more than happy to "censor" speech they didn't like, despite claiming to be the last bastions of online free speech.

"You're free to say whatever you want," platforms like Gab and Gettr proclaimed, muttering asterisks under their breath. You were indeed free to say what you wanted, but that would not prevent your content or your account being banned, deleted, etc.

Gettr has experienced the growing pains of platform moderation. This has happened despite its initial guarantees (*offer void pretty much everywhere) that it would only remove illegal content. Porn is not illegal, yet Gettr seemed to have a problem with all the porn being posted by users, perhaps because a majority of it involved animated animals.

It also had problems keeping trolls from impersonating the illustrious conservative figures it hoped to host exclusively. Aggressive trolling resulted in Gettr temporarily banning Roger Stone's actual account under the assumption it couldn't possibly be the real Roger Stone. It followed this up a few months later by banning the term "groyper" in an effort to limit the amount of white supremacist content it had to host. This too was something of a failure. First, it told white supremacists their awful (but not illegal) speech wasn't welcome on the "free speech" alternative to Twitter. Then it became apparent the ban on "groyper" could be easily evaded by adding an o or two.

Now, there's even more "censorship" to be had at Gettr. One of its financial backers is Guo Wengui, a (former) billionaire and supposed anti-communist who recently filed for bankruptcy. There are reasons to believe Wengui isn't the most trustworthy of online associates. Wengui left China and has spent several years living in a New York City hotel overlooking Central Park. He has applied for asylum but has yet to be granted this request. Despite apparently distancing himself from China, he is still hounded by claims that he's only in the US to obtain information he can deliver to the Chinese government. These allegations were made by Strategic Vision US during a lawsuit over business dealings the company had with Wengui.

Strategic Vision said it concluded Mr. Guo was seeking information on Chinese nationals who may have been helping the U.S. government in national-security investigations or who were involved in other sensitive matters, according to the filing.

“Guo never intended to use the fruits of Strategic Vision’s research against the Chinese Communist Party,” the court filing said. “That is because Guo was not the dissident he claimed to be. Instead, Guo Wengui was, and is, a dissident-hunter, propagandist, and agent in the service of the People’s Republic of China and the Chinese Communist Party.”

Others have echoed this allegation. While it has yet to be proven true, Gettr is insulating its bankrupt backer from online criticism by deleting content that insinuates Wengui is a Chinese spy.

Journalists at the Daily Beast spent a few days running accounts on Gettr to see if the "free speech" site had a problem with criticizing Wengui. Unsurprisingly, the "we won't censor" platform engages in plenty of moderation when it comes to speech it doesn't like.

In an attempt to test the claims that even so much as mentioning the allegations of Guo being a “spy” would result in a permanent suspension from the platform, The Daily Beast created six separate Gettr accounts critical of Guo over the past two weeks.

These accounts posted variations on the question of whether the platform’s billionaire benefactor is a “Chinese spy.” For example, one of the accounts asked, “Does Chinese spy Miles Guo fund Gettr?” It was banned from the platform just 19 minutes after its creation. “Guo a spy??” another Daily Beast-operated account asked in response to a post from the businessman.

All six accounts were promptly banned, with 83 minutes being the longest span of time a single critical post remained live. They were banned without notice of wrongdoing or explanation for the permanent suspensions.

The hypocritical chickens have come home to roost. You're free to run your mouth on Gettr, with copious exceptions. And one of those exceptions is repeating allegations about someone who put some money into Gettr. Meanwhile, over on Twitter, users are free to insinuate the company's principals and backers are in bed with the Chinese government without running afoul of the terms of service.

Gettr will undoubtedly continue to pretend it's a free speech champion, even as it engages in actions that show it's really no more protective of speech than any other platform. It will continue to disappoint refugees from other, more heavily-trafficked social media platforms by engaging in (completely lawful!) moderation of speech it would rather not see on its platform. And while it may be more inviting of general harassment of people with alternative viewpoints (which is generally a lot less fun in Gettr's echo chamber) and election/COVID misinformation, it sees absolutely nothing wrong with silencing dissent and criticism. Its promises of a social media Wild West are as empty as its promises to give Twitter users a better place to express their "conservative" views.

Tim Cushing

Daily Deal: The Complete Video Production Super Bundle

2 years 8 months ago

Aspiring filmmakers, YouTubers, bloggers, and business owners alike can find something to love about the Complete Video Production Super Bundle. Video content is fast changing from the marketing tool of the future to the marketing tool of the present, and in these 10 courses you'll learn how to make professional videos on any budget. From the absolute basics to the advanced shooting and lighting techniques of the pros, you'll be ready to start making high-quality video content and driving viewers to it in no time. This bundle will teach you how to make amazing videos, whether you use a smartphone, webcam, DSLR, mirrorless, or professional camera. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

Why It Makes No Sense To Call Websites 'Common Carriers'

2 years 8 months ago

There's been an unfortunate movement in the US over the last few years to try to argue that social media should be considered "common carriers." Mostly this is coming (somewhat ironically) from the Trumpian wing of grifting victims, who are trying to force websites to carry the speech of trolls and extremists by claiming (against all actual evidence) that there's an "anti-conservative bias" in content moderation on various major websites.

This leads to things like Ohio's bizarre lawsuit that just outright declares Google a "common carrier" and seems to argue that the company cannot "discriminate" in its search results, even though the entire point of search is to rank (i.e., discriminate) between different potential search results to show you which ones it thinks best answer your query.

There is even some movement among (mostly Republican) lawmakers to pass laws that declare Facebook/Google/Twitter to be "common carriers." There's some irony here, in that these very same Republicans spent years demonizing the idea of "common carriers" when the net neutrality debate was happening, and insisting that the entire concept of "common carrier" was socialism. Amusingly (if it weren't so dumb), Republican-proposed bills declaring social media sites common carriers often explicitly carve out broadband providers from the definitions, as if to prove that this is not about any actual principles, and 100% about using the law to punish companies they think don't share their ideological beliefs.

Unfortunately, beyond grandstanding politicians, even some academics are starting to suggest that social media should be treated like common carriers. Beyond the fact that this would almost certainly come back to bite conservatives down the line, there's an even better reason why it makes no sense at all to make social media websites common carriers.

They don't fit any of the underlying characteristics that made common carrier designations necessary in the first place.

While there were other precursor laws having to do with the requirement to offer service if you were engaged in a "public calling," the concept of "common carriers" is literally tied up in its name: the "carrier" part is important. Common carriers have always been about transporting things from point A to point B. Going back to the first use of the direct concept of a must-"carry" rule, there's the 1701 English case of Lane v. Cotton, regarding the postal service's failure to deliver mail. The court ruled that a postal service should be considered a common carrier, and that there was a legitimate claim "[a]gainst a carrier refusing to carry goods when he has convenience, his wagon not being full."

In the US, the concept of the common carrier comes from the railroads and the Interstate Commerce Act of 1887, and then moved to communications services with the Communications Act of 1934. The important bifurcation between information services (not common carriers) and telecommunications services (which are common carriers) was later formalized in the Telecommunications Act of 1996.

As you look over time, you'll notice a few important common traits in all historical common carriers:

  1. Delivering something (people, cargo, data) from point A to point B
  2. Offering a commoditized service (often involving a natural monopoly provider)

In some ways, point (2) is a function of point (1). The delivery from point A to point B is the key point here. Railroads, telegraphs, and telephone systems are all in that simple business -- taking people, cargo, or data (voice) from point A to point B -- and then having no further ongoing relationship with you.

That's just not the case for social media. Social media, from the very beginning, was about hosting content that you put up. It's not transient, it's perpetual. That, alone, makes a huge difference, especially with regards to the 1st Amendment's freedom of association. It's one thing to say you have to transmit someone's speech from here to there and then have no more to do with it, but it's something else entirely to say "you must host this person's speech forever."

Second, social media is in no way a commoditized service. Facebook is a very different service from Twitter, as it is from YouTube, as it is from TikTok, as it is from Reddit. They're not interchangeable, nor are they natural monopolies, where the massive upfront capital outlays make building redundant infrastructure inefficient. New social networks can be set up without having to install massive infrastructure, and they can be extremely differentiated from every other social network. That's not true of traditional common carriers. Getting from New York to Boston by train is getting from New York to Boston by train.

Finally, even if you twisted yourself around and ignored all of that, you'd still be ignoring that even common carriers are able to refuse service to those who violate the rules (which is the reason any social media site bans a user -- for rule violations). Historically, common carriers can reject carriage for someone who does not pay, but also if the goods are deemed "dangerous" or not properly packed. In other words, even a common carrier is able to deny service to someone who does not follow the terms of service.

So, social media does not meet any of the core components of a common carrier. It is hosting content perpetually, not merely transporting data from one point to another in a transient fashion. It is not a commodity service, but often highly differentiated in a world with many different competitors offering very differentiated services. It is not a natural monopoly, in which the high cost of infrastructure buildout would be inefficient for other entrants in the market. And, finally, even if, somehow, you ignored all of that, declaring a social media site a common carrier wouldn't change that they are allowed to ban or otherwise moderate users who fail to abide by the terms of service for the site.

So can we just stop talking about how social media websites should be declared common carriers? It's never made any sense at all.

Mike Masnick

New Right To Repair Bill Targets Obnoxious Auto Industry Behavior

2 years 8 months ago

It's just no fun being a giant company aspiring to monopolize repair to boost revenues. On both the state and federal level, a flood of new bills is targeting companies' efforts to monopolize repair by implementing obnoxious DRM, making repair tools and manuals hard to find, bullying independent repair shops (like Apple does), or forcing tractor owners to drive hundreds of miles just to get their tractors repaired (one of John Deere's favorite pastimes). The Biden administration also just signed an executive order asking the FTC to crack down on repair restrictions.

This week the list of right to repair legislation jumped by one with the introduction of the "Right to Equitable and Professional Auto Industry Repair" Act (REPAIR Act), which would mandate equitable access to repair tools and tech, boost the FTC's authority to handle consumer complaints, and mandate additional transparency by the auto industry:

"Americans should not be forced to bring their cars to more costly and inconvenient dealerships for repairs when independent auto-repair shops are often cheaper and far more accessible,” said Rep. Rush. “But as cars become more advanced, manufacturers are getting sole access to important vehicle data while independent repair shops are increasingly locked out. The status quo for auto repair is not tenable, and it is getting worse. If the monopoly on vehicle repair data continues, it would affect nearly 860,000 blue-collar workers and 274,000 service facilities."

The auto industry has been particularly obnoxious when it comes to providing independent access to the data, tools, and repair manuals for cars with increasingly complicated internal electronics. That's a particular problem when an estimated 70 percent of U.S. cars are serviced by independent repair shops. The industry has also been obnoxious in its attempts to scuttle legislation addressing the problem, including running ads in Massachusetts that claimed an expansion of that state's right to repair law would only benefit stalkers and sexual predators.

The problem for companies looking to monopolize repair is twofold. One, the harder they try to lock their technologies down, the more the opposition grows. Two, that opposition tends to be both broad and bipartisan, ranging from the most fervent of urban Apple fanboys to the most rural of John Deere tractor owners. This isn't a battle they're likely to win, and while we haven't seen federal legislation on this front pass yet, if the industries continue to push their luck in this space, it's only a matter of time.

Karl Bode

Mike Masnick

New Right To Repair Bill Targets Obnoxious Auto Industry Behavior

2 years 8 months ago
It’s just no fun being a giant company aspiring to monopolize repair to boost revenues. On both the state and federal level, a flood of new bills are targeting companies’ efforts to monopolize repair by implementing obnoxious DRM, making repair tools and manuals hard to find, bullying independent repair shops (like Apple does), or forcing […]
Karl Bode

'Peaky Blinders' Production Company Working With Bushmills On A Themed Whiskey

2 years 8 months ago

Nearly a year ago, we talked about a trademark battle between Caryn Mandabach Productions, the company that produces Netflix's hit show Peaky Blinders, and Sadler's Brewhouse, a combined distillery that applied for a "Peaky Blinders" trademark for several spirits brands. Important to keep in mind is that "Peaky Blinders" isn't some made-up gang in a fictional story. That name was taken from very real history in England, as evidenced by the folks that own Sadler's being descendants of one of the gang's members. It's also important to remember that television shows and alcohol are not the same marketplace when it comes to trademark law. Despite that, there has been a years-long dispute raging between Mandabach and Sadler's.

And now we have some indication as to why, since Bushmills has announced a partnership with Mandabach Productions to release its own "Peaky Blinders" themed whiskey.

Irish whiskey producer Bushmills could be launching a Peaky Blinders-inspired whiskey after applying to approve a label for the product. Proximo Spirits, which owns Bushmills, made the application to the US Alcohol and Tobacco Tax and Trade Bureau in January 2022.

Caryn Mandabach Productions, which produces the hit Netflix series about the flatcap-wearing gang, is thought to be mentioned on the proposed Bushmills label, which also allegedly says the whiskey is licensed by series distributor Banijay Group.

And this is where things get really interesting. Why? Well, the argument I made in the original post on this topic was that Mandabach really didn't have a good argument for opposition or infringement since the production company wasn't actually using the historical name of a real gang to make alcohol. Given the disparate markets, there didn't seem to be any real reason for concern about public confusion.

But now that is happening in reverse. A company behind the Netflix show is now partnering with another distillery to enter the spirits market with a "Peaky Blinders" brand and theme. If anything, I would think that Sadler's Brewhouse now has an argument for opposition, given the pending trademark application. Especially since it seems the production company, late to the party, has "plans" to get into the liquor business.

Earlier this month, The Sun revealed that the production company has its own plans to open a line of Peaky Blinders-themed bars and restaurants.

In which case I believe this would come down mostly to a "first to file" race. And if the production company had already filed trademark applications for the liquor business, you really would have thought that fact would be on display in its opposition and suit against Sadler's. But there was no hint of that in any of the documents that informed our previous post.

So, on Mandabach's side of things, this all appears to be backwards, and I can't construct an argument for why it should win on any of it.

Timothy Geigner

ACLU & EFF Step Up To Tell Court You Don't Get To Expose An Anonymous Tweeter With A Sketchy Copyright Claim

2 years 8 months ago

In November, we wrote about a very bizarre case in which someone was using a highly questionable copyright claim to try to identify an anonymous Twitter user with the username @CallMeMoneyBags. The account had made fun of various rich people, including a hedge fund billionaire named Brian Sheth. In some of those tweets, Money Bags posted images that appeared to be standard social media type images of a woman, and the account claimed that she was Sheth's mistress. Some time later, an operation called Bayside Advisory LLC, that has very little other presence in the world, registered the copyright on those images, and sent a DMCA 512(h) subpoena to Twitter, seeking to identify the user.

The obvious suspicion was that Sheth was somehow involved and was seeking to identify his critic, though Bayside's lawyer has fairly strenuously denied Sheth having any involvement.

Either way, Twitter stood up for the user, noting that this seemed to be an abuse of copyright law to identify someone for non-copyright reasons, that the use of the images was almost certainly fair use, and that the 1st Amendment should protect Money Bags' identity from being shared. The judge -- somewhat oddly -- said that the fair use determination couldn't be made without Money Bags weighing in and ordered Twitter to alert the user. Twitter claims it did its best to do so, but the Money Bags account (which has not tweeted since last October...) did not file anything with the court, leading to a bizarre ruling in which Twitter was ordered to reveal the identity of Money Bags.

We were troubled by all of this, and it appears that so was the ACLU and the EFF, who have teamed up to tell the court it got this very, very wrong. The two organizations have filed a pretty compelling amicus brief saying that you can't use copyright as an end-run around the 1st Amendment's anonymity protections.

The First Amendment protects anonymous speakers from retaliation and other harms by allowing them to separate their identity from the content of their speech to avoid retaliation and other harms. Anonymity is a distinct constitutional right: “an author’s decision to remain anonymous, like other decisions concerning omissions or additions to the content of a publication, is an aspect of the freedom of speech protected by the First Amendment.” McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 342 (1995). It is well-settled that the First Amendment protects anonymity online, as it “facilitates the rich, diverse, and far-ranging exchange of ideas,” Doe v. 2TheMart.com, Inc., 140 F. Supp. 2d 1088, 1092 (W.D. Wash. 2001), and ensures that a speaker can use “one of the vehicles for expressing his views that is most likely to result in those views reaching the intended audience.” Highfields, 385 F. Supp. 2d at 981. It is also well-settled that litigants who do not like the content of Internet speech by anonymous speakers will often misuse “discovery procedures to ascertain the identities of unknown defendants in order to harass, intimidate or silence critics in the public forum opportunities presented by the Internet.” Dendrite Int’l v. Doe No. 3, 775 A.2d 756, 771 (N.J. App. Div. 2001).

Thus, although the right to anonymity is not absolute, courts subject discovery requests like the subpoena here to robust First Amendment scrutiny. And in the Ninth Circuit, as the Magistrate implicitly acknowledged, that scrutiny generally follows the Highfields standard when the individual targeted is engaging in free expression. Under Highfields, courts must first determine whether the party seeking the subpoena can demonstrate that its legal claims have merit. Highfields, 385 F. Supp. 2d at 975-76. If so, the court must look beyond the content of the speech at issue to ensure that identifying the speaker is necessary and, on balance, outweighs the harm unmasking may cause.

The filing notes that the magistrate judge who ordered the unmasking apparently seemed to skip a few steps:

The Magistrate further confused matters by suggesting that a fair use analysis could be a proxy for the robust two-step First Amendment analysis Highfields requires. Order at 7. This suggestion follows a decision, in In re DMCA Subpoena, 441 F. Supp. 3d at 882, to resolve a similar case purely on fair use grounds, on the theory that Highfields “is not well-suited for a copyright dispute” and “the First Amendment does not protect anonymous speech that infringed copyright.”...

That theory was legally incorrect. While fair use is a free-speech safety valve that helps reconcile the First Amendment and the Copyright Act with respect to restrictions on expression, anonymity is a distinct First Amendment right.1 Signature Mgmt., 876 F.3d at 839. Moreover, DMCA subpoenas like those at issue here and in In re DMCA Subpoena, concern attempts to unmask internet users who are engaged in commentary. In such cases, as with the blogger in Signature Mgmt., unmasking is likely to chill lawful as well as allegedly infringing speech. They thus raise precisely the same speech concerns identified in Highfields: the use of the discovery process “to impose a considerable price” on a speaker’s anonymity....

Indeed, where a use is likely or even colorably a lawful fair use, allowing a fair use analysis alone to substitute for a full Highfields review gets the question precisely backwards, given the doctrine’s “constitutional significance as a guarantor to access and use for First Amendment purposes.” Suntrust Bank v. Houghton Mifflin, 268 F.3d 1257, 1260 n.3 (11th Cir. 2001). Fair use prevents copyright holders from thwarting well-established speech protections by improperly punishing lawful expression, from critical reviews, to protest videos that happen to capture background music, to documentaries incorporating found footage, and so on. But the existence of one form of speech protection (the right to engage in fair use) should not be used as an excuse to give shorter shrift to another (the right to speak anonymously).

It also calls out the oddity of demanding that Money Bags weigh in, when it's Bayside, and whoever is behind it, that bears the burden of proving that this use was actually infringing:

Bayside incorrectly claims that Twitter (and by implication, its user) bears the burden of demonstrating that the use in question was a lawful fair use. Opposition to Motion to Quash (Dkt. No. 9) at 15. The party seeking discovery normally bears the burden of showing its legal claims have merit. Highfields, 385 F. Supp. 2d at 975-76. In this pre-litigation stage, that burden should not shift to the anonymous speaker, for at least three reasons.

First, constitutional rights, such as the right to anonymity, trump statutory rights such as copyright. Silvers v. Sony Pictures Entm’t, Inc., 402 F.3d 881, 883-84 (9th Cir. 2005). Moreover, fair use has an additional constitutional dimension because it serves as a First Amendment “safety valve” that helps reconcile the right to speak freely and the right to restrict speech. William F. Patry & Shira Perlmutter, Fair Use Misconstrued: Profit, Presumptions, and Parody, 11 Cardozo Arts & Ent. L.J. 667, 668 (1993). Shifting the Highfields burden to the speaker would create a cruel irony: an anonymous speaker would be less able to take advantage of one First Amendment safeguard—the right to anonymity—solely because their speech relies on another—the right to fair use. Notably, the Ninth Circuit has stressed that fair use is not an affirmative defense that merely excuses unlawful conduct; rather, it is an affirmative right that is raised as a defense simply as a matter of procedural posture. Lenz v. Universal, 815 F.3d 1145, 1152 (9th Cir. 2016). Second, Bayside itself was required to assess whether the use in question was fair before it sent its DMCA takedown notices to Twitter; it cannot now complain if the Court asks it to explain that assessment before ordering unmasking. In re DMCA Subpoena, 441 F. Supp. 3d at 886 (citing Lenz., 815 F.3d at 1153: “a copyright holder must consider the existence of fair use before sending a takedown notification under § 512(c)”)

Third, placing the burden on the party seeking to unmask a Doe makes practical sense at this early stage, when many relevant facts lie with the rightsholder. Here, for example, Bayside presumably knows—though it has declined to address—the original purpose of the works. And as the copyright holder, it is best positioned to explain how the use at issue might affect a licensing market. While the copyright holder cannot see into the mind of the user, the user’s purpose is easy to surmise here, and the same is likely to be true in any 512(h) case involving expressive uses. With respect to the nature of the work, any party can adequately address that factor. Indeed, both Bayside and Twitter have done so.

The filing also notes that this is an obvious fair use situation, and the judge can recognize that:

While courts often reserve fair use determinations for summary judgment or trial, in appropriate circumstances it is possible to make the determination based on the use itself. See In re DMCA Section 512(h) Subpoena to YouTube (Google, Inc.), No. 7:18-MC-00268 (NSR), 2022 WL 160270 (S.D.N.Y. Jan. 18, 2022) (rejecting the argument that fair use cannot be determined during a motion to quash proceeding). In Burnett v. Twentieth Century Fox, for example, a federal district court dismissed a copyright claim—without leave to amend—at the pleading stage based on a finding of fair use. 491 F. Supp. 2d 962, 967, 975 (C.D. Cal. 2007); see also Leadsinger v. BMG Music Pub., 512 F.3d 522, 532–33 (9th. Cir. 2008) (affirming motion to dismiss, without leave to amend, fair use allegations where three factors “unequivocally militated” against fair use). See also, e.g., Sedgwick Claims Mgmt. Servs., Inc. v. Delsman, 2009 WL 2157573 at *4 (N.D. Cal. July 17, 2009), aff’d, 422 F. App’x 651 (9th Cir. 2011); Savage v. Council on Am.-Islamic Rels., Inc., 2008 WL 2951281 at *4 (N.D. Cal. July 25, 2008); City of Inglewood v. Teixeira, 2015 WL 5025839 at *12 (C.D. Cal. Aug. 20, 2015); Marano v. Metro. Museum of Art, 472 F. Supp. 3d 76, 82–83, 88 (S.D.N.Y. 2020), aff’d, 844 F. App’x 436 (2d Cir. 2021); Lombardo v. Dr. Seuss Enters., L.P., 279 F. Supp. 3d 497, 504–05 (S.D.N.Y. 2017), aff’d, 729 F. App’x 131 (2d Cir. 2018); Hughes v. Benjamin, 437 F. Supp. 3d 382, 389, 394 (S.D.N.Y. 2020); Denison v. Larkin, 64 F. Supp. 3d 1127, 1135 (N.D. Ill. 2014).

These rulings are possible because many fair uses are obvious. A court does not need to consult a user to determine that the use of an excerpt in a book review, the use of a thumbnail photograph in an academic article commenting on the photographer’s work, or the inclusion of an image in a protest sign are lawful uses. There is no need to seek a declaration from a journalist when they quote a series of social media posts while reporting on real-time events.

And the uses by Money Bags were pretty obviously fair use:

First, the tweets appear to be noncommercial, transformative, critical commentary—classic fair uses. The tweets present photographs of a woman, identified as “the new Mrs. Brian Sheth” as part of commentary on Mr. Sheth, the clear implication being that Mr. Sheth has used his wealth to “invest” in a new, young, wife. As the holder of rights in the photographs, Bayside could have explained the original purpose of the photographs; it has chosen not to do so. In any event, it seems unlikely that Bayside’s original purpose was to illustrate criticism and commentary regarding a billionaire investor. Hence, the user “used the [works] to express ‘something new, with a further purpose or different character, altering the first with new expression, meaning, or message.’” In re DMCA Subpoena to Reddit, Inc., 441 F. Supp. 3d at 883 (quoting Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 579 (1994)). While undoubtedly crass, the user’s purpose is transformative and, Bayside’s speculation notwithstanding, there is nothing to suggest it was commercial.

The filing also calls out the magistrate judge's unwillingness to consider Twitter's own arguments:

Of course, there was a party in court able and willing to offer evidence and argument on fair use: Twitter. The Magistrate’s refusal to credit Twitter’s own evidence, Order at 7-8, sends a dangerous message to online speakers: either show up and fully litigate their anonymity—risking their right to remain anonymous in the process—or face summary loss of their anonymity when they do not appear. Order at 7. That outcome inevitably “impose[s] a considerable price” on internet users’ ability to exercise their rights to speak anonymously. Highfields, 385 F. Supp. 2d at 980-81. And “when word gets out that the price tag of effective sardonic speech is this high, that speech will likely disappear.”

Hopefully the court reconsiders the original ruling...

Mike Masnick

Former Employees Say Mossad Members Dropped By NSO Offices To Run Off-The-Books Phone Hacks

2 years 8 months ago

Oh, NSO Group, is there anything you won't do? (And then clumsily deny later?). If I were the type to sigh about such things, I surely would. But that would indicate something between exasperation and surprise, which are emotions I don't actually feel when bringing you this latest revelation about the NSO's shady dealings.

The Mossad used NSO’s Pegasus spyware to hack cellphones unofficially under the agency’s previous director, Yossi Cohen, several NSO Group employees said.

The employees, who asked to remain anonymous because of their confidentiality agreements with the company, said that Mossad officials asked NSO on several occasions to hack certain phones for them. The employees didn’t know why these hacks were requested.

There's plenty that will shock no one about these allegations. First off, NSO Group has an extremely close relationship with the Israeli government. Top-level officials have paved the way for sales to countries like Saudi Arabia and the UAE, leveraging powerful spyware to obtain diplomatic concessions.

Second, NSO -- like other Israeli malware merchants -- recruits heavily from the Israeli government, approaching military members and analysts from intelligence agencies Shin Bet and the Mossad. Given this incestuous relationship, it's unsurprising visiting Mossad members would feel comfortable asking for a few off-the-books malware deployments.

It appears these alleged hacking attempts were requested to obscure the source of the hackings, eliminating any paper trail linking the Mossad to the information obtained as a result of these malware deployments. As the Haaretz article points out, the Mossad doesn't really need NSO's tools or expertise. It had the capability to compromise cellphones well before NSO brought tools like Pegasus to market.

A generous reading of these informal requests would be that the Mossad was having problems compromising a target and wanted to see if NSO had any recent exploits that could help. A more realistic reading is that these requests were meant to evade the Mossad's oversight.

Experts in the field of phone exploitation are still trying to verify these claims and ascertain whether or not NSO could actually do what was requested. Evidence of these allegations has yet to be discovered. But it's apparent NSO's hard rules about who could or couldn't be targeted were actually portable goal posts.

NSO has sold plenty of spyware to governments with the understanding it can't be used to target US numbers. But then it showed up in the United States with a version of Pegasus called "Phantom" that could be used to target US numbers. It pitched this to the FBI (with live demonstrations using dummy phones purchased by the agency) but left empty-handed when DOJ counsel couldn't find a way to use the malware without violating the Constitution or (far more likely) without exposing the particulars of the hacking tool in open court.

NSO also claims malware cannot be deployed against Israeli numbers. This, too, has been shown to be false. So, there's really no reason to believe NSO when it claims everything about its malware products is so compartmentalized Mossad officials would not be able to waltz into the building and ask for unregulated malware deployments.

Indeed, the answer given by an NSO spokesperson is so ridiculous it may prompt a sudden burst of laughter from all but the most credulous readers.

When asked what prevents an executive from spying on, say, a competitor by using an in-house server, the NSO representative stressed that even if such a system existed, the legal risks posed by such a scenario would serve as a serious deterrent.

They added that the question is tantamount to asking what prevents workers in a munitions factory from stealing guns and using them illegally, or what stops a police officer from abusing their power.

On one hand, I can see this as NSO saying you have to trust your employees and that no policy is capable of eliminating all wrongdoing. On the other hand, it offers no meaningful denial of the alleged wrongdoing. The answer is at least as meaningless as the question: it basically says NSO can't really prevent malfeasance, which is definitely not a direct denial of the allegations made in this report.

NSO Group is in an unenviable position: it can't disprove these allegations without opening up its operations and its clients to scrutiny, but it can't do that without risking existing contracts or future sales. And as much as I'd like to express sympathy, the company has spent years making itself unsympathetic by selling to human rights violators and blowing off legitimate criticism of its business model. It made itself millions by selling to authoritarians and getting super cozy with Israel's government. Now it has to pay the piper. And it seriously looks like it will be as bankrupt as its morals by the time this is all said and done.

Tim Cushing

No, Creating An NFT Of The Video Of A Horrific Shooting Will Not Get It Removed From The Internet

2 years 8 months ago

Andy Parker has experienced something that no one should ever have to go through: having a child murdered. Even worse, his daughter, Alison, was murdered on live TV, while she was doing a live news broadcast, as an ex-colleague shot her and the news station's cameraman dead. It got a lot of news coverage, and you probably remember the story. Maybe you even watched the video (I avoided it on purpose, as I have no desire to see such a gruesome sight). Almost none of us can even fathom what that experience must be like, and I can completely understand how that has turned Parker into something of an activist. We wrote about him a year ago, when he appeared in a very weird and misleading 60 Minutes story attacking Section 230.

While Parker considers himself an "anti-big tech, anti-Section 230" advocate, we noted that his story actually shows the benefits of Section 230, rather than the problems with it. Parker is (completely understandably!) upset that the video of his daughter's murder is available online. And he wants it gone. As we detailed in our response to the 60 Minutes story, Parker had succeeded in convincing various platforms to quickly remove that video whenever it's uploaded. Something they can do, in part, because of Section 230's protections that allow them to moderate freely, and to proactively moderate content without fear of crippling lawsuits and liability.

The 60 Minutes episode was truly bizarre, because it explains Parker's tragic situation, and then notes that YouTube went above and beyond to stop the video from being shared on its platform, and then it cuts to Parker saying he "expected them to do the right thing" and then says that Google is "the personification of evil"... for... doing exactly what he asked?

Parker is now running for Congress as well, and has been spouting a bunch of bizarre things about the internet and content moderation on Twitter. I'd link to some of them, but he blocked me (a feature, again, that is aided by Section 230's existence). But now the Washington Post has a strange article about how Parker... created an NFT of the video as part of his campaign to remove it from the internet.

Now, Andy Parker has transformed the clip of the killings into an NFT, or non-fungible token, in a complex and potentially futile bid to claim ownership over the videos — a tactic to use copyright to force Big Tech’s hand.

So... none of this makes any sense. First of all, Parker doesn't own the copyright, as the article notes (though many paragraphs later, even though it seems like kind of a key point!).

Parker does not own the copyright to the footage of his daughter’s murder that aired on CBS affiliate WDBJ in 2015.

But it says he's doing this to claim "ownership" of the video, because what appear to be very, very bad lawyers have advised him that by creating an NFT he can "claim ownership" of the video, and then use the DMCA's notice-and-takedown provisions instead. Everything about this is wrong.

First, while using copyright to take down things you don't want is quite common, it's not (at all) what copyright is meant for. And, as much as Parker does not want the video to be available, there is a pretty strong argument that many uses of that video are covered by fair use.

But, again, he doesn't hold the copyright. So, creating an NFT of the video does not magically give him a copyright, nor does it give him any power under the DMCA to demand takedowns. That requires the actual copyright. Which Parker does not have. Even more ridiculously, the TV station that does hold the copyright has apparently offered to help Parker use the copyright to issue DMCA takedowns:

In a statement, Latek said that the company has “repeatedly offered to provide Mr. Parker with the additional copyright license” to call on social media companies to remove the WDBJ footage “if it is being used inappropriately.”

This includes the right to act as their agent with the HONR network, a nonprofit created by Pozner that helps people targeted by online harassment and hate. “By doing so, we enabled the HONR Network to flag the video for removal from platforms like YouTube and Facebook,” Latek said.

So what does the NFT do? Absolutely nothing. Indeed, the NFT is nothing more than basically a signed note, saying "this is a video." And part of the ethos of the NFT space is that people are frequently encouraged to "right click and save" the content, and to share it as well -- because the content and the NFT are separate.
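To make the "signed note" point concrete, here is a minimal, hypothetical sketch (the class and field names are illustrative, not any real blockchain's API): an NFT record typically holds an owner and a pointer to the media, while the media bytes themselves live elsewhere and remain freely copyable.

```python
# Hypothetical sketch of what an NFT record actually contains: a token ID,
# an owner, and a URI pointing at metadata/media -- not the media itself.
class SimpleNFT:
    def __init__(self, token_id: int, owner: str, metadata_uri: str):
        self.token_id = token_id
        self.owner = owner
        # A pointer to the content; the content is NOT stored in the token.
        self.metadata_uri = metadata_uri

# Stand-in for the actual video file, which lives off-chain.
video_bytes = b"...the actual video file lives off-chain..."

nft = SimpleNFT(token_id=1, owner="andy", metadata_uri="ipfs://example-cid")

# Anyone can duplicate the underlying bytes without touching the token:
copied = bytes(video_bytes)
assert copied == video_bytes  # the content is freely duplicable
# "Owning" the token confers no control over those copies -- which is why
# minting an NFT of the video does nothing to remove it from the internet,
# and confers no copyright interest.
```

The design point is exactly the one in the paragraph above: the token and the content are separate objects, so transferring or "owning" one says nothing about the other.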

Hell, there's an argument (though I'd argue a weak one -- though others disagree) that by creating an NFT of a work he has no copyright over, Parker has actually opened himself up to a copyright infringement claim. Indeed, the TV station is quoted in the article noting that, while it has provided licenses to Parker to help him get the video removed, "those usage licenses do not and never have allowed them to turn our content into NFTs."

I understand that Parker wants the video taken down -- even though there may be non-nefarious, legitimate reasons for those videos to remain available in some format. But creating an NFT doesn't give him any copyright interest, or any way to use the DMCA to remove the videos, and whoever told Parker otherwise should be disbarred. They're taking advantage of him and his grief, and giving him very, very bad legal advice.

Meanwhile, all the way at the end of the article, it is noted -- once again -- that the big social media platforms are extremely proactive in trying to remove the video of Alison's murder:

“We remain committed to removing violent footage filmed by Alison Parker’s murderer, and we rigorously enforce our policies using a combination of machine learning technology and human review,” YouTube spokesperson Jack Malon said in a statement.

[...]

Facebook bans any videos that depict the shooting from any angle, with no exceptions, according to Jen Ridings, a spokesperson for parent company Meta.

“We’ve removed thousands of videos depicting this tragedy since 2015, and continue to proactively remove more,” Ridings said in a statement, adding that they “encourage people to continue reporting this content.”

The reporter then notes that he was still able to find the video on Facebook (though all the ones he found were quickly removed).

Which highlights the nature of the problem: it is impossible to find and block the video with perfect accuracy. Facebook and YouTube employ some of the most sophisticated matching tools out there, but the sheer volume of content, combined with the tricks and modifications that uploaders try, means they're never going to be perfect. So even if Parker got the copyright, which he doesn't, it still wouldn't help, because these sites are already trying to remove the videos.
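A small illustration of why those "tricks and modifications" defeat naive matching (the byte strings here are stand-ins for real video data): any tiny change to a file -- re-encoding, cropping, a one-byte tweak -- produces a completely different cryptographic hash, so an exact-match blocklist misses the altered re-upload. This is why platforms layer perceptual matching and machine-learning review on top, and why even that combination is imperfect at scale.

```python
import hashlib

# Stand-in for the bytes of an uploaded video on a blocklist.
original = b"frame data of a blocklisted video"

# Stand-in for any trivial modification an uploader might make
# (re-encode, crop, watermark, flip one byte...).
modified = original + b"."

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()

# The cryptographic hashes share nothing recognizable, so comparing
# exact hashes cannot flag the modified copy as the same video.
assert h1 != h2
```

Perceptual hashes are designed to stay similar under small edits, but they trade off false positives and can still be evaded, which is the gap the article's reporter walked through when he found copies still on Facebook.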

Everything about this story is unfortunate. The original tragedy, of course, is heartbreakingly horrific. But Parker's misguided crusade isn't helping, and the whole NFT idea is so backwards that it might lead to him potentially facing a copyright claim, rather than using one. I feel sorry for Parker, not only for the tragic situation with his daughter, but because it appears that some very cynical lawyers are taking advantage of Parker's grief to try to drive some sort of policy outcome out of it. He deserves better than to be preyed upon like that.

Mike Masnick