
Techdirt

Daily Deal: The Complete Award-Winning Luminar AI Bundle

3 years 1 month ago

The Complete Award-Winning Luminar AI Bundle comes with the photo editing software, 3 template packs, and a photography ecourse. Luminar AI is an intelligent photo editor with an intuitive workflow and one-click solutions for complex tasks. With more than 100 tools powered by artificial intelligence, Luminar AI helps you make complex edits fast. Retouch portraits and create captivating magazine-quality landscapes without spending hours of effort. The templates included are for landscapes, travel, and black and white photos. The photography ecourse will help you learn how to plan a photoshoot, and give you an introduction to editing in Luminar. The bundle is on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.


How The EARN IT Act Is Significantly More Dangerous Than FOSTA

3 years 1 month ago

I've already explained the dangers of the EARN IT Act, which is supported by 19 Senators, who are misleading people with a "fact" sheet that is mostly full of myths. As Senator Wyden has explained, EARN IT will undoubtedly make the problem of child sexual abuse material (CSAM) worse, not better.

In my initial posts, I compared it to FOSTA, because EARN IT repeats the basics of the FOSTA playbook. But -- and this is very important, since EARN IT appears to have significant momentum in Congress -- it's not just FOSTA 2.0: it's significantly more dangerous in multiple ways that haven't necessarily been highlighted in most discussions of the bill.

First, let's look at why FOSTA was already so problematic -- and why many in Congress have raised concerns about the damage done by FOSTA or called for the outright repeal of FOSTA. FOSTA "worked" by creating a carveout from Section 230 for anything related to "sex trafficking." As we've explained repeatedly, the false premise of the bill is that if Section 230 "doesn't protect" certain types of content, that will magically force companies to "stop" the underlying activity.

Except, that's wrong. What Section 230 does is provide immunity not just for the hosting of content, but for the decisions a company takes to deal with that content. By increasing the liability, you actually disincentivize websites from taking action against such content, because any action to deal with "sex trafficking" content on your platform can be turned around and used against you in court to show you had "knowledge" that your site was used for trafficking. The end result, then, is that many sites either shut down entirely or just put blanket bans on perfectly legal activity to avoid having to carefully review anything.

And, as we've seen, the impact of FOSTA was putting women in very real danger, especially sex workers. Whereas in the past they were able to take control of their own business via websites, FOSTA made that untenable and risky for the websites. This actually increased the amount of sex trafficking, because it opened up more opportunity for traffickers to step in and provide the services that sex workers had formerly used websites for to control their own lives. This put them at much greater risk of abuse and death. And, as some experts have highlighted, these were not unintended consequences. They were consequences that were widely known and expected from the bill.

On top of that, even though the DOJ warned Congress before the law was passed that it would make it more difficult to catch sex traffickers, Congress passed it anyway and patted each other on the back, claiming that they had successfully "fought sex trafficking." Except, since then, every single report has said the opposite is true. Multiple police departments have explained that FOSTA has made it harder for law enforcement to track down sex traffickers, even as it's made it easier for traffickers to operate.

Last year, the (required, but delivered late) analysis of FOSTA by the Government Accountability Office found that the law made it more difficult to track down sex traffickers and did not seem to enable the DOJ to do anything it couldn't already do (but wasn't doing) before. The DOJ simply didn't need this law that Congress insisted it needed, and has basically not used it. Instead, what FOSTA has enabled in court is not an end to sex trafficking, but ambulance-chasing lawyers suing companies over nonsense -- companies like Salesforce and MailChimp, which are not engaging in sex trafficking, have had to fight FOSTA cases in court.

So, FOSTA is already a complete disaster by almost any measure. It has put women at risk. It has helped sex traffickers. It has made the job of law enforcement more difficult in trying to find and apprehend sex traffickers.

Already you should be wondering why anyone in Congress would be looking to repeat that mess all over again.

But, instead of just repeating it, they're making it significantly worse. EARN IT has a few slight differences from FOSTA, each of which makes the bill much more dangerous. And, incredibly, it's doing this without being able to point to a single case in which Section 230 got in the way of a prosecution over CSAM.

The state law land mine:

Section 230 already exempts federal criminal law violations. With FOSTA there was a push to also exempt state criminal law. This has been a pointed desire of state Attorneys General going back at least a decade and in some cases further (notably: when EARN IT lead sponsor Richard Blumenthal was Attorney General of Connecticut he was among the AGs who asked for Section 230 to exempt state criminal law).

Some people argue that since federal criminal law is already exempt, state law exemptions would be no big deal -- an argument that only reveals ignorance of the nature of state criminal laws. Let's just say that states have a habit of passing some incredibly ridiculous laws -- and those laws can be impossible to parse (and can even be contradictory). As you may have noticed, many states have become less the laboratories of democracy and much more the testing grounds for totalitarianism.

Making internet companies potentially criminally liable based on a patchwork of 50+ state laws opens them up to all sorts of incredible mischief, especially when you're dealing with state AGs whose incentives are, well, suspect.

CDT has detailed examples of conflicting state laws and how they would make it nearly impossible to comply:

For instance, in Arkansas it is illegal for an “owner, operator or employee” of online services to “knowingly fail” to report instances of child pornography on their network to “a law enforcement official.” Because this law has apparently never been enforced (it was passed in 2001, five years after Section 230, which preempts it) it is not clear what “knowingly” means. Does the offender have to know that a specific subscriber transmitted a specific piece of CSAM? Or is it a much broader concept of “knowledge,” for example that some CSAM is present somewhere on their network? To whom, exactly, do these providers report CSAM? How would this law apply to service providers located outside of Arkansas, but which may have users in Arkansas?

Maryland enables law enforcement to request online services take down alleged CSAM, and if the service provider doesn’t comply, law enforcement can obtain a court order to have it taken down without the court confirming the content is actually CSAM. Some states simply have incredibly broad statutes criminalizing the transmission of CSAM, such as Florida: “any person in this state who knew or reasonably should have known that he or she was transmitting child pornography . . . to another person in this state or in another jurisdiction commits a felony of the third degree.”

Finally, some states have laws that prohibit the distribution of “obscene” materials to minors without requiring knowledge of the character of the material or to whom the material is transmitted. For example, Georgia makes it illegal “to make available [obscene material] by allowing access to information stored in a computer” if the defendant has a “good reason to know the character of the material” and “should have known” the user is a minor. State prosecutors could argue that these laws are “regarding” the “solicitation” of CSAM on the theory that many abusers send obscene material to their child victims as part of their abuse.

Some early versions of FOSTA had a similar carve-out for state criminal laws, but after similar concerns were raised with Congress, it was modified so that it only applied to state criminal laws whose violation was also a violation of federal law. EARN IT has no such condition. In other words, EARN IT opens up the opportunity for significantly more mischief, letting state legislatures modify their laws in dangerous ways... and then enabling state AGs to go after the companies for criminal violations. Given the current power of the "techlash" to attract grandstanding AGs who wish to abuse their power to shake down internet companies for headlines, all sorts of nonsense is likely to be unleashed by this unbounded state law clause.

The encryption decoy:

I discussed this a bit in my original post, but it's worth spending some time on this as well. When EARN IT was first introduced, the entire tech industry realized that it was clearly designed to try to completely undermine end-to-end encryption (a goal of law enforcement for quite a while). Realizing that those concerns were getting too much negative attention for the bill, a "deal" was worked out to add Senator Pat Leahy's amendment which appeared to say that the use of encryption shouldn't be used as evidence of a violation of the law. However, in a House companion bill that came out a few months later, that language was modified in ways that looked slight, but actually undermined the encryption carve out entirely. From Riana Pfefferkorn, who called out this nonsense two years ago:

To recap, Leahy’s amendment attempts (albeit imperfectly) to foreclose tech providers from liability for online child sexual exploitation offenses “because the provider”: (1) uses strong encryption, (2) can’t decrypt data, or (3) doesn’t take an action that would weaken its encryption. It specifies that providers “shall not be deemed to be in violation of [federal law]” and “shall not otherwise be subject to any [state criminal charge] … or any [civil] claim” due to any of those three grounds. Again, I explained here why that’s not super robust language: for one thing, it would prompt litigation over whether potential liability is “because of” the provider’s use of encryption (if so, the case is barred) or “because of” some other reason (if so, no bar).

That’s a problem in the House version too (found at pp. 16-17), which waters Leahy’s language down to even weaker sauce. For one thing, it takes out Leahy’s section header, “Cybersecurity protections do not give rise to liability,” and changes it to the more anodyne “Encryption technologies.” True, section headers don’t actually have any legal force, but still, this makes it clear that the House bill does not intend to bar liability for using strong encryption, as Leahy’s version ostensibly was supposed to do. Instead, it merely says those three grounds shall not “serve as an independent basis for liability.” The House version also adds language not found in the Leahy amendment that expressly clarifies that courts can consider otherwise-admissible evidence of those three grounds.

What does this mean? It means that a provider’s encryption functionality can still be used to hold the provider liable for child sexual exploitation offenses that occur on the encrypted service – just not as a stand-alone claim. As an example, WhatsApp messages are end-to-end encrypted (E2EE), and WhatsApp lacks the information needed to decrypt them. Under the House EARN IT bill, those features could be used as evidence to support a court finding that WhatsApp was negligent or reckless in transmitting child sex abuse material (CSAM) on its service in violation of state law (both of which are a lower mens rea requirement than the “actual knowledge” standard under federal law). Plus, I also read this House language to mean that if WhatsApp got convicted in a criminal CSAM case, the court could potentially consider WhatsApp’s encryption when evaluating aggravating factors at sentencing (depending on the applicable sentencing laws or guidelines in the jurisdiction).

In short, so long as the criminal charge or civil claim against WhatsApp has some “independent basis” besides its encryption design (i.e., its use of E2EE, its inability to decrypt messages, and its choice not to backdoor its own encryption), that design is otherwise fair game to use against WhatsApp in the case. That was also a problem with the Leahy amendment, as said. The House version just makes it even clearer that EARN IT doesn’t really protect encryption at all. And, as with the Leahy amendment, the foreseeable result is that EARN IT will discourage encryption, not protect it. The specter of protracted litigation under federal law and/or potentially dozens of state CSAM laws with variable mens rea requirements could scare providers into changing, weakening, or removing their encryption in order to avoid liability. That, of course, would do a grave disservice to cybersecurity – which is probably just one more reason why the House version did away with the phrase “cybersecurity protections” in that section header.

So, take a wild guess which version is in this new EARN IT? Yup. It's the House version. Which, as Riana describes, means that if this bill becomes law encryption becomes a liability for every website.

FOSTA was bad, but at least it didn't also undermine the most important technology for protecting our data and communications.

The "voluntary" best practices committee tripwire:

Another difference between FOSTA and EARN IT is that EARN IT includes this very, very strange best practices committee, called the "National Commission on Online Child Sexual Exploitation Prevention" or NCOSEP. I'm going to assume the similarity in acronym to the organization NCOSE (The National Center on Sexual Exploitation -- formerly Morality in Media -- which has been beating the drum for this law as part of a plan to outlaw all pornography) is on purpose.

In the original version of EARN IT, this commission wouldn't just come up with "best practices"; Section 230 protections would then be available only to companies that followed those best practices. That puts a tremendous amount of power in the hands of the 19 Commissioners, many of whose seats are designated for law enforcement officials, who don't have the greatest history of caring one bit about the public's rights or privacy. The Commission is also heavily weighted against those who understand content moderation and technology. It would include five law enforcement members (the Attorney General, plus four others, including at least two prosecutors) and four "survivors of online child sexual exploitation," but only two civil liberties experts and only two computer science or encryption experts.

In other words, the Commission is heavily biased towards moral panic and towards ignoring both privacy rights and the limits of technology.

Defenders of the bill note that this Commission is effectively powerless: in theory, the best practices it comes up with don't hold any additional legal force. But the reality is that we know such a set of best practices, coming from a government commission, will undoubtedly be used over and over again in court to argue that this or that company -- by somehow not following every such best practice -- is somehow "negligent" or otherwise malicious in intent. And judges buy that kind of argument all the time (even when best practices come from private organizations, not the government).

So the best practices are likely to be legally meaningful in reality, even as the bill's backers insist they're not. (Which raises a separate question: if the Commission's best practices are meaningless, why are they in the bill at all?) Since they'll certainly be used in court, they'll carry great power -- and the majority of the Commission will be made up of people who have no experience with the challenges and impossibility of content moderation at scale, no experience with encryption, and no experience with the dynamic and rapidly evolving nature of fighting content like CSAM. They'll be writing "best practices" while the actual experts in technology and content moderation sit in the minority on the panel.

That is yet another recipe for disaster that goes way beyond FOSTA.

The surveillance mousetrap:

Undermining encryption would already be a disaster for privacy and security, but this bill goes even further in its attack on privacy. While it's not explicitly laid out in the bill, the myths and facts document that Blumenthal & Graham are sending around reveals -- repeatedly -- that they think that the way to protect yourself against the liability regime this bill imposes is to scan everything. That is, this is really a surveillance bill in disguise.

Repeatedly in the document, the Senators claim that surveillance scanning tools are "simple [and] readily accessible" and suggest over and over again that only companies that don't spy on every bit of data would have anything to worry about under this bill.

It's kind of incredible that this comes just a few months after there was a huge public uproar about Apple's plans to scan people's private data. Experts highlighted how such automated scanning was extremely dangerous and open to abuse and serious privacy concerns. Apple eventually backed down.

But it's clear from Senators Blumenthal & Graham's "myths and facts" document that they think any company that doesn't try to surveil everything should face criminal liability.

And that becomes an even bigger threat when you realize how much of our private lives and data have now moved into the cloud. Whereas it wasn't that long ago that we'd store our digital secrets on local machines, these days, more and more people store more and more of their information in the cloud or on devices with continuous internet access. And Blumenthal and Graham have laid bare that if companies do not scan their cloud storage and devices they have access to, they should face liability under this bill.

So, beyond the threat of crazy state laws, beyond the threat to encryption, beyond the threat from the wacky biased Commission, this bill also suggests the only way to avoid criminal liability is to spy on every user.

So, yes, more people have now recognized that FOSTA was a dangerous disaster that literally has gotten people killed. But EARN IT is way, way worse. This isn't just a new version of FOSTA. This is a much bigger, much more dangerous, much more problematic bill that should never be allowed to become law -- but has tremendous momentum to become law in a very short period of time.

Mike Masnick

Small Alabama Town's Overzealous Traffic Cops Also Monitored Internet Traffic To Threaten Critics Of The Corrupt PD

3 years 1 month ago

Welcome back to Brookside, Alabama, home of the surprisingly expensive traffic ticket. Home to one (1) Dollar General, nine (9) police officers, two (2) drug dogs (one named "K9 Cash" just in case you had any doubts about the PD's intentions), and one (1) Lt. Governor-ordered state audit. Brookside (pop. 1,253) made national headlines for soaking every passing driver officers could find with excessive fines, fees, vehicle seizures, and inconvenient court dates.

AL.com's investigation showed that under Police Chief Mike Jones (who was hired in 2018), the small town saw a dramatic increase in traffic fines, topping $600,000 in 2020. The department's overachievers patrolled over 114,000 miles in a single year and issued more than 3,000 citations to passing drivers. Despite his department's funding escalating from $79,000 to $524,000 since he took office, Chief Jones still found reason to complain. The $600,000 fine figure may have seemed abhorrent to anyone outside suddenly flush Brookside, but Jones said there was room to improve.

The new chief's directives had an immediate effect on officers, who took to the (very few) streets in unmarked cars while wearing unmarked uniforms. The resulting influx of traffic citation defendants pulled officers from the remarkably un-dangerous streets of rural Brookside to perform traffic control for the dozens of out-of-towners driving into Brookside to attend once-a-month court sessions.

The officers also decided the gloves were off and treated alleged moving violators accordingly. According to multiple accounts from Brookside victims, cops made up laws, fabricated charges, and used racist language to address drivers.

As a result of this unexpected national coverage of Chief Mike Jones's Boss Hoggish practices and policies, Chief Jones resigned his position, leaving it to the Brookside metroplex to decide what to do with all the extra cops it had decided to employ while Chief Jones was making it profitable to be a government employee.

Former Chief Jones may be able to duck under the national press radar, but local scrutiny continues, thanks to AL.com. The testimonials continue to pour in, showing Jones and his employees did pretty much everything but shoot someone on Fifth Avenue before being forced to act like real police in the face of the criticism of millions.

Drivers who have had the displeasure of interacting with the Brookside PD aren't happy. And their complaints have made their way to social media services. Apparently, a couple hundred feet of interstate traffic isn't the only thing the Brookside PD has been policing. Officers have been monitoring the internet airwaves to silence complaints and ensure the continued flow of excessive fines and fees.

Michelle Jones made an official complaint to the Alabama Attorney General’s office three years ago, arguing that Brookside police stopped her out of jurisdiction, issued a bogus citation and threatened her with more charges after she criticized them on Facebook.

[...]

In 2020, she had explained her case this way to the AG’s office: “The person threatened me with an arrest if I did not take down my Facebook pictures and posts of their police officers, stop sending emails to the local politicians, as well as others, and show them (Brookside police) that I understand law enforcement practices.”

Jones is not alone, as AL.com inadvertently rhymes. Others have come forward to complain about Brookside cops issuing less-than-implicit threats about online criticism. Another driver pulled over by a Brookside officer claimed the cop confiscated her phone, "explaining" that the PD often had drivers try to "stop and record us."

Jones' case is, however, one of the most alarming. After posting to Facebook, she was called by someone who identified himself only as "Detective Johnson" of the Brookside Police Department. He demanded she come in and talk to officers at the PD. When she refused, things escalated:

“Detective Johnson had called and asked that I come to the Brookside Police Department to talk to them. After I told him that I would not, he reported that they have two warrants for my arrest. He stated that I issued threats, incited a riot, and slandered the Brookside Police Department in my Facebook posts. He reported that his Police Chief was mad.”

Others who have been pulled over by Brookside officers claim they've been pulled over again -- not for alleged moving violations -- but to be told there would be "consequences" if more negative content was posted to social media.

It's not surprising that a law enforcement agency that has largely blown off the Fourth and Fifth Amendments would treat the First Amendment so cavalierly. About the only thing the Brookside PD hasn't done is demand US military members be quartered by drivers cited for (possibly imaginary) traffic violations.

While it's somewhat satisfying to see Chief Jones flee his position of power after being pinpointed as the person responsible for flagrant abuses of power, it would be far more satisfying to see him run out of town by aggrieved Brookside residents. But, for whatever reason, locals and local officials have next to nothing to say about three years of exponentially escalating roadside extortion that took place right under their noses.

And it was under their noses. The town is incredibly small and residents had to know the budget situation had changed drastically once Chief Jones was hired. Everyone here is culpable. But town officials are the most culpable. They had the power to stop this but they chose to profit from it instead. And for that, they should all be as out of a job as Chief Jones is. The real shame is Mike Jones will probably be able to leverage this bullshit "success" into a better paying job somewhere else in the nation since nothing he did has been found to be illegal. That may change in the future as lawsuits against him and his department move forward, but for far too many cash-strapped communities, a roadside bandit like Chief Jones might just be the hero they need… or at least endorse until it becomes politically inconvenient.

Tim Cushing

Nintendo Hates You: More DMCA Takedowns Of YouTube Videos Of Game Music Despite No Legit Alternative

3 years 1 month ago

I guess this is nearly an annual thing now. In 2019, we talked about how one YouTuber, GilvaSunner, had over one hundred YouTube videos blocked by Nintendo over copyright claims. GilvaSunner's channel is dedicated to video game music, mostly from Nintendo games. Those videos consist of nothing but that music, as in no footage of video game gameplay. Nintendo, which certainly can take this sort of action from an IP standpoint, also doesn't offer any legit alternative for fans to enjoy this music on any streaming service or the like. Then, in 2020, GilvaSunner had another whole swath of videos consisting of game music blocked by Nintendo over copyright claims. Still no legit alternative for those looking to enjoy music from Nintendo's celebrated catalogue of games.

Well, if Nintendo decided to take 2021 off from this annual project, it certainly has more than made up for it by sending copyright strikes to GilvaSunner's channel at a volume of over 1,300 in one day.

Yesterday morning, YouTuber GilvaSunner posted a tweet explaining that Nintendo had sent them and their channel over 1300 “copyright blocks.” The channel, which is extremely popular, uploads full video game soundtracks, letting fans easily listen to their favorite Kirby or Mario track via YouTube.

After all the copyright blocks went through and the dust settled, GilvaSunner shared a list of all the soundtracks that Nintendo had targeted and blocked from the site. It’s a long list.

A very long list, as you might expect. Now, a couple of items of note here. First, GilvaSunner has insisted that he is not shocked that Nintendo continues to take these actions, nor does he claim that it isn't within its rights to take them. But he's also not going to stop voluntarily.

“I’m also not angry or surprised that Nintendo is doing this, but I do think it’s a bit disappointing there is hardly an alternative,” explained GilvaSunner in a tweet thread from 2020. “If Nintendo thinks this is what needs to be done (to set an example), I will let them take down the channel. It is their content after all.”

Do as you please, in other words, Nintendo. That being said, let's also note that the channel doesn't monetize any of these videos. GilvaSunner doesn't make money off of Nintendo's music.

And neither does Nintendo because, frustratingly, the company still hasn't made this music available on any of the music streaming services we all know and love. Nor has the company announced any plans to. In other words, Nintendo isn't going to provide you with a way to enjoy this music and it is going to shut down anyone who does.

In that scenario, this isn't Nintendo protecting its monetary interests. It's simply the company deciding to take its musical ball and go home. Why? Because Nintendo hates you, that's why.

Timothy Geigner

Virginia Police Used Fake Forensic Documents To Secure Confessions From Criminal Suspects

3 years 1 month ago

Cops lie. It's just something they do.

It's something all people do. We just expect cops to do less of it because they're entrusted with enforcing laws, which suggests their level of integrity should be higher than that of the policed. Unfortunately, the opposite often tends to be the case.

There are many reasons cops lie. All of them are self-centered. They lie to cover up misconduct, salvage illegal searches, deny deployment of excessive force, and ensure narratives are preserved when challenged in court.

They also lie to obtain confessions from criminal suspects. There is nothing illegal about this act. Whether or not it crosses constitutional lines tends to come down to the judgment of the judges handling civil rights lawsuits. There's no hard and fast rule as to which lies are unconstitutional so cops do a lot of lying when trying to fit someone for a criminal charge.

Up until recently, it was okay for the Virginia Beach Police Department to use a particularly nefarious form of lying when trying to coax confessions from criminal suspects. While cops will routinely claim evidence and statements point to the person as the prime suspect, very rarely do they actually show this fake evidence to people being interrogated. Not so in Virginia Beach, where fake documents were just part of investigators' toolkits.

Police in Virginia Beach repeatedly used forged documents purporting to be from the state Department of Forensic Science during interrogations, falsely allowing suspects to believe DNA or other forensic evidence had tied them to a crime, the state attorney general revealed Wednesday in announcing an agreement to ban the practice.

This practice was inadvertently exposed by a prosecutor who asked for a certified copy of a report faked up by police investigators. The state's Department of Forensic Science told the commonwealth's attorney no such report existed, leading to an internal investigation by the PD. That happened last April. The following month (May 2021), the Virginia Beach police chief issued an order forbidding the use of this tactic. Since then, the PD has uncovered five instances in which fake forensic documents were used during investigations.

But it wasn't just limited to investigators trying to convince suspects to admit their guilt. One of these fake documents made its way into court, used as evidence (!!) during a bail hearing.

Now, there's a statewide ban on using fake or forged forensic documents during interrogations, thanks to Virginia's Attorney General. There's been no statement made yet suggesting the prosecutions tied to use of fake documents will be examined further to determine whether their use was coercive, and the Attorney General's office has not said whether it will notify convicts who were subjected to this form of police lying.

The PD's apology is somewhat less than authentic:

The Virginia Beach Police Department said in a statement that the technique, “though legal, was not in the spirit of what the community expects.”

There are a lot of things that are technically legal but that most people would find to be an abuse of power. The key is to not engage in questionable practices just because no court has declared them unconstitutional. No doubt the investigators that used fake documents to secure confessions were aware the community at large would frown on such obviously devious behavior. But they did it anyway because winning at all costs is standard MO for most law enforcement agencies. While it's good this discovery led to swift action, the investigation should really be expanded to see what other unsavory techniques are being deployed to extract confessions.

Tim Cushing

How Disney Got That 'Theme Park Exemption' In Ron DeSantis' Unconstitutional Social Media Bill

3 years 1 month ago

It's been almost exactly a year since Florida Man Governor Ron DeSantis announced plans to try to pass a law that would ban social media websites from taking down misinformation, abuse, and other types of speech. When the final bill came out, at the very last minute, Florida Rep. Blaise Ingoglia tried to sneak in an amendment that carved out Disney by saying the law didn't apply to any company that owned a theme park. This took other legislators by surprise, as indicated in this somewhat incredible video of Florida Reps. Anna Eskamani and Andrew Learned confronting Ingoglia over this amendment and what it meant:

In that video, Ingoglia flat out admits that the goal was to try to carve Disney+ out of the definition of a "social media provider." He says they looked at other possible language changes and adding the "theme park" exemption was just the easiest way to exclude Disney. Of course, that never made any sense. In the video he says, repeatedly, that this is to protect "reviews" on Disney+, which is weird because Disney+ doesn't have reviews. He also tries to make weird distinctions between Disney and Netflix which suggests a really confused understanding of Section 230 and how it interacts with first party and third party content. Amusingly, Eskamani points out at one point that Disney owns other websites -- like ESPN.com -- and asks if they, too, would be exempted from the bill, and Ingoglia responds in the most inane way possible: "as long as they follow their policies, everything should be fine." Which... makes no sense and didn't answer the question.

Either way, the bill has since (rightly) been declared unconstitutional (though Florida is appealing), and the issue of the theme park exemption was mostly a sideshow in the ruling.

However, it still left many people scratching their heads as to how that came about -- including intrepid reporter Jason Garcia, who filed some freedom of information requests with the Governor's office to see if he could find out the backstory behind the Disney theme park exemption... and, let me tell you, he hit pay dirt. The emails reveal quite a lot. And, as Garcia notes:

Ron DeSantis’ willingness to give Disney an incoherent carveout from this bill raises real questions about whether the governor really cared about cracking down on Big Tech – or whether he just cared about making voters think he’d cracked down on Big Tech.

But more telling is the finding that the "amendment" appeared to come directly from Disney. The governor's legislative affairs director, Stephanie Kopelousos, emailed staffers in the Florida House to call her, and then 21 minutes later, emailed them the theme park amendment, with the subject line: "New Disney language." And, just to underline the fact that Kopelousos was corresponding with Disney folks, when some House staffers pushed back on some ideas this happened:

In one email to the other governor’s office and House staffers, Kopelousos sent a proposal under the subject line, “Latest from Disney.” A few hours later, after other staffers expressed concern that idea was too broad, she sent in another attempt, which she explained with the note, “Disney responded with this.”

At one point, Disney, through Kopelousos, suggested carving out "journalism" organizations (as if Disney is a journalism organization). That created something of a mess:

An hour later, Kopelousos emailed a third possibility. The subject line was “New Disney language,” and the language, she told the others, came “From Adam,” presumably a reference to a Disney lobbyist named Adam Babington.

[....]

“So Disney is a journalistic enterprise now?” Kurt Hamon, the staff director for the House Commerce Committee, wrote in response to one of the company’s ideas. “I would say no to this one too...why would we [exempt] journalism enterprises? Would Google, Facebook and Twitter qualify as a journalistic enterprise?”

“If they have a problem with Kurt’s narrow suggestion, then they are probably doing or seeking to do more than they have indicated,” James Utheier, who was DeSantis’ general counsel and is now the governor’s chief of staff, wrote in response to another.

Basically, it appears that Disney kept trying to carve itself out and, as Ingoglia more or less admitted, with the clock ticking down on the Florida legislative session, most of Disney's own suggestions were ridiculous -- so the nonsense "theme park exemption" became the easiest way to carve out Disney.

Some of these emails are hilarious.

I mean, this isn't a surprise, but it just confirms what was obvious all along. DeSantis proposed this dumb idea, and his minions in the legislature ran with it, without bothering to think through basically any of the consequences of the bill (let alone the constitutional problems with it). And then just as they were about to pass it, a Disney lobbyist realized "shit, we have websites too..." and demanded a carve out.

This is not how law-making is supposed to be done, but it sure is how law-making often is done. It sure shows the kind of soft corruption of the system, in which a large company in the state, like Disney, gets to write itself out of bills.

For what it's worth, Garcia also notes that the Senate companion to the House bill sailed through... basically because Florida state Senator Ray Rodrigues flat out lied about it when questioned. He noted that the House had passed a bill similar to the one the Senate had passed, noting "the House placed some amendments on it." He then describes the other amendments the House added (which made the bill even dumber, but whatever) and then skips right over the Disney exemption. So then the Senate President asks the Senate to approve the Disney Amendment without anyone even saying what it was. Another state Senator, Perry Thurston, jumps in to ask what's in the amendment.

The Senate President, Wilton Simpson, says: "Senator Rodrigues explained the amendment. The amendment that he explained was this amendment."

Except, that's not true. At all. Rodrigues skipped right over the theme park amendment. And... then the Senate just voted to allow the amendment without ever actually saying what it did. In some ways, this is even more embarrassing than the Eskamani/Learned/Ingoglia discussion in the House. At least they were able to discuss the Disney exemption in the open and admit to what it did. Rodrigues just flat out tried to ignore it to get it included...

And people wonder why the public doesn't trust politicians? Perhaps this cronyism and nonsense is why...

Mike Masnick

Senator Wyden: EARN IT Will Make Children Less Safe

3 years 1 month ago

Earlier this week we wrote about the problematic reintroduction of the EARN IT Act and explained how it will make children a lot less safe -- exactly the opposite of what its backers claim. Senator Ron Wyden has now put out a statement that succinctly explains the problems of EARN IT, and exactly how it will do incredible harm to the very children it pretends to protect:

“This sadly misguided bill will not protect children. It will not stop the spread of vile child exploitation material or target the monsters that produce it. And it does not spend a single dollar to invest in prevention services for vulnerable children and youth or help victims and their families by providing evidence-based and trauma-informed resources. Instead, the EARN IT Act threatens the privacy and security of law-abiding Americans by targeting any form of private, secure devices and communication. As a result, the bill will make it easier for predators to track and spy on children and also harm the free speech and free expression of vulnerable groups,” Wyden said. “I have spent my career in the Senate fighting to protect kids and aid victims of abuse, and I will do everything in my power to ensure every single monster responsible for exploiting children or spreading horrific CSAM materials is prosecuted to the fullest extent of the law. But this bill does nothing to turn around the Justice Department’s tragic failure to prioritize child welfare and abuse cases.”

As Wyden notes, he introduced a bill that would put $5 billion towards actually fighting child sexual abuse, but for whatever reason that bill is going nowhere, while EARN IT is on the fast track.

Only one of those bills (Wyden's) actually moves us towards really fighting against child sexual exploitation. The other one grandstands and makes children less safe because it fails to understand technology or the law. Yet which one is Congress gearing up to support?

Mike Masnick

Explainer: The Whole Spotify / Joe Rogan Thing Has Absolutely Nothing To Do With Section 230

3 years 1 month ago

I really wasn't going to write anything about the latest Spotify/Joe Rogan/Neil Young thing. We've posted older case studies about content moderation questions regarding Rogan and Spotify, and we have an upcoming guest post in the works exploring one angle of the Rogan/Young debate.

However, because it's now come up a few times, I did want to address one point and do a little explainer post: Spotify's decisions about Rogan (and Young and others) have absolutely nothing to do with Section 230. At all.

Now, we can blame Spotify a bit for people thinking it does, because (for reasons I do not understand, and for which both its lawyers and its PR people should be replaced), Spotify has tried to make this about "content moderation." Hours after Spotify's internal "content policy" leaked, the company put out a blog post officially releasing the policy... that had already leaked.

And, when you're talking about "content policy" it feels like the same old debates we've had about content moderation and trust and safety and "user generated content" websites and whatnot. But the decision to keep Rogan on the platform has nothing, whatsoever, to do with Section 230. The only Section 230 issue here would arise if Rogan did something that created an underlying cause of action -- such as defamation -- and the defamed individual chose to sue Spotify. Spotify could then use Section 230 to get dismissed from the lawsuit, though the plaintiff could still sue Rogan. (If you want an analogous case, years back, AOL was sued over something Matt Drudge wrote -- after AOL had licensed the Drudge Report in order to distribute it to AOL users -- and the court said that Section 230 protected AOL from the lawsuit, though not Drudge himself.)

The thing is, no one (that I can find at least) is alleging any actual underlying cause of action against Rogan here. They're just arguing that somehow Section 230 is to blame for Spotify's decision to keep Rogan on their platform.

But the question of Spotify's decision to keep Rogan or not has nothing to do with Section 230 at all. Spotify has every right to decide whether or not to keep Rogan in the same manner that a book publisher gets to decide whether or not they'll publish a book by someone. And that right is protected by the 1st Amendment. If someone sued Spotify for "hosting Joe Rogan," Spotify would win easily, not using Section 230, but for failure to state any actual claim, backed up by the 1st Amendment right of Spotify to work with whatever content providers they want (and not work with ones they don't).

Unfortunately, Spotify's founder Daniel Ek made matters even dumber yesterday by pulling out the mythical and entirely non-existent "platform/publisher" divide:

At the employee town hall, both Ek and chief content and advertising business officer Dawn Ostroff “repeatedly used the phrase ‘if we were a publisher,’ very strongly implying we are not a publisher, so we don’t have editorial responsibility” for Rogan’s show, said a second Spotify employee who listened to the remarks — and who, like some Spotify employees listening, found the executives’ position “a dubious assertion at best.”

In a chat linked to the town hall livestream, “A large portion of the angry comments were about how Spotify’s exclusive with Rogan means it’s more than just a regular platform,” said one employee.

That LA Times article, by Matt Pearce and Wendy Lee (who are good reporters and should know better), then confuses things as well, implying that Section 230 depends on whether or not a website acts as a "publisher or a platform." It does not. Section 230 applies equally to all "interactive computer services" with regards to content provided by "another information content provider." There is no distinction between "platform" and "publisher." The only issue is if Spotify helps create the content -- in whole or in part -- and courts have determined that merely paying for it doesn't matter here. It's whether or not the company actively had a role in making the actual content (and, more specifically, in contributing to the law-violating nature of any content). But that's not the case here.

Still, with all this talk of "platforms" and "publishers" and "content policies" and content moderation -- people seem very very quick to want to somehow blame Section 230. Superstar tech reporter Kara Swisher went on Anderson Cooper's CNN show and argued that Spotify doesn't deserve Section 230, which is weird, again, because Section 230 isn't implicated at all by Spotify's decision.

“It’s great to have different opinions. It’s not great to put out incorrect facts. There is a difference. There still is, no matter how you slice it.” @karaswisher on Spotify’s decision to add a content advisory to all podcasts that discuss Covid-19. pic.twitter.com/e7aYCe1ALt

— Anderson Cooper 360° (@AC360) February 1, 2022

Then, the folks at Sleeping Giants, an activism group that I think does really great work communicating with advertisers about where their ad dollars are going, also tweeted about the LA Times article suggesting that it was another reason why Section 230 was "too broad." After I (and many others) tweeted at them that this wasn't a 230 issue at all, they quickly apologized and removed the tweet:

Okay, @mmasnick and @evan_greer, two people who are extra knowledgeable on 230 and whose opinions I trust pretty much body slammed me on this, so I’m going to do some penance and dig deeper. Apologies to all.

Lesson learned. Never tweet, then go to a show for two hours. pic.twitter.com/q6W5Yqlrqh

— Sleeping Giants (@slpng_giants) February 3, 2022

But since so many smart people are getting this confused, I wanted to try to do my best to make it clear why this is not a 230 issue.

And the simplest way to do so is this: How would this situation play out any differently if Section 230 didn't exist? If it didn't exist then... Spotify still would be making decisions about whether or not to cut a deal with Rogan. Spotify, just like a publishing company, a newspaper, a TV cable news channel, would have a 1st Amendment editorial right to determine who to allow on its platform and who not to. 230 doesn't create a right to editorial discretion (both up and down). That already exists thanks to the 1st Amendment.

Indeed, if you're thinking that Spotify might somehow be liable if someone gets hurt because they listened to someone spreading stupid advice on Rogan's podcast, that's not going to fly -- but, again, because of the 1st Amendment, not Section 230. As Section 230/1st Amendment expert Prof. Jeff Kosseff explained in this great thread, book publishers have (multiple times!) been found to be not liable for dangerous information found in the books they publish.

There has been a lot of talk about Spotify, Joe Rogan, and Section 230. The problem with the discussion is that 230 is irrelevant because there is not a viable cause of action against Spotify -- or Rogan -- for health misinfo. These books from the 80s explain why. pic.twitter.com/o1iFPfVvBt

— Jeff Kosseff (@jkosseff) February 3, 2022

In both of the cases he describes, people were injured, tried to hold the book publisher responsible for telling them to do something dangerous, and the courts said the 1st Amendment doesn't allow that.

or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers . . . Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs."

— Jeff Kosseff (@jkosseff) February 3, 2022

So then, the only way 230 comes into play here is if Rogan broke the law with his speech on the podcast (with defamation being the most obvious possibility). As far as I can tell, Rogan has never been sued for defamation (though he has threatened to sue CNN for defamation, but that's another dumb story for another day). So, the risk here seems minimal. Some people have suggested suing for "medical misinformation," but anything Rogan says along those lines is almost certainly protected 1st Amendment speech as well. But, if Rogan somehow said something that opened him up to a civil suit and the plaintiff also sued Spotify... Section 230 would... help Spotify... a tiny bit? It would likely help Spotify get the case tossed out marginally earlier in the process. But even if we had no 230, based on how the law was before Section 230 (and the examples like those shown by Jeff Kosseff), the courts would likely say Spotify could only be liable if it had knowledge of the illegal nature of the content, which Spotify could easily show it did not -- since Rogan produces the show himself without Spotify.

So in the end, 230 provides Spotify a tiny kind of benefit here -- the same one it provides to all websites that host 3rd party content. But that benefit has nothing to do with the decision of whether to keep Rogan or not. It would only apply to the unlikely situation of someone suing, and even then the benefit would be something akin to "getting a case dismissed for $50k instead of $100k," because the case would still be dismissed. Just with slightly less lawyer time.

We can have debates about Joe Rogan. We can have debates about Spotify. We can have debates about Section 230. All may be worth discussing. But the argument that Spotify keeping Rogan has anything to do with Section 230... is just wrong. The 1st Amendment lets Spotify host Rogan's podcast, just like it lets any publisher publish someone's book. Taking it away won't change the calculus for Spotify. It won't make Spotify any more likely to remove Rogan.

So, go ahead and have those other debates, but there's no sense in trying to claim it's all one debate.

Mike Masnick

Daily Deal: GoSafe S780 Dash Cam with Sony Image Sensor

3 years 1 month ago

Looking for a great dash cam that records well in low light? Check out the GoSafe S780. With its revolutionary Sony Starvis sensor, the S780 delivers remarkable performance in those tricky dusk driving situations. Plus, thanks to its dual-channel system, you can record both the front and rear of your vehicle at the same time. It's on sale for $200.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

Can You Solve The Miserable Being Miserable Online By Regulating Tech?

3 years 1 month ago

Over the last few months, I've been asking a general question which I don't know the answer to, but which I think needs a lot more research. It gets back to the issue of how much of the "bad" that many people insist is caused by social media (and Facebook in particular) is actually caused by social media, and how much of it is just shining a light on what was always there. I've suggested that it would be useful to have a more nuanced account of this, because it's become all too common for people to insist that anything bad they see talked about on social media was magically caused by social media (oddly, traditional media, including cable news, rarely gets this kind of treatment). The reality, of course, is likely that there is a mix of things happening, and they're not easily teased apart, unfortunately. So, what I'd like to see is some more nuanced accounting of how much of the "bad stuff" we see online is (1) just social media reflecting back bad things that have always been there, but which we were less aware of, as opposed to (2) enabled by social media connecting and amplifying people spreading the bad stuff. On top of that, I think we should similarly be comparing how social media has also connected tons of people for good purposes -- and see how much of that happens as compared to the bad.

I'm not holding my breath for anyone to actually produce this research, but I did find a recent Charlie Warzel piece very interesting, and worth reading, in which he suggests (with some interesting citations), that social media disproportionately encourages the miserable to connect with each other and egg each other on. It's a very nuanced piece that does a good job highlighting the very competing incentives happening, and notes that part of the reason there's so much garbage online is that there's tremendous demand for it:

But online garbage (whether political and scientific misinformation or racist memes) is also created because there’s an audience for it. The internet, after all, is populated by people—billions of them. Their thoughts and impulses and diatribes are grist for the algorithmic content mills. When we talk about engagement, we are talking about them. They—or rather, we—are the ones clicking. We are often the ones telling the platforms, “More of this, please.”

This is a disquieting realization. As the author Richard Seymour writes in his book The Twittering Machine, if social media “confronts us with a string of calamities—addiction, depression, ‘fake news,’ trolls, online mobs, alt-right subcultures—it is only exploiting and magnifying problems that are already socially pervasive.” He goes on, “If we’ve found ourselves addicted to social media, in spite or because of its frequent nastiness … then there is something in us that’s waiting to be addicted.”

In other words, at least some of this shouldn't be laid at the feet of the technology, but rather us, as humanity, in what we want out of the technology. It's potentially a sad statement on human psychology that we'd rather seek out the garbage than the other stuff, but it also kind of suggests that the "solution" is not so much in attacking the technology, but maybe figuring out solutions that have more to do with our own societal and psychological outlook on the world.

However, as Warzel notes, if social media is preternaturally good at linking up the miserable, and encouraging them to be more miserable together, then you could argue that it does deserve some of the blame.

Misery is a powerful grouping force. In a famous 1950s study, the social psychologist Stanley Schachter found that when research subjects were told that an upcoming electrical-shock test would be painful, most wished to wait for their test in groups, but most of those who thought the shock would be painless wanted to wait alone. “Misery doesn’t just love any kind of company,” Schachter memorably argued. “It loves only miserable company.”

The internet gives groups the ability not just to express and bond over misery but to inflict it on others—in effect, to transfer their own misery onto those they resent. The most extreme examples come in the form of racist or misogynist harassment campaigns—many led by young white men—such as Gamergate or the hashtag campaigns against Black feminists.

Misery trickles down in subtler ways too. Though the field is still young, studies on social media suggest that emotions are highly contagious on the web. In a review of the science, Harvard’s Amit Goldenberg and Stanford’s James J. Gross note that people “share their personal emotions online in a way that affects not only their own well-being, but also the well-being of others who are connected to them.” Some studies found that positive posts could drive engagement as much as, if not more than, negative ones, but of all the emotions expressed, anger seems to spread furthest and fastest. It tends to “cascade to more users by shares and retweets, enabling quicker distribution to a larger audience.”

This part is fascinating to me in that it actually does try to tease out some of the differences between what anger does to us at an emotional level as compared to happiness. It also reminds me of the (misleadingly reported) Washington Post story regarding how Facebook kept adjusting the "weighting" of the various emoji responses it added, especially focused on how to weight the "anger" emoji.

Anger certainly feels like the kind of emotion that will lead something to spread quickly -- we've all had that moment of anger over something, and spreading the news feels like at least some kind of outlet when you feel powerless over something awful that has happened. But I'm still not clear on how to break down the different aspects of how all of this interacts with social media, as compared to how much it's shining a light on deeper, more underlying societal problems that need solving at their core.

Warzel argues that the connecting of the miserable is something different, and perhaps leads to a more combustible world:

But it also means that miserable people, who were previously alienated and isolated, can find one another, says Kevin Munger, an assistant professor at Penn State who studies how platforms shape political and cultural opinions. This may offer them some short-term succor, but it’s not at all clear that weak online connections provide much meaningful emotional support. At the same time, those miserable people can reach the rest of us too. As a result, the average internet user, Munger told me in a recent interview, has more exposure than previous generations to people who, for any number of reasons, are hurting. Are they bringing all of us down?

Some of the other research he highlights suggests something similar:

“Our data show that social-media platforms do not merely reflect what is happening in society,” Molly Crockett said recently. She is one of the authors of a Yale study of almost 13 million tweets that found that users who expressed outrage were rewarded with engagement, which made them express yet more outrage. Surprisingly, the study found that politically moderate users were the most susceptible to this feedback loop. “Platforms create incentives that change how users react to political events over time,” Crockett said.

But in the end, he notes that, well, this is all interconnected and way more complicated than most people proposing solutions would like to admit. Destroying Facebook doesn't solve this. Removing Section 230 doesn't solve this (and would almost certainly make this much, much worse).

But the technology is only part of the battle. Think of it in terms of supply and demand. The platforms provide the supply (of fighting, trolling, conspiracies, and junk news), but the people—the lost and the miserable and the left-behind—provide the demand. We can reform Facebook and Twitter while also reckoning with what they reveal about the nation’s mental health. We should examine more urgently the deeper forces—inequality, a weak social safety net, a lack of accountability for unchecked corporate power—that have led us here. And we should interrogate how our broken politics drive people to seek out easy, conspiratorial answers. This is a bigger ask than merely regulating technology platforms, because it implicates our entire country.

I think his suggestion is correct. We need to be looking across the board at how we build a better society -- and in doing so, we're doing everyone a disservice if we just think that "regulating tech" somehow will solve any of the underlying societal problems. But, as the article makes clear, there are so many different factors at play that it's not easy to tease them apart.

Mike Masnick

New FCC Broadband 'Nutrition Label' Will More Clearly Inform You You're Being Ripped Off

3 years 1 month ago

For years we've noted how broadband providers impose all manner of bullshit fees on your bill to drive up the cost of service post sale. They've also historically had a hard time being transparent about what kind of broadband connection you're buying. As was evident back when Comcast thought it would be a good idea to throttle all upstream BitTorrent traffic (without telling anybody), or AT&T decided to cap and throttle the usage of its "unlimited" wireless users (without telling anybody), or Verizon decided to modify user packets to track its customers around the internet (without telling anybody).

Maybe you see where I'm going with this.

Back in 2016 the FCC floated a voluntary "nutrition label" for broadband. The idea was that this label would clearly disclose speeds, throttling, limitations, sneaky fees, and all the stuff big predatory ISPs like to bury in their fine print (if they disclose it at all). This was the example image the FCC circulated at the time:

While the idea was scuttled by the Trump administration, Congress demanded the FCC revisit it as part of the recent infrastructure bill. So the Rosenworcel FCC last week, as instructed by Congress, voted 4-0 to begin exploring new rules:

We’ve got nutrition labels on foods. They make it easy to compare products. It’s time to have the same simple nutrition labels on broadband. Everyone should be able to compare service, price and data. No more hiding fees in fine print.https://t.co/Jdc3fj4HgP

— Jessica Rosenworcel (@JRosenworcel) January 27, 2022

A final vote to approve the rules will come after the Biden FCC finally has a voting majority, likely this summer. And unlike the first effort, this time the requirements will be mandatory, so ISPs will have to comply.

This is all well intentioned, and to be clear it's a good thing Comcast and AT&T will now need to be more transparent in the ways they're ripping you off. In fact, when AT&T recently announced it would be providing faster 2 and 5 Gbps fiber to some users, it stated it would be getting rid of hidden fees and caps entirely on those tiers. AT&T announced this as if they'd come up with the idea, when in reality they were just getting out ahead of the requirement they knew was looming anyway. So stuff like this does matter.

The problem of course is that forcing ISPs to be transparent about how they're ripping you off doesn't stop them from ripping you off. Big broadband providers are able to nickel-and-dime the hell out of users thanks to two things: regional monopolization causing limited competition, and the state and federal corruption that protects it. U.S. policymakers and lawmakers can't (and often won't) tackle that real problem, so instead we get these layers of band aids that only treat the symptom of a broken U.S. telecom market, not the underlying disease.

Karl Bode

Moar Consolidation: Sony Acquires Bungie, But Appears To Be More Hands Off Than Microsoft

3 years 1 month ago

A couple of weeks back we asked the question: is the video game industry experiencing an age of hyper-consolidation? The answer increasingly looks to be "yes." That post was built off of a pair of Microsoft acquisitions: Zenimax for $7 billion, and then a bonkers deal for Activision Blizzard King at roughly $69 billion. Whereas consolidations in industries are a somewhat regular thing, what caused my eyes to narrow was all of the confused communications coming out of Microsoft as to how the company would handle these properties when it came to exclusivity on Microsoft platforms. It all went from vague suggestions that the status quo would be the path forward to, eventually, the announcement that some (many?) titles would in fact be Microsoft exclusives.

So, back to my saying that consolidation does seem to be the order of the day: Sony recently announced it had acquired game studio Bungie for $3.6 billion.

Sony Interactive Entertainment today announced a deal to acquire Bungie for $3.6 billion, the latest in a string of big-ticket consolidation deals in the games industry.

After the deal closes, Bungie will be "an independent subsidiary" of SIE run by a board of directors consisting of current CEO and chairman Pete Parsons and the rest of the studio's current management team.

This is starkly different than the Microsoft acquisitions in a couple of ways. Chief among them is that Bungie will continue to operate with much more independence than those acquired by Microsoft. While Sony obviously wants to recoup its investment in Bungie, the focus there appears to be on continuing to make great games using existing IP, building new IP, and creating content for that IP that expands far beyond just the video game publishing space.

What does not appear to be part of the plan are PlayStation exclusives, as explicitly stated in this interview with both Sony Interactive Entertainment CEO Jim Ryan and Bungie's CEO Pete Parsons.

In an interview with GamesIndustry.biz, Sony Interactive Entertainment CEO Jim Ryan says that Destiny 2 and future Bungie games will continue to be published on other platforms, including rival consoles. The advantages Bungie offers Sony is in its ability to make huge, multiplatform, live-service online games, which is something the wider organisation is eager to learn from.

"The first thing to say unequivocally is that Bungie will stay an independent, multiplatform studio and publisher. Pete [Parsons, CEO] and I have spoken about many things over recent months, and this was one of the first, and actually easiest and most straightforward, conclusions we reached together. Everybody wants the extremely large Destiny 2 community, whatever platform they're on, to be able to continue to enjoy their Destiny 2 experiences. And that approach will apply to future Bungie releases. That is unequivocal."

That's about as firm a stance as you're going to get in this industry. And it is a welcome sign in a few ways. Primarily, Bungie fans will be pleased to know the acquisition doesn't mean they'll lose out on game releases if they don't own a PlayStation. But perhaps just as important is that this demonstrates another route big gaming companies can go with these acquisitions.

As I stated in previous posts on the Microsoft acquisitions: consolidation doesn't have to be a bad thing, but when it results in less customer choice, that's not great. That Sony is doing this differently is a good sign.

Timothy Geigner

Spying Begins At Home: Israel's Government Used NSO Group Malware To Surveill Its Own Citizens

3 years 1 month ago

Israeli malware purveyor NSO Group may want to consider changing its company motto to "No News Is Good News." The problem is there's always more news.

The latest report from Calcalist shows NSO is aiding and abetting domestic abuse. No, we're not talking about the king of Dubai deploying NSO's Pegasus spyware to keep tabs on his ex-wife and her lawyer. This is all about how the government of Israel uses NSO's phone hacking tools. And that use appears to be, in a word, irresponsible.

Israel police uses NSO’s Pegasus spyware to remotely hack phones of Israeli citizens, control them and extract information from them, Calcalist has revealed. Among those who had their phones broken into by police are mayors, leaders of political protests against former Prime Minister Benjamin Netanyahu, former governmental employees, and a person close to a senior politician.

Not exactly the terrorists and dangerous criminals NSO claims its customers target. Instead, the targets appear to be more of the same non-terrorists and non-criminals NSO customers have targeted with alarming frequency: political opponents, activists, etc.

That already looks pretty terrible (but extremely on-brand for NSO customers). But it gets a lot worse. The government didn't even bother trying to fake up any justification for this spying.

Calcalist learned that the hacking wasn’t done under court supervision, and police didn’t request a search or bugging warrant to conduct the surveillance.

Is it a "rogue state" when the entire state has decided the rules don't apply to them? Asking for people I would never consider friends.

Perhaps this abuse could have been contained, curtailed, or averted entirely. But the upper layers of the Israeli government cake couldn't be bothered.

There is also no supervision on the data being collected, the way police use it, and how it distributes it to other investigative agencies, like the Israel Securities Authority and the Tax Authority.

"Fuck it," said multiple levels of the Israeli government. It would be a shame to let these powerful hacking tools go to waste -- not when there are anti-government activists out doing activism. Israeli law enforcement decided -- not incorrectly, it appears -- it was a law unto itself, and issued its own paperwork to target protesters demonstrating against the former Prime Minister and COVID restrictions handed down by the Israeli government.

At least some of these malware attacks were targeted. In other cases, law enforcement engaged in almost-literal fishing expeditions to find more targets for NSO's Pegasus spyware.

NSO’s spyware was also used by police for phishing purposes: attempts to phish for information in an intelligence target’s phone without knowing in advance that the target committed any crime. Pegasus was installed in a cellphone of a person close to a senior politician in order to try and find evidence relating to a corruption investigation.

If you like your damning reports to be breathtaking in their depiction of government audacity, click through to read more. The further you scroll down, the worse it gets. Evidence obtained with illicit malware deployments was laundered via parallel construction. Employees of government contractors were targeted without consultation with any level of oversight. A town's mayor was hacked -- allegedly because the Israeli government suspected corruption -- but no evidence of corruption was obtained. However, all data and communications harvested from the compromised phone still remains in the hands of the government. In one case, cops used NSO malware -- again without court permission -- to identify a phone thief suspected of publishing "intimate images" from the stolen phone online.

In only a few cases was the malware used to investigate serious crimes. But even in those cases, no legal approval was obtained and the malware was deployed furtively to fly under the oversight radar.

NSO's response to this report is more of the same: Hey, we just sell the stuff. We can't control how it's used, even when it's being purchased by our own government.

The Israeli police statement is far more defensive:

“The claims included in your request are untrue. Israel Police acts according to the authority granted to it by law and when necessary according to court orders and within the rules and regulations set by the responsible bodies. The police’s activity in this sector is under constant supervision and inspection of the Attorney General of Israel and additional external legal entities…"

Well, then I assume the paperwork containing signatures and explicit approval of all relevant authorities is being swiftly couriered to Calcalist HQ to provide evidence refuting the claims made in its article. Otherwise, this just sounds like the bitter muttering of an angry government spokesperson willing to do nothing more than allude to the Emperor's New Court Orders. Given the routine abuse of NSO Group malware by governments around the world, it comes as absolutely no surprise it's being abused at home as well. And the non-denials by governments are starting to wear as thin as NSO's "hey, we're only an enabler of abuse" statements.

Tim Cushing

Hollywood, Media, And Telecom Giants Are Clearly Terrified Gigi Sohn Will Do Her Job At The FCC

3 years 1 month ago

Media and telecom giants have been desperately trying to stall the nomination of Gigi Sohn to the FCC. Both industries want to keep the Biden FCC gridlocked at 2-2 Commissioners thanks to the rushed late 2020 Trump appointment of Nathan Simington to the Commission, and both most assuredly don't want the Biden FCC to do popular things like restore the FCC's consumer protection authority, net neutrality, or media consolidation rules. But because Sohn is so popular, they've had a hell of a time coming up with criticisms that make any coherent sense.

One desperate claim being spoon fed to GOP lawmakers is that Sohn wants to "censor conservatives," despite the opposite being true: Sohn has considerable support from conservatives for protecting speech and fostering competition and diversity in media (even if she disagrees with them). Another lobbying talking point being circulated is that because Sohn briefly served on the board of the now defunct Locast, she's somehow incapable of regulating things like retransmission disputes objectively. Despite the claim being a stretch, Sohn has agreed to recuse herself from such issues for the first three years of her term.

Hoping to seize on the opportunity, former FCC boss turned top cable lobbyist Mike Powell is now trying to claim that because Sohn has experience working on consumer protection issues at both Public Knowledge and the FCC (she helped craft net neutrality rules under Tom Wheeler), she should also be recused from anything having to do with telecom companies. It's a dumb Hail Mary from a revolving door lobbyist whose only interest is in preventing competent oversight of clients like Comcast:

"He said it is not clear why those would be the only issues from which she would recuse herself, “given the breadth of issues in which Public Knowledge was involved” under Sohn. He said the recusal should ”logically extend“ to all the matters she advocated for at Public Knowledge, or none.

Second, he said: “Next, in the more recent years since her service at the Commission during the Obama administration, Ms. Sohn has been publicly involved on matters of direct interest to our membership. There is no logical basis for treating these matters differently from the retransmission and copyright issues for purposes of recusal."

Facebook, Amazon, and Google all tried similar acts of desperation to thwart FTC boss Lina Khan, suggesting that because she opined on antitrust matters as an influential academic, she was utterly incapable of regulating these companies objectively. But both Khan and Sohn have a deep understanding of the sectors they're tasked with regulating. Both are also the opposite of revolving door policymakers with financial conflicts of interest, which you'll note none of these critics have the slightest issue with.

Of course telecom and big broadcasters aren't the only industries terrified of competent, popular women in positions of authority. Hollywood (and the politicians paid to love it) is also clearly terrified of someone competent at the FCC. The Directors Guild of America is also urging the Senate Commerce Committee to kill Sohn's nomination. Its justification for opposing her? Sohn once attempted to (gasp) bring competition to the cable box:

"Hollander pointed to one of the proposals that Sohn championed when she served as counselor to FCC Chairman Tom Wheeler during Barack Obama’s second term. Wheeler and Sohn saw the proposal, introduced in 2016, as a way to free cable and satellite subscribers from having to pay monthly rental fees for their set top box. The proposal would have required that pay TV providers offer a free app to access the channels, but ran into objections from the MPAA, which said it would be akin to a “compulsory copyright license.” It’s unlikely that the proposal would come up again in that form, as it was sidelined when Jessica Rosenworcel, who now is chairwoman of the FCC, declined to support it."

You might recall the 2016 proposal in question tried to force open the cable industry's dated monopoly over cable boxes by requiring cable companies provide their existing services in app form (it wasn't "free"). You might also recall that the plan failed in part because big copyright, with the help of the Copyright Office, falsely claimed the proposal was an attack on the foundations of copyright. It wasn't. But the claims, hand in hand with all kinds of other bizarre and false claims from media and cable (including the false claim the proposal would harm minorities), killed it before it really could take its first steps.

I had my doubts about the proposal. Streaming competition will inevitably kill the cable box if we wait long enough, so it would seemingly make sense to focus the FCC's limited resources on more pressing issues: like regional broadband monopolization and the resulting dearth of competition. But the FCC's doomed cable box proposal absolutely was not an "attack on copyright." Companies just didn't want a cash cow killed (cable boxes generate about $20 billion in fee revenue annually), and the usual suspects were just absolutely terrified of disruption, competition, and change.

The Senate was supposed to vote Sohn's nomination forward on Wednesday, but that has been delayed because Senator Ben Ray Luján suffered a stroke (he's expected to make a full recovery). Industry opponents of Sohn's nomination then exploited that stroke to convince Senator Maria Cantwell to postpone the vote further and hold yet another hearing, which they could use either to scuttle the nomination with bogus controversies spoon fed to select lawmakers, or simply to delay the vote even further.

We're now a year into Biden's first term and his FCC still doesn't have a voting majority. If you're a telecom or shitty media giant (looking at you, Rupert), that gridlock is intentional; it prevents the agency from undoing any of the unpopular favors doled out during Trumpism, be it the neutering of the FCC's consumer protection authority or the gutting of decades-old media consolidation rules crafted with bipartisan support. It's once again a shining example of how U.S. gridlock and dysfunction are a lobbyist-demanded feature, not a bug or some inherent, unavoidable part of the American DNA.

These companies, organizations, and politicians aren't trying to thwart Sohn's nomination because they have meaningful, good faith concerns. Guys like Mike Powell couldn't give any less of a shit about ethics or what's appropriate. They're trying to thwart Sohn's nomination because she knows what she's doing, values competition and consumer welfare, and threatens them with the most terrifying of possibilities if you're a monopoly or bully: competent, intelligent oversight.

Karl Bode

Can We At Least Make Sure Antitrust Isn't Deliberately Designed To Make Everyone Worse Off?

3 years 1 month ago

For decades here on Techdirt I've argued that competition is the biggest driver of innovation, and so I'm very interested in policies designed to drive more competition. Historically this has been antitrust policy, but over the past decade or so it feels like antitrust policy has become less and less about competition, and more and more about punishing companies that politicians dislike. We can debate whether or not consumer welfare is the right standard for antitrust -- I think there are people on both sides of that debate who make valid points -- but I have significant concerns about any antitrust policy that seems deliberately designed to make consumers worse off.

That's why I'm really perplexed by the recent push to pass the “American Innovation and Choice Online Act” from Amy Klobuchar which, for the most part, doesn't seem to be about increasing competition, innovation, or choice. It seems almost entirely punitive, punishing not just the very small number of companies it targets, but everyone who uses those platforms.

There's not much I agree with Michael Bloomberg about, but I think his recent opinion piece on the AICOA bill is exactly correct.

At the heart of the bill is an effort to prevent big tech companies from using a widespread business practice called self-preferencing, which is generally good for both consumers and competition. Think of it this way: An ice-cream parlor makes its own flavors and sells other companies’ flavors, too. Its storefront window carries a large sign advertising its homemade wares. In smaller letters, the sign mentions that Haagen-Dazs and Breyers are available, too. Should Congress force the ice-cream store owners to advertise Haagen-Dazs and Breyers as prominently as their own products?

That’s essentially what this bill would force a handful of the largest tech companies to do. For instance, Google users searching the name of a local business now get, in their search results, the option of clicking a Google-built map. But under the bill’s requirements, the search results would likely have to exclude the Google map. Similarly, Amazon would likely be prevented from promoting its less-expensive generic goods against the biggest brand names.

Lots of businesses offer configurations of products and services in ways that are attractive to customers, often for both price and convenience. Doing this can allow companies to enter — and potentially disrupt — new markets, to the great advantage of customers.

Yet the bill views such standard business conduct as harmful. It would require covered companies — essentially Amazon, Apple, Google, Facebook and TikTok — to prove that any new instance of preferencing would “maintain or enhance the core functionality” of their business. Failure to comply could lead to fines of up to 15% of a company’s total U.S. revenue over the offending period.

Now, I think there's a very legitimate argument that if a dominant company is using its dominant position to preference something in a manner that harms competition and the end user experience, then that can be problematic, and existing antitrust law can take care of that. But this bill seems to assume that any effort to offer your own services is somehow de facto against the law.

And whether or not that harms these companies is beside the point: it will absolutely harm the users and customers of these companies, and why should that be enabled by US competition policy? The goal seems to be "if we force these companies to be worse, maybe it will drive people to competitors," which is a really bizarre way of pushing competition. We should drive competition by encouraging great innovation, not limiting how companies can innovate.

Even if you don't think that the "consumer welfare" standard makes sense for antitrust, I hope most people can at least agree that any such policy should never deliberately be making consumers worse off.

Mike Masnick

Texas Town To Start Issuing Traffic Tickets By Text Message

3 years 1 month ago

Way back in 2014, Oklahoma state senator (and former police officer) Al McAffrey had an idea: what if cops could issue traffic tickets electronically, without ever having to leave the safety and comfort of their patrol cars?

The idea behind it was officer safety. This would keep officers from standing exposed on open roads and/or interacting face-to-face with a possibly dangerous driver. The public's safety was apparently low on the priority list, since this lack of interaction could permit impaired drivers to continue driving or allow actually dangerous people to drive away from a moving violation to do more dangerous things elsewhere.

It also would allow law enforcement agencies to convert drivers to cash more efficiently by speeding up the process and limiting things that might slow down the revenue stream, like having actual conversations with drivers. On the more positive side, it would also have lowered the chance of a traffic stop turning deadly (either for the officer or the driver) by limiting personal interactions that might result in the deployment of excessive or deadly force. And it also would limit the number of pretextual stops by preventing officers from claiming to have smelled something illegal while conducting the stop.

Up to now, this has only been speculative legislation. But it's becoming a reality, thanks to government contractor Trusted Driver. Run by former police officer Val Garcia, the program operates much like the TSA's Trusted Traveler program. Users create accounts and enter personal info and then receive traffic citations via text messages.

The program is debuting in Texas, where drivers who opt in will start being texted by cops when they've violated the law.

It's a concept never done before, and it's about to happen in Bexar County: Getting a traffic ticket sent to your phone without an officer pulling you over. One police department will be the first in the nation to test it.

"It's not a 100% solution, but it's a step forward in the right direction," said Val Garcia, President & CEO of the Trusted Driver Program.

Garcia is one of five former SAPD officers who are part of a 12-member team that created and developed Trusted Driver.

"We're proud to still give back with what we've gained with our experience as a law enforcement officer," said Garcia.

The company claims the program will have several benefits, above and beyond limiting cop-to-driver interactions that have the possibility of escalating into deadly encounters. Some of the benefits aren't immediately discernible, but giving cops more personal information could actually help prevent the senseless injury or killing of drivers who may have medical reasons that would explain their seeming non-compliance. Here's Scott Greenfield highlighting this particular aspect of the Trusted Driver Program.

But this also offers an opportunity that can be critical in police interactions and has led to a great many tragic encounters.

“If you’re deaf, if you have PTSD, autism, a medical condition like diabetes or a physical disability but you’re still allowed to drive,” said Garcia. “It really gives an officer information faster in the field to handle a traffic stop if it does occur and be able to deescalate.”

That police will be aware that a driver is deaf or autistic could be of critical importance in preventing a mistaken shooting, provided the cop reads it and is adequately trained not to kill deaf people because they didn’t comply with commands.

Unfortunately, the cadre of cops behind Trusted Driver seem to feel citizens are looking for even more ways to interact with officers, even if this interaction is limited to text messages.

Through Trusted Driver, police are also able to send positive messages to drivers who are doing a stellar job obeying traffic laws.

Just like cops thinking they're doing a good thing by pulling over drivers who haven't committed a crime to give them a thumbs up or a Thanksgiving turkey, Trusted Driver seems to believe the public will be receptive to text messages from cops telling them they're doing a good job driving, delivered to them via a number they associate with punishment for criminal acts. And it's not like drivers in the program will be able to select which messages they receive: once you've opted in, you can have your heart rate temporarily increased by the law enforcement equivalent of slacktivism, one that Trusted Driver believes will somehow build and repair the public's relationship with the law enforcement officers that serve them.

This lies somewhere between the frontier of law enforcement and the inevitability of tech development. It's not that it's an inherently bad idea, but there's a lot in there that's problematic, including officers receiving increased access to drivers' personal info, which will now include their cell phone numbers. Law enforcement officers have a history of abusing access to personal info and this program gives them the opportunity to do so without ever leaving their patrol cars.

Then there's the unanswered question about enforcement. Will members of this program receive more tickets just because they're easier to ticket? Or will traffic enforcement still be evenly distributed (so to speak) across all drivers? Like other automated traffic enforcement efforts, tickets will be issued to the owner of the vehicle, rather than the actual driver, which is going to cause problems for people who haven't actually committed a moving violation, beginning with increased insurance rates and possibly ending with bench warrants for unpaid tickets that were issued to the wrong person.

Still, it's worth experimenting with. But it needs to be subject to intense scrutiny the entire time it's deployed. There's too much at risk for agencies and the general public to just let it hum along unattended in the background, steadily generating revenue. Unfortunately, if it does that part of the job (deepening the revenue stream), concerns about its use and operation are likely to become background noise easily drowned out by the sound of city coffers being filled.

Tim Cushing

Daily Deal: The 2022 FullStack Web Developer Bundle

3 years 1 month ago

The 2022 FullStack Web Developer Bundle has 11 courses to help you step up your game as a developer. You'll learn frontend and backend web technologies like HTML, CSS, JavaScript, MySQL, and PHP. You'll also learn how to use Git and GitHub, Vuex, Docker, Ramda, and more. The bundle is on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

With Stephen Breyer's Retirement, The Supreme Court Has Lost A Justice Who Was Wary Of Overly Burdensome Copyright

3 years 1 month ago

Whatever the (I'd argue unfortunate) politics behind Stephen Breyer's decision to retire as a Supreme Court Justice at the conclusion of this term, it is notable around here for his views on copyright. Breyer has generally been seen as the one Justice on the court most open to the idea that overly aggressive copyright policy was dangerous and potentially unconstitutional. Perhaps ironically, given that they were often lumped together on the overly simplistic "left/right" spectrum, Justices Breyer and Ginsburg occupied somewhat opposite ends of the copyright spectrum. Ginsburg was consistently a voice in favor of expanding copyright law to extreme degrees, while Breyer seemed much more willing to recognize that the rights of users -- including fair use -- were extremely important.

If you want to see that clearly, read Ginsburg's majority opinion in the Eldred case (on whether or not copyright term extension is constitutional) as compared to Breyer's dissent. To this day I believe that 21st century copyright law would have been so much more reasonable and so much more for the benefit of the public if Breyer had been able to convince others on the court to his views. As Breyer notes in his dissent, a copyright law that does not benefit the public should not be able to survive constitutional scrutiny:

Thus, I would find that the statute lacks the constitutionally necessary rational support (1) if the significant benefits that it bestows are private, not public; (2) if it threatens seriously to undermine the expressive values that the Copyright Clause embodies; and (3) if it cannot find justification in any significant Clause-related objective.

(As an aside, the book No Law has a very, very thorough breakdown of how the majority ruling by Justice Ginsburg in that case was just, fundamentally, objectively wrong.)

That said, Breyer wasn't -- as he was sometimes painted -- a copyleft crusader or anything. As Jonathan Band details, Breyer's views on copyright appeared to be extremely balanced -- sometimes ruling for the copyright holder, and sometimes not. Indeed, to this day, I still cannot fathom how he came to write the majority opinion in the Aereo case, which used a "looks like a duck" kind of test. In that case, the company carefully followed the letter of the law regarding copyright, but because its service felt like a different kind of service, the court was fine with declaring it to be one -- even though, by playing within the lines, it technically was not. We are still suffering from the impact of that case today.

So, while I didn't always think that Breyer got copyright cases correct, he was -- consistently -- much more thoughtful on copyright issues than any other Justice on today's court, and that perspective will certainly be missed.

Mike Masnick

Congress Introduces New Agricultural 'Right to Repair' Bill With Massive Farmer Support

3 years 1 month ago

Back in 2015, frustration at John Deere's draconian tractor DRM helped birth a grassroots tech movement dubbed "right to repair." The company's crackdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM (and the company's EULA) prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair (which for many owners involved hauling tractors hundreds of miles and shelling out thousands of additional dollars), or toying around with pirated firmware just to ensure the products they owned actually worked.

Seven years later, this movement is only growing. This week Senator Jon Tester said he was introducing new legislation (full text here, pdf) that would require tractor and other agricultural hardware manufacturers to make manuals, spare parts, and software access codes publicly available:

"We’ve got to figure out ways to empower farmers to make sure they can stay on the land. This is one of the ways to do it,” Tester said. “I think that the more we can empower farmers to be able to control their own destiny, which is what this bill does, the safer food chains are going to be."

The legislation comes as John Deere recently was hit with two new lawsuits accusing the company of violating antitrust laws by unlawfully monopolizing the tractor repair market. In 2018 John Deere had promised to make sweeping changes to address farmers' complaints, though by 2021 those changes had yet to materialize. Tester's legislation also comes as a new US PIRG survey shows that a bipartisan mass of farmers overwhelmingly supports reform on this front.

Tester's proposal is just one of several new efforts to rein in attempts to monopolize repair, be it by John Deere or Apple. More than a dozen state-level laws have been proposed, and the Biden administration's recent executive order on competition also urges the FTC to craft tougher rules on repair monopolization efforts. In an era rife with partisan bickering, it's refreshing to see an issue with such broad, bipartisan public support, resulting in an issue that had only niche support a half decade ago rocketing into the mainstream.

Karl Bode

YouTube Dusts Off Granular National Video Blocking To Assist YouTuber Feuding With Toei Animation

3 years 1 month ago

Hopefully, you will recall our discussion about one YouTuber, Totally Not Mark, suddenly getting flooded with 150 copyright claims on his YouTube channel all at once from Toei Animation. Mark's channel is essentially a series of videos that discuss, critique, and review anime. Toei Animation produces anime, including the popular Dragon Ball series. While notable YouTuber PewDiePie weighed in with some heavy criticism over how YouTube protects its community in general from copyright claims, the real problem here was one of location. Mark is in Ireland, while Toei Animation is based out of Japan. Japan has terrible copyright laws when it comes to anything resembling fair use, whereas Ireland is governed by fair dealing laws. In other words, Mark's use was just fine in Ireland, where he lives, but would not be permitted in Japan. Since YouTube is a global site, takedowns have traditionally been global.

Well, Mark has updated the world to note that he was victorious in getting his videos restored and cleared, with a YouTube rep working directly with him on this.

But shortly after, as Fitzpatrick revealed in a new video providing an update on the legal saga, someone “high up at YouTube” who wished to remain anonymous reached out to him via Discord. Fitzpatrick said the contact not only apologized for his situation not being addressed sooner, but divulged a prior conflict between YouTube and Toei regarding his videos' fair use status.

“I’m not going to lie, hearing a human voice that felt both sincerely eager to help and understanding of this impossible situation felt like a weight lifted off my shoulders,” Fitzpatrick said.

Hey, Twitch folks, if you're reading this, this is how it is done. But it isn't the whole story. Before the videos were claimed and blocked, Toei had requested that YouTube manually take Mark's videos offline. YouTube pushed back on Toei, asking for more information on its requested takedowns, specifically asking if the company had considered fair use/fair dealing laws in its request. Alongside that, YouTube also asked Toei to provide more information as to what and why Mark's videos were infringing. Instead of complying, Toei utilized YouTube's automated tools to simply claim and block those 150 videos.

The following week, a game of phone tag ensued between Toei, the Japanese YouTube team, the American YouTube team, Fitzpatrick’s YouTube contact, and himself to reach “some sort of understanding” regarding his copyright situation. Toei ended up providing a new list of 86 videos of the original 150 or so that the company deemed should not remain on YouTube, a move Fitzpatrick described as “baffling” and “inconsistent.” Toei, he concludes, has no idea of the meaning of fair use or the rules the company wants creators to abide by.

“Contained in this list was frankly the most arbitrary assortment of videos that I had ever seen,” he said. “It honestly appeared as if someone chose videos at random as if chucking darts at a dart board.”

While Mark regained control of his videos thanks to his work alongside the YouTube rep, he was still in danger of Toei filing a lawsuit in Japan that he would almost certainly lose, given that country's laws. Fortunately, YouTube has a method for blocking videos based on copyright claims in certain countries for these types of disputes. The Kotaku post linked above suggests that this method is brand new for YouTube, but it isn't. It's been around for a while, but, somewhat amazingly, it appears to have never been used specifically to account for country-by-country differences in copyright law.

YouTube’s new copyright rule allows owners like Toei to have videos removed from, say, Japan’s YouTube site, but said videos will remain up in other territories as long as they fall under the country’s fair use policies. To have videos removed from places with more allowances for fair use, companies would have to argue their cases following the copyright laws of those territories.

And so Mark's review videos remain up everywhere except in Japan. That isn't a perfect solution by any stretch, but it seems to be as happy a middle ground as we're likely to find given the circumstances. Those circumstances chiefly being that Toei Animation for some reason wants to go to war with a somewhat popular YouTuber who, whatever else you might want to say about his content, is certainly driving public interest in Toei's products, for good or bad. This is a YouTuber the company could have collaborated with in one form or another, but instead it is busy burning down bridges.

“Similarly to how video games have embraced the online sphere, I sincerely believe that a collaborative or symbiotic relationship between online creators and copyright owners is not only more than possible but would likely work extremely well for both sides if they are open to it,” Fitzpatrick said.

That Toei Animation is not open to it is the chief problem here.

Timothy Geigner