a Better Bubble™

TechDirt 🕸

San Francisco Cops Are Running Rape Victims' DNA Through Criminal Databases Because What Even The Fuck

2 years 8 months ago

There are things people expect the government to do. And then there are the things the government actually does. The government assumes people are comfortable with practices that are technically legal, even when those practices are nothing like what the average citizen expects from the system.

Some of this can be seen in the Third Party Doctrine, which says people who knowingly share information with third parties also willingly share it with the government. But very few citizens are actually cool with this extended sharing, no matter what the Supreme Court-created doctrine says. This tension between people's actual expectations and the government's portrayal of the people's expectations is finally being addressed by the nation's top court. Recent rulings have shifted the balance back towards actual reasonable expectations of privacy, but there's still a whole lot of work to be done.

So, when rape victims report sexual assaults to law enforcement, they certainly don't expect their DNA samples will be run through crime databases to see if these victims of crimes have committed any crimes. But that's exactly what the San Francisco PD has been doing, according to this report from Megan Cassidy of the San Francisco Chronicle.

The San Francisco police crime lab has been entering sexual assault victims’ DNA profiles in a database used to identify suspects in crimes, District Attorney Chesa Boudin said Monday, an allegation that raises legal and ethical questions regarding the privacy rights of victims.

Boudin said his office was made aware of the purported practice last week, after a woman’s DNA collected years ago as part of a rape exam was used to link her to a recent property crime.

Shocking to the conscience, as the courts say? You'd better believe it. No one reporting a crime expects to be investigated for a different crime. And there are already enough logistical and psychological barriers standing between rape victims and justice. Knowing their rape kit might be processed in hopes of finding the accuser guilty of other crimes isn't going to encourage more victims to step forward.

On top of that, it might be illegal. California has pretty robust protections for crime victims. The state has a "Victims' Bill of Rights" that guarantees several things to those reporting crimes. Nothing explicitly forbids police from running victim DNA through crime lab databases, but this clause directly addresses the outcome of successful searches, which would result in publicly available records as police move forward with arresting and prosecuting the crime victim for crimes they allegedly committed.

To prevent the disclosure of confidential information or records to the defendant, the defendant’s attorney, or any other person acting on behalf of the defendant, which could be used to locate or harass the victim or the victim’s family or which disclose confidential communications made in the course of medical or counseling treatment, or which are otherwise privileged or confidential by law.

Prosecuting a crime creates plenty of paperwork and arrest records are public records. A defendant could easily access records about their accuser -- records that wouldn't have existed without the assistance of this completely extraneous search.

Fortunately, this revelation has prompted an internal investigation by the SFPD. Unfortunately, an internal investigation is also the easiest way to bury incriminating documents, stiff-arm outsiders seeking information, stonewall requests from city officials for more information, and, most importantly, find some way to clear anyone involved of wrongdoing.

SFPD Chief Bill Scott at least has the presence of mind to comprehend the problem this practice poses.

Scott said, “We must never create disincentives for crime victims to cooperate with police, and if it’s true that DNA collected from a rape or sexual assault victim has been used by SFPD to identify and apprehend that person as a suspect in another crime, I’m committed to ending the practice.”

Good. And: whatever. Don't be "committed" to "ending the practice." Just fucking do it. You're the police chief. There's no reason you can't issue a mandate immediately forbidding running DNA searches on rape victims. I'm no expert on police protocol, but it seems like a memo beginning with "EFFECTIVE IMMEDIATELY" would end the practice, um, immediately and inform future violators of the potential consequences of their actions. A wishy-washy "commitment" accompanied by no action tells the rank-and-file they're free to do whatever until the internal investigation is completed and its results handed over to city officials. Waiting until the facts are in (and thoroughly massaged) is a blank check for months or years of abuse.

And this sort of thing may not be an anomaly localized entirely within the SFPD. Other law enforcement agencies may be doing the same thing. The only difference is the SFPD was the first to successfully hit the middle of the Venn diagram containing rape victims and alleged criminals. Any other agency doing the same shady searching should probably knock it the fuck off. While it may seem like good police work to run searches on any DNA samples willingly handed to them, the optics -- if nothing else -- should be all the deterrent they need, especially when it comes to victims of sexual assault who are already treated with something approaching disdain by far too many law enforcement officers.

Tim Cushing

Daily Deal: The Complete 2022 Java Coder Bundle

2 years 8 months ago

The Complete 2022 Java Coder Bundle has 9 courses to help you kick-start your Java learning, providing you with the key concepts necessary to write code. You'll learn about Java, Oracle, Apache Maven, and more. From applying the core concepts of object-oriented programming to writing common algorithms, you'll foster real, employable skills as you make your way through this training. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

As Expected, Trump's Social Network Is Rapidly Banning Users It Doesn't Like, Without Telling Them Why

2 years 8 months ago

Earlier this week we took a look at the terms of service for Donald Trump and Devin Nunes' Truth Social, noting that they -- despite claiming that Section 230 should be "repealed" -- had explicitly copied Section 230 into those terms. In the comments, one of our more reliably silly commenters, who inevitably insists that no website should ever moderate, and that "conservatives" are regularly removed for their political views on the major social networks (while refusing to provide any evidence to support his claims, because he cannot), insisted that Truth Social wouldn't ban people for political speech, only for "obscenity."

So, about that. As Mashable has detailed, multiple people are describing how they've been banned from Truth Social within just the first few days -- and not for obscenity. The funniest is that someone -- not the person who runs the @DevinCow account on Twitter -- tried to sign up for a @DevinCow account on Truth Social. As you probably know, Devin Nunes, as a congressman, sued the satirical cow account for being mean to him (the case is still, technically, ongoing). You may recall that the headline of my article about Devin Nunes quitting Congress to run Truth Social announced that he was leaving Congress to spend more time banning satirical cows from Truth Social.

And apparently that was accurate. Matt Ortega first tried to register the same @DevinCow on Truth Social, only to be told that the username was not even allowed (which suggests that Nunes or someone else there had already pre-banned the Cow). Ortega then tried other varieties of the name, getting through with @DevinNunesCow... briefly. Then it, too, was banned:

This is censorship. pic.twitter.com/Ih6odqlsJh

— Matt Ortega (@MattOrtega) February 22, 2022

Note that the ban email does not identify what rules were broken by the account (another point that Trumpists often point to in complaining about other websites' content moderation practices: that they don't provide a detailed accounting).

So, it certainly appears that it's not just "obscenity" that Nunes and Trump are banning. They seem to be banning accounts that might, possibly, make fun of them and their microscopically thin skins.

The Mashable article also notes that Truth Social has also banned a right wing anti-vaxxer, who you might expect to be more welcome on the site, but no such luck:

Radical anti-vax right-wing broadcaster Stew Peters complains that he's "being censored on Truth Social" simply for demanding that those responsible for the COVID-19 vaccine "be put on trial and executed." pic.twitter.com/Uf9WXA793A

— Right Wing Watch (@RightWingWatch) February 22, 2022

And here's the thing: this is normal and to be expected, and I'm glad that Truth Social is doing the standard forms of content moderation that every website needs to do to be able to operate a functional service. It would just be nice if Nunes/Trump and their whiny sycophants stopped pretending that this website is somehow more about "free speech" than other social media sites. It's not. Indeed, so far, they seem more willing to quickly ban people simply because they don't like them, than for any more principled reason or policy.

Mike Masnick

Comcast Continues To Bleed Olympics Viewers After Years Of Bumbling

2 years 8 months ago

NBC (now Comcast NBC Universal) has held the US broadcast rights to the Olympics since 1998. In 2011, the company paid $4.4 billion for exclusive US rights to air the Olympics through 2020. In 2014, Comcast NBC Universal shelled out another $7.75 billion for the US rights to broadcast the summer and winter Olympics through 2032.

Despite years of experience, Comcast/NBC still struggles to give viewers what they actually want. For years the cable, broadband, and broadcast giant has been criticized for refusing to air events live, spoiling some events, imposing annoying cable paywall restrictions, running heavy-handed and generally terrible advertising, sensationalizing coverage, avoiding controversial subjects during broadcasts, and fielding streaming efforts that range from clumsy to scattershot.

Not too surprisingly, years of this have exerted a profound drag on viewer numbers, which are worse than ever:

"Through Tuesday, an average of 12.2 million people watched the Olympics in prime-time on NBC, cable or the Peacock streaming service, down 42 percent from the 2018 Winter Olympics in South Korea. The average for NBC alone was 10 million, a 47 percent drop, the Nielsen company said."

And this was with Comcast/NBC's attempt to goose ratings by jumping right to Olympics coverage before the Super Bowl postgame celebrations had barely started. This year's ratings were also impacted by doping scandals, COVID, an Olympics location that barely had any snow, and disgust at the host country's human rights abuses:

"One woman on Twitter proclaimed the Olympics were “over for me. My lasting impression will be fake snow against a backdrop of 87 nuclear reactors in a country with a despicable human rights record during a pandemic. And kids who can look forward to years of therapy.”"

While the Olympic veneer might not be what it used to be, you still have to think Comcast could boost viewership by exploiting the internet to broaden and improve coverage and provide more real-time live coverage of all events, while bundling it in a more coherent overall presentation. After all, they've only had two decades to perfect the formula.

Karl Bode

Apple Finally Defeats Dumb Diverse Emoji Lawsuit One Year Later

2 years 8 months ago

Roughly a year ago, we discussed a wildly silly lawsuit brought against Apple by a company called Cub Club and an individual, Katrina Parrott. At issue were "diverse emojis", which by now are so ubiquitous as to be commonplace. Parrott had created some emojis featuring more diverse and expansive color/skin tones. And, hey, that's pretty cool. The problem is that, after she had a meeting with Apple about her business, Apple decided to simply incorporate diverse skin tones into its existing emojis. The traditional yellow thumbs up hand suddenly came with different coloration options. Cub Club and Parrott sued, claiming both copyright and trademark infringements.

We said at the time we covered Apple's motion to dismiss that there was very, very little chance of this lawsuit going anywhere. The trademark portion was completely silly, given that Apple wasn't accused of any direct copying, but merely of copying the idea of diverse emojis. Since ideas aren't afforded copyright protection, well, that didn't seem like much of a winner. The trade dress claims made even less sense, since they were levied over the same content: Apple's diverse emojis. The argument from Parrott was that Apple having diverse emojis would confuse the public into thinking it had contracted with Cub Club. But that isn't how the law works. The thing you're suing over can't be a functional part of the actual product. In this case, that's literally all it was.

And so it is not particularly surprising that, a year later, I'm able to update you all that the court has dismissed the case.

Apple Inc convinced a California federal judge on Wednesday to throw out a lawsuit accusing the tech giant of ripping off another company's multiracial emoji and violating its intellectual property rights.

Cub Club Investment LLC didn't show that Apple copied anything that was eligible for copyright protection, U.S. District Judge Vince Chhabria said.

Chhabria gave Cub Club a chance to amend its lawsuit but said he was "skeptical" it could succeed based on several differences between its emoji design and Apple's.

The analysis you'll see in the order embedded below basically follows our previous analysis. On the copyright claim, the judge points out that the idea of diverse emojis cannot be copyrighted and that, since the alleged similarity lies in an area where very few differences are possible, it doesn't amount to copyright infringement.

Chhabria said in a Wednesday order that even if the complaint was true, Apple at most copied Cub Club's unprotectable "idea" of diverse emoji.

"There aren't many ways that someone could implement this idea," Chhabria said. "After all, there are only so many ways to draw a thumbs up."

Exactly. As to the trade dress portion of this, well, there again the court found that the trade dress accusation concerned non-protectable elements.

To state a claim for trade dress infringement, a plaintiff must allege that “the trade dress is nonfunctional, the trade dress has acquired secondary meaning, and there is substantial likelihood of confusion between the plaintiff’s and defendant’s products.” Art Attacks Ink, LLC v. MGA Entertainment Inc., 581 F.3d 1138, 1145 (9th Cir. 2009). The trade dress alleged in the complaint is functional. The asserted trade dress consists of “the overall look and feel” of Cub Club’s “products,” including “the insertion of an emoji into messages . . . on mobile devices by selecting from a palette of diverse, five skin tone emoji.” This is functional in the utilitarian sense...

Again, right on point.

At the end of the day, while it's true that it's easy to point at any civil lawsuit and call it a money grab, it's hard to see how this one isn't. There's simply nothing in any of this that's particularly unique or novel, even though I grant that it's a good thing there are more representation options in emojis.

Timothy Geigner

Clearview Pitch Deck Says It's Aiming For A 100 Billion Image Database, Restarting Sales To The Private Sector

2 years 8 months ago

Clearview AI -- the facial recognition tech company so sketchy other facial recognition tech companies don't want to be associated with it -- is about to get a whole lot sketchier. Its database, which supposedly contains 10 billion images scraped from the internet, continues to expand. And, despite being sued multiple times in the US and declared actually illegal abroad, the company has expansion plans that go far beyond the government agencies it once promised to limit its sales to.

A Clearview pitch deck obtained by the Washington Post contains information about the company's future plans, all of which are extremely concerning. First, there's the suggestion nothing is slowing Clearview's automated collection of facial images from the web.

The facial recognition company Clearview AI is telling investors it is on track to have 100 billion facial photos in its database within a year, enough to ensure “almost everyone in the world will be identifiable,” according to a financial presentation from December obtained by The Washington Post.

As the Washington Post's Drew Harwell points out, 100 billion images is 14 images for every person on earth. That's far more than any competitor can promise. (And for good reason. Clearview's web scraping has been declared illegal in other countries. It may also be illegal in a handful of US states. On top of that, it's a terms of service violation pretty much everywhere, which means its access to images may eventually be limited by platforms that identify and block Clearview's bots.)

As if it weren't enough to brag about a completely involuntary, intermittently illegal amassing of facial images, Clearview wants to expand aggressively into the private sector -- something it promised not to do after being hit with multiple lawsuits and government investigations.

The company wants to expand beyond scanning faces for the police, saying in the presentation that it could monitor “gig economy” workers and is researching a number of new technologies that could identify someone based on how they walk, detect their location from a photo or scan their fingerprints from afar.

Clearview is looking for $50 million in funding to supercharge its collection process and expand its offerings beyond facial recognition. That one of those offerings is more surveillance of freelancers, work-from-home employees, and already oft-abused "gig workers" is extremely troubling, since it would do little more than give abusive employers one more way to mistreat people they don't consider to be "real" employees.

Clearview also says its surveillance system compares favorably to ones run by the Chinese government… and not the right kind of "favorably."

[Clearview says] that its product is even more comprehensive than systems in use in China, because its “facial database” is connected to “public source metadata” and “social linkage” information.

Being more intrusive and evil than the Chinese government should not be a selling point. And yet, here we are, watching the company wooing investors with a "worse than China" sales pitch. Once again, Clearview has made it clear it has no conscience and no shame, further distancing it from competitors in the highly-controversial field who are unwilling to sink to its level of corporate depravity.

Clearview may be able to talk investors into parting with $50 million, but -- despite its grandiose, super-villainesque plans for the future -- it may not be able to show return on that investment. A sizable part of that may be spent just trying to keep Clearview from sinking under the weight of its voluminous legal bills.

Clearview is battling a wave of legal action in state and federal courts, including lawsuits in California, Illinois, New York, Vermont and Virginia. New Jersey’s attorney general has ordered police not to use it. In Sweden, authorities fined a local police agency for using it last year. The company is also facing a class-action suit in a Canadian federal court, government investigations in Canada, Sweden and the United Kingdom and complaints from privacy groups alleging data protection violations in France, Greece, Italy and the U.K.

As for its plan to violate its promise to not sell to commercial entities, CEO Hoan Ton-That offers two explanations for this reversal, one of which says it's not really a reversal.

Clearview, he told The Post, does not intend to “launch a consumer-grade version” of the facial-search engine now used by police, adding that company officials “have not decided” whether to sell the service to commercial buyers.

Considering the pitch being made, it's pretty clear company officials will decide to start selling to commercial buyers. That's exactly what's being pitched by Clearview -- something investors will expect to happen to ensure their investment pays off.

Here's the other… well, I don't know what to call this exactly. An admission Clearview will do whatever it can to make millions? That "principles" is definitely the wrong word to use?

In his statement to The Post, Ton-That said: “Our principles reflect the current uses of our technology. If those uses change, the principles will be updated, as needed.”

Good to know. Ton-That will adjust his company's morality parameters as needed. Anything Clearview has curtailed over the past two years has been the result of incessant negative press, pressure from legislators, and multiple adverse legal actions. Clearview has done none of this willingly. So, it's not surprising in the least it would renege on earlier promises as soon as it became fiscally possible to do so.

Tim Cushing

Peloton Outage Prevents Customers From Using $2,500 Exercise Bikes

2 years 8 months ago

Peloton hasn't been having a great run lately. While business boomed during the pandemic, things have taken a sour turn of late on a bizarre host of fronts. In just the last month or two the company has seen an historic drop in company valuation, fired 20 percent of its workforce, shaken up its executive management team, been forced to pause treadmill and bike production due to plummeting demand, been the subject of several TV shows featuring people having heart attacks, and now has been caught up in a new scandal for trying to cover up a rust problem to avoid a recall.

Some of the issues have been self-inflicted, while others are just the ebb and flow of the pandemic. Most users still generally love the product, and a lot of these issues are likely to fade away over time. But adding insult to injury, connectivity issues this week prevented Peloton bike and treadmill owners from being able to use their $2000-$5000 luxury exercise equipment for several hours Tuesday morning. The official Peloton Twitter account tried to downplay the scope of the issues:

We are currently investigating an issue with Peloton services. This may impact your ability to take classes or access pages on the web.

We apologize for any impact this may have on your workout and appreciate your patience. Please check https://t.co/Dxcht2tQB0 for updates.

— Peloton (@onepeloton) February 22, 2022

For much of Tuesday morning the pricey equipment simply wouldn't work. I have a Peloton Bike+, and while the pedals would physically spin, I couldn't change the resistance or load into my account; I was just stuck staring at a loading wheel in perpetuity. Some app users say they had better luck, but many Bike, Bike+, and Peloton Tread owners not only couldn't ride in live classes, they couldn't participate in recorded classes, because there's no way to download a class to local storage (despite the devices being glorified Android tablets).

The outage (which occurred at the same time as a major Slack outage) was ultimately resolved after several hours, but not before owners got another notable reminder that dumb tech can often be the smarter option. Your kettlebells will never see a bungled firmware update or struggle to connect to the cloud.

Karl Bode

The GOP Knows That The Dems' Antitrust Efforts Have A Content Moderation Trojan Horse; Why Don't The Dems?

2 years 8 months ago

Last summer, I believe we were among the first to highlight that the various antitrust bills proposed by mainly Democratic elected officials in DC included an incredibly dangerous trojan horse that would aid Republicans in their "playing the victim" desire to force websites to host their disinformation and propaganda. The key issue is that many of the bills included a bar on self-preferencing a large company's own services against competitors. The supporters of these bills claimed it was to prevent, say, an Apple from blocking a competing mapping service while promoting Apple Maps, or Google from blocking a competing shopping service, while pushing Google's local search results.

But the language was so broad, and so poorly thought out, that it would create a massive headache for content moderation more broadly -- because the language could just as easily be used to say that, for example, Amazon couldn't kick Parler off its service, or Google couldn't refuse to allow Gab's app in its app store. You would have thought that after we raised this issue, the Democratic sponsors of these bills would fix the language. They have not. Bizarrely, they've continued to introduce more bills in both the House and the Senate with similarly troubling language. Recently, TechFreedom called out this problematic language in two antitrust bills in the Senate that seem to have quite a lot of traction.

Whatever you think of the underlying rationale for these bills, it seems weird that these bills, introduced by Democrats, would satisfy the Republicans' desire to force online propaganda mills onto their platforms.

Every “deplatformed” plaintiff will, of course, frame its claims in broad terms, claiming that the unfair trade practice at issue isn’t the decision to ban them specifically, but rather a more general problem — a lack of clarity in how content is moderated, a systemic bias against conservatives, or some other allegation of inconsistent or arbitrary enforcement — and that these systemic flaws harm competition on the platform overall. This kind of argument would have broad application: it could be used against platforms that sell t-shirts and books, like Amazon, or against app platforms, like the Google, Apple and Amazon app stores, or against website hosts, like Amazon Web Services.

Indeed, as we've covered in the past, Gab did sue Google for being kicked out of the app store, and Parler did sue Amazon for being kicked off that company's cloud platform. These kinds of lawsuits would become standard practice -- and even if the big web services could eventually get such frivolous lawsuits dismissed, it would still be a tremendous waste of time and money, while letting grifters play the victim.

Incredibly, Republicans like Ted Cruz have made it clear this is why they support such bills. In fact, Cruz introduced an amendment to double down on this language and make sure that the bill would prohibit "discriminating on the basis of a political belief." Of course, Cruz knows full well this doesn't actually happen anywhere. The only platform that has ever discriminated based on a political belief is... Parler, whose then CEO once bragged to a reporter how he was banning "leftist trolls" from the platform.

Even more to the point, during the hearings about the bill and his amendment, Cruz flat out said that he was hoping to "unleash the trial lawyers" to sue Google, Facebook, Amazon, Apple and the like for moderating those who violate their policies. While it may sound odd that Cruz -- who as a politician has screamed about how evil trial lawyers are -- would suddenly be in favor of trial lawyers, the truth is that Cruz has no underlying principles on this or any other subject. He's long been called "the ultimate tort reform hypocrite," someone who supports trial lawyers when it's convenient and rails against them when that's the politically expedient position.

So no one should be surprised by Cruz's hypocrisy.

What they should be surprised by is the unwillingness of Democrats to fix their bills. A group of organizations (including our Copia Institute) signed onto another letter by TechFreedom that laid out some simple, common-sense changes that could be made to one of the bills -- the Open App Markets Act -- to fix this potential concern. And, yet, supporters of the bill continue to either ignore this or dismiss it -- even as Ted Cruz and his friends are eagerly rubbing their hands with glee.

This has been an ongoing problem with tech policy for a while now -- where politicians so narrowly focus on one issue that they don't realize how their "solutions" mess up some other policy goal. We get "privacy laws" that kill off competition. And now we have "competition" laws that make fighting disinformation harder.

It's almost as if these politicians don't want to solve actual issues, and just want to claim they did.

Mike Masnick

Hertz Ordered To Tell Court How Many Thousands Of Renters It Falsely Accuses Of Theft Every Year

2 years 8 months ago

It all started with Hertz being less than helpful when a man was falsely accused of murder. Michigan resident Herbert Alford was arrested and convicted for a murder he didn't commit. He maintained his innocence, claiming he was at the airport in Lansing, Michigan during the time the murder occurred. And he could have proven it, too, if he had just been able to produce the receipt showing he had been renting a car at Hertz twenty minutes away from the crime scene.

It wasn't until Alford had spent five years in prison that Hertz got around to producing the receipt. Three of those years can be laid directly at Hertz's feet. The receipt was requested in 2015. Hertz handed it over in 2018. Alford sued.

That's not the only lawsuit Hertz is facing. It apparently also has a bad habit of accusing paying customers of theft, something that has resulted in drivers being accosted by armed officers and/or arrested and charged.

Nine months later, another lawsuit rolled in. A proposed class action -- covering more than 100 Hertz customers -- claimed the company acts carelessly and engages in supremely poor recordkeeping. The lawsuit, which then represented 165 customers, contains details of several customers who were pulled over, arrested, and/or jailed because Hertz's rental tracking system is buggier than its competitors'. Hertz takes pains to point out these incidents represent only a very small percentage of its renters. But that's essentially meaningless when this small error rate doesn't appear to occur at other car rental agencies.

This lawsuit is forcing Hertz to disclose exactly what this error rate is and how many renters it affects. It's a much larger number than the 165 customers the lawsuit started with last November.

In a ruling Wednesday, a federal judge in Delaware sided with the request from attorneys for 230 customers who say they were wrongly arrested.

The total still depends on whom you ask. Hertz said it reports to police 0.014% of its 25 million annual rental transactions - or 3,500 customers. Attorneys for the renters said they believe the number is closer to 8,000.

It may look like only a rounding error to Hertz, but each of these 3,500-8,000 incorrect reports represents a possible loss of liberty, if not a possible loss of life. Law enforcement officers treat auto thieves as dangerous criminals. Being falsely accused by a rental company's software doesn't alter the threat matrix until long after the guns have been drawn.
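The gap between Hertz's figure and the plaintiffs' estimate is easy to check against the article's numbers (25 million annual rentals, a claimed 0.014% report rate, and the attorneys' estimate of roughly 8,000 reports); a quick sketch:

```python
# Sanity-checking the dueling figures from the Delaware ruling.
annual_rentals = 25_000_000        # Hertz's stated annual rental transactions
hertz_rate = 0.014 / 100           # 0.014%, the rate Hertz admits to

# Hertz's own math: 0.014% of 25 million rentals
hertz_reports = int(annual_rentals * hertz_rate)
print(hertz_reports)               # 3500 theft reports per year

# The plaintiffs' estimate, expressed as an implied rate
plaintiff_reports = 8_000
print(f"{plaintiff_reports / annual_rentals:.3%}")  # 0.032%
```

Either way, the "rounding error" works out to thousands of police reports a year, each one a potential arrest.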

Sometimes the problem has a human component. If a rental agent does not see a vehicle they thought was returned, they may file a report. And when humans aren't involved, it's Hertz's computer system doing the dirty work.

Other times, [the attorney representing Hertz customers, France Malofiy] said, the confusion is caused by a customer swapping cars during their rental period or extending the time frame. If the credit or debit card charge fails to process correctly, he said Hertz's system generates a theft report.

Malofiy said the company does not update its police reports if a payment ultimately processes - leaving customers to flounder in the criminal justice system. In 2020, a spokesperson for Hertz told the Philadelphia Inquirer that a stolen-vehicle report "was valid when it was made" and that it was "up to law enforcement to decide what to do with the case."

And there's another data point to add to Hertz's perhaps inadvertent but very fucking real infliction of misery on thousands of renters every year. A man who has spent over $15,000 with Hertz since 2020 is currently sitting in jail thanks to yet another bogus Hertz theft alert.

All of this is at odds with Hertz's repeated claim it only issues stolen vehicle notices to law enforcement following "extensive investigations." If it did actually engage in thorough investigations of every generated theft report, it would not be currently facing a lawsuit from hundreds of drivers who've been arrested and jailed over bogus theft allegations. And the problem it claims isn't really a problem wouldn't still be getting people locked up for crimes they didn't commit.

Tim Cushing

Even As Trump Relies On Section 230 For Truth Social, He's Claiming In Lawsuits That It's Unconstitutional

2 years 8 months ago

With the launch of Donald Trump's ridiculous Truth Social offering, we've already noted that he's so heavily relying on Section 230's protections to moderate that he's written Section 230 directly into his terms of service. However, at the same time, Trump is still fighting his monstrously stupid lawsuits against Twitter, Facebook, and YouTube for banning him in the wake of January 6th.

Not surprisingly (after getting the cases transferred to California), the internet companies are pointing the courts to Section 230 as to why the cases should be dismissed. And, also not surprisingly (but somewhat hilariously), Trump is making galaxy brain stupid claims in response. Take the filing in the case against YouTube, which somehow has eight different lawyers signed onto a brief so bad that all eight should be laughed out of court.

The argument as to why Section 230 doesn't apply is broken down into three sections, each dumber than the others. First up, it claims that "Section 230 Does Not Immunize Unfair Discrimination," arguing (falsely) that YouTube is a "common carrier" (it is not, has never been, and does not resemble one in any manner). The argument is not even particularly well argued here. It's three ridiculous paragraphs, starting with Packingham (which is not relevant to a private company choosing to moderate), then claiming (without any support, since there is none) that YouTube is a common carrier, and then saying that YouTube's terms of service mean that it "must carry content, irrespective of any desire or external compulsion to discriminate against Plaintiff."

Literally all of that is wrong. It took EIGHT lawyers to be this wrong.

The second section claims -- incorrectly -- that Section 230 "does not apply to political speech." They do this by totally misrepresenting the "findings" part of Section 230 and then ignoring basically all the case law that says, of course Section 230 applies to political speech. As for the findings, while they do say that Congress wants "interactive computer services" to create "a true diversity of political discourse," as the authors of the bill themselves have explained, this has always been about allowing every individual website to moderate as it sees fit. It was never designed so that every website must carry all speech; rather, the idea was that by allowing websites to curate the communities and content they want, there will be many different places for different kinds of speech.

Again. Eight lawyers to be totally and completely wrong.

Finally, they argue that "Section 230(c) Violates the First Amendment as Applied to This Matter." It does not. Indeed, should Trump win this lawsuit (he won't) that would violate the 1st Amendment in compelling speech on someone else's private property who does not wish to be associated with it. And this section goes off the rails completely:

The U.S. contends that Section 230(c) does not implicate the First Amendment because it “does not regulate Plaintiff’s speech,” but only “establishes a content- and viewpoint-neutral rule prohibiting liability” for certain companies that ban others’ speech. (U.S. Mot. at 2). Defendants’ egregious conduct in restraining Plaintiff’s political speech belies its claims of a neutral standard.

I mean, the mental gymnastics necessary to make this claim are pretty impressive, so I'll give them that. But this is mixing apples and orangutans in making an argument that, even if it did make sense, still doesn't make any sense. Section 230 does not regulate speech. That's why it's content neutral. The fact that the defendant, YouTube, does moderate its content -- egregiously or not -- is totally unrelated to the question of whether or not Section 230 is content neutral. Indeed, YouTube's ability to kick Trump off its platform is itself protected by the 1st Amendment.

The lawyers seem to be shifting back and forth between the government ("The U.S.") and the private entity (YouTube) here, making an argument that might make sense if it were only talking about one entity, but doesn't make any sense at all when you switch back and forth between the two.

Honestly, this filing should become a case study in law schools about how not to law.

Mike Masnick

Medical, Home Alarm Industries Warn Of Major Outages As AT&T Shuts Down 3G Network

2 years 8 months ago

It was only in 2009 that AT&T heralded its cutting-edge 3G network as it unveiled the launch of the iPhone (which subsequently crashed AT&T's cutting-edge 3G network). Fast forward a little more than a decade and AT&T is preparing to shut that 3G network down, largely so it can repurpose the spectrum it utilizes for fifth-generation (5G) wireless deployments. While the number of actual wireless phone users still on this network is minimal, the network is still heavily used as a connectivity option for some older medical devices and home alarm systems.

As such, the home security industry is urging regulators to delay the shutdown to give them some more time to migrate home security users on to other networks:

The Alarm Industry Communications Committee said in a filing posted Friday by the FCC that more time is needed to work out details. A delay of at least 60 to 70 days could help some customers who have relied on AT&T’s 3G network, although arrangements remain to be negotiated, the group said.

“It would be tragic and illogical for the tens of millions of citizens being protected by 3G alarm radios and other devices to be put at risk of death or serious injury, when the commission was able to broker a possible solution but inadequate time exists to implement that solution,” the group said.

If you recall, part of the T-Mobile/Sprint merger conditions involved trying to make a viable fourth wireless carrier out of Dish Network (that's generally not going all that well). T-Mobile's ongoing feud with Dish has resulted in T-Mobile keeping its 3G network alive a bit longer than AT&T. So the alarm industry is asking both the FCC and AT&T for a little more time, as well as some help migrating existing home security gear temporarily onto T-Mobile's 3G network, so things don't fall apart when AT&T shuts down its 3G network (currently scheduled for February 22).

Nothing more comforting than a hidden, systemic failure of the communications elements of multiple alarm systems that does not truly reveal itself until the alarms fail in a moment of cascading crisis https://t.co/2pxuvmdhLR

— Michael Weinberg (@mweinberg2D) February 18, 2022

AT&T gave companies whose technology still uses 3G three full years to migrate to alternative solutions. And it's not entirely clear how many companies, services, and industries will be impacted by the shutdown. But there's an awful lot of different companies and technologies that still use 3G for internet connectivity, including a lot of fairly important medical alert systems. Nobody seems to actually know how prepared we truly are, so experts suggest the problems could range anywhere from mildly annoying to significantly disruptive:

So how bad could #Alarmageddon be? Hard to say. Lots of personal medical alerts ("Help, I've fallen and can't get up!"), DUI locks on cars, ankle bracelets for home confinement, school bus GPS system. So potentially pretty severe. (see Docket No. 21-304) /20

— (((haroldfeld))) (@haroldfeld) February 18, 2022

Again, this is all something that could have been avoided if we placed a little less priority on freaking out about various superficial issues and a little more on the nuanced, boring policy issues that actually matter.

Karl Bode

Video Game History Foundation: Nintendo Actions 'Actively Destructive To Video Game History'

2 years 8 months ago

I've been banging on a bit lately about the importance of video game preservation as a matter of art preservation. It's not entirely clear to me how much buy-in there is out there on this concept in general, but it's a particular challenge in this industry because, compared with other forms of art, much of the control over what can be preserved sits in the hands of game publishers and platforms. Books have libraries, films have the academies and museums, and music is decently preserved all over the place. But for gaming, even organizations like the Video Game History Foundation have to rely on publishers and platforms to let them do their work, or risk art being lost entirely to the digital ether or to lawsuits over copyright. We've talked in the past about how copyright law is far too often used in a way that results in a loss of our own cultural history, and digital-only video games are particularly vulnerable to that.

We just discussed Nintendo's forthcoming shutdown of the 3DS and Wii U stores, and what that meant for digital games that Nintendo indicates it is not planning on selling anywhere else. Well, the Video Game History Foundation released a statement on that action and, well, hoo-boy...

While it is unfortunate that people won’t be able to purchase digital 3DS or Wii U games anymore, we understand the business reality that went into this decision. What we don’t understand is what path Nintendo expects its fans to take, should they wish to play these games in the future. As a paying member of the Entertainment Software Association, Nintendo actively funds lobbying that prevents even libraries from being able to provide legal access to these games. Not providing commercial access is understandable, but preventing institutional work to preserve these titles on top of that is actively destructive to video game history. We encourage ESA members like Nintendo to rethink their position on this issue and work with existing institutions to find a solution.

Accusing Nintendo of being "actively destructive to video game history" is a hell of a charge, but point out where it's wrong. I'll wait.

The problem here is that video games are still seen, both by the public and by producers, as something less than the kind of artistic output of literature, paintings, sculptures, or movies. Imagine a world where someone took the collective works of Monet or Bach, shut down the venues in which you could pay to experience them, and then also declared that nobody else was allowed to present them, for commercial benefit or otherwise. Nobody would accept such a situation. That is culture, and it belongs, in at least some small way, to all of us.

Either because the history of video games is much more recent, or due to stodgy hand-waving about how these games are not "real art," far less of a fuss is raised over Nintendo taking these actions with no guarantee of -- and in some cases open hostility toward -- preservation efforts. Yes, Nintendo directly produced many of these games and holds rights to them as a result. But those games are also part of our shared cultural history, and no individual or company is, or should be, afforded the right to determine how we document that cultural history.

If nothing else, that certainly isn't the purpose of copyright law.

Timothy Geigner

Massachusetts Court Says No Expectation Of Privacy In Social Media Posts Unwittingly Shared With An Undercover Cop

2 years 8 months ago

Can cops pretend to be real people on social media to catfish people into criminal charges? Social media services say no. Facebook in particular has stressed -- on more than one occasion -- that its "real name" policy applies just as much to cops as it does to regular people.

Law enforcement believes terms of service don't apply to investigators and actively encourages officers to create fake accounts to go sniffing around for crime. That's where the Fourth Amendment comes into play. It's one thing to passively access public posts from public accounts. It's quite another when investigators decide the only way to obtain evidence to support search or arrest warrants involves "friending" someone whose posts aren't visible to the general public.

What's public is public and the third party doctrine definitely applies: users are aware their public posts are visible to anyone using the service. But those who use some privacy settings are asking courts whether it's ok for cops to engage in warrantless surveillance of their posts just because they made the mistake of allowing a fake account into their inner circle.

Accepting a friend request is an affirmative act. And that plays a big part in court decisions finding in favor of law enforcement agencies. Getting duped isn't necessarily a constitutional violation. And it's difficult to claim you've been unlawfully surveilled by fake accounts run by cops. You know, due diligence and all that. It apparently makes no difference to courts that cops violated platforms' terms of service or engaged in subterfuge to go on fishing expeditions for inculpatory evidence.

Massachusetts' top court has been asked to settle this. And the state justices seem somewhat skeptical that current law (including the state's constitution) allows for extended surveillance via fake social media accounts. No decision has been reached yet, but lower courts in the state are adding to case law, providing additional precedent that may influence the final decision from the state's Supreme Court.

This recent decision [PDF] by a Massachusetts Superior Court indicates the courts are willing to give cops leeway considering the ostensibly public nature of social media use. But it doesn't give the Commonwealth quite as much leeway as it would like.

Here's how it started:

After accepting a "friend" request from the officer, the defendant published a video recording to his social media account that featured an individual seen from the chest down holding what appeared to be a firearm. The undercover officer made his own recording of the posting, which later was used in criminal proceedings against the defendant. A Superior Court judge denied the defendant's motion to suppress the recording as the fruit of an unconstitutional search, and the defendant appealed. We transferred the matter to this court on our own motion.

Here's how it's going:

Among other arguments, the defendant suggests that because his account on this particular social media platform was designated as "private," he had an objectively reasonable expectation of privacy in its contents. The Commonwealth contends that the act of posting any content to a social media account de facto eliminates any reasonable expectation of privacy in that content.

The competing arguments about expectation are (from the defendant) "some" and (from the Commonwealth) "none." It's not that simple, says the court.

Given the rapidly evolving role of social media in society, and the relative novelty of the technology at issue, we decline both the defendant's and the Commonwealth's requests that we adopt their proffered brightline rules.

In this case, Boston police officer Joseph Connolly created a fake Snapchat account and sent a friend request to a private account run by "Frio Fresh." Fresh accepted the friend request, allowing the officer access to all content posted. In May 2017, Officer Connolly saw a "story" posted by "Frio Fresh" that showed him carrying a silver revolver. Connolly recorded this and passed the information on to a BPD strike force after having observed (but not recorded) a second "story" showing "Frio Fresh" in a gym. The strike force began surveilling the gym and soon saw "Frio Fresh" wearing the same clothes observed in the first story (the one the officer was able to record with a second device). Strike force members pursued "Frio Fresh" and searched him, recovering the revolver seen in the Snapchat story.

The court recognizes the damage free-roaming surveillance of social media can do to constitutional rights, as well as people's generally accepted right to converse freely among friends.

Government surveillance of social media, for instance, implicates conversational and associational privacy because of the increasingly important role that social media plays in human connection and interaction in the Commonwealth and around the world. For many, social media is an indispensable feature of social life through which they develop and nourish deeply personal and meaningful relationships. For better or worse, the momentous joys, profound sorrows, and minutiae of everyday life that previously would have been discussed with friends in the privacy of each others' homes now generally are shared electronically using social media connections. Government surveillance of this activity therefore risks chilling the conversational and associational privacy rights that the Fourth Amendment and art. 14 seek to protect.

Despite this acknowledgment, the court rules against the defendant, in essence saying it was his own fault for not vetting his "friends" more thoroughly. The defendant seemed unclear on Snapchat's privacy settings and, in this case, willingly accepted a friend request from someone he didn't know who used a Snapchat-supplied image in his profile. The court is effectively saying either you care about your privacy or you don't. And, in this case, the objective expectation of privacy is undercut by the subjective expectation of privacy this user created by being less than thorough in his vetting of friend requests.

Nonetheless, the defendant's privacy interest in this case was substantially diminished because, despite his asserted policy of restricting such access, he did not adequately "control[] access" to his Snapchat account. Rather, he appears to have permitted unknown individuals to gain access to his content. See id. For instance, Connolly was granted access to the defendant's content using a nondescript username that the defendant did not recognize and a default image that evidently was not Connolly's photograph. By accepting Connolly's friend request in those circumstances, the defendant demonstrated that he did not make "reasonable efforts to corroborate the claims of" those seeking access to his account.

[...]

Indeed, Connolly was able to view the defendant's stories precisely because the defendant gave him the necessary permissions to do so. That the defendant not only did not exercise control to exclude a user whose name he did not recognize, but also affirmatively gave Connolly the required permissions to view posted content, weighs against a conclusion that the defendant retained a reasonable expectation of privacy in his Snapchat stories.

The final conclusion is that this form of surveillance -- apparently without a warrant -- is acceptable because the surveilled user didn't take more steps to protect his posts from government surveillance. There's no discussion about the "reasonableness" of officers creating fake accounts to gain access to private posts without reasonable suspicion of criminal activity. Instead, the court merely states that "undercover police work" is "legitimate," and therefore not subject to the same judicial rigor as the claims of someone who was duped into revealing the details of their life to an undercover cop.

The defendant may get another chance to appeal this decision if the state's Supreme Court decides creating fake accounts to trawl for criminal activity falls outside the boundaries of the Constitution. Until then, the only bright line is don't accept friend requests from people you don't know. But that's still problematic, considering there's no corresponding restriction on government activities, which may lead to officers impersonating people from targets' social circles to gain access to private posts. And when that happens, what recourse will defendants have? The court says it's on defendants to protect their privacy no matter how many lies law enforcement officers tell. That shifts too much power to the government and places the evidentiary burden solely on people who expect their online conversations to be free of government surveillance.

Tim Cushing

Techdirt Podcast Episode 312: Regulating The Internet

2 years 8 months ago

We've got another cross-post this week: Mike was recently a guest on the new Internet of Humans podcast by Jillian York and Konstantinos Komaitis, for a wide-ranging discussion about internet regulation issues today and where they might be headed. You can listen to the entire conversation on this week's episode.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Leigh Beadon

US Copyright Office Gets It Right (Again): AI-Generated Works Do Not Get A Copyright Monopoly

2 years 8 months ago

For years, throughout the entire monkey selfie lawsuit saga, we kept noting that the real reason a prestigious law firm like Irell & Manella filed such a patently bogus lawsuit was to position itself to be the go-to law firm to argue for AI-generated works deserving copyright. However, we've always argued that AI-generated works are (somewhat obviously) in the public domain, and get no copyright. Again, this goes back to the entire nature of copyright law -- which is to create a (limited time) incentive for creators, in order to get them to create a work that they might not have otherwise created. When you're talking about an AI, it doesn't need a monetary incentive (or a restrictive one). The AI just generates when it's told to generate.

This idea shouldn't even be controversial. It goes way, way back. In 1966, the Copyright Office's annual report noted that it needed to determine whether a computer-created work was authored by the computer, and how copyright should apply to such works.

In 1985, prescient copyright law expert Pam Samuelson wrote a whole paper exploring the role of copyright in works created by artificial intelligence. In that paper, she noted that declaring such works to be in the public domain seemed an unlikely result, as "the legislature, the executive branch, and the courts seem to strongly favor maximalizing intellectual property rewards" and:

For some, the very notion of output being in the public domain may seem to be an anathema, a temporary inefficient situation that will be much improved when individual property rights are recognized. Rights must be given to someone, argue those who hold this view; the question is to whom to give rights, not whether to give them at all.

Indeed, we've seen exactly that. Back in 2018, we wrote about examples of lawyers having trouble even conceptualizing a public domain for such works, as they argued that someone must hold the copyright. But that's not the way it needs to be. The public domain is a thing, and it shouldn't just be for century-old works.

Thankfully (and perhaps not surprisingly, since they started thinking about it all the way back in the 1960s), when the Copyright Office released its third edition of the giant Compendium of U.S. Copyright Office Practices, it noted that it would not grant a copyright on "works that lack human authorship" using "a photograph taken by a monkey" as one example, but also noting "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."

Of course, that leaves open some kinds of mischief, and the Office even admits that whether the creative work is done by a human or a computer is "the crucial question." And, that's left open attempts to copyright AI-generated works. Jumping in to push for copyrights for the machines was... Stephen Thaler. We've written about Thaler going all the way back to 2004 when he was creating a computer program to generate music and inventions. But, he's become a copyright and patent pest around the globe. We've had multiple stories about attempts to patent AI-generated inventions in different countries -- including the US, Australia, the EU and even China. The case in China didn't involve Thaler (as far as we know), but the US, EU, and Australia cases all did (so far, only Australia has been open to allowing a patent for AI).

But Thaler is not content to just mess up patent law; he's pushing for AI copyrights as well. And for years, he's been trying to get the Copyright Office to give his AI the right to claim copyright. As laid out in a comprehensive post over at IPKat, the Copyright Office has refused him many times over, with yet another rejection coming on Valentine's Day.

The Review Board was, once again, unimpressed. It held that “human authorship is a prerequisite to copyright protection in the United States and that the Work therefore cannot be registered.”

The phrase ‘original works of authorship’ under §102(a) of the Act sets limits to what can be protected by copyright. As early as in Sarony (a seminal case concerning copyright protection of photographs), the US Supreme Court referred to authors as human.

This approach was reiterated in other Supreme Court’s precedents like Mazer and Goldstein, and has been also consistently adopted by lower courts.

While no case has been yet decided on the specific issue of AI-creativity, guidance from the line of cases above indicates that works entirely created by machines do not access copyright protection. Such a conclusion is also consistent with the majority of responses that the USPTO received in its consultation on Artificial Intelligence and Intellectual Property Policy.

The Review also rejected Thaler’s argument that AI can be an author under copyright law because the work made for hire doctrine allows for “non-human, artificial persons such as companies” to be authors. First, held the Board, a machine cannot enter into any binding legal contract. Secondly, the doctrine is about ownership, not existence of a valid copyright.

Somehow, I doubt that Thaler is going to stop trying, but one hopes that he gets the message. Also, it would be nice for everyone to recognize that having more public domain is a good thing and not a problem...

Mike Masnick

LA Sheriff Threatens To 'Subject' City Council To 'Defamation Law' If They Won't Stop Calling His Deputies 'Gang Members'

2 years 8 months ago

The man presiding over a law enforcement agency filled with gangs and cliques would prefer city officials stop referring to his employees as gang members.

Los Angeles County Sheriff Alex Villanueva has stated that there are no gangs within the Sheriff's Department, a claim he is obviously unable to back up with facts, because the facts make it clear that the LASD has been (and apparently still is) home to multiple gangs composed of deputies. There's even a Wikipedia page dedicated to the gangs infesting the Sheriff's Department.

If you distrust the info on the anyone-can-edit Wikipedia page, there's also this comprehensive database compiled by journalist Cerise Castle for Knock LA -- one that pulls info from public records and court documents to list suspected and verified members of LASD gangs.

Sheriff Villanueva continues to claim there are no gangs within his department. He has also instituted a policy to address the problem he says doesn't exist, forbidding deputies from "joining any group that commits misconduct." You'd think this policy would forbid any deputy from being employed by the Los Angeles Sheriff's Department, but I guess that's not how Villanueva reads his edict.

As for Villanueva's claim gangs and cliques don't exist within his department? Well, let's take a look at what his employees say:

Hundreds of Los Angeles County sheriff’s deputies said they have been recruited to join secretive, sometimes gang-like cliques that operate within department stations, according to the findings of a survey by independent researchers.

The anticipated study into the problematic fraternities — which L.A. County officials commissioned the Rand Corp. to conduct in 2019 — found 16% of the 1,608 deputies and supervisors who anonymously answered survey questions had been invited to join a clique, with some invitations having come in the last five years.

Well, all evidence to the contrary aside, Sheriff Villanueva is no longer going to stand idly by while city officials continue to make accurate statements about his problematic agency. He's issued a… well, not really a "cease and desist" letter [PDF] to the Los Angeles Board of Supervisors demanding (but not really) they stop saying his department has a gang problem. (h/t Adam Steinbaugh)

The letter is a fun read, even more so because Sheriff Villanueva definitely did not want his vaguely threatening fluff to be considered enjoyable for all the wrong reasons. Behold the semi-coherent wrath of a pissed off public servant.

As the elected Sheriff of Los Angeles County, I demand you and other elected leaders, as well as your appointees, immediately cease and desist from using the derogatory term “deputy gangs” when referring to members of the Los Angeles County Sheriff's Department (Department). This willful defamation of character has injured both individuals and the organization. It also serves no purpose other than to fuel hatred and increase the probability of assault and negative confrontations against our people.

So, it looks like a cease-and-desist (it even uses the words!), but the Sheriff has no power to make this demand. And Villanueva is hopefully using the phrase "defamation of character" in the colloquial, no-relation-to-the-legal-meaning sense of the words, because there's plenty of evidence out there that would make any accusations about LASD gangs "substantially true" and, therefore, not defamation at all. I know we (and by "we," I mostly mean courts) don't expect law enforcement officers to be legal experts, which is good, I guess, because they clearly fucking aren't.

The letter continues in the same vein: Villanueva bitching, mostly ineffectively, that it's unfair to his department when city officials say bad things about him and his employees. The next paragraph of the letter basically says the Sheriff's Department has all the heroes and the Board of Supervisors has all the hypocritical assholes.

My personnel routinely place themselves in harm's way while serving our community and ask nothing in return, other than a paycheck and maybe a little respect for the tough job they perform. Elected officials have no problem attending the funeral of a peace officer killed in the line of duty and often fight for the opportunity to speak at the podium, but the manner in which some have enthusiastically branded my personnel as "gang members” every opportunity they get is disgusting.

It is completely possible for officials to show their respect for an officer killed in the line of duty while still suspecting the law enforcement agency they work for is home to groups of officers who commit serious misconduct while engaging in gang-like behavior: violent acts; matching tattoos, clothing, and insignias; codes of silence; and so on. You know, just like it's possible for officers of the law to recognize the War on Drugs harms more than it helps.

According to the sheriff's letter, the only reason board members might refer to deputies as gang members stems from a dismissed lawsuit brought by a former LASD deputy. The letter claims this is the only "evidence" anyone has ever had, and that all the other research arriving at the same conclusions is undermined because a single source of information was declared untrustworthy by a court. That willfully ignores years of data showing deputies have formed cliques/gangs within the department. And while that may not be the sole contributing factor to the large amount of misconduct, it certainly hasn't helped neutralize the "us vs. them" mentality that is the root of so many casual abuses of rights.

From there it gets truly laughable, with Sheriff Villanueva again demonstrating his inability to understand speech-related laws before claiming that referring to LASD gangs is actually a form of bigotry.

Those who want to further undermine the perception of law enforcement use it as hate speech to promote their own agendas, such as defunding law enforcement and redirecting those funds to their own non-profit organizations, many of which are nothing more than sham corporations who operate with virtually zero accountability. Further use of the term will be evidence of your actual underlying intent, which appears to be a campaign to inflict harm upon the reputation of the Department and myself.

First off, calling someone a gang member or implying there are gangs in the LASD isn't hate speech. It isn't hate speech even under the most ignorant definition of the term: speech someone doesn't like, which is all that's really happening here. The Sheriff and his deputies aren't a protected class, nor is being employed by the LASD an immutable characteristic that can trigger hate crime laws when derogatory language is used. The rest of this is no less stupid. "Further use… will be evidence of your actual underlying intent" to harm the Department. Whatever. This letter isn't legally binding, and further use will be evidence of nothing.

So very stupid.

As the first fluently Spanish speaking Latino Sheriff in over a hundred years, who supervises a majority Latino workforce, I hope you can see the blatant racial inferences your conscious bias displays every time you choose to attack our Department with this derogatory term.

Um, people were saying the LASD was gang-infested long before you took office, Sheriff. That they're still saying it doesn't reflect on you or your multilingual skills. All it says is that the problem persists, and it's now yours.

Finally, the Sheriff appears to believe this somehow is a valid legal threat, despite the fact he's unlikely to prevail in a defamation lawsuit against city council members. Here's how the letter wraps up:

I openly challenge every elected leader, or their appointees, to provide facts to me and name individuals who they can prove are "gang members," as defined by California Penal Code Section 13670, and subject yourself to defamation laws if wrong.

LOL. Well, this shouldn't be too hard. Here's the relevant part of the California Code:

"Law enforcement gang" means a group of peace officers within a law enforcement agency who may identify themselves by a name and may be associated with an identifying symbol, including, but not limited to, matching tattoos, and who engage in a pattern of on-duty behavior that intentionally violates the law or fundamental principles of professional policing, including, but not limited to, excluding, harassing, or discriminating against any individual based on a protected category under federal or state antidiscrimination laws, engaging in or promoting conduct that violates the rights of other employees or members of the public, violating agency policy, the persistent practice of unlawful detention or use of excessive force in circumstances where it is known to be unjustified, falsifying police reports, fabricating or destroying evidence, targeting persons for enforcement based solely on protected characteristics of those persons, theft, unauthorized use of alcohol or drugs on duty, unlawful or unauthorized protection of other members from disciplinary actions, and retaliation against other officers who threaten or interfere with the activities of the group.

To sum up: the Los Angeles Sheriff's Department is a gang associated with an identifying symbol that engages in all of the listed behavior. Therefore, it should be declared illegal under state law and disbanded.

There are few things more enjoyable than a sternly worded letter that is 50% bluster and 50% unintentional comedy. Recipients of this letter should take the Sheriff up on his dare and let him know just how many bad apples he's overseeing. If nothing else, council members should respond to his declaration of keyboard war with a simple "Thanks for the laugh. I really needed that."

Tim Cushing

Daily Deal: codeSpark Academy Sibling Bundle

2 years 8 months ago

codeSpark’s mission is to help all kids learn to code by igniting their curiosity in computer science and turning programming into play. The app is designed to teach kids ages 4 to 9 the foundations of computer science through puzzles, coding challenges, and creative tools. It's a great way for your kid to learn how to code, and it has no ads or in-game purchases. Kids learn concepts such as sequencing, loops, conditional statements, events, Boolean logic, sorting, and variables (coming soon). Get 3 months of unlimited access for 2 accounts for $18.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Daily Deal

Trump's Truth Social Bakes Section 230 Directly Into Its Terms, So Apparently Trump Now Likes Section 230

2 years 8 months ago

When Donald Trump first announced his plans to launch his own Twitter competitor, Truth Social, we noted that the terms of service on the site indicated that the company -- contrary to all the nonsense claims of being more "free speech" supportive than existing social media sites -- was likely going to be quite aggressive in banning users who said anything that Trump disliked. Last month, Devin Nunes, who quit Congress to become CEO of the fledgling site, made it clear that the site would be heavily, heavily moderated, including using Hive, a popular tool for social media companies that want to moderate.

So with the early iOS version of the app "launching" this past weekend, most people were focused on the long list of things that went wrong with the launch, mainly security flaws and broken sign-ups. There's also been some talk about how the logo may be a copy... and the fact that Trump's own wife declared that she'll be using Parler for her social media efforts.

But, for me, I went straight to checking out the terms of service for the site. They've been updated since the last time, but the basics remain crystal clear: despite all the silly yammering from Nunes and Trump about how they're the "free speech" supporting social network, Truth Social's terms are way more restrictive regarding content than just about any I've ever seen before.

Still, the most incredible part is not just that Truth Social is embracing Section 230, but that it has literally embedded parts of 230 into its terms of service. The terms require people who sign up to "represent and warrant" that their content doesn't do certain things, and the site warns that violating any of these terms "may result in, among other things, termination or suspension of your rights to use the Service and removal or deletion of your Contributions." I don't know about you, but I recall a former President and a former cow-farming Representative from California previously referring to that kind of termination as "censorship." In any case, one of the things users must "represent and warrant" is the following:

your Contributions are not obscene, lewd, lascivious, filthy, violent, harassing, libelous, slanderous, or otherwise objectionable.

That might sound familiar to those of you who are knowledgeable about Section 230 -- because it's literally cribbed directly from Section 230(c)(2), which says:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable...

That's almost word for word the same as 230. The only changes are that it removes "excessively" from "excessively violent" and adds in "libelous" and "slanderous" -- subjects in which Devin Nunes considers himself something of an expert, though courts don't seem to agree.
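As an aside, since both passages are quoted verbatim above, the overlap is easy to check mechanically. Here's a quick throwaway sketch (mine, not anything from Truth Social or the statute) that compares the two word lists:

```python
# Compare the adjective list in Truth Social's terms against the one
# in Section 230(c)(2), both quoted verbatim above.
truth_social = ("obscene, lewd, lascivious, filthy, violent, harassing, "
                "libelous, slanderous, or otherwise objectionable")
section_230 = ("obscene, lewd, lascivious, filthy, excessively violent, "
               "harassing, or otherwise objectionable")

def words(text):
    # Split on whitespace, strip punctuation, drop the connective "or"
    return [w.strip(",.") for w in text.split() if w != "or"]

added = [w for w in words(truth_social) if w not in words(section_230)]
removed = [w for w in words(section_230) if w not in words(truth_social)]
print("added:", added)      # terms Truth Social adds to the 230 language
print("removed:", removed)  # terms it drops
```

Running this confirms exactly the three differences noted above: "libelous" and "slanderous" added, "excessively" dropped.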

Hell, they even leave in the catch-all "otherwise objectionable," even as some of their Republican friends in Congress have tried to remove that phrase in a few of their dozens of "Section 230 reform" bills.

So it's not at all surprising, but potentially a bit ironic that the man who demanded the outright repeal of Section 230 (even to the point of trying to stop funding the US military if Congress didn't repeal the law) has now not only embraced Section 230, but has literally baked a component of it (the part that he and his ignorant fans have never actually understood) directly into his own service's terms.

It's so blatant I almost wonder if it was done just for the trolling. That said, I still look forward to Truth Social using Section 230 to defend itself against inevitable lawsuits.

There are some other fun tidbits in the terms of service that suggest the site will be one of the most aggressive in moderating content. It literally claims that it may take down content that is "false, inaccurate, or misleading" (based on Truth Social's own subjective interpretation, of course). You can't advertise anything on the site without having it "authorized." You need to "have the written consent, release, and/or permission of each and every identifiable individual person in your Contributions." Does Truth Social think you actually need written permission to talk about someone?

There's also a long, long list of "prohibited" activities, including compiling a database of Truth Social data without permission, any advertising (wait, what?), bots, impersonation, "sexual content or language," or "any content that portrays or suggest explicit sexual acts." I'm not sure how Former President "Grab 'em by the p***y" will survive on his own site. Oh right, also "sugar babies" and "sexual fetishes" are banned.

Lots of fun stuff indicating that, like 4chan, 8chan, Gab, Parler, and Gettr (all of which have at times declared themselves "free speech zones"), every website knows it needs to moderate to some level, and that it's Section 230 that helps keep it out of court when it moderates in ways that piss off some of its users.

Mike Masnick

15 Years Late, The FCC Cracks Down On Broadband Apartment Monopolies

2 years 8 months ago

A major trick dominant broadband providers use to limit competition is exclusive broadband arrangements with landlords. Often an ISP will strike an exclusive deal with the owner of a building, apartment complex, or development that effectively locks in a block-by-block monopoly. And while the FCC passed rules in 2007 to purportedly stop this from happening, they contained too many loopholes to be of use.

Susan Crawford wrote pretty much the definitive story on this at Wired a while back, noting that the rules are so terrible, ISPs and landlords can tap dance around them by simply calling what they're doing... something else:

"...The Commission has been completely out-maneuvered by the incumbents. Sure, a landlord can’t enter into an exclusive agreement granting just one ISP the right to provide Internet access service to an MDU, but a landlord can refuse to sign agreements with anyone other than Big Company X, in exchange for payments labeled in any one of a zillion ways. Exclusivity by any other name still feels just as abusive."

Fifteen years later, the FCC is finally doing something about it. After being nudged toward action by Biden's executive order on competition, the FCC has voted to update its rules on this front, tightening the ban on outright building-by-building monopolies.

There's still some wiggle room for ISPs though, even under the new rules that should be formally adopted later this year. One thing ISPs enjoy doing is striking a financial partnership with a landlord, then signing a deal that bans anybody but the primary ISP from advertising in the building. Under the updated rules ISPs and landlords can still do this, they just have to be transparent about it.

The updated rules do clearly prohibit other shady tactics, however. For example, the FCC's original 2007 rules prohibited ISPs from blocking competitors from using in-building wiring (which in many cases was installed by a regional monopoly years ago). So to get around this, cable and phone monopoly lawyers came up with a workaround: the ISP would deed ownership of the in-building wires to the landlord, who would turn around and grant exclusive access to those wires to their favored ISP (read: whichever ISP gave them the most money or had the best lawyers).

According to a statement by FCC boss Jessica Rosenworcel, the rule update specifically prohibits this practice:

"We clarify that sale-and-leaseback arrangements violate our existing rules that regulate cable wiring inside buildings. Since the 1990s, we have had rules that allow buildings and tenants to exercise choice about how to use the wiring in the building when they are switching cable providers, but some companies have circumvented these rules by selling the wiring to the building and leasing it back on an exclusive basis. We put an end to that practice today."

Again, it's fairly inexcusable that it took the FCC the better part of a generation to outlaw these kinds of practices and help boost building-by-building competition. But it's fairly representative of a U.S. regulatory apparatus that's consistently handcuffed, underfunded, and lobbied into apathy by regional monopolies that very much prefer the profitable status quo (cable providers, as you'd expect, fought these latest rule updates). And while it's great news the FCC finally did something about it, enforcement and actually tough penalties (not the FCC's strong suit) will be key. As will acting more swiftly and competently the next time telecom monopoly lawyers craft some convoluted new legal workaround.

Karl Bode