a Better Bubble™

Aggregator

Cassandra Garibay and Ashley Clarke Join ProPublica as Engagement Reporters

3 months 3 weeks ago

ProPublica announced on Wednesday that Cassandra Garibay and Ashley Clarke have joined the crowdsourcing and engagement reporting team.

“I was so heartened by the incredible applicant pool for our engagement reporter position,” said Ariana Tobin, ProPublica’s crowdsourcing and engagement team editor. “Our field has grown by leaps and bounds in recent years, and we are so excited to have hired two journalists working at the cutting edge of it. Ashley and Cassandra have both done exceptional, thoughtful, creative work reporting on and with communities facing some of the most pressing issues of our time. I can’t wait for people to see how they level up our coverage of housing, education, immigration and more.”

Garibay is a Bay Area-based engagement reporter who plans to work on community-sourced investigations related to issues like housing and health equity. She comes to ProPublica from the bilingual news outlet El Tímpano, where she was a senior housing reporter, leading investigations into topics including how exposure to lead-based paint has impacted Latino communities in Oakland, California. Her work there was driven by citizen science, text message outreach, data analysis, research partnerships and community events.

Before her time at El Tímpano, Garibay was the California engagement editor at the University of Southern California’s Center for Health Journalism, working with journalists across the state to center the communities they covered and reach audiences in innovative ways. She previously reported on housing, health and local government for the Fresno Bee, Fresnoland and the San Luis Obispo Tribune.

“I am thrilled to join the team and excited to help crowdsource investigations and center communities at the heart of important issues across the country,” said Garibay.

Clarke plans to cover issues that impact low-income individuals and families, particularly those living in urban communities, focusing on topics like housing insecurity and homelessness, education, transportation and the environment. She comes to ProPublica from Bloomberg Industry Group, where she covered law firms and worked with a product team to test and write prompts for machine learning tools designed for reporters.

Prior to her time at Bloomberg, Clarke worked as an audience engagement editor at the Center for Public Integrity, where she reported her own stories and helped other reporters build relationships with communities. She also managed collaborations between CPI and local newsrooms, including the award-winning investigation “Unhoused and Undercounted,” which focused on the lack of support for public school students experiencing homelessness and housing insecurity. She was named the Institute for Nonprofit News’ 2023 Nonprofit Newcomer of the Year for shaping how CPI reports on impacted communities.

Clarke began her career in local television news at NBC in Washington, D.C., where she continues to be based. She is an adjunct professor at American University’s School of Communications and serves on the board of the Washington Association of Black Journalists. She will be with ProPublica through at least this fall.

“I’m honored to be working alongside such a talented team of journalists who are committed to doing work that drives impact and changes lives,” said Clarke. “I’m so excited to dig in and contribute to the mission.”

by ProPublica

Missouri House approves $1.3B tax cut plan

3 months 3 weeks ago
Wide-ranging legislation that would lower tax rates for individuals and businesses passed the Missouri House on Wednesday. The 100 to 53 vote saw three Republicans joining with the Democrats in opposition. It now heads to the Senate for consideration. A key provision in the bill is a gradual reduction in the state income tax from 4.7% to 3.7% over the next 10 years. The yearly percentage point rate reduction would only go into effect if state revenues grow by at least $175 million per year. When…
Jason Hancock

More Washington Post Staffers Resign Over Bezos’ Mismanagement And Authoritarian Ass Kissing

3 months 3 weeks ago
Last month U.S. oligarch Jeff Bezos gutted what was left of the paper’s op-ed section, declaring that they’d only publish pieces that supported “personal liberties and free markets” (read: kinder to right wing, corporatist ideals). As Mike noted at the time, it was an obvious trampling of editorial discretion by the paper’s billionaire owner that […]
Karl Bode

How ProPublica Uses AI Responsibly in Its Investigations

3 months 3 weeks ago

ProPublica is a nonprofit newsroom that investigates abuses of power. This story was originally published in our Dispatches newsletter; sign up to receive notes from our journalists.

In February, my colleague Ken Schwencke saw a post on the social media network Bluesky about a database released by Sen. Ted Cruz purporting to show more than 3,400 “woke” grants awarded by the National Science Foundation that “promoted Diversity, Equity, and Inclusion (DEI) or advanced neo-Marxist class warfare propaganda.”

Given that Schwencke is our senior editor for data and news apps, he downloaded the data, poked around and saw some grants that seemed far afield from what Cruz, a Texas Republican, called “the radical left’s woke nonsense.” The grants included what Schwencke thought was a “very cool sounding project” on the development of advanced mirror coatings for gravitational wave detectors at the University of Florida, his alma mater.

The grant description did, however, mention that the project “promotes education and diversity, providing research opportunities for students at different education levels and advancing the participation of women and underrepresented minorities.”

Schwencke thought it would be interesting to run the data through an AI large language model — one of those powering ChatGPT — to understand the kinds of grants that made Cruz’s list, as well as why they might have been flagged. He realized there was an accountability story to tell.

In the resulting article, Agnel Philip and Lisa Song found that “Cruz’s dragnet had swept up numerous examples of scientific projects funded by the National Science Foundation that simply acknowledged social inequalities or were completely unrelated to the social or economic themes cited by his committee.”

Among them: a $470,000 grant to study the evolution of mint plants and how they spread across continents. As best Philip and Song could tell, the project was flagged because of two specific words used in its application to the NSF: “diversify,” referring to the biodiversity of plants, and “female,” where the application noted how the project would support a young female scientist on the research team.

Another involved developing a device that could treat severe bleeding. It included the words “victims” — as in gunshot victims — and “trauma.”

Neither Cruz’s office nor a spokesperson for Republicans on the Senate Committee on Commerce, Science and Transportation responded to our requests for comment for the article.

The story was a great example of how artificial intelligence can help reporters analyze large volumes of data and try to identify patterns.

First, we told the AI model to mimic an investigative journalist reading through each of these grants to identify whether they contained themes that someone looking for “wokeness” may have spotted. And crucially, we made sure to tell the model not to guess if it wasn’t sure. (AI models are known to hallucinate, and we wanted to guard against that.)

For newsrooms new to AI and readers who are curious how this worked in practice, here’s an excerpt of the actual prompt we used:

Background: We will be showing you grants from the national science foundation that have been targeted for cancellation because they contain themes as identified by Republican Senator Ted Cruz's office as involving woke ideology; diversity, equity, and inclusion; or pro-Marxist ideology. We are looking to analyze themes of the award descriptions in this list to determine what may have terms or themes that would be considered "woke" or related to Diversity, Equity, and Inclusion (DEI). It is your task to determine whether or not the text contains these themes and tell me about what you've found. Only extract information from the NSF grant if it contains the information requested.

--

As an investigative journalist, I am looking for the following information

--

woke_description: A short description (at maximum a paragraph) on why this grant is being singled out for promoting "woke" ideology, Diversity, Equity, and Inclusion (DEI) or advanced neo-Marxist class warfare propaganda. Leave this blank if it's unclear.

why_flagged: Look at the "STATUS", "SOCIAL JUSTICE CATEGORY", "RACE CATEGORY", "GENDER CATEGORY" and "ENVIRONMENTAL JUSTICE CATEGORY" fields. If it's filled out, it means that the author of this document believed the grant was promoting DEI ideology in that way. Analyze the "AWARD DESCRIPTIONS" field and see if you can figure out why the author may have flagged it in this way. Write it in a way that is thorough and easy to understand with only one description per type and award.

citation_for_flag: Extract a very concise text quoting the passage of "AWARDS DESCRIPTIONS" that backs up the "why_flagged" data.
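For readers curious how a prompt like this connects to working code, here is a minimal, hypothetical sketch. The field names mirror the prompt excerpt above, but the schema, function and sample reply are our own illustration, not ProPublica's actual pipeline; the key idea is that any field the model leaves out is treated as blank rather than guessed at.

```python
import json

# The three fields requested in the prompt excerpt above.
# (Descriptions paraphrased; this mapping is illustrative, not ProPublica's code.)
FIELDS = {
    "woke_description": "Why this grant may have been singled out; blank if unclear.",
    "why_flagged": "Inferred reason, based on the category fields in the document.",
    "citation_for_flag": "Concise quote from AWARD DESCRIPTIONS backing up why_flagged.",
}

def parse_model_reply(reply: str) -> dict:
    """Parse a JSON reply from the model, keeping only the expected fields.
    Missing fields become empty strings -- the 'don't guess' guardrail."""
    data = json.loads(reply)
    return {key: data.get(key, "") for key in FIELDS}

# A reply a model might plausibly return for the mint-plant grant:
sample = ('{"woke_description": "", '
          '"why_flagged": "Uses the words diversify and female.", '
          '"citation_for_flag": "support a young female scientist"}')
print(parse_model_reply(sample)["why_flagged"])
```

Requesting structured output and then validating it in code is what makes the model's answers reviewable: each field can be checked by a human against the underlying grant record.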

Of course, members of our staff reviewed and confirmed every detail before we published our story, and we called all the named people and agencies seeking comment, which remains a must-do even in the world of AI.

Philip, one of the journalists who wrote the query above and the story, is excited about the potential these new technologies hold but is proceeding with caution, as is our entire newsroom.

“The tech holds a ton of promise in lead generation and pointing us in the right direction,” he told me. “But in my experience, it still needs a lot of human supervision and vetting. If used correctly, it can both really speed up the process of understanding large sets of information, and if you’re creative with your prompts and critically read the output, it can help uncover things that you may not have thought of.”

This was just the latest effort by ProPublica to experiment with using AI to help do our jobs better and faster, while also using it responsibly, in ways that aid our human journalists.

In 2023, in partnership with The Salt Lake Tribune, a Local Reporting Network partner, we used AI to help uncover patterns of sexual misconduct among mental health professionals disciplined by Utah’s licensing agency. The investigation relied on a large collection of disciplinary reports, covering a wide range of potential violations.

To zero in on the types of cases we were interested in, we prompted AI to review the documents and identify ones that were related to sexual misconduct. To help the bot do its work, we gave it examples of confirmed cases of sexual misconduct that we were already familiar with and specific keywords to look for. Each result was then reviewed by two reporters, who used licensing records to confirm it was categorized correctly.
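The two-stage approach described here, a cheap keyword pass to surface candidates, followed by a few-shot prompt seeded with confirmed cases, might look something like the following sketch. The keywords, report text and function names are invented placeholders, not the actual terms used in the Utah investigation.

```python
# Stage 1: keyword pre-filter to surface candidate reports for AI review.
# (Keywords are illustrative placeholders.)
KEYWORDS = {"sexual", "boundary", "relationship with a client"}

def is_candidate(report_text: str) -> bool:
    """Flag a report for closer review if it mentions any keyword of interest."""
    lowered = report_text.lower()
    return any(kw in lowered for kw in KEYWORDS)

# Stage 2: few-shot prompt -- show the model confirmed cases, then the new report.
def build_prompt(report_text: str, confirmed_examples: list[str]) -> str:
    """Assemble a few-shot classification prompt from known cases."""
    shots = "\n\n".join(f"Confirmed case:\n{ex}" for ex in confirmed_examples)
    return (f"{shots}\n\n"
            "Does the following report describe similar misconduct? "
            "Answer yes or no, and quote the passage that supports your answer.\n\n"
            f"{report_text}")

print(is_candidate("Licensee entered a romantic relationship with a client."))
```

The pre-filter keeps costs down and shrinks the pile the model sees; the two-reporter review described above remains the final check on every result.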

In addition, during our reporting on the 2022 school shooting in Uvalde, Texas, ProPublica and The Texas Tribune obtained a trove of unreleased raw materials collected during the state’s investigation. This included hundreds of hours of audio and video recordings, which were difficult to sift through. The footage wasn’t organized or clearly labeled, and some of it was incredibly graphic and disturbing for journalists to watch.

We used self-hosted open-source AI software to securely transcribe and help classify the material, which enabled reporters to match up related files and to reconstruct the day’s events, showing in painstaking detail how law enforcement’s lack of preparation contributed to delays in confronting the shooter.
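The article doesn't name the software (self-hosted open-source transcription models such as Whisper are one common choice). As a hedged illustration of the reconstruction step, once each clip is transcribed, assembling "the day's events" largely reduces to ordering clips from many sources on a single timeline; the clip data and formatting below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    source: str         # e.g., a body-camera or hallway-camera file name
    start_seconds: int  # offset from a common reference time
    transcript: str     # text produced by the transcription model

def build_timeline(clips: list[Clip]) -> list[str]:
    """Return transcript lines ordered chronologically across all sources."""
    ordered = sorted(clips, key=lambda c: c.start_seconds)
    return [f"[{c.start_seconds:>5}s] {c.source}: {c.transcript}" for c in ordered]

# Invented example clips from two hypothetical sources:
clips = [
    Clip("hallway_cam_2.mp4", 310, "Officers gather near the doorway."),
    Clip("bodycam_a.mp4", 45, "First units arrive on scene."),
]
for line in build_timeline(clips):
    print(line)
```

In practice the hard part is establishing that common reference time across unlabeled footage; the merge itself, as the sketch shows, is simple once timestamps are aligned.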

We know full well that AI does not replicate the very time-intensive work we do. Our journalists write our stories, our newsletters, our headlines and the takeaways at the top of longer stories. We also know that there’s a lot about AI that needs to be investigated, including the companies that market their products, how they train them and the risks they pose.

But to us, there’s also potential to use AI as one of many reporting tools that enables us to examine data creatively and pursue the stories that help you understand the forces shaping our world.

Agnel Philip, Ken Schwencke, Hannah Fresques and Tyson Evans contributed reporting.

by Charles Ornstein