
Texas Lawmakers Pull Funding for Child Identification Kits Again After Newsrooms Report They Don’t Work

3 months 1 week ago

This article is co-published with The Texas Tribune, a nonprofit, nonpartisan local newsroom that informs and engages with Texans. Sign up for The Brief Weekly to get up to speed on their essential coverage of Texas issues.

Texas state legislators dropped efforts to spend millions of dollars to buy what experts call ineffective child identification kits weeks after ProPublica and The Texas Tribune reported that lawmakers were again trying to fund the program.

This is the second consecutive budget cycle in which the Legislature considered purchasing the products, which promise to help find missing children, only to reverse course after the news organizations documented the lack of evidence that the kits work.

ProPublica and the Tribune originally published their findings in a 2023 investigation that revealed the state had spent millions of dollars on child identification kits made by a Waco-based company called the National Child Identification Program, run by former NFL player Kenny Hansmire. He had a history of legal and business troubles, according to public records, and although less expensive alternatives were available to lawmakers, Hansmire used outdated and exaggerated statistics about missing children to help boost sales.

He also managed to develop connections with powerful Texas legislators who supported his initiatives. In 2021, Republican state Sen. Donna Campbell authored a bill that created a Texas child safety program. The measure all but guaranteed any state funding would go to Hansmire’s business whenever lawmakers allotted money for child identification kits. That year, the state awarded his company about $5.7 million for the kits.

Two years later, both the House and the Senate proposed spending millions more on the program. But when the final budget was published, about a month after the newsrooms’ investigation, legislators had pulled the funding. They declined to answer questions about why.

Funding for the program appeared again in this year’s House budget. State Rep. Armando Martinez, a Democratic member of the lower chamber’s budget committee, suggested allotting $2 million to buy the kits for students in kindergarten through the second grade. The Senate, however, didn’t include that funding in its version of the budget.

The newsrooms published a story in early May about the proposed spending plan. The final version of the budget that lawmakers passed this week again had no designated funding for the identification kits.

Campbell, Martinez and the leaders of the House and Senate budget committees did not respond to the newsrooms’ interview requests for this story or written questions about why the funding didn’t make the final cut.

Hansmire did not reply to an interview request this week. In a prior response, he told the newsrooms he’d resolved his financial troubles and said that his company’s kits have helped identify missing children, though he did not provide any concrete examples. Hansmire told reporters to reach out to “any policeman,” naming several departments specifically. The newsrooms contacted a number of them. Of the dozen Texas law enforcement agencies that responded to the queries, none could identify one case where the kits helped find a runaway or kidnapped child.

Stacey Pearson, a child safety consultant who previously oversaw the Louisiana Clearinghouse for Missing and Exploited Children, said legislators made the correct decision to eliminate the identification kits from the budget because there is no data proving they actually improve kids’ safety. She remains disappointed that Texas lawmakers continue to give the program any attention and hopes they won’t consider funding it again.

“Every dollar and every minute, every hour that you spend on a program like this, is a dollar and a minute and an hour that you can’t spend on something that is more promising or more sound,” said Pearson.

by Lexi Churchill, ProPublica and The Texas Tribune

Inside the AI Prompts DOGE Used to “Munch” Contracts Related to Veterans’ Health

3 months 1 week ago

ProPublica is a nonprofit newsroom that investigates abuses of power. Sign up to receive our biggest stories as soon as they’re published.

When an AI script written by a Department of Government Efficiency employee came across a contract for internet service, it flagged the agreement as cancelable. Not because it was waste, fraud or abuse — the Department of Veterans Affairs needs internet connectivity, after all — but because the model was given unclear and conflicting instructions.

Sahil Lavingia, who wrote the code, told it to cancel, or in his words “munch,” anything that wasn’t “directly supporting patient care.” Unfortunately, neither Lavingia nor the model had the knowledge required to make such determinations.

Sahil Lavingia at his office in Brooklyn (Ben Sklar for ProPublica)

“I think that mistakes were made,” said Lavingia, who worked at DOGE for nearly two months, in an interview with ProPublica. “I’m sure mistakes were made. Mistakes are always made.”

It turns out, a lot of mistakes were made as DOGE and the VA rushed to implement President Donald Trump’s February executive order mandating all of the VA’s contracts be reviewed within 30 days.

ProPublica obtained the code and prompts — the instructions given to the AI model — used to review the contracts and interviewed Lavingia and experts in both AI and government procurement. We are publishing an analysis of those prompts to help the public understand how this technology is being deployed in the federal government.

The experts found numerous and troubling flaws: the code relied on older, general-purpose models not suited for the task; the model hallucinated contract amounts, deciding around 1,100 of the agreements were each worth $34 million when they were sometimes worth thousands; and the AI did not analyze the entire text of contracts. Most experts said that, in addition to the technical issues, using off-the-shelf AI models for the task — with little context on how the VA works — should have been a nonstarter.

Lavingia, a software engineer enlisted by DOGE, acknowledged there were flaws in what he created and blamed, in part, a lack of time and proper tools. He also stressed that he knew his list of what he called “MUNCHABLE” contracts would be vetted by others before a final decision was made.

Portions of the prompt are pasted below along with commentary from experts we interviewed. Lavingia published a complete version of it on his personal GitHub account.

Problems with how the model was constructed can be detected from the very opening lines of code, where the DOGE employee instructs the model how to behave:

You are an AI assistant that analyzes government contracts. Always provide comprehensive few-sentence descriptions that explain WHO the contract is with, WHAT specific services/products are provided, and WHO benefits from these services. Remember that contracts for EMR systems and healthcare IT infrastructure directly supporting patient care should be classified as NOT munchable. Contracts related to diversity, equity, and inclusion (DEI) initiatives or services that could be easily handled by in-house W2 employees should be classified as MUNCHABLE. Consider 'soft services' like healthcare technology management, data management, administrative consulting, portfolio management, case management, and product catalog management as MUNCHABLE. For contract modifications, mark the munchable status as 'N/A'. For IDIQ contracts, be more aggressive about termination unless they are for core medical services or benefits processing.

This part of the prompt, known as a system prompt, is intended to shape the overall behavior of the large language model, or LLM, the technology behind AI bots like ChatGPT. In this case, it was used before both steps of the process: first, before Lavingia used it to obtain information like contract amounts; then, before determining if a contract should be canceled.
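In an OpenAI-style chat API, a system prompt is simply the first message in the conversation, prepended to every request. The sketch below is illustrative, not Lavingia’s actual code — the helper function and abbreviated prompt strings are assumptions — but it shows how reusing one system prompt for both passes puts step-two instructions in front of the model during step one:

```python
# Illustrative sketch: one shared system prompt preceding both passes.
# SYSTEM_PROMPT is abbreviated and build_messages is a hypothetical
# helper, not the published DOGE script.

SYSTEM_PROMPT = (
    "You are an AI assistant that analyzes government contracts. ... "
    "Consider 'soft services' ... as MUNCHABLE. ..."
)

def build_messages(user_prompt):
    """Prepend the shared system prompt, as both the extraction pass
    and the 'munchable' pass did."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

# Pass 1 (field extraction) and pass 2 (munchable decision) get the
# identical system message, so instructions about "munchable status"
# and DEI are already in context while the model is only being asked
# to pull out contract numbers and dollar amounts.
extraction = build_messages("Analyze the following contract text and extract ...")
munchable = build_messages("Based on the following contract information, determine ...")
```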

Including information not related to the task at hand can confuse AI. At this point, it’s only being asked to gather information from the text of the contract. Everything related to “munchable status,” “soft services” or “DEI” is irrelevant. Experts told ProPublica that trying to fix issues by adding more instructions can actually have the opposite effect — especially if they’re irrelevant.

Analyze the following contract text and extract the basic information below. If you can't find specific information, write "Not found".

CONTRACT TEXT: {text[:10000]} # Using first 10000 chars to stay within token limits

The models were only shown the first 10,000 characters from each document, or approximately 2,500 words. Experts were confused by this, noting that OpenAI models support inputs over 50 times that size. Lavingia said that he had to use an older AI model that the VA had already signed a contract for.
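The `{text[:10000]}` in the prompt is ordinary Python string slicing. A small illustration of how much of a long contract that slice discards (the 120,000-character document length is an invented example, not a real contract):

```python
# Illustrative only: the effect of the text[:10000] slice on a long
# document. The 120,000-character length is an invented stand-in.
text = "A" * 120_000             # a lengthy contract's extracted text

snippet = text[:10000]           # what the model actually saw
dropped = len(text) - len(snippet)  # characters that never reach the model

# At a rough average of 4 characters per English word, 10,000
# characters is about 2,500 words -- the first few pages.
approx_words_seen = len(snippet) // 4

print(len(snippet), dropped, approx_words_seen)
```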

Please extract the following information: 1. Contract Number/PIID 2. Parent Contract Number (if this is a child contract) 3. Contract Description - IMPORTANT: Provide a DETAILED 1-2 sentence description that clearly explains what the contract is for. Include WHO the vendor is, WHAT specific products or services they provide, and WHO the end recipients or beneficiaries are. For example, instead of "Custom powered wheelchair", write "Contract with XYZ Medical Equipment Provider to supply custom-powered wheelchairs and related maintenance services to veteran patients at VA medical centers." 4. Vendor Name 5. Total Contract Value (in USD) 6. FY 25 Value (in USD) 7. Remaining Obligations (in USD) 8. Contracting Officer Name 9. Is this an IDIQ contract? (true/false) 10. Is this a modification? (true/false)

This portion of the prompt instructs the AI to extract the contract number and other key details of a contract, such as the “total contract value.”

This was error-prone and not necessary, as accurate contract information can already be found in publicly available databases like USASpending. In some cases, the AI system was given an outdated version of a contract, which caused it to report a misleadingly large contract amount. In other cases, the model mistakenly pulled an irrelevant number from the page instead of the contract value.

“They are looking for information where it’s easy to get, rather than where it’s correct,” said Waldo Jaquith, a former Obama appointee who oversaw IT contracting at the Treasury Department. “This is the lazy approach to gathering the information that they want. It’s faster, but it’s less accurate.”

Lavingia acknowledged that this approach led to errors but said that those errors were later corrected by VA staff.

Once the program extracted this information, it ran a second pass to determine if the contract was “munchable.”
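Put together, the flow described above amounts to a two-pass loop over each contract. A hedged sketch, in which the function names and the canned `call_model` stub are assumptions rather than the published script:

```python
# Hedged sketch of the two-pass flow: extract fields, then decide
# "munchable". call_model is a stub standing in for a chat-completion
# API call; the real script's structure may differ.

SYSTEM = "You are an AI assistant that analyzes government contracts. ..."

def call_model(system_prompt, user_prompt):
    """Stub: a real implementation would call an OpenAI-style API
    with a system message and a user message."""
    return f"(model response to {len(user_prompt)} chars of prompt)"

def review_contract(text):
    snippet = text[:10000]  # both passes see only the first ~2,500 words
    # Pass 1: pull contract number, vendor, dollar values, etc.
    info = call_model(SYSTEM, "Analyze the following contract text ...\n" + snippet)
    # Pass 2: classify the contract as munchable or not.
    verdict = call_model(SYSTEM, "Based on the following contract information ...\n" + snippet)
    return {"extracted": info, "munchable_verdict": verdict}

result = review_contract("Contract text " * 2000)
```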

Based on the following contract information, determine if this contract is "munchable" based on these criteria:

CONTRACT INFORMATION: {text[:10000]} # Using first 10000 chars to stay within token limits

Again, only the first 10,000 characters were shown to the model. As a result, the munchable determination was based purely on the first few pages of the contract document.

Then, evaluate if this contract is "munchable" based on these criteria: - If this is a contract modification, mark it as "N/A" for munchable status - If this is an IDIQ contract:   * For medical devices/equipment: NOT MUNCHABLE   * For recruiting/staffing: MUNCHABLE   * For other services: Consider termination if not core medical/benefits - Level 0: Direct patient care (e.g., bedside nurse) - NOT MUNCHABLE - Level 1: Necessary consultants that can't be insourced - NOT MUNCHABLE

The above prompt section is the first set of instructions telling the AI how to flag contracts. The prompt provides little explanation of what it’s looking for, failing to define what qualifies as “core medical/benefits” and lacking information about what a “necessary consultant” is.

For the types of models the DOGE analysis used, including all the necessary information to make an accurate determination is critical.

Cary Coglianese, a University of Pennsylvania professor who studies the governmental use of artificial intelligence, said that knowing which jobs could be done in-house “calls for a very sophisticated understanding of medical care, of institutional management, of availability of human resources” that the model does not have.

- Contracts related to "diversity, equity, and inclusion" (DEI) initiatives - MUNCHABLE

The prompt above tries to implement a fundamental policy of the Trump administration: killing all DEI programs. But the prompt fails to include a definition of what DEI is, leaving the model to decide.

Despite the instruction to cancel DEI-related contracts, very few were flagged for this reason. Procurement experts noted that it’s very unlikely for information like this to be found in the first few pages of a contract.

- Level 2+: Multiple layers removed from veterans care - MUNCHABLE - Services that could easily be replaced by in-house W2 employees - MUNCHABLE

These two lines — which experts say were poorly defined — carried the most weight in the DOGE analysis. The response from the AI frequently cited these reasons as the justification for munchability. Nearly every justification included a form of the phrase “direct patient care,” and in a third of cases the model flagged contracts because it stated the services could be handled in-house.

The poorly defined requirements led to several contracts for VA office internet services being flagged for cancellation. In one justification, the model had this to say:

The contract provides data services for internet connectivity, which is an IT infrastructure service that is multiple layers removed from direct clinical patient care and could likely be performed in-house, making it classified as munchable.

IMPORTANT EXCEPTIONS - These are NOT MUNCHABLE: - Third-party financial audits and compliance reviews - Medical equipment audits and certifications (e.g., MRI, CT scan, nuclear medicine equipment) - Nuclear physics and radiation safety audits for medical equipment - Medical device safety and compliance audits - Healthcare facility accreditation reviews - Clinical trial audits and monitoring - Medical billing and coding compliance audits - Healthcare fraud and abuse investigations - Medical records privacy and security audits - Healthcare quality assurance reviews - Community Living Center (CLC) surveys and inspections - State Veterans Home surveys and inspections - Long-term care facility quality surveys - Nursing home resident safety and care quality reviews - Assisted living facility compliance surveys - Veteran housing quality and safety inspections - Residential care facility accreditation reviews

Despite these instructions, AI flagged many audit- and compliance-related contracts as “munchable,” labeling them as “soft services.”

In one case, the model even acknowledged the importance of compliance while flagging a contract for cancellation, stating: “Although essential to ensuring accurate medical records and billing, these services are an administrative support function (a ‘soft service’) rather than direct patient care.”

Key considerations: - Direct patient care involves: physical examinations, medical procedures, medication administration - Distinguish between medical/clinical and psychosocial support

Shobita Parthasarathy, professor of public policy and director of the Science, Technology, and Public Policy Program at the University of Michigan, told ProPublica that this piece of the prompt was notable in that it instructs the model to “distinguish” between the two types of services without telling it what to save and what to kill.

The emphasis on “direct patient care” is reflected in how often the AI cited it in its recommendations, even when the model did not have any information about a contract. In one instance where it labeled every field “not found,” it still decided the contract was munchable. It gave this reason:

Without evidence that it involves essential medical procedures or direct clinical support, and assuming the contract is for administrative or related support services, it meets the criteria for being classified as munchable.

In reality, this contract was for the preventative maintenance of important safety devices known as ceiling lifts at VA medical centers, including three sites in Maryland. The contract itself stated:

Ceiling Lifts are used by employees to reposition patients during their care. They are critical safety devices for employees and patients, and must be maintained and inspected appropriately.

Specific services that should be classified as MUNCHABLE (these are "soft services" or consulting-type services): - Healthcare technology management (HTM) services - Data Commons Software as a Service (SaaS) - Administrative management and consulting services - Data management and analytics services - Product catalog or listing management - Planning and transition support services - Portfolio management services - Operational management review - Technology guides and alerts services - Case management administrative services - Case abstracts, casefinding, follow-up services - Enterprise-level portfolio management - Support for specific initiatives (like PACT Act) - Administrative updates to product information - Research data management platforms or repositories - Drug/pharmaceutical lifecycle management and pricing analysis - Backup Contracting Officer's Representatives (CORs) or administrative oversight roles - Modernization and renovation extensions not directly tied to patient care - DEI (Diversity, Equity, Inclusion) initiatives - Climate & Sustainability programs - Consulting & Research Services - Non-Performing/Non-Essential Contracts - Recruitment Services

This portion of the prompt attempts to define “soft services.” It uses many highly specific examples but also throws in vague categories without definitions like “non-performing/non-essential contracts.”

Experts said that in order for a model to properly determine this, it would need to be given information about the essential activities and what’s required to support them.

Important clarifications based on past analysis errors: 2. Lifecycle management of drugs/pharmaceuticals IS MUNCHABLE (different from direct supply) 3. Backup administrative roles (like alternate CORs) ARE MUNCHABLE as they create duplicative work 4. Contract extensions for renovations/modernization ARE MUNCHABLE unless directly tied to patient care

This section of the prompt was the result of analysis by Lavingia and other DOGE staff, Lavingia explained. “This is probably from a session where I ran a prior version of the script that most likely a DOGE person was like, ‘It’s not being aggressive enough.’ I don’t know why it starts with a 2. I guess I disagreed with one of them, and so we only put 2, 3 and 4 here.”

Notably, our review found that the only clarifications addressing past errors involved scenarios where the model wasn’t flagging enough contracts for cancellation.

Direct patient care that is NOT MUNCHABLE includes: - Conducting physical examinations - Administering medications and treatments - Performing medical procedures and interventions - Monitoring and assessing patient responses - Supply of actual medical products (pharmaceuticals, medical equipment) - Maintenance of critical medical equipment - Custom medical devices (wheelchairs, prosthetics) - Essential therapeutic services with proven efficacy

For maintenance contracts, consider whether pricing appears reasonable. If maintenance costs seem excessive, flag them as potentially over-priced despite being necessary.

This section of the prompt provides the most detail about what constitutes “direct patient care.” While it does cover many aspects of care, it still leaves a lot of ambiguity and forces the model to make its own judgments about what constitutes “proven efficacy” and “critical” medical equipment.

In addition to the limited information given on what constitutes direct patient care, there is no information about how to determine if a price is “reasonable,” especially since the LLM only sees the first few pages of the document. The models lack knowledge about what’s normal for government contracts.

“I just do not understand how it would be possible. This is hard for a human to figure out,” Jaquith said about whether AI could accurately determine if a contract was reasonably priced. “I don’t see any way that an LLM could know this without a lot of really specialized training.”

Services that can be easily insourced (MUNCHABLE): - Video production and multimedia services - Customer support/call centers - PowerPoint/presentation creation - Recruiting and outreach services - Public affairs and communications - Administrative support - Basic IT support (non-specialized) - Content creation and writing - Training services (non-specialized) - Event planning and coordination

This section explicitly lists which tasks could be “easily insourced” by VA staff, and more than 500 different contracts were flagged as “munchable” for this reason.

“A larger issue with all of this is there seems to be an assumption here that contracts are almost inherently wasteful,” Coglianese said when shown this section of the prompt. “Other services, like the kinds that are here, are cheaper to contract for. In fact, these are exactly the sorts of things that we would not want to treat as ‘munchable.’” He went on to explain that insourcing some of these tasks could also “siphon human sources away from direct primary patient care.”

In an interview, Lavingia acknowledged some of these jobs might be better handled externally. “We don’t want to cut the ones that would make the VA less efficient or cause us to hire a bunch of people in-house,” Lavingia explained. “Which currently they can’t do because there’s a hiring freeze.”

The VA is standing behind its use of AI to examine contracts, calling it “a commonsense precedent.” And documents obtained by ProPublica suggest the VA is looking at additional ways AI can be deployed. A March email from a top VA official to DOGE stated:

Today, VA receives over 2 million disability claims per year, and the average time for a decision is 130 days. We believe that key technical improvements (including AI and other automation), combined with Veteran-first process/culture changes pushed from our Secretary’s office could dramatically improve this. A small existing pilot in this space has resulted in 3% of recent claims being processed in less than 30 days. Our mission is to figure out how to grow from 3% to 30% and then upwards such that only the most complex claims take more than a few days.

If you have any information about the misuse or abuse of AI within government agencies, reach out to us via our Signal or SecureDrop channels.

If you’d like to talk to someone specific, Brandon Roberts is an investigative journalist on the news applications team and has a wealth of experience using and dissecting artificial intelligence. He can be reached on Signal @brandonrobertz.01 or by email brandon.roberts@propublica.org.

by Brandon Roberts and Vernal Coleman

DOGE Developed Error-Prone AI Tool to “Munch” Veterans Affairs Contracts

3 months 1 week ago


As the Trump administration prepared to cancel contracts at the Department of Veterans Affairs this year, officials turned to a software engineer with no health care or government experience to guide them.

The engineer, working for the Department of Government Efficiency, quickly built an artificial intelligence tool to identify which services from private companies were not essential. He labeled those contracts “MUNCHABLE.”

The code, using outdated and inexpensive AI models, produced results with glaring mistakes. For instance, it hallucinated the size of contracts, frequently misreading them and inflating their value. It concluded more than a thousand were each worth $34 million, when in fact some were for as little as $35,000.

The DOGE AI tool flagged more than 2,000 contracts for “munching.” It’s unclear how many have been or are on track to be canceled — the Trump administration’s decisions on VA contracts have largely been a black box. The VA uses contractors for many reasons, including to support hospitals, research and other services aimed at caring for ailing veterans.

VA officials have said they’ve killed nearly 600 contracts overall. Congressional Democrats have been pressing VA leaders for specific details of what’s been canceled without success.

We identified at least two dozen on the DOGE list that have been canceled so far. Among the canceled contracts was one to maintain a gene sequencing device used to develop better cancer treatments. Another was for blood sample analysis in support of a VA research project. Another was to provide additional tools to measure and improve the care nurses provide.

ProPublica obtained the code and the contracts it flagged from a source and shared them with a half dozen AI and procurement experts. All said the script was flawed. Many criticized the concept of using AI to guide budgetary cuts at the VA, with one calling it “deeply problematic.”

Cary Coglianese, professor of law and of political science at the University of Pennsylvania who studies the governmental use and regulation of artificial intelligence, said he was troubled by the use of these general-purpose large language models, or LLMs. “I don’t think off-the-shelf LLMs have a great deal of reliability for something as complex and involved as this,” he said.

Sahil Lavingia, the programmer enlisted by DOGE, which was then run by Elon Musk, acknowledged flaws in the code.

“I think that mistakes were made,” said Lavingia, who worked at DOGE for nearly two months. “I’m sure mistakes were made. Mistakes are always made. I would never recommend someone run my code and do what it says. It’s like that ‘Office’ episode where Steve Carell drives into the lake because Google Maps says drive into the lake. Do not drive into the lake.”

Though Lavingia has talked about his time at DOGE previously, this is the first time his work has been examined in detail and the first time he’s publicly explained his process, down to specific lines of code.

Lavingia has nearly 15 years of experience as a software engineer and entrepreneur but no formal training in AI. He briefly worked at Pinterest before starting Gumroad, a small e-commerce company that nearly collapsed in 2015. “I laid off 75% of my company — including many of my best friends. It really sucked,” he said. Lavingia kept the company afloat by “replacing every manual process with an automated one,” according to a post on his personal blog.

Sahil Lavingia at his office in Brooklyn (Ben Sklar for ProPublica)

Lavingia, who started on March 17 and wrote the tool the following day, did not have much time to immerse himself in how the VA handles veterans’ care. Yet his experience with his own company aligned with the direction of the Trump administration, which has embraced the use of AI across government to streamline operations and save money.

Lavingia said the quick timeline of Trump’s February executive order, which gave agencies 30 days to complete a review of contracts and grants, was too short to do the job manually. “That’s not possible — you have 90,000 contracts,” he said. “Unless you write some code. But even then it’s not really possible.”

Under a time crunch, Lavingia said he finished the first version of his contract-munching tool on his second day on the job — using AI to help write the code for him. He told ProPublica he then spent his first week downloading VA contracts to his laptop and analyzing them.

VA press secretary Pete Kasperowicz lauded DOGE’s work on vetting contracts in a statement to ProPublica. “As far as we know, this sort of review has never been done before, but we are happy to set this commonsense precedent,” he said.

The VA is reviewing all of its 76,000 contracts to ensure each of them benefits veterans and is a good use of taxpayer money, he said. Decisions to cancel or reduce the size of contracts are made after multiple reviews by VA employees, including agency contracting experts and senior staff, he wrote.

Kasperowicz said that the VA will not cancel contracts for work that provides services to veterans or that the agency cannot do itself without a contingency plan in place. He added that contracts that are “wasteful, duplicative or involve services VA has the ability to perform itself” will typically be terminated.

Trump officials have said they are working toward a “goal” of cutting around 80,000 people from the VA’s workforce of nearly 500,000. Most employees work in one of the VA’s 170 hospitals and nearly 1,200 clinics.

The VA has said it would avoid cutting contracts that directly impact care out of fear that it would cause harm to veterans. ProPublica recently reported that relatively small cuts at the agency have already been jeopardizing veterans’ care.

The VA has not explained how it plans to simultaneously move services in-house, as Lavingia’s code suggested was the plan, while also slashing staff.

Many inside the VA told ProPublica the process for reviewing contracts was so opaque they couldn’t even see who made the ultimate decisions to kill specific contracts. Once the “munching” script had selected a list of contracts, Lavingia said he would pass it off to others who would decide what to cancel and what to keep. No contracts, he said, were terminated “without human review.”

“I just delivered the [list of contracts] to the VA employees,” he said. “I basically put munchable at the top and then the others below.”

VA staffers told ProPublica that when DOGE identified contracts to be canceled early this year — before Lavingia was brought on — employees sometimes were given little time to justify retaining the service. One recalled being given just a few hours. The staffers asked not to be named because they feared losing their jobs for talking to reporters.

According to one internal email that predated Lavingia’s AI analysis, staff members had to respond in 255 characters or fewer — just shy of the 280-character limit on Musk’s X social media platform.

A VA email tells staffers that the justification of contracts targeted by DOGE must be limited to 255 characters. (Obtained by ProPublica)

Once he started on DOGE’s contract analysis, Lavingia said he was confronted with technological limitations. At least some of the errors produced by his code can be traced to using older versions of OpenAI models available through the VA — models not capable of solving complex tasks, according to the experts consulted by ProPublica.

Moreover, the tool’s underlying instructions were deeply flawed. Records show Lavingia programmed the AI system to make intricate judgments based on the first few pages of each contract — about the first 2,500 words — which contain only sparse summary information.

“AI is absolutely the wrong tool for this,” said Waldo Jaquith, a former Obama appointee who oversaw IT contracting at the Treasury Department. “AI gives convincing looking answers that are frequently wrong. There needs to be humans whose job it is to do this work.”

Lavingia’s prompts did not include context about how the VA operates, what contracts are essential or which ones are required by federal law. This led AI to determine a core piece of the agency’s own contract procurement system was “munchable.”

At the core of Lavingia’s prompt is the direction to spare contracts involved in “direct patient care.”

Then, evaluate if this contract is "munchable" based on these criteria: …
- Level 0: Direct patient care (e.g., bedside nurse) - NOT MUNCHABLE
- Level 1: Necessary consultants that can't be insourced - NOT MUNCHABLE
- Level 2+: Multiple layers removed from veterans care - MUNCHABLE
- Contracts related to "diversity, equity, and inclusion" (DEI) initiatives - MUNCHABLE
- Services that could easily be replaced by in-house W2 employees - MUNCHABLE
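The tiering logic above can be illustrated with a short sketch. This is not Lavingia’s actual script, which sent each truncated contract to an OpenAI model; here a simple keyword check stands in for the model so the example is runnable, and every function name and rule is an assumption for illustration only.

```python
# Illustrative stand-in for the "munchable" triage described above.
# The real tool prompted an AI model with the tier criteria; this toy
# version hard-codes a few keyword rules to show the overall shape.

def truncate_contract(text: str, max_words: int = 2500) -> str:
    """Keep only roughly the first pages of a contract, as the tool did."""
    return " ".join(text.split()[:max_words])

def classify_munchable(contract_text: str) -> str:
    """Toy classifier mirroring the prompt's tiers (hypothetical logic)."""
    snippet = truncate_contract(contract_text).lower()
    if "bedside" in snippet or "direct patient care" in snippet:
        return "NOT MUNCHABLE"  # Level 0: direct patient care
    if "consultant" in snippet and "cannot be insourced" in snippet:
        return "NOT MUNCHABLE"  # Level 1: necessary consultants
    return "MUNCHABLE"          # Level 2+: everything else, per the prompt

print(classify_munchable("Staffing agreement for direct patient care nurses"))
# A support contract that never mentions care is swept up as munchable,
# which is the failure mode experts criticized:
print(classify_munchable("Maintenance of the hospital procurement database"))
```

Even in this simplified form, the structure shows the problem the experts described: anything that doesn’t announce itself as patient care in its opening pages defaults to “munchable,” regardless of how essential it is to the people doing that care.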

Such an approach, experts said, doesn’t grapple with the reality that the work done by doctors and nurses to care for veterans in hospitals is only possible with significant support around them.

Lavingia’s system also used AI to extract details like the contract number and “total contract value.” This led to avoidable errors: the AI returned the wrong dollar value when a contract contained multiple figures. Experts said the correct information was readily available from public databases.
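That failure mode is easy to reproduce: a contract’s summary pages typically mention several dollar figures (base value, option years, amounts obligated so far), and nothing in the raw text marks which one is the overall total. A minimal sketch, using hypothetical contract text:

```python
import re

# A contract's first pages often contain several dollar amounts; any
# text-scraping approach (AI or regex) must guess which one is the
# "total contract value." The sample text below is invented.
sample = (
    "Base period value: $1,200,000. Option year one: $800,000. "
    "Total obligated to date: $450,000."
)

amounts = re.findall(r"\$[\d,]+", sample)
print(amounts)  # three candidate figures, none labeled as the total
```

Structured sources such as USAspending.gov record a contract’s value in a single unambiguous field, which is why experts called these extraction errors avoidable.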

Lavingia acknowledged that errors resulted from this approach but said those errors were later corrected by VA staff.

In late March, Lavingia published a version of the “munchable” script on his GitHub account to invite others to use and improve it, he told ProPublica. “It would have been cool if the entire federal government used this script and anyone in the public could see that this is how the VA is thinking about cutting contracts.”

According to a post on his blog, this was done with the approval of Musk before he left DOGE. “When he asked the room about improving DOGE’s public perception, I asked if I could open-source the code I’d been writing,” Lavingia said. “He said yes — it aligned with DOGE’s goal of maximum transparency.”

That openness may have eventually led to Lavingia’s dismissal. Lavingia confirmed he was terminated from DOGE after giving an interview to Fast Company magazine about his work with the department. A VA spokesperson declined to comment on Lavingia’s dismissal.

VA officials have declined to say whether they will continue to use the “munchable” tool moving forward. But the administration may deploy AI to help the agency replace employees. Documents previously obtained by ProPublica show DOGE officials proposed in March consolidating the benefits claims department by relying more on AI.

And the government’s contractors are paying attention. After Lavingia posted his code, he said he heard from people trying to understand how to keep the money flowing.

“I got a couple DMs from VA contractors who had questions when they saw this code,” he said. “They were trying to make sure that their contracts don’t get cut. Or learn why they got cut.

“At the end of the day, humans are the ones terminating the contracts, but it is helpful for them to see how DOGE or Trump or the agency heads are thinking about what contracts they are going to munch. Transparency is a good thing.”

If you have information about the misuse or abuse of AI within government agencies, contact Brandon Roberts, an investigative journalist on the news applications team with a wealth of experience using and dissecting artificial intelligence. He can be reached on Signal @brandonrobertz.01 or by email at brandon.roberts@propublica.org.

If you have information about the VA that we should know about, contact reporter Vernal Coleman on Signal, vcoleman91.99, or via email, vernal.coleman@propublica.org, and Eric Umansky on Signal, Ericumansky.04, or via email, eric.umansky@propublica.org.

by Brandon Roberts, Vernal Coleman and Eric Umansky

Judith Shaw: Upended

3 months 1 week ago

Bruno David Gallery is pleased to present Upended, a sculpture installation by multi-disciplinary artist Judith Shaw. This is Shaw’s second solo exhibition with Bruno David Gallery. ‘Upended’ is part of […]

The post Judith Shaw: Upended appeared first on Explore St. Louis.

Myranda Levins

McHale, Susanne

3 months 1 week ago
Susanne McHale, a loved and admired wife and mother, passed away on May 24, 2025. Susanne was born on September 11, 1961, and in her 63 years, lived a full life.