#AI

Alex Hanna, Emily M. Bender: The AI Con (Hardcover, 2025, HarperCollins Publishers)

A smart, incisive look at the technologies sold as artificial intelligence, the drawbacks and pitfalls …

Google is aware of their responsibility for depriving the news ecosystem of its major source of advertising revenue, and has been experimenting with ways to support existing organizations. However, like many of the efforts put forward by Big Tech, many of their proposals will further entrench AI in the ecosystem, not lessen it. An investigation by 404 Media found that Google News is boosting ripped-off content, slightly altered with LLM outputs, from other sites. Google has responded that they have no problem boosting these articles, stating, "Our focus when ranking content is on the quality of the content, rather than how it was produced." In other words: AI-generated content is A-okay for creating the news.

The AI Con by Alex Hanna and Emily M. Bender

Google: let's mess up news for greed.

Absolutely horrendous.

#AI #ArtificialIntelligence #google #grift #news #capitalism

At the core of AI-for-science hype is the idea that AI is somehow going to accelerate science and help us solve pressing scientific problems much faster. In 2016, AI researcher and Sony executive Hiroaki Kitano proposed a "grand challenge" of designing an AI system that could "make major scientific discoveries in biomedical sciences and that is worthy of a Nobel Prize and far beyond." In 2021, he rebranded this exercise as the Nobel Turing Challenge (a combination of Nobel ambitions and the Turing Test, which we'll discuss in the next chapter) and started a series of workshops to publicize this goal. His vision is an autonomous agent that can "do science" on its own, rapidly scaling the number of scientific discoveries available to humanity.

The absurdist writer Douglas Adams caricatured this kind of wishful thinking perfectly in the late 1970s, with the characters in The Hitchhiker's Guide to the Galaxy who developed a supercomputer to give them the ultimate answer to the ultimate question of life, the universe, and everything. That answer, they learned after generations of waiting, was 42. Of course, such an answer is useless without the corresponding question, and their supercomputer wasn't powerful enough to determine the question. It was powerful enough, however, to design an even bigger computer (the planet Earth, as it happens) that could, given 10 million years, calculate the question. We can't delegate science to machines, because science isn't a collection of answers. It's a set of processes and ways of knowing.

The AI Con by Alex Hanna and Emily M. Bender

Just because you've identified a social problem doesn't mean LLMs or any other kind of so-called AI are a solution. When someone says so, the problem is usually better understood by widening the lens, looking at it in its broader context. As Shankar Narayan, the Tech and Liberty Project director for ACLU of Washington, asked regarding biased recidivism prediction systems: Why are we asking who is most likely to reoffend rather than what do these people need to give them the best chance of not reoffending? Likewise, when someone suggests a robo-doctor, robo-therapist, or robo-teacher, we should ask: Why isn't there enough money for public clinics, mental health counseling, and schools? Text synthesis machines can't fill holes in the social fabric. We need people, political will, and resources.

The AI Con by Alex Hanna and Emily M. Bender

Another dire example involves an algorithm called "nH Predict", used by UnitedHealth Group (the largest health care insurer in the U.S.) to determine the length of stays it would approve for patients in nursing homes and care facilities. In a class-action lawsuit filed in November 2023, the estates of the two named plaintiffs (both deceased at the time of filing) alleged that UnitedHealth kicked them out of care too early, based on nH Predict's output, even as the company knew the system had an error rate of 90 percent. The court filing says that UnitedHealth used this system anyway, counting on the fact that only a tiny group of policyholders appeal such denials, and that the insurer "[banked] on the [elderly] patients' impaired conditions, lack of knowledge, and lack of resources to appeal the erroneous AI-powered decisions." The families of the two plaintiffs spent tens of thousands of dollars paying for care that went uncovered by the insurer. Reporting from Stat News largely confirms the allegations in the lawsuit, namely that after acute health incidents, UnitedHealth aimed at getting elderly patients out of nursing homes and hospitals as fast as possible, even against the advice of their doctors. Moreover, when patients challenged denials, physician medical reviewers were advised by case managers not to add more than 1 percent to the previously approved nursing home stay. And case managers themselves were fired if they strayed from those targets. The executive in charge of the division controlling the algorithm stated on a company podcast, in a comically evil admission, "If [people] go to a nursing home, how do we get them out as soon as possible?"

The AI Con by Alex Hanna and Emily M. Bender

Data workers, for their part, have formed collectives to fight back against platforms and their worst excesses. In Kenya, Daniel Motaung and 150 workers convened in Nairobi in mid-2023 to form the African Content Moderators Union, which is one of the first unions of content moderators and data workers worldwide. Motaung was fired from Sama, the contractor providing Facebook (and, previously, OpenAI) with content moderation and other data work, for his organizing efforts, and is now suing Facebook and Sama in Kenyan court.

The AI Con by Alex Hanna and Emily M. Bender

Unfortunately, visual artists are not unionized across their industry in the same way writers and actors are. But they have been using other tools to try to protect their work from shoddy automated replacements. In a clever, pro-technology Luddite strategy, University of Chicago computer scientists, with the aid of several visual artists, developed two tools to combat ingestion into the datasets that train image synthesis models. One of them, Glaze, is a "defensive" tool that artists can use to protect their work from being mimicked without their consent. It's like a filter atop an image that renders it unusable in model training. Meanwhile, Nightshade is an "offensive" filter that not only renders one particular image unusable, but can actually ruin these models at training time. Like the plants of its namesake, images treated with Nightshade actually poison the dataset a would-be AI model is trained on.

The AI Con by Alex Hanna and Emily M. Bender
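The quote only gestures at how these tools work, so here is a rough, hypothetical sketch of the underlying idea: nudge an image's pixels within a tiny, near-invisible budget so that a feature extractor maps it toward a different concept, which is what makes such images toxic as training data. This is a toy illustration with a made-up linear "feature extractor" in plain numpy, not the actual Glaze or Nightshade algorithms (those use real vision models and much more careful optimization).

```python
# Toy sketch of the data-poisoning idea behind tools like Nightshade (NOT the
# real algorithm): add a small, bounded perturbation to an image so that a
# simple feature extractor "sees" it as a different concept, while the pixels
# barely change. The linear extractor and all values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Pretend 32x32 grayscale artwork, pixel values in [0, 1].
image = rng.random((32, 32))

# Stand-in "feature extractor": a fixed random linear map from pixels to a
# 16-dimensional embedding (real systems use deep networks, not this).
W = rng.normal(size=(16, 32 * 32))
def embed(img):
    return W @ img.ravel()

# Embedding of a decoy concept we want the poisoned image to imitate.
decoy = rng.random((32, 32))
target_embedding = embed(decoy)

# Craft a perturbation with projected gradient steps, keeping every pixel
# change within +/- epsilon so the image still looks unchanged to a human.
epsilon, step, n_steps = 0.03, 0.01, 200
delta = np.zeros_like(image)
for _ in range(n_steps):
    diff = embed(image + delta) - target_embedding    # gap to decoy embedding
    grad = (W.T @ diff).reshape(image.shape)          # gradient of 0.5*||gap||^2
    delta -= step * grad                              # move toward the decoy
    delta = np.clip(delta, -epsilon, epsilon)         # imperceptibility budget
    delta = np.clip(image + delta, 0.0, 1.0) - image  # keep pixels valid

poisoned = image + delta
print("max pixel change:", np.abs(delta).max())
print("distance to decoy embedding before:",
      np.linalg.norm(embed(image) - target_embedding))
print("distance to decoy embedding after: ",
      np.linalg.norm(embed(poisoned) - target_embedding))
```

With a real image model the same principle applies, but the embedding step is a deep network and the optimization is far more involved; the point is only that pixel changes too small for people to notice can be crafted to derail what a model learns from the image.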

I like the idea of using poison against bad AI, meaning AI that furthers the climate catastrophe and inflates executive pay-outs at capitalist or otherwise nefarious companies.

#AI #ArtificialIntelligence

Employers have been turning to media synthesis machines in some of the most sensitive domains, with absolutely dire consequences. In one particularly piquing example, the National Eating Disorders Association (NEDA) attempted to replace their workforce (a set of volunteer and paid coordinators and hotline operators) with a chatbot. This happened after NEDA workers, exhausted from the uptick of work during COVID, had voted to unionize under the moniker of Helpline Associates United. Both paid workers and volunteers at NEDA encountered intense workloads, and, despite working at a place where others receive mental health support, they received very little themselves. Two weeks after unionizing, they were summarily fired for organizing together, a violation of U.S. labor law.

Soon after, a poorly tested chatbot called "Tessa" was brought online. According to its creator, the chatbot was intended to provide a limited number of responses to a small number of questions about issues like body image. But "Tessa" was quickly found to be an impoverished replacement for workers, offering disordered eating suggestions to people calling the hotline. Eating disorder advocate and fat activist Sharon Maxwell documented how the chatbot offered "healthy eating tips," suggesting that she could safely lose one to two pounds a week through counting calories. These tips are the hallmarks of enabling disordered eating. The chatbot was quickly decommissioned, and the NEDA hotline has since been taken completely offline, creating a major gap in mental health services for those struggling with disordered eating. In short, when NEDA tried to replace the work of actual people with an Al system, the result was not doing more with less, but just less, with greater potential for harm.

The AI Con by Alex Hanna and Emily M. Bender

Corporate executives in nearly every industry and mega margin-maximizing consultancies like McKinsey, BlackRock, and Deloitte want to "increase productivity" with AI, which is consultant-speak for replacing labor with technology.

But this promise is highly exaggerated. In the vast majority of cases, AI is not going to replace your job. But it will make your job a lot shittier. What actors and writers are fighting for is a future that doesn't relegate humans to babysitting scriptwriting and acting algorithms, available on call but only paid when the media synthesis machines glitch out. We're already seeing this in domains as diverse as journalism, legal services, and the taxi industry. While executives suggest that AI is going to be a labor-saving device, in reality it is meant to be a labor-breaking one. It is intended to devalue labor by threatening workers with technology that can supposedly do their job at a fraction of the cost.

The AI Con by Alex Hanna and Emily M. Bender