
AI companies are trying to have it both ways

This is an edition of The Atlantic Daily, a newsletter that guides you through the day’s top stories, helps you discover new ideas and recommends the best of culture. Sign up here.

Last week, seven tech companies appeared at the White House and agreed to voluntary guardrails on the use of AI. In promising to take these steps, companies are nodding to the potential risks of their creations without pausing their aggressive competition.

First, here are four new stories from The Atlantic:

A convenient gesture

The first time I heard anyone compare Silicon Valley in the 2010s to Florence during the Renaissance, I was sitting in a dingy seminar room in a dorm lobby. I was a college student in the Bay Area at the time, in 2013, and professors and classmates often talked about how we were in a unique period of flowering that would reshape humanity. In some ways it turned out to be true: the tech age, when companies like Twitter and Facebook had just gone public and start-ups abounded, did change things (although that strain of techno-optimism has curdled somewhat in the intervening years).

I thought back to that sentiment this morning as I read Ross Andersen’s new article for the September issue of The Atlantic, which describes OpenAI and its CEO, Sam Altman. You’re about to enter the greatest golden age, Ross overheard Altman tell a group of students. Elsewhere, Altman says that the AI revolution will be different from previous technological changes, and that it will be like a new kind of society. That Altman believes artificial intelligence will reshape the world is clear. How exactly this transformation will play out is less clear. In recent months, as AI tools have achieved widespread use and interest, OpenAI and its competitors have done an interesting dance: They’re ramping up their technology while also warning, many times in apocalyptic terms, of its potential harms.

On Friday, the leaders of seven major AI companies (OpenAI, Amazon, Anthropic, Google, Inflection, Meta, and Microsoft) met with Joe Biden and agreed to a series of voluntary safeguards. The companies have pledged, sometimes in vague terms, to take actions such as releasing information about safety tests, sharing research with academics and governments, reporting vulnerabilities in their systems, and working on mechanisms that tell people when content is generated by artificial intelligence. Many of these are steps the companies were already taking. And because the commitments made at the White House are voluntary, there are no enforceable regulations. Still, they allow the companies and Biden to signal to the public that they are working on AI safety. By accepting these voluntary precautions, these companies are nodding to the possible risks of their creations while sacrificing little in their aggressive competition.

For AI companies, this is a dream scenario in which they can ease regulatory pressure by appearing to solve the problem while ultimately continuing business as usual, Albert Fox Cahn, the executive director of the Surveillance Technology Oversight Project, told me in an email. He added that other industries whose products pose safety risks, such as automakers and nuclear power plants, are not left to regulate themselves.

Altman has emerged as a major player in the AI industry, making his mark as both a champion of the technology and a reasonable adult in the room. As Ross reports, the OpenAI CEO went on an international listening tour this spring, meeting with heads of state and lawmakers. In May, he appeared before Congress saying that he wanted AI regulated, which can be seen as both a civically responsible move and a way to shift some of the responsibility onto Congress, which is likely to act slowly. To date, no comprehensive, binding rules have emerged from these conversations and congressional hearings. And the companies continue to grow.

AI leaders keep talking about the risks of their tools. A couple of months ago, AI luminaries including Altman and Bill Gates signed a one-sentence statement that reads: Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. (Altman and other AI builders have invited comparisons to Robert Oppenheimer.) But doomsday warnings also have the effect of making the technology seem quite revolutionary. Last month, my colleague Matteo Wong wrote about how this message is not only alarming but also self-serving: CEOs, like demigods, are wielding transformative technology like fire, electricity, nuclear fission, or a virus that causes a pandemic. You would be a fool not to invest.

There’s another upside for the industry: As my colleague Damon Beres noted in an issue of this newsletter in May, discussing these technologies in vague, existential terms allows Altman, and others discussing the future of AI, to sidestep some of the day-to-day impacts we’re already seeing from the technology. AI is having very real effects now: Chatbots are eroding jobs and reshaping classrooms.

By calling for regulations, Damon added, the heads of these companies can cleverly put the ball in lawmakers’ court. (If Congress takes forever to pass laws, well, at least the industry tried!) Critics have pointed out that one of Altman’s regulatory ideas, a new agency that would oversee the AI industry, could take decades to build. In those decades, AI could become ubiquitous. Others have noted that by suggesting that Congress pass a law requiring AI companies to obtain licenses to operate above a certain capacity, large companies like OpenAI could entrench themselves, potentially making it harder for smaller players to compete.

The tech industry may have learned a lesson from its public-relations disasters of the late 2010s. Instead of testifying after a fiasco, as Mark Zuckerberg did following the Cambridge Analytica debacle, leaders have recently approached Washington and asked for regulation instead. Sam Bankman-Fried, for example, managed to boost his image by charming Washington and appearing dedicated to serious regulation, that is, before FTX collapsed. And after years of lobbying against regulations, Facebook has begun to call for them in recent years.

It’s easy to be cynical about self-imposed guardrails and see them as toothless. But Friday’s pledge acknowledged that there’s work to be done, and the fact that bitter industry rivals have aligned themselves on that fact shows that, at the very least, it’s no longer good publicity to avoid government guardrails entirely. The old way of doing things is no longer so attractive. For now, however, companies may continue to try to have it both ways. As one expert told Matteo, you need to ask yourself: If you think it’s so dangerous, why are you still building it?


Today’s news

  1. Israeli lawmakers have ratified the first piece of a legislative package designed to weaken the country’s Supreme Court after months of protests and repeated warnings from the Biden administration.
  2. Elon Musk has rebranded Twitter as X, replacing the platform’s blue-bird logo.
  3. Russian drones have destroyed grain infrastructure in an attack on Ukrainian ports along the Danube, a key export route.

Evening reading

Little people struggling to lift a placard
Ben Kothe / The Atlantic

The corporate tragedy of the Americas

By Caitlin Flanagan

I was a child soldier in the California grape strikes, my work taking place out of the Shattuck Avenue cooperative in Berkeley. There I was, maybe 7 or 8 years old, shaking a coin-filled Folgers coffee can at the United Farm Workers table that my mom manned two or three afternoons a week. I did most of my work alongside her, but several times an hour I would do what child soldiers have always done: serve in a capacity that only a very small person could. I’d go out into the parking lot and slip between cars to make sure no one left without donating a few coins or signing a petition. I’d appear next to a driver’s window and give the can an aggressive rattle. I wasn’t Jimmy Hoffa, but I wasn’t playing any games either.

My parents were old-school leftists, born in the 1920s and children during the Great Depression. They would never, ever cross a picket line, fail to participate in a boycott, lose sight of the strikers’ need for money when they weren’t getting wages. My parents would never suggest that poverty was caused by a lack of intelligence or effort. We weren’t a religious family (to say the least), but I had a catechism: a worker is powerless; many workers can bring a company to its knees.

Read the full article.

More from The Atlantic

Cultural break

Mushroom cloud
Harold M. Lambert/Getty

Read. Claude Glass as Night Song, a new poem by Janelle Tan.

I wanted your chest to beat / in my chest, / so I couldn’t look at you.

Watch. Oppenheimer (in theaters now) is everywhere, including in people’s nightmares.

Play our daily crossword puzzle.


Speaking of new-technology panic, my colleague Jacob Stern has a funny and fascinating article about initial reactions to PowerPoint. Apparently, in 2003, some found the presentation technology sinister. Jacob describes a technological scare of the first order that has now been almost completely forgotten: the belief that PowerPoint, the most unnerving member of the Office software suite, that universal metonym for soporific meetings, could be evil. I haven’t made a PowerPoint in years (a quick browse through my files suggests that my last attempt at a presentation was for my sister’s graduation, in 2020: I found one file with a single slide that said Good job, Annie in Arial font, and another with a photo of her and the family dog). I hardly ever think about PowerPoint, so it was interesting to read about an era when people regarded it with alarm. How times change!


Katherine Hu contributed to this newsletter.

When you buy a book using a link in this newsletter, we receive a commission. Thank you for supporting The Atlantic.
