In Big Election Year, A.I.’s Architects Move Against Its Misuse

Anthropic, OpenAI, Google, Meta and other key developers are acting to prevent the technology from threatening democracies, even as their tools become more powerful.

Artificial intelligence companies have been at the vanguard of developing the transformative technology. Now they are also racing to set limits on how A.I. is used in a year stacked with major elections around the world.

Last month, OpenAI, the maker of the ChatGPT chatbot, said it was working to prevent abuse of its tools in elections, partly by forbidding their use to create chatbots that pretend to be real people or institutions. In recent weeks, Google also said it would limit its A.I. chatbot, Bard, from responding to certain election-related prompts to avoid inaccuracies. And Meta, which owns Facebook and Instagram, promised to better label A.I.-generated content on its platforms so voters could more easily discern what information was real and what was fake.

On Friday, Anthropic, another leading A.I. start-up, joined its peers by prohibiting its technology from being applied to political campaigning or lobbying. In a blog post, the company, which makes a chatbot called Claude, said it would warn or suspend any users who violated its rules. It added that it was using tools trained to automatically detect and block misinformation and influence operations.

“The history of A.I. deployment has also been one full of surprises and unexpected effects,” the company said. “We expect that 2024 will see surprising uses of A.I. systems — uses that were not anticipated by their own developers.”

The efforts are part of a push by A.I. companies to get a grip on a technology they popularized as billions of people head to the polls. At least 83 elections around the world, the largest concentration for at least the next 24 years, are anticipated this year, according to Anchor Change, a consulting firm. In recent weeks, people in Taiwan, Pakistan and Indonesia have voted, with India, the world’s biggest democracy, scheduled to hold its general election in the spring.

How effective the restrictions on A.I. tools will be is unclear, especially as tech companies press ahead with increasingly sophisticated technology. On Thursday, OpenAI unveiled Sora, a technology that can instantly generate realistic videos. Such tools could be used to produce text, sounds and images in political campaigns, blurring fact and fiction and raising questions about whether voters can tell what content is real.
