
Unexpected consequences: US election results herald reckless AI development



While the 2024 US election focused on issues such as the economy and immigration, its quiet impact on AI policy may prove the most transformative. Without a single debate question or major campaign promise about AI, voters unwittingly tipped the scales in favor of accelerationists: those who advocate rapid AI development with minimal regulatory hurdles. The consequences of this acceleration are profound, heralding a new era of AI policy that prioritizes rapid innovation over caution and signals a decisive shift in the debate between AI's risks and rewards.

President-elect Donald Trump’s pro-business stance leads many to believe his administration will favor those developing and marketing AI and other advanced technologies. His party platform has little to say about AI directly. However, it emphasizes a deregulatory agenda, specifically targeting what it termed “hard-left thinking” in existing regulatory frameworks. In contrast, the platform supported AI development aimed at fostering free speech and “human flourishing,” calling for policies that enable AI innovation while opposing measures perceived to hinder technological progress.

Early indications, based on appointments to key government positions, underscore this direction. However, a larger story is unfolding: the resolution of an intense debate over AI’s future.

The great debate

Since ChatGPT appeared in November 2022, there has been a fierce debate in the AI field between those who want to accelerate AI development and those who want to slow it down.

Famously, in March 2023 the latter group proposed a six-month pause in the development of the most advanced AI systems, warning in an open letter that AI tools present “profound risks to society and humanity.” This letter, organized by the Future of Life Institute, was prompted by OpenAI’s release of the GPT-4 large language model (LLM), a few months after ChatGPT launched.

The letter was initially signed by more than 1,000 technology leaders and researchers, including Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang, podcaster Lex Fridman, and AI pioneers Yoshua Bengio and Stuart Russell. The number of signatories eventually grew to more than 33,000. Collectively, they became known as “doomers,” a term capturing their concerns about the potential existential risks of AI.

Not everyone agreed. OpenAI CEO Sam Altman did not sign. Neither did Bill Gates and many others. Their reasons for not doing so varied, although many voiced concerns about potential harms from AI. This led to much discussion about the possibility of AI running amok and causing disaster. It became fashionable for many in the AI field to share their assessment of the probability of doom, often referred to by the shorthand equation p(doom). Nevertheless, work on advancing AI did not pause.

For the record, in June 2023 I put my own p(doom) at 5%. That may seem low, but it was not zero. I felt the major AI labs were serious about rigorously testing new models before release and about providing significant guardrails for their use.

Many AI-risk observers rated the danger higher than 5%, some much higher. AI safety researcher Roman Yampolskiy put the probability of AI eliminating humanity at 99%. That said, a study released earlier this year, well before the election and representing the views of more than 2,700 AI researchers, found that “the median prediction for extremely bad outcomes, such as human extinction, was 5%.” Would you board a plane if there were a 5% chance it might crash? This is the dilemma AI researchers and policymakers face.
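The compounding arithmetic behind that plane analogy is worth making explicit: a 5% chance on a single trial grows quickly if the risk is faced repeatedly. A minimal sketch (the trial counts here are purely illustrative, not from the survey):

```python
# Illustrates how a 5% per-trial risk compounds over repeated,
# independent trials: P(at least one failure in n) = 1 - (1 - p)^n.

def cumulative_risk(p: float, n: int) -> float:
    """Probability of at least one failure in n independent trials."""
    return 1 - (1 - p) ** n

p = 0.05  # the 5% median estimate cited in the survey
for n in (1, 10, 50):
    print(f"{n} trial(s): {cumulative_risk(p, n):.1%}")
```

Under independence, ten such "flights" carry roughly a 40% chance of at least one catastrophe, which is why even a seemingly small 5% figure alarms researchers.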

The case for speed

Others have been openly dismissive of doom concerns, pointing instead to what they see as the technology’s enormous upside. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (a professor of computer science and engineering at the University of Washington and author of “The Master Algorithm”). They argue, instead, that AI is part of the solution. As Ng has said, there are existential dangers, such as climate change and future pandemics, and AI can be part of how these are addressed and mitigated.

Ng argued that AI development should not be paused but should instead move faster. This utopian view of technology is echoed by others known as “effective accelerationists,” or “e/acc” for short. They argue that technology, and AI in particular, is not the problem but the answer to most, if not all, of the world’s problems. Y Combinator CEO Garry Tan, along with other prominent Silicon Valley leaders, added “e/acc” to their usernames on X to show their commitment to the vision. Reporter Kevin Roose of the New York Times captured the ethos of these accelerationists by saying they favor a “full throttle, no brakes” approach.

A Substack post from a few years ago laid out the principles underlying effective accelerationism. Here is the summary the authors provide at the end of the post, along with commentary from OpenAI CEO Sam Altman.

AI full speed ahead

The outcome of the 2024 election may be seen as a turning point, putting the accelerationist vision in a position to shape US AI policy for the next several years. For example, the president-elect recently appointed technology entrepreneur and venture capitalist David Sacks as “AI czar.”

Sacks, an outspoken critic of AI regulation and a proponent of market-driven innovation, brings his experience as a technology investor to the role. He is one of the leading voices in the AI industry, and much of what he has said about AI aligns with the accelerationist themes expressed by the incoming administration’s platform.

In response to the Biden administration’s 2023 AI executive order, Sacks tweeted: “The U.S. political and fiscal situation is hopelessly broken, but we have one unparalleled asset as a country: cutting-edge innovation in AI driven by a completely free and unregulated market for software development. That just ended.” While the extent of Sacks’ influence on AI policy remains to be seen, his appointment signals a shift toward favoring industry self-regulation and rapid innovation.

Elections have consequences

I doubt most of the voting public gave much thought to AI policy when casting their ballots. Nevertheless, in a very tangible way, the accelerationists have won as a consequence of the election, sidelining those advocating that the federal government limit the long-term risks of AI.

As accelerationists chart the way forward, the stakes could not be higher. Whether this era ushers in unparalleled progress or unintended catastrophe remains to be seen. As AI development accelerates, the need for informed public discourse and proactive oversight becomes ever more important. How we navigate this era will define not only technological progress but our collective future.

As a counterbalance to inaction at the federal level, it is possible that one or more states will adopt their own regulations, as has already happened to some extent in California and Colorado. For example, California’s AI safety bills focus on transparency requirements, while Colorado addresses AI discrimination in hiring practices, providing models for state-level governance. Now, all eyes will be on the voluntary testing and self-regulation efforts of Anthropic, Google, OpenAI and other AI model developers.

In summary, the accelerationist victory means fewer restrictions on AI development. This increased speed may well lead to faster innovation, but it also raises the risk of unintended consequences. I am now revising my p(doom) to 10%. What is yours?

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
