
2024 Proved It’s Possible to Regulate AI


Almost all of the big AI stories this year were about how fast the technology is advancing, the harm it’s causing, and speculation about what the near future might hold. But 2024 also saw governments make major strides in regulating algorithmic systems. Here’s a breakdown of the most important AI legislation and initiatives from the past year at the state, federal, and international levels.

State

US state lawmakers took the lead on AI regulation in 2024, introducing hundreds of bills. Some had modest goals, such as creating study committees, while others would have held AI developers liable if their creations caused serious harm to people. Most of the bills failed, but several states passed meaningful laws that could serve as models for other states or for Congress (assuming Congress ever gets back to business).

As AI-generated deepfakes spread during the run-up to the election, politicians in both parties got behind anti-deepfake laws. More than 20 states now ban deceptive AI-generated ads in the weeks before an election. Bills aimed at cracking down on AI-generated porn, particularly images of children, also received strong bipartisan support in states including Alabama, California, Indiana, North Carolina, and South Dakota.

Unsurprisingly, as the home of the tech industry, California saw some of the most ambitious AI proposals. One closely watched bill would have forced AI developers to take safety precautions and held companies liable for catastrophic harms caused by their systems. The bill passed both legislative chambers amid a fierce lobbying effort but was ultimately vetoed by Governor Gavin Newsom.

Newsom did, however, sign more than a dozen other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage determinations are fair and equitable. Another requires generative AI developers to offer tools that label content as AI-generated. And a pair of bills prohibit the unauthorized distribution of a dead person’s AI-generated likeness and mandate that agreements for the AI likenesses of living people clearly specify how the content will be used.

Colorado passed a first-of-its-kind law in the US requiring companies that develop and use AI systems to take steps to ensure those tools are not discriminatory. Consumer advocates have called the law an important baseline, and similar bills are likely to be debated in other states in 2025.

And, in a middle finger to our future robot overlords and the planet, Utah enacted a law that prohibits any governmental entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other nonhuman things.

Federal

Congress talked a lot about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulation. But when it came to actually passing legislation, federal lawmakers did very little.

Federal agencies, on the other hand, were busy all year working toward the goals set out in President Joe Biden’s 2023 executive order on AI. And several regulators, most notably the Federal Trade Commission and the Department of Justice, cracked down on misleading and harmful AI practices.

The agencies’ work to implement the executive order was not flashy or headline-grabbing, but it laid important groundwork for the governance of public-sector AI. For example, federal agencies began hiring AI talent and developing standards for building models responsibly and minimizing harm.

And, in a major step toward increasing public understanding of how the government uses AI, the Office of Management and Budget pushed (most of) its fellow agencies to disclose critical information about the AI systems they use that may affect people’s rights and safety.

On the enforcement side, the FTC’s Operation AI Comply targeted companies using AI in deceptive ways, such as to write fake reviews or offer legal advice, and it sanctioned the AI gun-detection company Evolv for misrepresenting what its product can do. The agency also settled an investigation into the facial recognition company IntelliVision, which it accused of falsely claiming that its technology was free of gender and racial bias, and it banned the pharmacy chain Rite Aid from using facial recognition for five years after an investigation determined that the company was using the tools to discriminate against shoppers.

Meanwhile, the DOJ joined state attorneys general in a lawsuit accusing the real estate company RealPage of running a massive algorithmic pricing scheme that raised rents across the country. It also won several antitrust lawsuits against Google, including one concerning the company’s monopoly over internet search, which could significantly shift the dynamics of the AI search industry.

Around the world

In August, the European Union’s AI Act came into force. The law, which is already serving as a model for other jurisdictions, requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to mitigate risk and meet certain standards around training-data quality and human oversight. It also bans the use of certain AI systems entirely, such as algorithms that could be used to assign citizens social scores that are then used to deny them rights and privileges.

In September, China issued a major AI safety governance framework. Like the AI risk management framework published by the US National Institute of Standards and Technology, it is not binding, but it establishes common standards for AI developers to follow when identifying and mitigating risks in their systems.

One of the most interesting pieces of AI legislation comes from Brazil. In late 2024, the country’s senate passed a sweeping AI safety bill. It faces a tough road ahead, but if enacted, it would create an unprecedented set of protections for copyright holders whose work is used to train AI systems. Developers would have to disclose which copyrighted material was included in their training data, and creators would have the power to prohibit the use of their work for training AI systems or to negotiate compensation agreements based, in part, on the size of the AI developer and how the material would be used.

Like the EU’s AI Act, Brazil’s proposed law would also require high-risk AI systems to follow additional safety protocols.


