Artificial intelligence has its limits – which is why human-sourced data can help prevent AI model collapse


My, how quickly the tables turn in the tech world. Just two years ago, AI was hailed as “the next revolutionary technology to rule them all.” Now, instead of reaching Skynet levels and taking over the world, AI is, ironically, degrading.

Once positioned as the engine of a new era of intelligence, AI is now tripping over its own code, struggling to deliver on the intelligence it promised. But why, exactly? The simple fact is that we are starving AI of the one thing that makes it truly intelligent: human-generated data.

To feed these data-hungry models, researchers and organizations have increasingly turned to synthetic, AI-generated data. While this practice has long had a place in AI development, over-reliance on it is now pushing us into dangerous territory and slowly degrading the models themselves. And this is not just a minor concern about ChatGPT producing weaker results; the consequences are far more dangerous.

When AI models are trained on their own generated outputs, they tend to propagate errors and introduce noise, leading to a steady decline in output quality. This recursive loop turns the familiar “garbage in, garbage out” cycle into a self-perpetuating problem, greatly reducing the system’s effectiveness. As AI drifts further from human-like understanding and accuracy, it not only degrades performance but also raises serious concerns about the long-term viability of relying on self-generated data to sustain AI development.

But this is not just a technical breakdown; it is a breakdown of data authenticity and integrity, and that creates risks for people and society. The ripple effects could be profound, with errors compounding at scale. When these models lose accuracy and reliability, the consequences can be dire: think medical misdiagnoses, financial losses and even life-threatening accidents.

Another major implication is that AI development could stall entirely, leaving AI systems unable to ingest new knowledge and effectively “stuck in time.” Such stagnation would not only hinder progress but also lock AI into a cycle of diminishing returns, with potentially serious consequences for technology and society.

But, practically speaking, what can businesses do to ensure the safety of their customers and users? Before answering that question, we need to understand how all of this works.

When the model collapses, credibility goes out the window

AI-generated content is proliferating across the web, rapidly seeping into training datasets and, in turn, into the models themselves. And it is happening fast, making it increasingly difficult for developers to filter out anything that is not authentic, human-generated data. In fact, using synthetic content in training can trigger a phenomenon known as “model collapse,” or “model autophagy disorder (MAD).”

Model collapse is the process by which AI systems progressively lose their grip on the true distribution of the data they are meant to model. This typically happens when an AI is trained repeatedly on its own output, leading to a number of problems:

  • Loss of nuance: Models begin to forget outliers and less-represented information that is crucial for a comprehensive understanding of any dataset.
  • Reduced diversity: There is a noticeable decrease in the diversity and quality of the outputs the models produce.
  • Amplification of biases: Existing biases, particularly against underrepresented groups, can be exacerbated as the model loses the nuanced data that would otherwise mitigate them.
  • Generation of nonsensical outputs: Over time, models may start producing outputs that are incoherent or nonsensical.
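
To make the feedback loop concrete, here is a minimal, hypothetical Python sketch (not the setup of any published study) in which a simple Gaussian “model” is repeatedly re-fit to its own synthetic samples. The fitted spread shrinks generation after generation and the tails vanish, mirroring the loss of nuance and reduced diversity described above:

```python
import numpy as np

# Toy illustration of model collapse: a simple Gaussian "model" is repeatedly
# re-fit to data generated by the previous generation of itself. With finite
# samples, estimation error compounds, the learned distribution narrows, and
# the tails (the rare "outliers" real data would contain) disappear.

rng = np.random.default_rng(0)

N_SAMPLES = 100       # small per-generation dataset makes the effect visible
N_GENERATIONS = 200

# Generation 0: "human" data drawn from the true distribution N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=N_SAMPLES)

for gen in range(1, N_GENERATIONS + 1):
    # "Train" the model: estimate mean and std from the current dataset.
    mu, sigma = data.mean(), data.std()
    # "Deploy" the model: the next generation sees only its synthetic output.
    data = rng.normal(loc=mu, scale=sigma, size=N_SAMPLES)
    if gen % 25 == 0:
        # Fraction of mass beyond 2 standard deviations of the TRUE distribution.
        tail = np.mean(np.abs(data) > 2.0)
        print(f"generation {gen:3d}: fitted std = {sigma:.3f}, |x| > 2 fraction = {tail:.3f}")
```

Real language models are far more complex, but the mechanism is the same: finite samples of a model’s own output systematically under-represent rare events, and that error compounds with every generation.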

Case in point: A study published in Nature highlighted the rapid degeneration of language models trained recursively on AI-generated text. By the ninth iteration, the models were producing entirely irrelevant and nonsensical content, demonstrating how quickly data quality and model utility collapse.

Securing the future of AI: Steps businesses can take today

Businesses are in a unique position to shape the future of AI responsibly, and there are clear, practical steps they can take to keep AI systems accurate and trustworthy:

  • Invest in data provenance tools: Tools that trace where each piece of data comes from and how it changes over time give companies confidence in their AI inputs. With clear visibility into data sources, organizations can avoid feeding models unreliable or biased information.
  • Deploy AI-powered filters to detect synthetic content: Advanced filters can catch AI-generated or low-quality content before it reaches training datasets, helping ensure models learn from authentic, human-created data rather than synthetic content that lacks real-world complexity (a minimal filtering sketch follows this list).
  • Partner with trusted data providers: Strong relationships with vetted data providers give organizations a steady supply of authentic, high-quality data. This means AI models get real, nuanced information that reflects actual scenarios, improving both performance and relevance.
  • Promote digital literacy and awareness: By educating teams and customers about the importance of data authenticity, organizations can help people recognize AI-generated content and understand the risks of synthetic data. Building awareness around responsible data use fosters a culture that values accuracy and integrity in AI development.
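
As a rough illustration of where such a filter might sit in a data pipeline, here is a hypothetical Python sketch. The Document type, the synthetic_score field and the 0.3 threshold are all assumptions made for illustration; in practice the score would come from whichever AI-content detector or provenance system an organization actually uses:

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Document:
    text: str
    source: str              # e.g. "licensed-publisher", "web-crawl", "user-submission"
    synthetic_score: float   # 0.0 (likely human) .. 1.0 (likely AI-generated)

def filter_training_corpus(
    docs: Iterable[Document],
    trusted_sources: set[str],
    max_synthetic_score: float = 0.3,
) -> list[Document]:
    """Keep only documents that come from a trusted source AND look human-written."""
    kept = []
    for doc in docs:
        if doc.source not in trusted_sources:
            continue  # provenance gate: unknown origin is not worth the risk
        if doc.synthetic_score > max_synthetic_score:
            continue  # detector thinks this is likely AI-generated; drop it
        kept.append(doc)
    return kept

# Example usage with made-up documents and scores:
corpus = [
    Document("Quarterly report drafted by our analysts...", "licensed-publisher", 0.05),
    Document("As an AI language model, I cannot...", "web-crawl", 0.92),
    Document("Scraped blog post of unclear origin...", "web-crawl", 0.40),
]
clean = filter_training_corpus(corpus, trusted_sources={"licensed-publisher", "web-crawl"})
print(len(clean), "of", len(corpus), "documents kept")
```

The threshold is a deliberate trade-off: set it too low and genuine human writing gets discarded, set it too high and synthetic text leaks into the training set.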

The future of AI depends on getting this right. Businesses have a real opportunity to keep AI grounded in accuracy and integrity. By choosing real, human-sourced data over shortcuts, prioritizing tools that catch and filter out low-quality content, and promoting awareness of digital authenticity, organizations can set AI on a safer, smarter path. Let’s focus on building a future where AI is both powerful and genuinely beneficial to people.

Rick Song is the CEO and co-founder of Persona.



