How to Steal an AI Model Without Stealing Anything


Artificial intelligence can be surprisingly stealable – as long as you can somehow sniff out the model’s electromagnetic signature. Although they have repeatedly emphasized that they do not, in fact, want to help people attack neural networks, researchers from North Carolina State University have described such a technique in a new paper. All they needed was an electromagnetic probe, several pre-trained, open-source AI models, and a Google Edge Tensor Processing Unit (TPU). Their method involves analyzing electromagnetic radiation while the TPU chip is actively running inference.
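To make that setup concrete: a raw side-channel capture is just a long waveform that has to be cleaned up and segmented before any analysis can happen. The sketch below is a hypothetical illustration of that preprocessing step in Python; the sampling rate, filter cutoffs, threshold, and file name are all illustrative assumptions, not values from the paper.

```python
# Hypothetical preprocessing of a raw electromagnetic capture:
# band-limit the waveform, then chop it into per-inference segments
# using a simple amplitude threshold. All constants are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1_000_000  # assumed probe sampling rate, in Hz

def bandlimit(trace, low=1_000, high=200_000):
    # 4th-order Butterworth band-pass to suppress out-of-band noise.
    b, a = butter(4, [low, high], btype="band", fs=FS)
    return filtfilt(b, a, trace)

def split_bursts(trace, thresh=None):
    # Activity bursts (the chip actually computing) sit well above the
    # noise floor; segment wherever the envelope crosses a threshold.
    # Assumes the capture starts and ends in an idle state.
    env = np.abs(trace)
    thresh = thresh if thresh is not None else 5 * np.median(env)
    active = env > thresh
    edges = np.flatnonzero(np.diff(active.astype(int)))
    return [trace[s:e] for s, e in zip(edges[::2], edges[1::2])]

raw = np.load("tpu_capture.npy")         # placeholder capture file
segments = split_bursts(bandlimit(raw))  # one segment per inference
```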

“It’s very expensive to build and train a neural network,” said study author and NC State Ph.D. student Ashley Kurian in a call with Gizmodo. “It’s a piece of intellectual property that a company owns, and it takes a significant amount of time and computing resources. For example, ChatGPT—it’s made of billions of parameters, which is kind of the secret. When someone steals it, ChatGPT is theirs. You know, they don’t have to pay for it, and they could also sell it.”

Theft is already a major concern in the AI world. Usually, though, it runs the other way around, with AI developers training their models on copyrighted works without permission from their human creators. That overwhelming pattern is sparking lawsuits and even tools to help artists fight back by “poisoning” art generators.

“The electromagnetic data from the sensor essentially gives us a ‘signature’ of the AI’s processing behavior,” Kurian said in a statement, calling it “the easy part.” But to decipher the model’s hyperparameters – its architecture and layer details – they had to compare the electromagnetic data with data captured while other AI models ran on the same kind of chip.
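As a rough illustration of that comparison step (not the researchers’ actual pipeline), here is a minimal Python sketch that matches a captured electromagnetic trace against reference signatures recorded while known models ran on the same kind of chip. The file names, labels, and correlation-based matching are assumptions made for the example.

```python
# A minimal sketch of the signature-matching idea: compare a captured
# trace against reference traces from known models on the same chip.
import numpy as np

def normalize(trace: np.ndarray) -> np.ndarray:
    # Zero-mean, unit-variance scaling so traces from different
    # capture sessions can be compared on an equal footing.
    return (trace - trace.mean()) / trace.std()

def best_match(unknown: np.ndarray, references: dict[str, np.ndarray]) -> str:
    # Pick the reference signature with the highest Pearson correlation
    # to the unknown trace (assumes traces are aligned and equal length).
    unknown = normalize(unknown)
    scores = {
        label: float(np.corrcoef(unknown, normalize(ref))[0, 1])
        for label, ref in references.items()
    }
    return max(scores, key=scores.get)

# Hypothetical usage with previously recorded reference traces:
references = {
    "conv2d_3x3_32f": np.load("ref_conv2d_3x3_32f.npy"),  # placeholder files
    "dense_128": np.load("ref_dense_128.npy"),
}
captured = np.load("unknown_layer_trace.npy")
print(best_match(captured, references))
```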

By doing this, they “were able to determine the architecture and specific characteristics – known as layer details – we would need to make a copy of the AI model,” said Kurian, who added that they could do so with “99.91% accuracy.” To pull this off, the researchers had hands-on access to the device both for probing and for running other models. They also worked directly with Google to help the company determine how attackable its chips are.
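Once those layer details are recovered, rebuilding an architectural copy is comparatively mechanical. Below is a minimal sketch, using TensorFlow/Keras, of what that rebuilding step could look like; the extracted_layers spec format is a hypothetical stand-in for whatever the analysis actually recovers, and the copy’s weights would still have to be obtained separately (for example, by retraining).

```python
# A minimal sketch of rebuilding an architecture from recovered
# per-layer details. The spec format below is hypothetical.
import tensorflow as tf

extracted_layers = [
    {"type": "conv2d", "filters": 32, "kernel": 3, "activation": "relu"},
    {"type": "conv2d", "filters": 64, "kernel": 3, "activation": "relu"},
    {"type": "dense", "units": 10, "activation": "softmax"},
]

def rebuild(specs, input_shape=(96, 96, 3)):
    model = tf.keras.Sequential([tf.keras.Input(shape=input_shape)])
    flattened = False
    for spec in specs:
        if spec["type"] == "conv2d":
            model.add(tf.keras.layers.Conv2D(
                spec["filters"], spec["kernel"], activation=spec["activation"]))
        elif spec["type"] == "dense":
            if not flattened:
                # Conv output is 3-D; flatten once before the dense head.
                model.add(tf.keras.layers.Flatten())
                flattened = True
            model.add(tf.keras.layers.Dense(
                spec["units"], activation=spec["activation"]))
    return model

clone = rebuild(extracted_layers)
clone.summary()  # same shape as the victim model; weights still untrained
```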

Kurian speculates that capturing models running on smartphones, for example, might also be possible – but their highly compact design would inherently make it trickier to monitor the electromagnetic signals.

“Side-channel attacks on edge devices are nothing new,” Mehmet Sencan, a security researcher at AI standards nonprofit Atlas Computing, told Gizmodo. But this particular technique of “extracting entire model architecture hyperparameters is significant.” Because AI hardware “performs inference in plaintext,” Sencan explained, “anyone deploying their models on edge or in any server that is not physically secured would have to assume their architectures can be extracted through extensive probing.”


