
How Big Tech’s AI arms race threatens data security


Danielle Sheer is chief legal and trust officer at Commvault. Views are the author’s own. 

As the boom in artificial intelligence fuels a frantic rush for data, a handful of the world’s largest tech companies are looking for new sources of information. The purpose: to use that data to train a rapidly growing fleet of large language models that can generate the best answer to any question.

But their quest is raising concerns about privacy, bias and the use of consumer data.

Some data security companies have already begun to use AI to glean insights from their customers’ data that they then can sell back to those customers as a service. Some have even indicated a willingness to partner with Big Tech companies to mine the vast troves of information these security companies are supposed to be protecting, information entrusted to them by organizations across all sectors, from start-ups to giants.

These ventures can seem valuable. Turning AI algorithms loose on decades of anonymized and encrypted medical research and healthcare information, for instance, could lead to therapies and cures for diseases through pattern matching beyond human ability. Modeling global weather patterns and agricultural practices along with demographics, distribution systems and economic policies could make progress on ending hunger and malnutrition. 

This kind of data mining, however, poses an existential threat to the integrity of the data security industry.


This industry exists to secure our customers’ vital records — full stop. It helps ensure organizations can get back up and running smoothly in the event of a breach. Should our industry be handing over customer data to the latest AI projects, just because we have it?


Take healthcare. To help them comply with regulations, organizations typically contract with a third-party vendor to back up all their records, including patient medical histories. If that data is shared with another vendor to train an AI model, it exposes the information to risk. A single cybersecurity breach, like the one Change Healthcare experienced earlier this year, can compromise the personally identifiable information of 100 million people.

Then there’s the issue of consent. When someone grants permission to share their patient records with doctors and insurers, they’re probably not thinking they’re donating their medical history to AI researchers. But if a healthcare organization supplies information, even anonymously, to an AI algorithm, is that a violation of privacy? Should that require separate consent?

This is where regulations need to improve, ensuring that enterprises and institutions are transparent with consumers about exactly how their data will be used, and giving consumers an opportunity to opt out.

Consider how safety regulations arose for other important industries. In the late 1800s, the use of electricity spread rapidly, but so did building fires. A group of industry experts crafted the first National Electrical Code, which helped make systems safer with standardized guidelines for wiring methods and materials. 

Less than a decade later, unsanitary practices in the meatpacking industry, famously portrayed in Upton Sinclair’s The Jungle, helped spur the passage of the Pure Food and Drug Act of 1906, leading to the creation of the Food and Drug Administration. And in the 1950s, a mid-air collision between two commercial airplanes that killed everyone aboard led to the creation of the Federal Aviation Administration to oversee civil aviation safety.


We’ve never done that for software, which has become as much a part of our lives as electricity, food and air travel. Software is deployed all over the world and underpins everything, from how we do our jobs and get our groceries to the safety and function of our critical infrastructure. And yet we have no comprehensive, peer-reviewed set of rules for how the industry should behave. We need one.

Perhaps we need a quasi-governmental agency made up of global privacy experts and tech leaders — activists, regulators and practitioners — who would work together to create safeguards for software, as the electrical, food and aviation industries have done.

We have many pressing problems that technology can help solve. But AI is a cutting-edge technology that we still need to understand, and one for which we should build in proper safeguards.

Let’s learn from history and not wait for a crisis or a gruesome exposé before we get serious about data security.


