In the face of AI-powered surveillance, we need decentralized confidential computing

The following is a guest post by Yannik Schrade, CEO and Co-founder of Arcium.

When Oracle CTO Larry Ellison shared his vision for a global network of AI-powered surveillance that would keep citizens on their “best behavior”, critics were quick to draw comparisons to George Orwell’s 1984 and describe his business pitch as dystopian. Mass surveillance is a breach of privacy, has negative psychological effects, and intimidates people from engaging in protest.

But what is most disturbing about Ellison’s vision for the future is that AI-powered mass surveillance is already a reality. During this year’s Summer Olympics, the French government contracted four tech companies – Videtics, Orange Business, ChapsVision and Wintics – to conduct video surveillance across Paris, using AI-powered analytics to monitor behavior and alert security services.

The Growing Reality of AI-Powered Mass Surveillance

This controversial policy was made possible by legislation passed in 2023 permitting newly developed AI software to analyze data on the public. While France is the first country in the European Union to legalize AI-powered surveillance, video analytics is nothing new.

The UK government first installed CCTV in cities during the 1960s, and as of 2022, 78 out of 179 countries surveyed were using AI for public facial recognition systems. Demand for this technology is only expected to grow as AI advances enable more accurate, larger-scale monitoring.

Historically, governments have leveraged technological advancements to upgrade mass surveillance systems, oftentimes contracting private companies to do the dirty work for them. In the case of the Paris Olympics, tech companies were empowered to test their AI models at a large-scale public event, gaining access to information on the location and behavior of millions of individuals attending the games and going about their day-to-day lives in the city.

Privacy vs. Public Safety: The Ethical Dilemma of AI Surveillance

Privacy advocates like myself argue that video monitoring inhibits people from living freely and without anxiety. Policymakers who deploy these tactics counter that they serve public safety; surveillance can also keep authorities in check, for example by requiring police officers to wear body cams. The question is not only whether tech firms should have access to public data in the first place, but also how much sensitive information can be safely stored and transferred between multiple parties.

This brings us to one of the biggest challenges of our generation: the storage of sensitive information online and how that data is managed between different parties. Whatever the intention of the governments or companies gathering private data through AI surveillance, whether for public safety or smart cities, there needs to be a secure environment for data analytics.

Decentralized Confidential Computing: A Solution to AI Data Privacy

The movement for Decentralized Confidential Computing (DeCC) offers a vision of how to address this issue. Many AI training models, Apple Intelligence being one example, use Trusted Execution Environments (TEEs), which rely on a supply chain with single points of failure requiring third-party trust, from manufacturing to the attestation process. DeCC aims to remove those single points of failure, establishing a decentralized and trustless system for data analytics and processing.

Further, DeCC could enable data to be analyzed without decrypting sensitive information. In theory, a video analytics tool built on a DeCC network could alert on a security threat without exposing sensitive information about the recorded individuals to the parties monitoring with that tool.

There are a number of decentralized confidential computing techniques being tested at the moment, including Zero-knowledge Proofs (ZKPs), Fully Homomorphic Encryption (FHE), and Multi-Party Computation (MPC). All of these methods aim to do essentially the same thing: verify essential information without disclosing sensitive data from any party.

MPC has emerged as a frontrunner for DeCC, enabling transparent settlement and selective disclosure with the greatest computational power and efficiency. MPC enables the construction of Multi-Party eXecution Environments (MXEs): virtual, encrypted execution containers in which any computer program can be run in a fully encrypted and confidential way.
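To make the MPC idea concrete, the additive secret-sharing technique that underlies many MPC protocols can be sketched in a few lines of Python. This is a toy illustration only, not the protocol any particular DeCC network uses: the function names and the modulus are assumptions, and real deployments use hardened, authenticated protocols with cryptographically secure randomness.

```python
# Toy sketch of MPC via additive secret sharing.
# Illustrative only; real protocols use secure randomness and authentication.
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime


def share(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n random shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list[int]) -> int:
    """Recombine shares; any subset smaller than all of them reveals nothing."""
    return sum(shares) % PRIME


# Three cameras each hold a private count; none learns the others' inputs.
counts = [12, 7, 30]
all_shares = [share(c, 3) for c in counts]

# Party i locally adds the i-th share of every input (no decryption needed).
party_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(3)]

# Only the aggregate is revealed when the partial sums are combined.
assert reconstruct(party_sums) == sum(counts)
```

Each individual share is a uniformly random number, so a single party's view leaks nothing about any input; only the final combined result, here an aggregate count, is ever revealed.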

In this context, MXEs enable both training over highly sensitive, isolated encrypted data and inference using encrypted data and encrypted models. In practice, facial recognition could be performed while keeping the underlying data hidden from the parties processing it.

Analytics gathered from that data could then be shared between the relevant parties, such as security authorities. Even in a surveillance-based environment, it becomes possible to at least introduce transparency and accountability into the surveillance being performed while keeping most data confidential and protected.

While decentralized confidential computing is still in its developmental stages, its emergence highlights the risks inherent in trusted systems and offers an alternative approach to processing encrypted data. Machine learning is being integrated into just about every sector, from city planning to medicine, entertainment and more.

For each of these use cases, training models rely on user data, and DeCC will be fundamental for ensuring individual privacy and data protection going forward. In order to avoid a dystopian future, we need to decentralize artificial intelligence.

The post In the face of AI-powered surveillance, we need decentralized confidential computing appeared first on CryptoSlate.
