SCC: Confidential Computing. Why Bother?
This is the second instalment of our Super Protocol in-depth series. A previous article explained how big tech hijacked the term “cloud” and why Web3 is about to reinstate its original meaning.
Today we’ll ponder the concept of Confidential Computing. First of all, what do these two words mean together? In particular, isn’t it obvious that data (meaning anything really: a picture, a birth date, an account number, its balance, or even the days since it was last active) should be confidential and protected, so why stress “computing”?
Sometimes asking obvious questions might yield interesting results, so bear with us on this one. In general, data can be in three states:
- at rest: literally nothing happens here — it just sits somewhere on a hard drive
- in motion: transferred from point A of a network to point B
- in use: being manipulated somehow
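The gap between these states can be shown in a few lines of code. The sketch below is purely illustrative: it uses a toy XOR cipher from the Python standard library (never use this as real encryption), and the `record` contents are made up. The point is that data can stay encrypted at rest and in motion, but the instant an application computes on it, it is plaintext in memory — “in use”.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only -- not real cryptography.
    return bytes(b ^ k for b, k in zip(data, key))

record = b"account=0xabc;balance=100"

# At rest: ciphertext sits on disk, the key is kept elsewhere.
storage_key = secrets.token_bytes(len(record))
at_rest = xor_bytes(record, storage_key)

# In motion: re-encrypted under a transport/session key before sending.
plaintext = xor_bytes(at_rest, storage_key)  # must decrypt to work on it...
session_key = secrets.token_bytes(len(plaintext))
in_motion = xor_bytes(plaintext, session_key)

# In use: to actually compute on the data, the application holds it
# decrypted in memory -- this is the gap confidential computing closes.
balance = int(plaintext.split(b"balance=")[1])
```

Notice that both the storage and transport steps protect the data only while no one is computing on it; the decryption in the middle is unavoidable for an ordinary application.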
You’ll meet these words quite often in most articles on confidential computing and even in our whitepaper!
The important part here is that in any working system, the data is almost never at rest. Once it’s accessed by any application (even to read), it is considered to be in use since some entity is manipulating it in some way.
For example, you’ve made a transaction, so the algorithm has to take two account addresses to change their balances accordingly (data in use) and publish new balances to the network (data in motion).
Each point of this journey has multiple potential angles for a hacker attack. Most efforts to protect data have focused on two of these states: at rest and in motion. The “in use” state hasn’t been left completely unattended either, but only recent advances in technology have made it possible to protect data in all three states fully.
That is how the term “confidential computing” came to be! It points to a specific set of practices focused on protecting the data while it’s being used (computed).
How does confidential computing work? Since the processor does all the heavy lifting in terms of hardware, it’s imperative to prevent any malicious attempts to tamper with its inner workings. Confidential computing achieves this by creating separate secured “enclaves” — isolated regions of memory and execution, protected by the processor, that only authorised applications can access and use.
This way, no unauthorised application can access the enclave, perform computations inside it, or even learn anything about what’s happening there — all enforced at the hardware level. The whole thing is called a Trusted Execution Environment (we’ll cover it in-depth in the next article of this series).
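To make the idea concrete, here is a minimal software simulation of what a TEE provides. Everything in it is hypothetical: `SimulatedEnclave`, `attest`, and `verify_quote` are illustrative names, not a real SDK (real ones include the Intel SGX SDK, Open Enclave, and AWS Nitro Enclaves), and an HMAC stands in for the hardware-rooted signature a real CPU would produce. The pattern it shows is real, though: the host loads code into the enclave, a client verifies a signed measurement of that code (attestation), and only then provisions secrets, which never leave the enclave — only results do.

```python
import hashlib
import hmac
import secrets

class SimulatedEnclave:
    """Toy model of a TEE enclave: code and data sealed behind fixed
    entry points, plus a signed measurement (attestation quote).
    Real TEEs (Intel SGX/TDX, AMD SEV, Arm CCA) enforce this in hardware."""

    # Stands in for a key fused into the CPU by the hardware vendor.
    _VENDOR_KEY = secrets.token_bytes(32)

    def __init__(self, code: bytes):
        self._secret_data = b""
        # "Measurement": a hash of the code loaded into the enclave.
        self.measurement = hashlib.sha256(code).hexdigest()

    def attest(self) -> tuple[str, str]:
        # A quote: the measurement signed with the hardware-protected key.
        sig = hmac.new(self._VENDOR_KEY, self.measurement.encode(),
                       hashlib.sha256).hexdigest()
        return self.measurement, sig

    def provision_secret(self, data: bytes) -> None:
        # Called only after the client has verified the attestation quote.
        self._secret_data = data

    def run(self) -> int:
        # Computation happens inside; only the result leaves the enclave.
        return sum(self._secret_data)

def verify_quote(measurement: str, sig: str, expected: str) -> bool:
    # The client checks both the signature and that the measured code
    # is exactly the code it expected to be running.
    good_sig = hmac.compare_digest(
        sig,
        hmac.new(SimulatedEnclave._VENDOR_KEY, measurement.encode(),
                 hashlib.sha256).hexdigest())
    return good_sig and measurement == expected

enclave = SimulatedEnclave(code=b"balance-update-v1")
meas, sig = enclave.attest()
if verify_quote(meas, sig, hashlib.sha256(b"balance-update-v1").hexdigest()):
    enclave.provision_secret(bytes([1, 2, 3]))
    result = enclave.run()
```

The key design point is the ordering: secrets are provisioned only *after* attestation succeeds, so a tampered enclave (whose measurement would differ) never sees the data at all.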
With confidential computing available, it’s much safer to work with any kind of data that might otherwise be at risk, such as personal data, medical records, financial data, sensitive information, ownership records, etc.
Confidential computing enables collaboration and new developments for industries that previously faced high security risks and regulatory problems. For example, medical data is heavily regulated: while it could help unlock new ways to diagnose and treat diseases, regulatory barriers make it almost impossible to aggregate and process the necessary amounts of data in one place.
For Web3 projects, confidential computing could mean a much safer infrastructure to execute code and process data off-chain, potentially eliminating entire classes of attacks.
Why doesn’t every app and cloud provider just switch to a confidential computing model? First, there are still billions of apps that do not require that level of security. Second, and most important: confidential computing requires specific hardware that is produced by only a few vendors in limited quantities.
That’s where Super Protocol comes in — we’re building a platform that would enable confidential computing providers and make their services more visible and accessible to developers. By doing this, we hope to advance the next wave of Web3 applications while making them much safer for end users.