In Detail: Providing Public Access to the Confidential Computing Enclave Without Compromising Its Security

Super Protocol
Apr 7, 2023

If you’re a regular reader of our blog (or follow us on Twitter), you may already know that Super Protocol has launched Testnet Phase 2. Along with the resource provider marketplace (in private beta, so we verify and approve each provider manually, and you can’t join with your own hardware just yet), you can now use SP to host static applications such as websites and make them publicly available. Hosting static code that can be accessed via the web is hardly something to impress an audience with in 2023 (you could do this almost from day one of Web3 with IPFS); however, it is a breakthrough when it comes to the confidential computing side of things. So please, let us elaborate.

We’ve done a lot to explain how confidential computing works and why it is practically impossible for an attacker to gain access to the data unless they have been authorized to do so. The logical question is: if the data inside a secure confidential computing enclave is so hard to reach, how do we use it for the everyday components of modern services, such as an application frontend that millions of users can reach in milliseconds, databases, and business logic? To do that, we have to create a secure channel (tunnel) connecting a server with an app inside the enclave, plus another one for the usual client-server connection.

Chances are, you’ve already heard of the SSL certificates used for secure HTTPS connections. As we tend not to reinvent the wheel and prefer to build on top of existing, time-tested technologies, SP uses TLS (Transport Layer Security) and mTLS (the “m” stands for “mutual”), the same protocols that underpin HTTPS and are used by most modern websites and services.

In confidential computing, only authorized apps holding a proper private key can access the enclave. The TLS protocol requires the server to present a certificate and hold the matching private key in order to establish an encrypted connection, while mTLS, as the name implies, requires both the client and the server to present their certificates and hold their keys. This way we can be sure that the connection is authenticated and encrypted, so even if an attacker somehow manages to intercept the data being exchanged, they would be unable to decrypt it and extract any valuable information.
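As an illustration (our own sketch, not Super Protocol’s actual code), here is how the difference between plain TLS and mTLS looks when configured with Python’s standard `ssl` module. The certificate and key file names are hypothetical placeholders:

```python
import ssl

# Hypothetical file names; in a real deployment these would be issued
# certificates and the private keys corresponding to them.
SERVER_CERT, SERVER_KEY = "server.crt", "server.key"
CLIENT_CERT, CLIENT_KEY = "client.crt", "client.key"
CA_BUNDLE = "ca.crt"

def make_server_context() -> ssl.SSLContext:
    """Server side of mTLS: present our cert AND demand one from the client."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(SERVER_CERT, SERVER_KEY)   # prove our own identity
    ctx.load_verify_locations(CA_BUNDLE)           # CA we accept client certs from
    ctx.verify_mode = ssl.CERT_REQUIRED            # this line turns TLS into mTLS
    return ctx

def make_client_context() -> ssl.SSLContext:
    """Client side of mTLS: verify the server AND present our own cert."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verifies the server by default
    ctx.load_verify_locations(CA_BUNDLE)
    ctx.load_cert_chain(CLIENT_CERT, CLIENT_KEY)   # the extra step vs. plain TLS
    return ctx

# A freshly created server context does NOT require client certificates:
# plain TLS is the default, and mTLS is opt-in.
plain = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
print(plain.verify_mode == ssl.CERT_NONE)  # True: plain TLS ignores client certs
```

The key design point is the single `verify_mode = ssl.CERT_REQUIRED` line on the server: without it, any client can connect; with it, the server rejects every client that cannot present a certificate signed by the trusted CA.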

Why do we have to use such extensive measures of protection? First of all, why bother using confidential computing if anyone could simply hijack the data while it’s being transmitted over a non-secure channel? Beyond that, there’s a whole class of popular attacks that this setup prevents.

For example, not so long ago a popular liquidity pool / DEX was hacked. The service itself was perfectly fine; however, the attackers managed to compromise the hosting provider and replace the original website with a similar-looking one, so when unsuspecting users attempted to top up their balances, they were actually transferring their money to the attacker’s wallet. SP makes this impossible in two ways: the website code runs inside a protected enclave, so there is no way to tamper with it, and mTLS won’t allow a client to connect to a replica, since the replica has no proper certificates and keys.
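To make the point concrete, here is a small self-contained sketch (ours, not Super Protocol code) in which a verifying TLS client refuses to talk to an “imposter” endpoint. The fake server holds no certificate or private key, so it cannot complete the handshake; the hostname is just an illustrative placeholder:

```python
import socket
import ssl
import threading

# A stand-in "imposter" endpoint: a plain TCP server that cannot
# complete a TLS handshake because it holds no certificate or key.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def imposter():
    conn, _ = srv.accept()
    conn.sendall(b"definitely not a TLS handshake")
    conn.close()

threading.Thread(target=imposter, daemon=True).start()

# A normal verifying client: by default it checks the server's certificate.
ctx = ssl.create_default_context()
try:
    with socket.create_connection(("127.0.0.1", port)) as sock:
        # "example.com" is a placeholder hostname for the illustration.
        with ctx.wrap_socket(sock, server_hostname="example.com"):
            pass
    handshake_ok = True
except (ssl.SSLError, OSError):
    handshake_ok = False  # the imposter cannot prove its identity

print("handshake succeeded?", handshake_ok)  # False
```

The same mechanism works in reverse under mTLS: a genuine server configured with `verify_mode = ssl.CERT_REQUIRED` drops any client that cannot present a valid certificate during the handshake.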

So it’s not the “hosting static code” part that we’re actually testing in Testnet Phase 2; it’s the secure connection and public access to that code. Once we’re done here, dynamic apps come next: processing data, storing and searching through databases, running sophisticated ML algorithms in real time. All coming soon to Super Protocol, so stay tuned!
