Data is becoming increasingly important to our modern society – there’s no question about that. At the same time, expectations for security and data protection are rising among companies, regulators, and consumers. Prime examples of this are the GDPR, the CCPA, and the recent large-scale data breaches.
Because current technologies do not properly address security and privacy issues, many companies fail to create value from their data. For example, companies today rarely share valuable data because once shared, it is considered lost forever.
Many companies are unable or even unwilling to use the cloud for certain types of data processing, leaving them stuck with inefficient IT. But the days when businesses had to forgo the benefits of the cloud may soon give way to the era of confidential computing.
What is Confidential Computing?
Confidential computing is an emerging technology that protects data (and code) while it is in use, by processing it inside hardware-based secure enclaves called trusted execution environments (TEEs).
The most prominent enclave implementation to date is Intel SGX. In a nutshell, enclaves enable isolated and verifiable processing of data on untrusted computer systems – whether the user’s own computer or a machine in the cloud. With Intel SGX, the contents of an enclave remain encrypted in memory even at runtime.
The diagram (see image) contrasts the attack surface of conventional data processing systems with that of TEE-based systems. Components framed in red must be trusted. The use of TEEs thus results in a greatly reduced attack surface and, with it, a large gain in security.
But confidential computing not only takes general security to a new level, it also enables new types of data-driven applications. The verification aspect of confidential computing is key here: remote parties can verify exactly how data is processed, who provides the input, and who gets access to the results.
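This verification is known as remote attestation: the hardware measures (hashes) the enclave’s code and signs that measurement with a key only the CPU holds, so a remote party can check both the signature and the expected measurement before entrusting any data to the enclave. The toy sketch below models that flow in Python. It is purely illustrative: the function names are made up, and a shared HMAC key stands in for the CPU’s attestation key (real SGX attestation uses asymmetric signatures verified through an attestation service).

```python
import hashlib
import hmac

# Hypothetical stand-in for the CPU's attestation key. In real hardware,
# this key is fused into the chip and never leaves it; HMAC is used here
# only to keep the sketch self-contained.
_CPU_ATTESTATION_KEY = b"simulated-hardware-root-of-trust"

def measure(enclave_code: bytes) -> bytes:
    """Hash the enclave's code, analogous to SGX's MRENCLAVE measurement."""
    return hashlib.sha256(enclave_code).digest()

def generate_quote(enclave_code: bytes) -> tuple[bytes, bytes]:
    """Produce a (measurement, signature) pair, like a signed attestation quote."""
    m = measure(enclave_code)
    sig = hmac.new(_CPU_ATTESTATION_KEY, m, hashlib.sha256).digest()
    return m, sig

def verify_quote(measurement: bytes, signature: bytes, expected_code: bytes) -> bool:
    """The remote party checks the signature AND that the measurement
    matches the code it expects the enclave to be running."""
    expected_sig = hmac.new(_CPU_ATTESTATION_KEY, measurement, hashlib.sha256).digest()
    return hmac.compare_digest(expected_sig, signature) and measurement == measure(expected_code)

code = b"def aggregate(data): return sum(data) / len(data)"
m, sig = generate_quote(code)
print(verify_quote(m, sig, code))                      # genuine enclave
print(verify_quote(m, sig, b"malicious replacement"))  # tampered code
```

The key point the sketch captures: the remote party never has to trust the machine’s operator, only the hardware-signed measurement of the exact code being run.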
Numerous use cases for confidential computing
For example, this enables secure, rule-based sharing of data between potentially distrustful parties (think smart contracts, but with high performance and confidentiality). Similarly, companies can process their customers’ sensitive data while proving that no one, including their own analysts and administrators, can ever see the raw data. The resulting application areas are diverse and span many industries.
One example is medical research, where multiple hospitals can pool their data to develop a machine learning model. The individual patient data involved remains confidential and secure at every step of the process.
Similarly, sensor data from connected vehicles can be processed securely. Even the vehicle manufacturer and the application operator only get access to the aggregated and filtered output data. It can be mathematically ensured that no relevant conclusions can be drawn from the output data about the data of individual persons and vehicles.
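The text does not name the technique behind such guarantees, but differential privacy is one established way to achieve them: calibrated random noise is added to the aggregate so that no individual vehicle’s contribution can be inferred from the output. A minimal sketch of the Laplace mechanism for a bounded mean follows; the function names, bounds, and speed values are illustrative assumptions, not taken from the article.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, epsilon, lower, upper, rng):
    """Differentially private mean: clamp each value into [lower, upper],
    then add Laplace noise calibrated to the mean's sensitivity."""
    clamped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clamped) / len(clamped)
    # One record can shift the clamped mean by at most this amount:
    sensitivity = (upper - lower) / len(clamped)
    return true_mean + laplace_noise(sensitivity / epsilon, rng)

# Illustrative per-vehicle average speeds (km/h), bounded to [0, 250]
speeds = [92.0, 105.5, 88.0, 120.0, 97.5]
noisy = dp_mean(speeds, epsilon=1.0, lower=0.0, upper=250.0, rng=random.Random(42))
print(noisy)
```

A smaller epsilon means more noise and stronger privacy; the design choice in practice is the trade-off between the accuracy the application needs and the privacy budget granted to each query.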
Such approaches can increase customer acceptance of centralized data processing and could become an important unique selling point for the German automotive industry in its competition with the American e-car manufacturer Tesla, which is sometimes perceived as a “data octopus”.
Further valuable application scenarios can be found in another traditional pillar of German industry: mechanical engineering. “Industry 4.0” has long been the sector’s central vision for the future. The massive use of software and sensors, together with better use of data, is expected to further increase productivity and enable new business models.
However, industrial data is often sensitive in nature, as it frequently contains trade secrets and specialized know-how. Companies are therefore often unwilling to share this data or process it in the cloud. Confidential computing can address these concerns in a sustainable way. The principle of “sharing data without sharing it” will create considerable value in predictive maintenance, digital twins, and other data-driven industrial applications.