As organizations increasingly adopt multi-cloud, hybrid cloud, and edge computing, their applications collect, store, and analyze sensitive user data across a wide range of environments. To ensure compliance and protect user privacy, that data must be secured throughout its lifecycle.

We encrypt filesystems and storage drives, and use transport protocols such as TLS and SSH to keep data at rest and data in transit safe even if it is stolen or intercepted, rendering it useless without the cryptographic keys. However, data in use is typically unencrypted in memory, leaving it vulnerable to attacks and exploits.

To keep applications and data protected at runtime, developers are increasingly turning to Trusted Execution Environments, often referred to as “secure enclaves”.

What Are Trusted Execution Environments?

Trusted Execution Environments (TEEs) are isolated, CPU-encrypted private regions of memory (enclaves) that protect data in use at the hardware level.

While the sensitive data is inside an enclave, unauthorized entities cannot remove it, modify it, or add more data to it. The contents of an enclave remain invisible and inaccessible to external parties, protected against outsider and insider threats.

As a result, a TEE ensures the following:

  • Data integrity
  • Code integrity
  • Data confidentiality

Depending on the vendor and the underlying technology, TEEs can enable additional features, such as:

  • Code confidentiality – Protection of algorithms considered to be intellectual property.
  • Authenticated launch – Authorization or authentication enforcement for launching verified processes only.
  • Programmability – Access to TEE-programming code from the manufacturer or other secure source.
  • Attestation – Cryptographic measurement of an enclave’s origin and current state, used to prove it has not been tampered with.
  • Recoverability – Roll back to a previous secure state in case a TEE gets compromised.

These functions give developers fine-grained control over application security, protecting sensitive data and code even if the operating system, the BIOS, or the application itself is compromised.

How Does a Secure Enclave Work?

To understand how TEEs work, let's look into Intel® Software Guard Extensions (SGX).

With Intel® SGX, the application is split into trusted and untrusted parts. The trusted part of the code runs inside a protected enclave, and the CPU denies all other access to the enclave memory, regardless of the privilege level of the entity requesting it. Sensitive data stays inside the TEE while it is processed, and any information passed back to the untrusted part of the application is encrypted before it leaves the enclave.
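The partitioning pattern above can be sketched in plain Python. This is a toy model, not real SGX (real enclaves are built with the Intel SGX SDK, which generates ECALL/OCALL stubs in C/C++); the class and method names here are illustrative. The point is the shape of the interface: the secret lives only in the "trusted" part, and the untrusted host can invoke a narrow entry point but never read the key itself.

```python
import hashlib
import hmac


class ToyEnclave:
    """Toy stand-in for an SGX enclave: the secret key lives only
    inside the 'trusted' part and is never returned to the caller."""

    def __init__(self, secret_key: bytes):
        self._key = secret_key  # trusted state, never leaves the enclave

    def ecall_sign(self, message: bytes) -> bytes:
        """Narrow entry point (analogous to an SGX ECALL): the untrusted
        host passes data in and receives only the computed result."""
        return hmac.new(self._key, message, hashlib.sha256).digest()


# Untrusted host code: it can request operations on the secret,
# but the plaintext key is never exposed to it.
enclave = ToyEnclave(secret_key=b"provisioned-inside-enclave")
tag = enclave.ecall_sign(b"payment record #1")
print(tag.hex())
```

In a real deployment the key would be provisioned into the enclave over an attested channel rather than passed in from host code; the sketch only illustrates the narrow trusted/untrusted boundary.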

Enclaves are created and provisioned through hardware commands that enable memory page creation and addition, as well as enclave initialization, removal, and measurement.

Secure enclave creation process diagram.

The platform’s firmware uses its configuration settings to set up the area for the TEE. Once the extensions are enabled, the CPU reserves a section of DRAM as Processor Reserved Memory (PRM). The PRM size can be specified through the firmware tools.

Then, the CPU allocates and configures the PRM by setting a pair of model-specific registers (MSRs). Next, the Enclave Page Cache (EPC) is created inside the PRM, with associated metadata recording each page's base address, the enclave size, and its security attributes.

Diagram: Trusted Execution Environment enclave components.

Finally, the CPU creates a cryptographic hash of the enclave’s initial state and logs subsequent state changes. This hash is later used for attestation through cryptographic keys and the hardware root of trust.
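The measurement step can be illustrated with a short sketch. This is a simplified analogue of SGX's MRENCLAVE value (the real measurement is computed in hardware over specific page-addition operations); the function name and page layout here are invented for illustration. The key property shown is that the final digest depends on both the contents and the order of everything added to the enclave, so any tampering changes the measurement.

```python
import hashlib


def measure_enclave(pages: list[bytes]) -> str:
    """Toy analogue of an enclave measurement: chain the hash of each
    page into a running digest, so the final value reflects both the
    contents and the order of the enclave's construction."""
    state = hashlib.sha256(b"enclave-init")  # fixed initial state
    for page in pages:
        state.update(hashlib.sha256(page).digest())
    return state.hexdigest()


# Identical build steps produce identical measurements;
# any modification to code or data is detectable.
m1 = measure_enclave([b"code page", b"data page"])
m2 = measure_enclave([b"code page", b"data page"])
m3 = measure_enclave([b"tampered!", b"data page"])
print(m1 == m2, m1 == m3)
```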

Once initialized, the enclave can host user applications.


How Secure Are Trusted Execution Environments?

To best explain how secure a TEE is, we first need to address the rings of CPU privilege.

Diagram: Trusted Execution Environment rings of privilege.

Encryption keys were traditionally stored within applications, at the ring 3 level. In this model, once the application is compromised, so are the secrets it protects.

With modern architectures, the rings of privilege extend beyond the kernel (ring 0) and the hypervisor (ring -1) to System Management Mode (SMM) and the Management Engine (ME). This allows the CPU to protect the memory a TEE uses, reducing the attack surface to the lowest layers of hardware and denying access to all but the highest levels of privilege.

Another key to the functionality and security of a TEE is attestation. Through attestation, the entire platform and the enclave are measured and validated before any data is shared.

For example, an enclave can request a local report from itself or another enclave on the same platform and use the report for data checking and verification. Similarly, a remote verifier can ask for the attestation report before requesting any sensitive data from the enclave. Once trust is established, they can share session keys and data through a secure channel invisible to external parties.
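The attestation exchange can be sketched as follows. This is a deliberately simplified model with hypothetical names: real SGX attestation involves the EREPORT instruction, a quoting enclave, and asymmetrically signed quotes verified against the vendor's infrastructure, whereas this toy uses a shared HMAC key standing in for the hardware root of trust. The nonce prevents a recorded report from being replayed.

```python
import hashlib
import hmac
import secrets

# Stand-in for a key derived from the hardware root of trust (assumption;
# real attestation uses signed quotes, not a pre-shared symmetric key).
ATTESTATION_KEY = b"derived-from-hardware-root-of-trust"


def enclave_report(measurement: bytes, nonce: bytes) -> bytes:
    """Enclave side: bind the enclave's measurement to the verifier's
    fresh nonce, proving both identity and liveness."""
    return hmac.new(ATTESTATION_KEY, measurement + nonce, hashlib.sha256).digest()


def verify_report(report: bytes, expected_measurement: bytes, nonce: bytes) -> bool:
    """Verifier side: accept only if the report matches the known-good
    measurement for this exact nonce."""
    expected = hmac.new(ATTESTATION_KEY, expected_measurement + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(report, expected)


nonce = secrets.token_bytes(16)                     # fresh per request
report = enclave_report(b"known-good-build", nonce)
ok = verify_report(report, b"known-good-build", nonce)   # trusted
bad = verify_report(report, b"tampered-build", nonce)    # rejected
print(ok, bad)
```

Only after a check like `verify_report` succeeds would the verifier establish session keys and send sensitive data into the enclave.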

Diagram: Trusted Execution Environment attestation.

Since unencrypted secrets never leave the TEE, secure enclaves protect data from:

  • Other applications on the host
  • The host OS or hypervisor
  • System administrators
  • Service providers

Even the infrastructure owner and other entities with physical access to the hardware cannot reach the data.

Secure Enclaves in Confidential Computing

To enable safe and standardized processing of private workloads across different cloud environments, the Linux Foundation formed the Confidential Computing Consortium (CCC) in 2019. Since its founding, CCC members have been working to accelerate the adoption of confidential computing and enable open collaboration.

Thanks to the high levels of data protection they offer, hardware-based secure enclaves are at the core of this initiative.

Through confidential computing supported by TEEs, organizations can keep everything from a single cryptographic key to entire workloads protected while their applications are being used.

Why Should You Adopt Confidential Computing?

Today, secrets extend well beyond passwords, encompassing highly confidential and irreplaceable information such as medical records or biometric data.

Confidential computing offers businesses a competitive advantage by protecting this data and preventing financial loss or reputation damage. However, there are other use cases for this evolving technology.

Multi-Party Computation

Confidential computing lets organizations process data from multiple sources without exposing the underlying code, intellectual property, or private client information to the parties they partner with. Whether or not they compete, government agencies, healthcare providers, and research institutes can leverage this capability to collaborate and share insights, for example through federated learning.
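A minimal sketch of the multi-party pattern, with invented names: each party submits its private records to a function standing in for code running inside an attested enclave, and only the pooled result ever crosses the trust boundary. A real deployment would add attestation of the aggregation code and encrypted channels for the inputs.

```python
def aggregate_inside_enclave(private_inputs: dict[str, list[float]]) -> float:
    """Runs 'inside' the enclave in this toy model: the raw per-party
    records stay here, and only the combined statistic is released."""
    values = [v for series in private_inputs.values() for v in series]
    return sum(values) / len(values)


# Each party keeps its raw records private; partners see only the
# pooled average, never each other's individual data points.
result = aggregate_inside_enclave({
    "party_a": [120.0, 135.0],
    "party_b": [128.0],
})
print(result)
```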

Diagram: Trusted Execution Environment confidential computing.

Finance and Insurance

Leveraging confidential computing, financial institutions can prevent fraudulent activities such as money laundering. A bank can share details of a suspicious account with another bank, inside or outside its network, to have it audited while minimizing the chances of false positives.

Insurance companies can use a similar approach to prevent fraud. They can share suspicious claims with one another for pattern recognition. With the sensitive data stored in an enclave and records shared between different sources, results can be obtained without any confidential information being revealed in the process.

Enterprise tools for enhancing security are constantly being developed as confidential computing evolves. This stimulates the adoption, growth, and security of cloud computing, unlocking its full potential.

By offering unprecedented protection of sensitive data and code during execution, Trusted Execution Environments allow organizations to strengthen their security posture and leverage future-ready technology today.