Last Call Review of draft-ietf-teep-architecture-16
review-ietf-teep-architecture-16-secdir-lc-schwartz-2022-04-04-00

Request: Review of draft-ietf-teep-architecture
Requested revision: No specific revision (document currently at 19)
Type: Last Call Review
Team: Security Area Directorate (secdir)
Deadline: 2022-04-07
Requested: 2022-03-17
Authors: Mingliang Pei, Hannes Tschofenig, Dave Thaler, Dave Wheeler
I-D last updated: 2022-04-04
Completed reviews: Secdir Last Call review of -16 by Benjamin M. Schwartz (diff)
                   Artart Last Call review of -16 by Russ Housley (diff)
                   Genart Last Call review of -16 by Paul Kyzivat (diff)
                   Intdir Telechat review of -18 by Bob Halley (diff)
                   Iotdir Telechat review of -18 by Ines Robles (diff)
Reviewer: Benjamin M. Schwartz
State: Completed
Request: Last Call review on draft-ietf-teep-architecture by Security Area Directorate (Assigned)
Posted at: https://mailarchive.ietf.org/arch/msg/secdir/Bx4-8xZbIxjWAwRD_xES37kV0oY
Reviewed revision: 16 (document currently at 19)
Result: Has nits
Completed: 2022-04-04
This draft is (obviously) highly relevant to the Security Area.  It is clear
and well-written, but the complexity of the subject matter leads to some
difficulties and oversights.

Section 1:
The use of the term "applications" carries an implication of a client-side
device with installable software, but TEEP seems to extend also to server
software sharing a kernel, hypervisors sharing a mainboard, etc.  A term like
"software" would be more neutral.

"An application component ... is referred to as a Trusted Application": This is
confusing.  A component, explicitly not an entire "application", is referred to
as an "application".  "Trusted Component" would be more consistent.  Also,
"trusted" seems to be the wrong adjective here, as it is the environment, and
not the software, that carries an elevated level of trust.  "Isolated" might be
a better descriptor.

If this is common terminology for the field, a citation would be good.

I would appreciate some discussion of whether the Device Owner needs to trust
the Trusted Application, i.e. interaction between enclaves and sandboxes.

"verify the ... rights of TA developers": "rights" is a loaded term.  Rather
than get into constitutional law, consider "permissions".

"so that the Untrusted Application can complete" -> "so that installation of
the Untrusted Application can complete".

"is considered confidential" -- By whom?  From whom?  Consider "A developer who
wants to provide a TA without revealing its code to the device owner..."

"A TEE ... wants to determine" ... excessive personification.  I suggest
"needs".

Section 2:
"it is more common for the enterprise to own the device, and any device user
has no or limited administration rights": Grammar issue.  Perhaps "and for any
device user to have ...".

Section 3.1
"trusted user interface" ... can you cite an example of a mobile device with a
trusted peripheral that is not accessible to the REE OS?  This seems
theoretical.

Section 3.3
Similarly, are there any examples of IoT devices that prevent the REE OS from
operating certain actuators?

Section 4.1
      the TAM cannot directly contact a TEEP
      Agent, but must wait for the TEEP Broker to contact the TAM
      requesting a particular service.  This architecture is intentional
      in order to accommodate network and application firewalls

This is true in many use cases, but for Confidential Cloud the reverse logic
applies.  In fact, the TAM could be operating on-site inside an enterprise,
requiring a firewall exception to be reachable from the TEEP Broker.  This
architecture is also unnatural: it converts an event-driven "update command"
into a polling loop that adds delay and wastes resources.  Why is this part of
the TEEP architecture?  Surely it could be handled by a reversal-of-control
pattern one layer below TEEP (e.g. Server-sent events)?
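
As a concrete illustration of the alternative, here is a purely hypothetical
sketch (Python; the "/events" and "/teep" endpoints and the event channel are
invented for this example and are not defined by TEEP) of a Broker that waits
on a notification stream from the TAM and only then initiates the usual
client-side exchange:

    # Hypothetical sketch only: the notification channel is not part of
    # TEEP; it shows how a push signal could replace a polling timer
    # while the Broker remains the TEEP/HTTP client.
    import requests

    TAM = "https://tam.example"

    def run_broker():
        # Long-lived event stream: the TAM emits a "data:" line when it
        # has work (e.g. an update) for this device.
        with requests.get(f"{TAM}/events", stream=True) as stream:
            for line in stream.iter_lines(decode_unicode=True):
                if line and line.startswith("data:"):
                    # Only now does the Broker contact the TAM, reusing
                    # the existing client-initiated TEEP exchange; the
                    # event is merely a wake-up signal.
                    requests.post(f"{TAM}/teep", data=b"",
                                  headers={"Accept": "application/teep+cbor"})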

I think the real motivations here are that (1) installation is presumed to be
triggered locally by the Untrusted Application, so the TAM must be reachable
as a "server" and the other side naturally keeps the client role; and (2) the
TAM is intended to hold O(1) state while serving N devices.

     For a TAM to be successful,
      it must have its public key or certificate installed in a device's
      Trust Anchor Store.

This needs discussion of threat model.  What damage can a hostile TAM do?  What
does the device administrator need to know for adding a trust anchor to be safe?

Section 4.4
   Implementations must support encryption of
   such Personalization Data to preserve the confidentiality of
   potentially sensitive data contained within it,

Implementations of what?

   and must support
   integrity protection of the Personalization Data.

Lower-case "must" without explanation.  Why, and is this a normative
requirement?

Section 4.4.2
"e.g., OP-TEE" -> What is this?

Section 5.4

   When a PKI is used, many intermediate CA certificates can chain to a
   root certificate, each of which can issue many certificates.

Intermediate CAs have a troubled history (e.g. [1]), and techniques that make
them safer (e.g. X.509 name constraints) can't be deployed as a retrofit.  Does
TEEP need some rules about supported X.509 extensions?
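
To illustrate the kind of rule I have in mind, here is a rough sketch (using
the Python "cryptography" package; nothing below is required by TEEP today) of
a device-side policy that rejects intermediate CA certificates lacking a
critical name constraints extension:

    # Hypothetical policy check, not part of the TEEP architecture.
    from cryptography import x509

    def intermediate_is_constrained(pem: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(pem)
        try:
            ext = cert.extensions.get_extension_for_class(x509.NameConstraints)
        except x509.ExtensionNotFound:
            # Unconstrained intermediate: a TEEP profile could forbid this.
            return False
        # Require the extension to be critical and to actually permit
        # at least one subtree.
        return ext.critical and ext.value.permitted_subtrees is not None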

Section 6.2.1

If an Untrusted Application is summarily deleted, how do you avoid leaking the
TA?

Section 7

TEEP is format-agnostic for attestations, but what about
message-sequence-agnostic?  Can it tunnel arbitrary challenge-response
sequences?

Section 9.3

   We have
   already seen examples of attacks on the public Internet with billions
   of compromised devices being used to mount DDoS attacks.

Citation please.  Also, are you sure it has reached into the billions?

Section 9:

Nothing here seems to discuss attacks on the TEE's properties, and the
post-compromise implications of those attacks.  For example, if all instances
of a TA share a secret key, used for decrypting the Personalization Data, then
a single successful attack on a TEE is sufficient to decrypt all
Personalization Data (previous and future).  Given the prevalence of such
attacks (especially via hardware fault injection), it seems likely to be worth
mentioning.
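
A hedged sketch of the obvious mitigation, encrypting Personalization Data to
a per-device key so that one compromised TEE instance exposes only its own
data (Python "cryptography" package; none of this is mandated by the
architecture):

    # Hypothetical sketch: per-device encryption of Personalization Data.
    # A class-wide shared TA key would let one extracted key decrypt every
    # device's data; a per-device (ideally TEE-resident, non-exportable)
    # key limits the blast radius of a single compromise.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey)
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def encrypt_for_device(device_pub: X25519PublicKey, plaintext: bytes):
        # Ephemeral-static ECDH: only the holder of the matching private
        # key can re-derive the content-encryption key.
        eph = X25519PrivateKey.generate()
        key = HKDF(hashes.SHA256(), 32, salt=None,
                   info=b"personalization-data").derive(eph.exchange(device_pub))
        nonce = os.urandom(12)
        return eph.public_key(), nonce, AESGCM(key).encrypt(nonce, plaintext, None)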

[1] https://arstechnica.com/information-technology/2015/03/google-warns-of-unauthorized-tls-certificates-trusted-by-almost-all-oses/