Internet Draft                                       Editor: Peter Gutmann
Category: BCP                                       University of Auckland
Expires: June 2006                                       <<<Many others>>>
                                                             December 2005

                   Key Management through Key Continuity
                       <draft-gutmann-keycont-00.txt>

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

This Internet-Draft will expire in June 2006.

Abstract

This memo provides advice and Best Current Practice for implementors and deployers of security applications that wish to use the key continuity method of key management.

1. Introduction

<<<Note that the draft in its current form is still very much a strawman for further comment. The text contains a number of notes requesting further input from readers, delimited by angle brackets. Please send comments to the author at email@example.com or the SAAG list>>>.

There are many ways of managing the identification of remote entities.
One simple but highly effective method is the use of key continuity, a means of ensuring that the entity a user is dealing with today is the same as the one they were dealing with last week (this principle is sometimes referred to as continuity of identity). When this principle is applied to cryptographic protocols, the problem becomes one of determining whether a file server, mail server, online store, or bank that a user dealt with last week is still the same one this week. Using key continuity to verify this means that if the remote entity used a given key to communicate/authenticate itself last week, the use of the same key this week indicates that it's the same entity. This doesn't require any third-party attestation, because it can be done directly by comparing last week's key to this week's one. This is the basis for key management through key continuity: once you've got a known-good key, you can verify a remote entity's identity by verifying that they're still using the same key.

This document describes the principles that underlie key management through key continuity, and provides guidelines for its use in practice.

1.1. Structure of this Document

Section 2 provides background information and a general discussion of the principles of key continuity key management, as well as covering some problems present in existing approaches that need to be addressed. Section 3 contains advice for users of key continuity key management. Section 4 contains a suggested standard format for storing key management data.

1.2. Document Terminology and Conventions

The key words "MUST", "MUST NOT", "REQUIRED", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

2. Key Management through Key Continuity

In its most basic form, key management through key continuity consists of two steps:

Step 1: On the first connection, exchange key(s), possibly with additional out-of-band authentication.
Step 2: On subsequent connections, ensure that the key being used matches the one exchanged initially.

In more formal terms, the key continuity method of key management is a variation of the baby-duck security model [DUCKLING1][DUCKLING2], in which a newly-initialised device (either one fresh out of the box or one reset to its ground state) imprints upon the first device it sees, in the same way that a newly-hatched duckling imprints on the first moving object it sees as its mother.

SSH [SSH1] was the first widely-used security application that used key continuity as its primary form of key management. The first time a user connects to an SSH server, the client application displays a warning that it's been given a new public key that it's never encountered before, and asks the user whether they want to continue. When the user clicks "Yeah, sure, whatever" (although the button is more frequently labelled "OK"), the client application remembers the key that was used, and compares it to future keys used by the server. If the key is the same each time, there's a good chance that it's the same server (SSH terminology refers to this as the known-hosts mechanism). In addition to this, SSH allows a user to verify the key via its fingerprint, which can be conveniently exchanged via out-of-band means. The fingerprint is a universal key identifier consisting of the hash of the key components, or the hash of the certificate if the key is in the form of a certificate.

SSH is the original key-continuity solution, but unfortunately it doesn't provide complete continuity. When the server is rebuilt, the connection to the previous key is lost unless the sysadmin has remembered to archive the configuration and keying information after they set up the server (some OS distributions can migrate keys over during an OS upgrade, so this can vary somewhat depending on the OS and how total the replacement of system components is).
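The two-step process above can be sketched in a few lines of code. The following is an illustrative sketch only, not SSH's actual known-hosts implementation; the class and method names are invented, and a real client would keep the table in a file and consult the user rather than silently imprinting:

```python
import hashlib

class KeyContinuityStore:
    """Toy known-hosts style store: maps (service, host, port) to the key
    fingerprint recorded on first contact (all names here are illustrative)."""

    def __init__(self):
        self.known = {}          # (service, host, port) -> hex fingerprint

    @staticmethod
    def fingerprint(key_bytes):
        # Hash of the raw key components, as described in the text
        return hashlib.sha1(key_bytes).hexdigest()

    def check(self, service, host, port, key_bytes):
        """Return "new", "match", or "MISMATCH" for the presented key."""
        fp = self.fingerprint(key_bytes)
        entry = (service, host, port)
        if entry not in self.known:
            # Step 1: first connection -- imprint on the key (a real client
            # would warn the user / offer out-of-band verification here)
            self.known[entry] = fp
            return "new"
        # Step 2: subsequent connection -- compare against the stored key
        return "match" if self.known[entry] == fp else "MISMATCH"

store = KeyContinuityStore()
print(store.check("ssh", "ssh.example.com", 22, b"server-public-key"))  # first contact
print(store.check("ssh", "ssh.example.com", 22, b"server-public-key"))  # same key again
print(store.check("ssh", "ssh.example.com", 22, b"different-key"))      # key changed!
```

A "MISMATCH" result is the case that matters: it is what the user sees when the server has been rebuilt without preserving its key, or when an active attacker is interposing their own key.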
Since SSH is commonly used to secure access to kernel-of-the-week open-source Unix systems, the breaking of the chain of continuity can happen more frequently than would first appear. Some of the problem is due to the ease with which an SSH key changeover can occur. In the PKI world, this process is so painful that the same key is typically reused and recycled in perpetuity, ensuring key continuity at some cost in security, since a compromise of a key recycled over a period of several years compromises all data that the key protected in that time unless a mechanism that provides perfect forward secrecy is used (it rarely is). In contrast, an SSH key can be replaced quickly and easily, limiting its exposure to attack but breaking the chain of continuity.

A solution to this problem would be to have the server automatically generate and certify key n+1 when key n is taken into use, with key n+1 saved to offline media such as a floppy disk or USB memory token for future use when the system or SSH server is reinstalled/replaced. In this way, continuity to the previous, known server key is maintained. Periodically rolling over the key (even without it being motivated by the existing system/server being replaced) is good practice, since it limits the exposure of any one key. This would require a small change to the SSH protocol to allow an old-with-new key exchange message to be sent after the changeover has occurred.

Unlike SSH, SSL/TLS [TLS] and IPsec [IPSEC] were designed to rely on an external key management infrastructure, although at a pinch both can function without it by using shared keys, typically passwords. The lack of such an infrastructure has been addressed in two ways. In SSL, the client (particularly in its most widespread form, the web browser) contains a large collection of hardcoded CA certificates (over a hundred) that are trusted to issue SSL server certificates.
Many of these hardcoded CAs are completely unknown, follow dubious practices such as using weak 512-bit keys or keys with 40-year lifetimes, appear moribund, or have had their CA keys on-sold to various third parties when the original owners went out of business [NOTDEAD]. All of these CAs are assigned the same level of trust, which means that the whole system is only as secure as the least secure CA, since compromising or subverting any one of the CAs compromises the entire collection (in PKI terminology, what's being implemented is unbounded universal cross-certification among all of the CAs).

The second solution, used by both SSL/TLS and IPsec, is to use self-issued certificates, where the user acts as their own CA and issues themselves certificates that then have to be installed on client/peer machines. In both cases the security provided is little better than for SSH keys unless the client is careful to disable all CA certificates except for the one or two that they trust, a process that requires around 700 mouse clicks in the latest version of Internet Explorer. A further downside of this is that the client software will now throw up warning dialogs prophesying all manner of doom and destruction when an attempt is made to connect to a server with a certificate from a now-untrusted CA, although, given the figures from the server survey above, browsers must already be doing this on many sites anyway, or alternatively ignoring the issue of invalid certificates for fear of scaring users.

The same key-continuity solution used in SSH can be used here, and is already employed by some SSL clients such as MTAs, which have to deal with self-issued and similar informal certificates more frequently than other applications such as web servers. This is because of their use in STARTTLS, an extension to SMTP that provides opportunistic TLS-based encryption for mail transfers.
Similar facilities exist for other mail protocols such as POP and IMAP, with the mechanism being particularly popular with SMTP server administrators because it provides a means of authenticating legitimate users to prevent misuse by spammers. Since the mail servers are set up and configured by sysadmins rather than commercial organisations worried about adverse user reactions to browser warning dialogs, they typically use self-issued certificates, since there's no point in paying a CA for the same thing.

Key continuity management among STARTTLS implementations is still somewhat haphazard. Since STARTTLS is intended to be a completely transparent, fire-and-forget solution, the ideal setup would automatically generate a certificate on the server side when the software is installed, and use standard SSH-style key continuity management on the client, with optional out-of-band verification via the key/certificate fingerprint. Some implementations (typically open-source ones) support this fully, some support various aspects of it (for example requiring tedious manual operations for certificate generation or key/certificate verification), and some (typically commercial ones) require the use of certificates from commercial CAs, an even more tedious (and expensive) manual operation.

A similar model is used in SIP, in which the first connection exchanges a (typically) self-signed certificate, which is then checked on subsequent connects. Further measures such as the use of speaker voice recognition can be used to provide additional authentication for the SIP exchange. A similar principle has been used in several secure IP-phone protocols, which (for example) have one side read out a hash of the key over the secure link, relying for its security on the fact that real-time voice spoofing is relatively difficult to perform.

3.
Using Key Continuity Key Management

Section 2 outlined a number of considerations that need to be taken into account when using key continuity as a form of key management. These are covered in the following subsections.

3.1. Key Generation

The simplest key-continuity approach automatically (and transparently) generates the key when the application that uses it is installed or configured for the first time. If the underlying protocol uses certificates, the application should generate a standard self-signed certificate at this point; otherwise it can use whatever key format the underlying protocol uses, typically raw public key components encoded in a protocol-specific manner.

3.2. Optional out-of-band Authentication

If possible, the initial exchange should use additional out-of-band authentication to authenticate the key. A standard technique is to generate a hash or fingerprint of the key and verify the hash through out-of-band means. All standard security protocols have a notion of a key hash in some form, whether it be an X.509 certificate fingerprint, a PGP/OpenPGP [PGP] key fingerprint, or an SSH key fingerprint.

The out-of-band verification is done in a situation-specific manner. For example, when the key is used in a VoIP application, the communicating parties may read the hash value over the link, relying on speaker voice recognition and the difficulty of performing real-time continuous-speech spoofing for security. When the key is used to secure access to a network server, the hash may be communicated in person, over the phone, printed on a business card, or published in some other well-known location. When the key is used to secure access to a bank server, the hash may be communicated using a PIN mailer, or by having the user visit their bank branch. Although it's impossible to enumerate every situation here, applying a modicum of common sense should provide the correct approach for specific situations.
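Computing such a fingerprint requires nothing more than a hash function over the exchanged key material. A minimal sketch follows; the colon-separated rendering is one common convention, chosen here for illustration rather than being any particular protocol's mandated encoding:

```python
import hashlib

def key_fingerprint(key_bytes, algorithm="sha1"):
    """Hash the raw key components and format the result as colon-separated
    hex pairs for out-of-band (e.g. read-over-the-phone) verification."""
    digest = hashlib.new(algorithm, key_bytes).digest()
    return ":".join(f"{b:02X}" for b in digest)

# Each party computes this over the exchanged key and the values are
# compared over the phone, in person, from a business card, etc.
print(key_fingerprint(b"example raw public key components"))
```

Because both sides hash the same canonical key encoding, any tampering with the key in transit shows up as a fingerprint mismatch during the out-of-band comparison.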
Other distribution mechanisms are also possible. For example, when configuring a machine, the administrator can pre-install the key information when the operating system is installed, in the same way that many systems come pre-configured with trusted X.509 certificates.

3.3. Key Rollover

When a key needs to be replaced, the new key should ideally be authenticated using forward-chaining authentication from the current key. For example, if the key is in the form of an X.509 certificate or PGP key, the current key can sign the new key. If the key consists solely of raw key components exchanged during a protocol handshake, this type of forward-chaining authentication isn't possible without modifying the underlying protocol. Protocol designers may wish to take into account the requirements for forward-chaining authentication when designing new protocols or updating existing ones.

3.4. Key <-> Host/Service Mapping

A key will usually be associated with a service type (for example "SSH" or "TLS"), a host, and a port (typically the port is specified implicitly by the service type, but it may also be specified explicitly if a nonstandard port is used). When storing key continuity data, the service/host/port information should always be stored exactly as seen by the user, without any subsequent expansion, conversion, or other translation. For example, if the user knows a host as www.example.com then the key continuity data should be stored under this name, and not under the associated IP address(es). Applying the WYSIWYG principle to the name the user sees prevents problems with things like virtual hosting (where badguy.example.com has the same IP address as goodguy.example.com), hosts that have been moved to a new IP address, and so on.

3.5. User Interface

The user interface should take care to explain the details and consequences of a new key and key change to the user.
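One possible shape for such an explanation is sketched below; the function name, message wording, and `ask` callback are invented for illustration and are not taken from any particular client. The details the prompt should cover are described next.

```python
CHOICES = ("accept permanently", "accept for this session", "reject")

def prompt_new_key(service, host, fingerprint, ask, port=None):
    """Build a new-key warning and put it to the user via `ask`, a stand-in
    for whatever dialog mechanism the client's UI toolkit provides.  `ask`
    must return one of CHOICES."""
    where = host if port is None else f"{host}:{port}"
    message = (
        f"The {service} server {where} presented a key not seen before.\n"
        f"Key fingerprint: {fingerprint}\n"
        "Verify the fingerprint out-of-band before accepting;\n"
        "rejecting the key will abort the connection."
    )
    choice = ask(message, CHOICES)
    if choice not in CHOICES:
        raise ValueError("UI returned an unknown choice")
    return choice

# Non-interactive stand-in for a real dialog: always accept once.
print(prompt_new_key("ssh", "ssh.example.com", "17:A2:FE:37:80:8F:3E:84",
                     lambda msg, opts: "accept for this session"))
# -> accept for this session
```

Separating "accept permanently" from "accept for this session" matters: a session-only acceptance lets a user who can't verify the fingerprint right now proceed without silently imprinting on a possibly-bogus key.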
When encountering a new key, this would consist of displaying the service type (for example "SSH" or "TLS"), the host name and (if a nonstandard port is being used) port, the key hash/fingerprint, and an indication that this is a new/unknown key for the given service/host/port. The user should be informed of the need for out-of-band authentication, and given the option to accept the key permanently, accept it once for this session, or not accept it at all, with an indication that not accepting it will abort the connection. Implementors should bear in mind that in most cases this will reduce the choices in the user's mind to "Connect without warnings" or "Connect with warnings".

3.6. Key Hash/Fingerprint Truncation

The use of the full hash/fingerprint when the authentication process is being performed by humans can be quite cumbersome, requiring the transmission and verification of 40 hex digits for the most common hash algorithm, SHA-1. Implementors may consider truncating the hash value to make it easier to work with. For example, a 64-bit hash value provides a modest level of security while still allowing the value to be printed on media such as business cards and communicated and checked by humans. Although such a short hash value isn't secure against an intensive brute-force attack, it is sufficient to stop all but the most dedicated attackers, and is certainly far more secure than simply accepting the key at face value, as is currently widely done. Implementors should consider the relative merits of usability vs. security when deciding whether to truncate the key hash/fingerprint.

<<<There are better ways than using hex digits, e.g. Cryptographically Generated Addresses, http://research.microsoft.com/users/tuomaura/CGA/, or the XXXXX-XXXXX-XXXXX type encoding commonly used for software registration codes, which have the advantage that users are familiar with them. Is it worth adopting one of these, and if so which one?>>>

3.7.
Key Information Storage

Configuration information of this kind is typically stored using a two-level scheme: systemwide information, set up when the operating system is installed or configured and managed by the system administrator, and per-user information, which is private to each user. For example, on Unix systems the systemwide configuration data is traditionally stored in /etc or /var, with per-user configuration data stored in the user's home directory. Under Windows the systemwide configuration data is stored under an OS service account and the per-user configuration data is stored in the user's registry branch. The systemwide configuration provides an initial known-good set of key <-> identity mappings, with per-user data providing additional user-specific information that doesn't affect any other users on the system.

4. Key Continuity Data Storage

<<<This may be better off in its own RFC, although since it's pretty cross-jurisdictional there's no obvious domain to put it under. This also has problems with automated updates of entries, possibly requiring a synchronisation and remote-access process to update entries. Another approach is to have a directory full of files, one per entry (so you can update them via rsync), but this precludes having the information protected through standard cryptographic means>>>

Applications require a standardised means of associating hosts with keys. The following text-based format, inspired by the /etc/passwd format, is recommended for easy exchange of key continuity data across applications. The format of the key data file, using Augmented BNF [ABNF], is as follows.
keydata   = keydef / comment / blank
keydef    = algorithm ":" key-hash ":" service ":" host ":" port rfu CRLF
comment   = "#" *(WSP / VCHAR) CRLF
blank     = *(WSP) CRLF
algorithm = *(ALPHA / DIGIT)
key-hash  = *(HEXDIG)
service   = *(ALPHA)
host      = <<<wherever this is specified>>>
port      = *(DIGIT) / ""
rfu       = "" / ":" *(WSP / VCHAR)

The algorithm field contains the hash/fingerprint algorithm, usually "sha1". This allows multiple hash algorithms to be used for a fingerprint. For example, while the current standard algorithm is SHA-1, some legacy implementations may require MD5, and future implementations may use SHA-1 successors.

The key-hash field contains the hash/fingerprint of the key. This value may be truncated as described in section 3.6. When comparing truncated hashes for equality, the first min( hash1-length, hash2-length ) bytes of the two values are compared.

The service field specifies the service or protocol that the hash/fingerprint applies to. For example, if both a TLS and an SSH server were running on the same host, the service field would be used to distinguish between the key hashes for the two servers.

The host and (optional) port fields contain the host name and port that the key corresponds to. Typically the port is implicitly specified in the service field, but it may also be explicitly specified here.

For example, a typical key continuity data file might consist of:

# Sample key continuity data file
sha1:B65427F83CED23A70263F8247C8D94192F751983:tls:www.example.com:443
sha1:17A2FE37808F3E84:ssh:ssh.example.com:22
md5:B2071C526B19F27C:ssh:ssh.example.com:22

The first entry contains the fingerprint of an X.509 certificate used by the web server for www.example.com. The second and third entries contain the (truncated) fingerprint of the SSH key used by the server ssh.example.com, first in the standard SHA-1 format and then in the alternative MD5 format.

4.1.
Additional Security for the Key Continuity Data

The key continuity data is simply a plain text file with no (explicit) additional security measures applied, although in practice it would be expected that OS security measures be used to prevent modification by arbitrary users. In addition to the OS-based security restrictions, the data can be given additional protection through encapsulation in PGP or S/MIME security envelopes, or through the use of other cryptographic protection mechanisms such as cryptographic checksums or MACs. When encapsulated using PGP or S/MIME the key data is no longer a plain text file, and will need to be extracted in order to be used. Alternatively, a PGP or S/MIME detached signature can be stored alongside the key data, so that the data can be used directly while still allowing it to be verified.

4.2. Discussion

The intent of this format is to follow the widely-used and recognised /etc/passwd file format, with which many users will be familiar. The format has been kept deliberately simple in order to avoid designing a general-purpose security assertion language such as KeyNote [KEYNOTE] or SAML [SAML]. While this will no doubt not suit all users, it should suffice for most, while remaining simple enough to encourage widespread adoption.

There are two options available for storing the key-continuity data: the single-file format described above, and one entry per file. The latter makes it possible to use mechanisms like rsync to update individual entries/files across systems, but leads to an explosion of hard-to-manage tiny files, each containing a little piece of configuration data. It also makes it impossible to secure the configuration data via mechanisms such as PGP or S/MIME. Finally, the number of users who would use rsync to manage these files, when compared to the total user base, is essentially nonexistent. For this reason the single-file approach is preferred.

5.
Security Considerations

Publishing a BCP on this topic may make the authors a lightning rod for "this is just pretend security, you really need a <insert sender's favourite authentication system>" complaints.

Author Address

Peter Gutmann
University of Auckland
Private Bag 92019
Auckland, New Zealand
firstname.lastname@example.org

<<<Many others>>>

References (Normative)

[ABNF] "Augmented BNF for Syntax Specifications: ABNF", RFC 4234, David Crocker and Paul Overell, October 2005.

[RFC2119] "Key words for use in RFCs to Indicate Requirement Levels", RFC 2119, Scott Bradner, March 1997.

References (Informative)

[DUCKLING1] "The Resurrecting Duckling: Security Issues in Ad-Hoc Wireless Networking", Frank Stajano and Ross Anderson, Proceedings of the 7th International Workshop on Security Protocols, Springer-Verlag Lecture Notes in Computer Science No.1796, April 1999, p.172.

[DUCKLING2] "The Resurrecting Duckling - What Next?", Frank Stajano, Proceedings of the 8th International Workshop on Security Protocols, Springer-Verlag Lecture Notes in Computer Science No.2133, April 2000, p.204.

[IPSEC] "Security Architecture for the Internet Protocol", RFC 2401, Stephen Kent and Randall Atkinson, November 1998.

[KEYNOTE] "The KeyNote Trust-Management System Version 2", RFC 2704, Matt Blaze, Joan Feigenbaum, John Ioannidis, and Angelos Keromytis, September 1999.

[NOTDEAD] "PKI: It's Not Dead, Just Resting", Peter Gutmann, IEEE Computer, August 2002, p.41.

[PGP] "OpenPGP Message Format", RFC 2440, Jon Callas, Lutz Donnerhacke, Hal Finney, and Rodney Thayer, November 1998.

[SAML] "Security Assertion Markup Language (SAML), Version 1.0", OASIS XML-Based Security Services Technical Committee, April 2002.

[SSH1] "The SSH (Secure Shell) Remote Login Protocol", draft-ylonen-ssh-protocol-00.txt, Tatu Ylonen, November 1995 (this draft and the program it was based on introduced the key continuity/known-hosts mechanism, although it was never published as an RFC).

[TLS] "The TLS Protocol, Version 1.0", RFC 2246, Tim Dierks and Christopher Allen, January 1999.
Full Copyright Statement Copyright (C) The Internet Society (2005). This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights. This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.