Minutes IETF101: saag
Security Area Open Meeting
IETF 101 SAAG session, Thursday 22 March, 2018, 1330h-1530h
* WG/BoF Reports and administrivia (10 mins)
[gifts for outgoing AD]
[note well and agenda bashing]
LAMPS meets tomorrow; Russ not yet in the room
Alexey Melnikov: CFRG met on Monday; 4 documents ready for IRSG review
Interesting presentations on formal methods, randomness improvements, and
hashing to elliptic curves
Sean Turner: MLS BoF right after this; please come hear about a security layer
for messaging; ratchet trees, Merkle trees, come.
* Invited/offered talks
- Verifying Real-World Protocols (30 min)
[cooperation between researchers and TLS 1.3 development]
[characterization of historical (named) TLS attacks]
[how proofs interact with algorithm instantiations and hardness assumptions]
[anatomy of a group downgrade attack]
[rigorous ("proof") analysis can look at the protocol core, the full protocol
including MitM potential, and verifying implementations]
[How protocol designers can help make verification easier]
[deep dive into symbolic protocol analysis of TLS 1.3]
[OAuth 2.0 example]
[key takeaway: writing a formal protocol spec helps a lot]
Hannes: We have had OAuth security workshops, thanks for pointing to some of the key
work that got them started. One of the things we found is that researchers are
motivated to start fairly late, ideally when already widely deployed, since that
has a big impact when flaws are found. But that's not very helpful for us as
protocol designers. Luckily TLS 1.3 is better, and the continuing engagement with
OAuth is working well, too. In general it seems like the only approach that works
is to get IETF participants to do the formal verification themselves. This
raises a challenge in that most of the formal analysis takes more time than the
protocol work itself. It's unclear that people are really prepared to do that.
Also, the tools that we have are slightly different, and if different groups work
with different tools, then reuse is difficult, especially when composing protocols.
We should look at what is needed for the IETF community, which tools are available,
and which are still supported, to avoid frustration. Most IETF protocols are
not at the level where you can immediately produce a formal model, since they
lack a formal description of the security goals. What we produce for getting
interoperability is often different from what is needed for protocol analysis.
Karthik: Thank you. I don't think there was a question there, but I can
make a comment: the way these tools are designed, they make certain kinds
of proofs easy, but they are not necessarily easy to use. With hacspec, we're
trying to write one language that you can use that will compile down to all
the various tools available. Maybe an Informational RFC on that language.
Ted Lemon: Is there a resource for how to learn about the 5 key points that
you listed for designing protocols to be verifiable, or do I have to go get a...
Karthik: There's different levels for what needs to be done. Suppose you want
to do crypto proofs for a protocol that you're still developing and aren't done
with yet. If you pick a tool, say EasyCrypt, that will have a library and that
will include the 20 or whatever primitives that we understand very well -- things
like signatures, key exchange, etc. If you pick from the standard constructions
there, the tool will do very well for you. Also, less risk of messing up the
model if you use the standard building blocks! Things like separating the
key exchange from the record layer, good general principles. Maybe we should
write an RFC that documents these best practices.
Ted: That's kind of what I was getting at, yes. What you just said helped me,
but I still feel like I've got a bunch of learning curve to deal with.
It's worth climbing, since we keep getting bitten by these things, but a document
would help. It could at least help someone like me figure out the right questions
to ask someone who is an expert.
PHB: I'm keen on formal methods as a pedagogical exercise; the key thing it teaches
is the value of simplifying your design. It's easier to simplify the design
than construct a proof for a complicated design. I'm cautious in that one thing
I see is that people go for things we can prove are correct over things that
are robust in the face of a poor implementation, and those are not always the same.
We see a lot of TLS use as "the hammer we know", and it's not always the thing
that we should reach for. Formal methods may help us get away from always using
the same tool, even if it's not the best one. Maybe a new mailing list for
people who are embarking on this journey and learning the formal tools.
We want proofs that we can share.
Hannes: Is it worthwhile to discuss different tools and which ones are most suitable
for the IETF context? There are tradeoffs.
Karthik: My experience (based on just a few protocols) with the multiple leading
tools in this space is that each one does some things better than others.
Even in the IETF context, there's not a clear winner, but maybe we could find
a way to settle on a standard toolkit that will work in most cases.
Hannes: To be specific, if I compare Scyther with F*, those are completely
different levels of complexity and level of understanding needed.
Do we just want formal analysis or best-case generation/code generation?
The learning curve that Ted talks about is significantly impacted by choice of tool.
Karthik: As mentioned earlier, it should be possible to get the easy stuff done early,
without much extra learning, and we should try to help make that happen.
Also, in the research community, yes, sometimes we need impact to get published.
Maybe we could have a way for the IETF WG chairs to send a letter saying "this has
been very useful for our protocol development work".
Hannes: Sure, maybe have the simple stuff done in-WG and the complicated stuff
done by researchers.
Tobias: You have a 97-page model for a browser. How long for an operating system
or a platform?
Karthik: To be clear, that's just a limited model for browser security mechanisms.
Just cookies and CORS, redirects and fetch, etc. There is a project seL4 out
of Australia, and it's taken them years and years to build a spec for even a
small operating system. You have to pragmatically choose the small system you
care about; I don't really advocate for doing complete system verification at this
time. The more you can get a clean separation from the rest of the system, the
more it means something.
- The Sphinx Packet Format for Mix-nets and Bitcoin
[overview of Sphinx goals and packet format]
[multiple incompatible implementations/extensions exist already, so want a standard]
- Capsule: a Protocol for Secure Collaborative Document Editing
[require knowledge of document for access; document is hash chain of diffs]
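The "hash chain of diffs" idea from the slides can be sketched minimally. This is an illustrative Python sketch, not the Capsule wire format; the entry layout, JSON encoding, and use of SHA-256 are assumptions for demonstration:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry (assumption)

def entry_hash(diff: str, prev: str) -> str:
    payload = json.dumps({"diff": diff, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_diff(chain: list, diff: str) -> None:
    # Each entry commits to the previous entry's hash, so the whole
    # edit history is tamper-evident.
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"diff": diff, "prev": prev, "hash": entry_hash(diff, prev)})

def verify(chain: list) -> bool:
    prev = GENESIS
    for e in chain:
        if e["prev"] != prev or e["hash"] != entry_hash(e["diff"], prev):
            return False
        prev = e["hash"]
    return True
```

Under this structure, "knowledge of the document" reduces to knowledge of the head hash of the chain, which is what makes hash-based access control plausible.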
- draft-nslag-mpls-deprecate-md5 (10 min)
[problem in routing: control plane often over simple UDP or TCP, and is a big target
for attacks. LDP runs over TCP, uses TCP-MD5 for authentication, which is no
longer considered secure. Help! TCP-AO? (Catch-22 with vendors/operators.)]
Jared Mauch: I've got like two decades' experience configuring routers, so I have
a short list of things I brought with me to the mic so that I remember all of them:
0) this is not a defense of md5
1) I mentioned this morning that we've had 5-7 year
old BGP connections, and having that stable is really important.
2) Note that md5 was introduced not just for RST attacks, but also back at that time
it was easier to attack the window as well. The router vendors' implementations
tend to not have rekeying support, so I have a full mesh of IBGP sessions,
with ca. 200 devices, and if I need to change the keys on those sessions, it
would become a global outage, for many of the networks involved. So we need
a method for live rekeying/rollover.
3) Just reiterating that encryption is not needed; we just need authentication.
Things like GTSM are useful, especially on 1-hop links, but when we get multi-hop
scenarios, we may not necessarily know the distance across the network to be able
to set the TTL. And it may change!
4) Time is not available -- routers come up from a cold state, and with no
routing available, we can't get time from a network source. So there's some
interesting dependencies here.
5) Most (edge) network operators don't have automation or orchestration. So there's
no key storage, and the person who provisioned the original session/key may have
left the company, so keys just get copied/pasted into future devices.
6) The time horizon is a 5-7 year window; we need time to upgrade hardware and
cope with needing to regenerate these long-lived sessions. Not all hardware
gets live software upgrades.
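Jared's point (3) about GTSM can be illustrated. GTSM (RFC 5082) has the sender transmit with TTL 255, so the receiver can reject any packet whose TTL implies it crossed more hops than expected. A minimal sketch (function and parameter names are illustrative, not from any implementation):

```python
def gtsm_accept(observed_ttl: int, trust_radius: int = 0) -> bool:
    # GTSM (RFC 5082): the sender transmits with TTL 255; each router
    # in the path decrements it, so the receiver accepts only packets
    # whose TTL falls in [255 - trust_radius, 255].
    return 255 - trust_radius <= observed_ttl <= 255
```

Jared's concern is exactly that for multi-hop sessions the trust radius is not known in advance and can change as the network path changes, which is why GTSM helps most on 1-hop links.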
Ben Kaduk: thank you for reiterating that list
Stewart: I'll definitely be looking for help
Jared: We definitely need something; there are firehose attacks of well-formed
BGP messages at the routers, and they need to be able to protect against those
Dan Harkins: There's a disconnect between what you're doing and what you say
the problem is, and the call for help -- there's a security Catch-22. Maybe
a big hacking incident would give the operators incentive to demand something
from the vendors.
Brian Weis: The draft and slides indicate TCP-AO is fine, just want a new algorithm
for it. Is that so, or are you looking for something more dramatic?
Stewart: If we use TCP-AO, we should confirm that it's really the right solution.
We'd need the rekeying stuff to be standardized, which isn't there yet, and
we need an agreement on a common algorithm to implement. We need a complete
package that operators can turn on and forget.
Brian: Remind everyone that TCP-AO was developed for routing protocols, and
took into account many requirements mentioned by Jared, and includes algorithm
agility. The question is, "what is good enough security for TCP-AO?" and what
are the blockages with the current algorithms already there. For example,
with the requirements slide, I suspect that AES-CMAC would mostly meet these.
If the orchestration of TCP-AO sufficiently met the requirements, then it's
just down to what algorithms to use. Is there really a problem for today's
route processors to do AES? I don't think there is, personally, but it would
be good to know if there's a difference of opinions.
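The algorithm question Brian raises can be illustrated. TCP-MD5 (RFC 2385) computes a bare MD5 digest over the segment plus the shared key, with no key separation or algorithm agility, while TCP-AO (RFC 5925) applies a negotiated MAC (RFC 5926 requires HMAC-SHA-1-96 and AES-128-CMAC-96) under derived traffic keys. A simplified sketch, not the on-the-wire computation (the pseudo-header coverage and key derivation are omitted):

```python
import hashlib
import hmac

def tcp_md5_digest(segment: bytes, shared_key: bytes) -> bytes:
    # RFC 2385: MD5(pseudo-header ++ TCP header ++ data ++ key);
    # simplified here to MD5(segment ++ key). No rekeying, no agility.
    return hashlib.md5(segment + shared_key).digest()

def tcp_ao_mac(segment: bytes, traffic_key: bytes) -> bytes:
    # RFC 5925/5926: a real implementation derives per-connection traffic
    # keys from a master key tuple and covers header fields; this sketch
    # just shows HMAC-SHA-1-96 (output truncated to 96 bits).
    return hmac.new(traffic_key, segment, hashlib.sha1).digest()[:12]
```

The CPU-cost question in the room is whether older route processors can afford the HMAC or AES-CMAC computation per segment at line rate, compared to the single MD5 pass they do today.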
Stewart: Well, we certainly need to be confident that many generations of
route processors in the field can do it. I don't really know. When I worked
for a router vendor, the processors had rather less CPU than you would like.
Brian: new algorithms would only be on new processors, though?
Stewart: I'm not so sure -- we need to get this deployed network-wide,
even on the old stuff. Not necessarily a forklift upgrade.
Brian: Please take this to Ops area and reinforce the point about demanding
from vendors, if you haven't already.
Stewart: I'm pleased to see that the new Ops AD is sitting here listening.
Ben Kaduk: and coming up to the mic!
Ignas Bagdonas: To answer that, the fact that something is standardized and
available as an RFC does not necessarily mean that any of the vendors will
implement it. This is exactly the case here. There are operational hygiene
practices that solve this problem to a good-enough extent that there is not
an urgent push from the user community on the vendors to implement new things.
TCP-AO was 2006; to my knowledge there is one commercial implementation of...
Kyle Rose: TCP-AO seems like the obvious choice from a technical perspective.
Maybe some crypto modernization, but I don't want to get too into detail.
One thing I brought up with Jared at lunch is to consider, if doing a rev
of TCP-AO, making the protocol a little more resilient to individual connection
breakage. I know that you want to bring down sessions when a connection breaks,
but maybe two parallel connections could help, e.g., for key rotation. Some
way to make the protocol more robust to that case.
Stewart: Operator shaking head over there.... If we did something like that,
that's definitely in the class of things where we need help. Authors of those
documents are not experts in security protocols.
Kyle: and I'm not an expert in routing protocols.
Stewart: good case for two (three?) areas to come together -- ops, routing, and security.
Kyle: I think TCP-AO is a good path since it avoids a lot of the bootstrapping
issues that a lot of the more complex protocols have -- Jared mentioned timestamps,
TLS is difficult to use, etc. Don't want the bootstrapping problem, e.g., OCSP
EKR: I was partly responsible for AO but will concede it sucks if you say so.
What's not clear to me is what, if any, the perceived technical deficiencies are
as opposed to just lack of demand. As I understand, survey says operators
aren't interested in making it work -- are they just happy with md5, or unhappy
with md5 but also unhappy with AO? Understanding that would be useful.
[off-mic]: chicken and egg?
EKR: If we designed some other protocol with different properties, is there
reason to believe that people would like that better?
Stewart: We don't know!
EKR: as a protocol designer, I don't know what to do.
Stewart: Nor am I -- that's why I'm here. If it's an impossible problem,
we may just have to write a note "revisit LDP security later" so we don't
get our routing protocols bounced. Let's do the right thing for the internet.
Russ Housley: I was security AD when TCP-AO was done. It started because
Ron Bonica came here and said "we need to change keys without tearing down
the TCP connection"; that led to TCP-AO. It sounds like we're having the
same briefing and the same discussion.
Jeff Hodges: I gave the referenced presentation at IEPG. No offense to
Stewart, but this has been presented completely wrong. Go to the "What Should
We Do?" slide in the backup material. The issue here is not TCP-AO, it's
"we're not security people; we keep getting told that the security in our documents
is not adequate and it keeps blocking the documents". We need to do something
about it, but what? TCP-AO is a perfectly valid set of mechanisms to allow
them to get their job done. The issue here is that we have existing protocols
that need to get security bolted on after the fact; you all know that it's
better if we can do it up front. So that's one problem. A second problem,
is coming up with a common set of mechanisms in the IETF to get past some
of the misunderstandings that we see. This slide is really what I'd like
the group to take as action items. Transport security considerations need
to be discussed up front. Also need help for existing mechanisms to bolt
security on after the fact without rewriting everything. There's also
the operational/deployment considerations -- it has to work and be reliable.
The other thing we need significant help with, is some common boilerplate for
common profiles of how we can use security mechanisms -- IPSEC profiles,
TCP-AO, TLS. Depending on the protocol/mechanism, each of these may be
appropriate for our work. It's least efficient for someone to write something
down and only get security eyes when it comes to the IESG. We'd like to
see common boilerplate to say "given the set of properties we need, here's
what we can do". Early secdir reviews can help, too. [refer to slide with unicorn]
And of course none of this stuff is actually any good if it's not implemented
and deployed. Partially a matter of encouraging vendors -- someone could
get TCP-AO into linux core!
Kathleen Moriarty: Alia (Atlas) says there is one that's close.
Jeff: From a discussion with Stephen Farrell when he was AD; we're not security
people and if there's no API or sysctl or whatnot, it's hard. We just want to
open a socket and bang bits. These things need to be easy to use -- don't
say "use TCP-AO" and expect me to write it in my application.
Rudiger Volk: Where to start... I have been noting the MD5 deprecation and
asking "will you implement?", and there hasn't been much running code for the
final version of draft-bonica. Doing the code years after standardization
is not a good way to actually get things out. If there's, over years, no
big noise about a missing function, one may ask if this is really an issue.
I have been pushing, but after these years I've been wondering if the security
guys should have a look and see if this is really the right thing in the
current circumstances. What could drive a rollout in my environment, would
be a complementary functionality that gets rolled out at the same time for
an overall improvement. For me as a BGP/inter-domain guy, the question would
be "is there a YANG model for key change?" and whether we expect the key
management could be done using that. For BGP 2, I would like very much to
inject a link so that the authentication between domains could be hooked to
good stuff that we have now (RPKI), which is supposed to enable authentication
between ASes. Something using RPKI to authenticate management, and that would
work well over the next decade, would be a good argument for implementation/adoption.
Ben Kaduk: hopefully we can get work on several things -- boilerplate text,
TCP-AO analysis if it's still the right thing and right algorithms, and a
way to convince the operators that they should be caring about this more.
* Proposal for Refactoring the Keystore Model
[ops/management AD with a call for help]
[background on current YANG "keystore" (trust anchors, really) models]
[ietf-keystore vs. ietf-trust-anchors and ietf-crypto-types]
Sean Turner: We had an IPSEC MIB way back in the day. To the best of my
knowledge, no one has done any YANG modelling for anything in the security area.
Yoav Nir: chair of i2nsf. We have an active document doing a model for IPSEC.
Michael Richardson: I tried to use keystore when it was keychain to model the
rekeying of (802.)15.4 networks; it didn't work the way I wanted, which is fine
as it wasn't intended to do that. This is a good approach and a good place to
do it, but I don't know what the endgame looks like. I think that the opt process
will make the ANIMA WG go through and put a lot of YANG on a lot of our pieces.
It's good to know that there's an IPSEC model that we can leverage. This
keystore/etc. stuff is going to be key to what we're doing. For bootstrap
we have enrollment, and with enrollment we have trust anchors and so forth.
We may not need to do updates, but we definitely want monitoring.
Wes Hardaker: Long ago we had an IPSEC security policy WG that was a partner
with a DMTF task force group to do modelling of IPSEC configuration. I'm an
author of one of the RFCs that came out of that. It took 15 pages just to be
able to see the lines -- it ended up being very complex; when you have two
standards groups trying to help each other, it does not go well.
Ignas: question for security ADs. Do you see a need for working on YANG models
of security aspects? Especially (from a routing perspective) tunnels: IPSEC,
etc. From a management perspective, things that are not automatable are not
deployable. After much work, routing management has gotten to a pretty good
shape. What is your view for security, on the manageability side?
EKR: We're not sure what people want to manage.
Ignas: in routing, we had management do the core framework, but the per-protocol
models were done in the individual routing WGs for those protocols.
EKR: Ah. I don't see much enthusiasm for the security protocol designers to
design YANG models for them.
Ignas: Management people also need help -- please help guide how security aspects
should be handled in this sort of modelling.
Chris Inacio: SACM chair. There's people developing YANG models as part of
SACM for what it means to be able to do monitoring and security on network
switches, etc., within the architecture. It's not clear it's a great fit
for the WG, but we're allowing it to progress at this point.
* open-mic (10 mins)
Eliot Lear: for Stewart's presentation, TCP-AO, etc. -- where should the discussion happen?
Deborah Brungard: The ops and security ADs will discuss, and possibly form a
design team with all three areas.