Minutes for SACM at interim-2015-sacm-3
minutes-interim-2015-sacm-3-1

Meeting Minutes Security Automation and Continuous Monitoring (sacm) WG
Date and time 2015-05-12 07:00
Title Minutes for SACM at interim-2015-sacm-3
State Active
Last updated 2015-05-29

============================================================================================================================================
IETF SACM WG Virtual Interim Meeting
10:00 AM EST – 12:00 PM EST
May 12, 2015
Minute taker #1: Danny Haynes
======================================================================
Agenda Bashing (Dan Romascanu / Adam Montville)
======================================================================
[Adam Montville]: There was a request to change agenda items #2 (Introduction
to the OVAL Information Model Mapping) and #3 (Introduction to Endpoint
Compliance). Since there were no objections to this change, the agenda was
updated.

======================================================================
Endpoint Compliance Standard (Jessica Fitzgerald-McKay)
======================================================================
[Jessica Fitzgerald-McKay]: This draft was born out of what SACM partially
requires which is to do collection directly on an endpoint about the state of
an endpoint. Some of this work has already been done in the Trusted Computing
Group and the IETF NEA working group and this draft explains how it could be
appropriate to bring this work to the IETF SACM working group. There are
intellectual property rights associated with many of the specifications, which are
owned by the Trusted Computing Group. This draft begins to help introduce the
specifications to see if they are something we even want to use before we go
back to the Trusted Computing Group to see what it would take to move the
specifications into the IETF SACM working group.  While I don’t know what they
would say to this request, in the past, the TCG was willing to transfer
specifications to the IETF and other groups. The TNC architecture was
originally used to do the comply-to-connect use case. Initially, the Access
Requestor, a device trying to connect to the network, would have a software
stack with integrity measurement collectors which would gather information from
the endpoint which would then communicate to the TNC Client which would then
communicate to the Network Access Requestor which initiates the connection with
the Policy Decision Point.  The Policy Decision Point could then take this
information, send it up its software stack all the way to the Integrity
Measurement Verifiers (IMVs) which would take this information and decide
whether or not the endpoint was compliant enough with the policy to join the
network.  It then sends the decision down to the TNC Server which either allows
the endpoint to join the network or reject the endpoint and have it go to a
remediation network, quarantine, or don’t let it access the network. Many of
the specifications have already been shared with the IETF in the NEA WG. IF-M
became PA-TNC, IF-TNCCS became PB-TNC, and IF-T became PT-TLS and PT-EAP. Now,
I would like to frame the TNC work in terms of our work.  Instead of doing
Integrity Measurement Collection (IMC) and Integrity Measurement Verification
(IMV) we want to think of Posture Collectors and Posture Validators speaking to
Posture Broker Clients and Servers just as they do in the NEA work.  These then
communicate with Posture Transport Clients and Servers.  The parts that we can
borrow from the TNC architecture which are valuable to SACM are the ability to
collect posture attributes from an endpoint, like internal collectors in SACM,
and evaluation of the posture attributes against policy guidance, which is
something that we can already do with the architecture today.  There are secure channels
for exchanging posture assessment information which meets a lot of our
requirements that we specified in our Requirements document.  I think there is
some room for some changes to the basic architecture that we could work in the
SACM work group and better meet some of our use cases.  There is some work we
could do with the SACM evaluator and the SACM internal collector and the need
to separate collection from evaluation.  In the draft, I talked about taking
some of those validator functions and splitting them up.  Maybe on our NEA
Server we can keep the aspects of the Posture Validator that do things that
better align with provider-consumer roles in our architecture and keep the part
of the Posture Validator that allows you to request and report information from
an endpoint.  Then we could keep the part of the Posture Validator that lets
you know if an endpoint can provide you with the information that you are
looking for, whether or not it has the right Posture Collectors on it, and keep
that all of that at the server.  However, we can move some parts of the Posture
Validator off of the NEA Server.  Maybe we could take the pieces of the
Posture Validator that do the evaluation against the network policy and move
that out to an evaluator role that better meets our SACM use cases.  We have
mapped the TNC/NEA roles to the roles that we have already described in our
architecture and we already have our provider and consumer, the Posture
Collectors and Posture Validators, our Posture Broker Clients and Servers act
as SACM controllers, and through our protocols and interfaces, we have a secure
channel to exchange posture assessment information between the providers and
consumers.  As mentioned before, a lot of the TNC architecture has been worked
in the NEA working group, but, there are still a lot of places where we can use
the standards that still exist solely in the TCG in our work to expand our use
cases.  We could standardize the inter-endpoint and intra-component interfaces
as well as standardize the different types of attributes.  For example in the
Internet Draft, I call out SWID Messages and the communication of SWID Messages
between an endpoint and posture server.  I think we can make use of that
because it meets a lot of our information model use cases.  We can also
standardize some of the bootstrapping activities like the initial connection
between the endpoint and the collector that will be collecting the data for
evaluation.  Here we list off some of the standards that meet each of these
points.  So, if we could bring in IF-IMC and IF-IMV, we could get the
endpoint-to-component interfaces.  SWID Messages and Attributes for IF-M standardizes the
collection and transmission of that specific type of information.  That may not
be the only information SACM wants to collect.  I understand that we have a lot
more data that we will need to collect to meet all of our use cases, but doing
software asset management meets a lot of our needs, so we can start there, see
where this architecture goes with that particular piece of posture information,
and see if that is something that we can then build upon.  Server Discovery and
Validation is a TNC specification that is in progress and there is a public
draft, available now, which allows you to locate and verify appropriate servers
for sending this posture information.  Reusing standards such as TNC is good
because it is better to not re-invent the wheel and see if we can build upon
them for our particular use cases.  So why do this?  TNC already defines
several parts of security automation which is aligned with NEA and can be
aligned with SACM.  There is no reason to re-invent the wheel for internal
collection, it exists, and is out there.  With TCG’s permission, maybe we can
go ahead and take these specifications, expand on them to meet our specific
needs, and make sure we have unification rather than bifurcation of these
standards.  There is running code as strongSwan has implemented all of these
specs and the code is available open source.  There is also a more recent Java
implementation for the same standards.  We have the running code, now we just
need the rough consensus.  With the group’s agreement, I would like to request
that the TCG submit these standards to SACM.  Within the TCG, everyone is
comfortable with that; we would just need to go through some work with the
board there, but, I don’t really foresee that being a huge problem.  Within
SACM, we can then adapt these standards to meet our requirements while we
maintain compatibility with TNC.  I think this represents a huge step forward
in tackling some of our more basic use cases and provides us with an
architecture to build on for further collection of data from endpoints.  I am
happy to discuss this and answer any questions with the remaining time. [Dan
Romascanu]: I have a couple of clarification questions.  First of all, we are
talking about how many standards?  You listed two and I get the impression that
they come in pairs.  So we are talking about two or four documents? [Jessica
Fitzgerald-McKay]: IF-IMC and IF-IMV come as a pair.  They are complementary. 
One is on the endpoint side and the other is on the server side.  SWID Messages
doesn’t come in a pair.  It is standalone and just rides over IF-M.  Server
Discovery and Validation is just a single specification and doesn’t have a
partner specification to go with it.  It just allows the endpoint to find an
appropriate server to which it can report its posture information.

[Dan Romascanu]: I don’t know anything about the structure of the standards,
but, is there a combination of the protocol mechanisms and data model
attributes? [Jessica Fitzgerald-McKay]: The SWID Messages specification is more
in alignment with collecting a specific type of posture information so, yes, it
is appropriate to think of it as a format to meet our data model needs.  Server
Discovery and Validation is more of a bootstrapping to a protocol and IF-IMC
and IF-IMV are the interfaces. [Dan Romascanu]: Are the standards available for
anybody who wants to go and read them? [Jessica Fitzgerald-McKay]: Yes, I
link to all of them in the Internet Draft.  Certain ones that are in draft form
have been worked pretty extensively since they were last published for public
review.  Server Discovery and Validation is one of them and has been pretty
heavily updated, but the basic ideas on how you would discover the appropriate
server to report to have been maintained through the revisions.  But again, if
this is something that the group looks at and decides it is more or less
the right path, we can talk to TNC about getting a more up-to-date copy out
there for people to take a look at. [Dan Romascanu]: Talking about talking to
TNC.  The previous experience with NEA is that they didn’t release the
specifications for change control to the IETF. [Jessica Fitzgerald-McKay]: The
way that I understood it was that they took the specifications from TNC and
completely rewrote them with the same ideas, same content and if you
implemented it, it would be the same specification, but, with different words. 
All of the change control for the IETF versions was maintained by the IETF and TNC had a
responsibility to keep their specifications in alignment with IETF
standardization.  I am not actually recommending that we take this path, unless
people want to take that route.  I think we would have more success just going
to the TCG board and asking for it and see what they say.  If they say we need
to rewrite it, we will face that if we must, but, I don’t feel uncomfortable
asking them if they can just release the specifications. [Dan Romascanu]: Yes,
that helps understand the precedent and the possible paths, but the first step is
to go and read the specifications and see to what extent they meet our
requirements, compare them with other options if available, but, this is a good
start. [Jessica Fitzgerald-McKay]: Any other questions?  There were no
questions. [Phyllis Lee]: Is the group interested in pursuing this Internet
Draft and getting these specifications over? [Dan Romascanu]: If you are asking
about the immediate next steps, we are interested in reading the specifications
and forming an opinion. [Dave Waltermire]: Would it make sense to have
volunteers read the specifications that Jessica has called out and do some
analysis with respect to the requirements? [Dan Romascanu]: Yes, that would be
a good thing to do and if there are volunteers, please say now or consider
stepping up.  But, we probably need to at least have some level of
understanding about what they are about.  If there are volunteers at this
point, that would be great. [Adam Montville]: We look forward to having some
volunteers step up and we will ask this on the list.
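
As background for the SWID Messages discussion above, the following is a
minimal sketch of pulling the identifying fields out of a SWID-style software
identification tag.  The XML here is a simplified illustration only: real
ISO/IEC 19770-2 tags carry an XML namespace and additional entity and payload
data, and the attribute names shown are assumptions for illustration rather
than text from any of the specifications discussed.

   # Illustrative only: a simplified SWID-style tag and how a posture
   # collector might summarize it.  Real ISO/IEC 19770-2 tags use an XML
   # namespace and carry more detail (Entity roles, Payload, etc.).
   import xml.etree.ElementTree as ET

   SAMPLE_TAG = """
   <SoftwareIdentity name="ExampleApp" version="2.1.0"
                     tagId="example.com-ExampleApp-2.1.0">
     <Entity name="Example Corp" role="softwareCreator tagCreator"/>
   </SoftwareIdentity>
   """

   def summarize_swid(tag_xml: str) -> dict:
       """Return the fields a collector might report for one installed product."""
       root = ET.fromstring(tag_xml)
       entity = root.find("Entity")
       return {
           "name": root.get("name"),
           "version": root.get("version"),
           "tag_id": root.get("tagId"),
           "creator": entity.get("name") if entity is not None else None,
       }

   print(summarize_swid(SAMPLE_TAG))

In the approach described above, records like this would be carried between
the endpoint and the posture server as SWID Messages over IF-M.
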
======================================================================
OVAL and the SACM Information Model (Matt Hansbury / Danny Haynes)
======================================================================
[Matt Hansbury]: First thing, I don’t know if everyone knows what OVAL is.
OVAL is the Open Vulnerability and Assessment Language which is an XML-based
language for encoding details about endpoint posture assessment.  So, you could
imagine a natural fit for SACM.  It has been around since 2002 and MITRE has
operated as the moderator for OVAL on behalf of the Department of Homeland
Security.  A couple of other things, OVAL is widely adopted with broad support
and defined as the primary checking language for SCAP which most of you should
be familiar with.  OVAL is certainly supported within SCAP.  A quick note
similar to what Jessica went over, there are some IPR considerations.  We, DHS
and MITRE, are working to figure out what that means, but, we think it
shouldn’t be an issue and we are working on that and will figure that out.

[Dan Romascanu]: Out of the 45 organizations, maybe some of them are
international, so is there no border or national limitation, given that it
belongs to DHS? [Matt Hansbury]: You are correct.  There is definitely
international support for OVAL and there are 13 countries that use it.  There
are no international constraints there, it just happened to be funded by the US
Government.  We created this paper to do a few things, with the main objectives
being to take the existing parts of OVAL and map them to the different parts
within the SACM information model and make some concrete recommendations that
we believe are the correct way to make use of them in SACM as well as some
lessons learned from the 12+ years of working on it.  One point that is not
here is that the information model was a moving target and when we started the
paper, there were some additional changes to the information model so if there
needs to be some updating we will do that, but, we wanted to get it out there
to get the conversation started.  The paper goes through the different parts of
the OVAL Language that could be used within the SACM information model, but, I
think the key takeaways are the recommendations and the things that we think
are worth doing within SACM.  One of the things is that OVAL is composed of
several different models, each of which does different things, and the paper goes
over this in a bit of detail and I hope that you can check this out.  In short,
the model called system characteristics is the way to take the information that
you pull off the endpoint and encode it in XML so that you can pass it around
and do things with it.  One of our recommendations is to take
that model and use it as the base for a data model for SACM for data
collection. We don’t think the OVAL Language today will do everything that SACM
needs to do, but it is a good starting point for doing these different things. 
The second recommendation concerns the OVAL Definition Model, which defines the
check itself: what data needs to be collected and what the data needs to look
like.  These are combined, which is a limitation of OVAL.  OVAL wraps together
collection and evaluation guidance into a single thing, which is a design
feature, although in SACM we want to separate those two things.  So, we would
want to pull those things apart and use them as a base from which we would
build the SACM collection and evaluation guidance.  Our last recommendation
concerns yet another model,
OVAL Results which captures the assessment results and we are recommending that
we do not use that for SACM.  It just never quite satisfied the community’s
needs for granularity.  Those are our three main recommendations and the
paper lays this out in more detail.  Another key part of the paper is the
lessons learned and there are more than these four, but we just wanted to
highlight what we thought were the most important.  One of the key ones is
simplicity.  This goes beyond our lessons learned for OVAL, but, aligns with
some of the other efforts that we have been involved in.  When you are involved
in an information sharing effort, you need to make sure that you are sharing the
right information with the right people.  We found that if you try to share too
much information, you end up with an unsuccessful sharing effort.  The next
thing that the group understands very well is that we need to decouple
collection and evaluation.  The third one is to empower subject matter experts
and this is a key one that we touched on a few times in the paper and the
history with OVAL is that we have, for years, been taking on a very
comprehensive approach including doing all of the research for the OVAL
Language and writing the different tests and basically trying to figure out and
reverse engineer different products and platforms, and more and more we have
tried to push toward having those who know them best, typically the primary
source vendors, be empowered to share the information.  That is, we don’t try to do
it for them, but, let them do it.  Also, carrots work better than sticks. 
Mainly, we need to give them good business reasons to do it rather than
mandating that they do it.  We would like to continue this discussion on the
mailing list and we will revise the document based on feedback.  We are looking
at the IPR issues and will work that out.  Then, we will plan a schedule for
contributing the data models as it makes sense.  Does anybody have any
questions?
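
To make the decoupling recommendation concrete, here is a small sketch of what
separated collection guidance and evaluation guidance could look like when
applied to collected system characteristics.  This is not OVAL or SACM syntax;
the field names, package names, and version comparison are invented for
illustration only.

   # Illustrative sketch: OVAL today combines "what to collect" with "what the
   # collected data should look like"; the recommendation is to keep those as
   # separate artifacts.  None of these structures are real OVAL constructs.

   # Collection guidance: describes only what to gather from the endpoint.
   collection_guidance = {
       "item": "installed_package",
       "fields": ["name", "version"],
   }

   # Evaluation guidance: describes only the expected state, kept separately.
   evaluation_guidance = {
       "item": "installed_package",
       "where": {"name": "openssl"},
       "minimum_version": "1.0.2",
   }

   # System characteristics: what a collector actually reported (example data).
   system_characteristics = [
       {"item": "installed_package", "name": "openssl", "version": "1.0.1"},
       {"item": "installed_package", "name": "bash", "version": "4.3.30"},
   ]

   def evaluate(characteristics, guidance):
       """Compare collected items against evaluation guidance, independently
       of how or when the items were collected (naive string-based version
       comparison, for illustration only)."""
       results = []
       for record in characteristics:
           if record["item"] != guidance["item"]:
               continue
           if any(record.get(k) != v for k, v in guidance["where"].items()):
               continue
           compliant = record["version"] >= guidance["minimum_version"]
           results.append({"name": record["name"], "compliant": compliant})
       return results

   print(evaluate(system_characteristics, evaluation_guidance))
   # -> [{'name': 'openssl', 'compliant': False}]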

[Adam Montville]: Please review the submission and resources here and offer
suggestions.

[Dan Romascanu]: As co-chair, let me take an action to send a message to the
list to call for volunteers to review both contributions as there may be some
people who could not make the call today.  It would be very good if there would
be a couple of folks for each of the contributions, of course not the authors,
to investigate how it aligns with SACM and hear back at the next virtual
interim meeting.  People should try to find some time to read because that is
the next step.

======================================================================
Endpoint Identity Design Team Update (Dave Waltermire / Adam Montville / Cliff
Kahn / Danny Haynes)
======================================================================
[Dave Waltermire]: We started the Endpoint Identity Design Team back in
December 2014 and have been meeting about once a week since then.  We have come
up with quite a bit of information that we believe will help augment the
information model document.  We have a set of goals that we defined at the
onset of the design team.  We want to provide an assertion model for endpoint
identifying attributes so we want to define effectively a logical model that
can be used to represent a number of attributes that can uniquely identify a
given endpoint.  We want to be able to support correlation between different
sets of assertions over time and are recognizing that there may be multiple
points of view that can be captured as part of collecting posture information
from the endpoint and those points of view will have a set of identifying
attributes and we want to be able to correlate those different points of view
between the posture information that is collected.  We want to focus on direct
observations over network-based or indirect observations.  While we talked some
about network-based observations, looking at traffic on the wire, protocol
activities to identify the endpoint, etc., our primary focus has been collecting
identifying attributes through direct observations such as software on the
endpoint and software that interacts with software that is on the endpoint. 
Secondarily, we have been working to establish a method for confidence
assertions, though that has been a fairly minimal focus of our work.  Next, we
will go through the different test cases.
[Adam Montville]: In the following slides, we will go through the different
test cases that we have done. [Danny Haynes]: At previous Endpoint Identity
Design Team meetings and at the meeting in Dallas, we came up with a set of
test cases, starting from the original set of identifying attributes, and tried
to determine which ones we really need versus which ones we can toss aside.  We
really just wanted to get to a core set that
we thought was important and then go from there.  We wanted to use that core
set of attributes to update the information model.  This first test case is the
collector on an endpoint and it really represents the traditional endpoint in
an enterprise environment, your workstations, servers, and things like that
with an internal collector on them.  In this test case, the collector is
responsible for collecting attributes either based on guidance that provides
instructions on what to collect or the collection may be triggered by an event
that happens on the network.  From there, once it collects the attributes off
of the endpoint, it will take those attributes and send them to some consumer
whether it is a data store housing all the attributes on the network or
possibly an evaluator that may take in addition some evaluation guidance, and
come up with some results and then use it later on.  In this test case, we
looked at two specific scenarios.  One using endpoint identity information to
retrieve guidance from a data store which is important because a lot of times
the guidance you are using to collect information from the system can depend on
a variety of factors such as the software of the endpoint, the criticality of
the endpoint with respect to the organization, and things like that.  Knowing
how the endpoint is composed will help figure that out.  The second scenario is
once you have a bunch of attributes collected off the endpoint when you are
going to send them somewhere else, you will need to be able to associate them
with the endpoint from which they were collected and the identifying
information would be used in that respect.  We are not going to go over the
lessons learned slides in the interest of time. [Dave Waltermire]: There is a
lot of information in the slides though so if you are interested in reading in
greater detail about the work that we have done, please review the slides.
[Cliff Kahn]: With the external collector that queries endpoints, we get an
architectural distinction that is in the information model.  There are existing
management protocols and different ways to get information from an endpoint
which are available for SACM.  Here, we are envisioning that the architecture
is extensible enough to get information in whichever way works and then talks
to the rest of SACM using a SACM protocol.  An adapter, proxy, or whatever we
want to call it could be on the endpoint, it could be a NETCONF server, an SNMP
server, or it could be this proxy thing that talks to the endpoint by SSH-ing
into it and using proprietary command lines to scrape information about its
state.  Which of these is used doesn’t matter much, but it matters a little bit
in the information model, where these cases may not be modeled exactly the
same, which is why we wanted to call it out.  The pragmatic view is that the
model should accommodate all of them.  The next test case is the network
profiler test case.  While
network observations aren’t our focus, it is still important, as a practical
reality, to test our information model against the kinds of information that a
network profiler would be able to provide.  Network profiling here means
characterizing an endpoint’s operating system, version, etc.  We are not
trying to decide whether or not an endpoint is misbehaving as that is something
that we don’t want to get into with SACM right now.  I am talking about this
being iOS version X based on seeing how it acts on the wire and there are
commercial products that do this.  We defined network profiler as a component
that observes network traffic or infrastructure such as ARP caches, etc. and
working at that kind of level.  Within this, there are three ways that a
network profiler can add value into a SACM ecosystem deployment.  One way is to
characterize an endpoint that is not possible to query (e.g., via SNMP, NETCONF,
NEA, etc.), either because of what software is on it or because it is not part
of the enterprise network infrastructure that does these things, which is going
to be a significant portion of the world for a while.  We have BYOD that
in principle may be able to participate in some of these protocols, but, in
practice they are not doing it.  We have constrained devices.  I know we said
that they were out of focus, but, the fact is profilers learn about them and it
is potentially good to know what you have on your network.  Maybe you are happy
to have BYOD Windows 7 machines, but, you are not happy to have Windows XP
machines so it would be good to know whether you have any.  This is also useful
for detecting rogue endpoints in the network infrastructure.  Another use for a
network profiler is detecting an endpoint that should be under SACM monitoring
but isn’t, or maybe should be under more SACM monitoring than it is.  For
example, all Windows systems should have a full-fledged SACM monitor that can
provide all kinds of attributes, but one of them doesn’t.  A profiler could
find that. 
Again, we need to be able to correlate what the profiler has found with the
other SACM collectors to determine what is or is not under monitoring by other
collectors.  The third use is to cross-check.  For example, if a network
profiler contradicts what another collector is saying, you may want to dig into
that and find out what is happening.  The endpoint may be compromised and
lying, or the collector may be confused, etc., and you may be able to determine
that something is wrong.  So, cross-checking seems really
important in a security system.  We also need to correlate identities with the
identities that other SACM collectors observe to achieve any of these
scenarios. [Danny Haynes]: The next test case that we looked at is around the
endpoint identity information in the data store which serves a pretty important
role in housing guidance, attributes, previous assessment information, etc.  It
is also a source for data.  You may be able to get attributes from an internal
collector on the endpoint or depending on the requirements on the data you need
you may be able to query a data store and collect information that was
collected at a previous time.  So, the two big scenarios here were that a lot
of the information that is collected and stored in the data store will be
associated with a particular endpoint and as such the identifying information
will serve as a valuable key for either storing information in the data store
or looking it up.  So, in this test case, we looked at what are those minimum
set of attributes that we need to be able to grab a particular piece of
information from the data store with regards to particular types of endpoints
whether they be traditional such as workstations and servers, mobile devices,
or even network infrastructure devices.  We didn’t really look at constrained
devices because at the IETF meeting in Dallas we decided that was out of focus
for now and we will pick it up at a later iteration.  Again, we have some
examples of things that we may want to look at when further looking at data
stores for SACM.  The last test case was around the evaluation capability. 
During one of the Endpoint Identity Design Team meetings someone described an
evaluator as something that takes input and provides some output which is a
good basic definition.  In this test case, we looked at the input being posture
assessment information collected off an endpoint, possibly using previous
evaluation results and reports to make comparisons about how the state has
changed over time, and obviously evaluation guidance which will guide how the
evaluation is carried out.  In the evaluation of posture assessment
information, the output would be evaluation results.  The endpoint identity
information really comes into play here in that the information may be passed
to the evaluation capability, or the evaluation capability may have to track it
down itself; knowing the identity of the target endpoint that it is looking at
and needs to make some evaluation about, it will need to use the identity
information to find the posture attributes it needs and things like that. 
We also have some examples.  One last thing that I would say is if there are
some important test cases that we haven’t covered that we should, please let us
know or send it to the list so we make sure that we cover everything. [Adam
Montville]: Basically, I think what we have learned so far is that we don’t
know everything and what one enterprise uses as an identifier might not be
applicable to a different enterprise so we should probably accept the open
world view and say that somebody else might see something as an identifier that
we don’t.  So, extensibility is a must and we might consider a controlled
vocabulary approach as a way to manage these things in the future.  So, what do
we do now?  We need to continue discovering how we talk about identifying
endpoints which is what we are going to introduce here.  There are certain
properties that are interesting: multiplicity, persistence, immutability, and
verifiability.  We also need to further discuss these which will happen in the
Endpoint Identity Design Team and then as a working group as a whole.  Next, we
will go over the spreadsheet which discusses the attributes and how they align
with the different properties. [Dave Waltermire]: One of the things that we
were trying to accomplish in the design team was to highlight specific
attributes that we would suggest be provided in most identifying cases, as a
way to provide a baseline for correlation, which is one of our goals, and we
had started with this concept of primary versus secondary identifying
attributes.  The idea was that
the primary attributes would be the ones that we would promote to be provided
as much as possible when reporting the identity of an endpoint.  The challenge
that we ran into was we were largely making qualitative judgements on various
attributes based on personal preferences and we were really struggling with
what should be in that primary versus secondary category.  A couple weeks ago,
we started kicking around a notion of really trying to decompose our analysis
along a number of different dimensions that may allow us to take a more
quantitative approach to characterize various identifying attributes:
multiplicity, persistence, immutability, and verifiability.  We are looking
today for some input on whether these are the right set of attributes that we
should be considering, and whether there are additional attributes that we
should consider important from an identification and correlation perspective. 
So, keep that in
mind as we talk through this and feel free to jump in if you have any questions
or comments as we go through the spreadsheet. [Cliff Kahn]: Just to clarify,
when you talk about attributes, are you talking about the meta-attributes like
persistence?  Or, about the attributes like MAC addresses? [Dave Waltermire]: I
would call the four meta-attributes properties.  What I was talking about was
these are the properties that we are using to describe attributes. [Cliff
Kahn]: You asked people for input on attributes and you meant to say properties
right? [Dave Waltermire]: Yes, we are looking for input on these four
properties of attributes and we are looking for if we are missing any specific
properties that might be important from an identification perspective. [Cliff
Kahn]: No worries.  Thanks for clarifying. [Dave Waltermire]: Thanks for
asking.  So, I am going to go through the specific properties.  The first one
is multiplicity.  We have two values that we have been considering for
multiplicity: one-to-one and one-to-many.  What this really means is, for a
given attribute value, is that value assigned to a single endpoint or is it
typically assigned to more than one endpoint?  This speaks to some degree to
the uniqueness of the attribute value and the ability to use that value to
uniquely identify a given endpoint.  One-to-one means the attribute is
associated with a single endpoint and can help to uniquely identify the
endpoint; one-to-many means the attribute might be assigned to more than one
endpoint and might not be able to distinguish a given endpoint. The next
property is persistence, which is really how likely an
attribute value is to change and we have been characterizing this property as
1-to-5 with 1 meaning the attribute value is constantly changing meaning that
it is probably not a good attribute for identification because of that constant
change.  The next level up is that the value changes on an ad-hoc and often
unpredictable basis, so we cannot predict when it is going to change.  The
middle value is that the attribute value should only change based on an event
and the change is essentially predictable and event driven on some level.  The
fourth level that we have is the attribute value should only change when the
device is re-provisioned or initially provisioned as well.  These tend to be
very persistent values that are configured initially and maybe only
periodically updated when the attribute value expires and needs to be
re-provisioned.  Finally, the highest and the most ideal value for persistence
is the attribute value never really changes and is always the same.  The next
property that we were considering is immutability and this deals with how
difficult it is to change this attribute value.  Typically, what we are talking
about here is software versus hardware rooted types of concepts.  The lowest
level of immutability means that the attribute can be basically changed without
any kind of controlled access; anyone who effectively has access to the device
can make that change, and there are really no controls in place to prevent that
change from occurring.  The middle level deals with effectively
implementing some kind of user process level access control to limit the change
of the attribute to some authorized set of users or processes and then the
highest level is effectively that the attribute is hardware rooted and cannot
change.  Granted, hardware rooted is a concept that we have some difficulty
with because virtualized environments are often virtualizing the hardware,
which often makes it easier to change values that are embedded in the hardware
in those cases.  One thought that we had there is if you are virtualizing
hardware you effectively cannot have the highest level and are likely looking
at the next level down which can be changed with the appropriate access.  This
is something that we are still trying to sort out: specifically, how do we
deal with this hardware versus software rooted situation and how do we account
for virtualization.  Finally, the last property is verifiable which is maybe a
synonym for how easily we can corroborate that information using another source
which is actually something that Cliff was talking about earlier when he was
talking about profiling and one method is corroboration where we can
effectively observe that information on the network and corroborate what the
endpoint is reporting about itself as an example.  An alternate way of doing
that may be using various cryptographic mechanisms to verify information that
the endpoint is reporting.  So, we have been considering three different values
for this verifiable property.  The lowest value is the attribute value cannot
be externally verified and there is no method that would allow for
corroboration of the attribute when it’s reported, and we would have to take
the word of the reporter without the ability to externally verify it.  The
middle is the attribute value can be externally verified and corroborated, but,
there is no way to absolutely tie that value back to the source endpoint that
might be asserting that attribute value.  The highest is the ideal case where
it can be externally verified and we can tie that value to the specific
endpoint that is asserting it, or to the software on the endpoint that is
asserting it.  Any questions on these properties?  Comments?  I probably don’t have time
to go into any kind of detail on the specific attributes.  I guess one thing we
could do is let people glance at this and raise any questions that they
might have.  I guess I can speak briefly to some of the questions that we are
currently struggling with.  I did mention the immutability question relative to
virtualization.  In our last Endpoint Identity Design Team meeting, we broke
out the various attributes to talk about where they are hardware rooted versus
software rooted, to try and characterize those differences.  One
thing that came up during the last conversation we had is should we really
consider a difference between MAC addresses that are basically encoded in the
network interface card versus those that have been assigned by the operating
system or networking stack.  From that perspective, we largely are seeing the
effective version of that MAC address and depending on the operating system
capability, it may be difficult to determine the difference between the
interface-assigned MAC address versus the one that is hardware encoded. [Dan
Romascanu]: Typically, the MAC addresses assigned by the operating system or
some other software are those not locally assigned addresses?  There is a
48-bit format and there is a locally assigned bit? [Dave Waltermire]: You are
suggesting there is a bit in the MAC address that should be used when the
address is locally assigned? [Dan Romascanu]: I don’t claim that I know all the
operating systems, but, I assume the operating system will use the software
algorithms and use the locally assigned space. [Dave Waltermire]: I see and
that goes in the vendor-associated segment of the MAC address? [Dan Romascanu]:
Yeah.  Basically, the vendor has a mask of 24 bits, which are tied to the OUI,
and actually only 22 are used for vendor assignment because one bit is the
individual/group bit and the other is used for local assignment. [Dave
Waltermire]: I see. [Dan Romascanu]: Having said this, I should point to the
IEEE which just started work to provide even more structure within the
globally-assigned space and locally-assigned space.  So, there will be some rules
that will not necessarily accommodate virtual machines and other virtual
entities, but, it is supposed to be backwards compatible so everything else
should remain in place. [Dave Waltermire]: That is something that we will need
to investigate.  There are also some protocols. [Dan Romascanu]: The person who
knows a lot about this mechanism is Donald Eastlake, in both the IEEE 802.11
working group and the IETF groups that are related to mobility, because
many of those mechanisms are used in mobile networks. [Dave Waltermire]: Maybe
we can invite Donald to the Endpoint Identity Design Team meetings? [Dan
Romascanu]: That would be a good idea. [Henk Birkholz]: I just tried Linux,
UNIX, and BSD operating systems and they can differentiate at the software
level between the software-assigned address and the hardware-assigned address.
[Dan Romascanu]: They can. [Henk Birkholz]: My question to the team is, is it
possible that some systems can get the original address from the operating
system? [Dan Romascanu]: That’s exactly my question.  I don’t know if all
operating systems support it.  They probably follow the IEEE standard, but they
may use some other mechanism which I am not aware of.  So, I don’t know if that
is the only mechanism in place.  An enterprise can always buy a space which
would be out of fixed space and apply some rules within this space and this is
what the new IEEE standard tries to avoid and actually tries to provide some
more structure within the locally-assigned space. [Dave Waltermire]: There has
been some protocol work that I have seen that allows you to rotate assigned MAC
addresses dynamically to be disruptive to malicious actors on a given network. 
That is something that could impact the persistence of the assigned MAC
address. [Dan Romascanu]: The main person you may want to talk with is Juan
Carlos Zuniga.  He is the guy who runs the privacy experiments at the last
couple of IETF meetings and he is actually very much involved with the new work
in the IEEE.  Some of those mechanisms actually have privacy implications, and
Kathleen Moriarty may actually know a lot about this, but basically the idea is
that you need to protect against attacks and vulnerabilities; systems that use
a certain but fixed space are also easier to attack from a privacy point of
view.  Now, Juan Carlos Zuniga has been deploying at the last couple of IETF
meetings an anonymization algorithm which basically blurs the MAC address to
disallow identification on the network, which is kept at the internal layer. [Kathleen
Moriarty]: Good point Dan.  Thanks for bringing that up.  I think it is just
going to be the continuing trend of looking for places to obscure any type of
identifier so that privacy is more protected. [Dan Romascanu]: So probably the
observation is, as you are looking at your structure right now, don’t assume
there is a finite set of types; you may want to add something that is more
generic, a kind of anonymized MAC address, because those things are already
starting to be deployed, especially for privacy purposes. [Dave Waltermire]: Ok.
[Dan Romascanu]: Inviting Juan Carlos Zuniga and/or Donald Eastlake to one of
the design team meetings is a very good idea. [Dave Waltermire]: Ok. [Kathleen
Moriarty]: So, for other identifiers that are in use, what is coming through
are schemas or other information models for other areas within the IETF where
you have the option to put in the identifier, but the preferred option is to
have it obscured, left out, or generalized in some way so that it is not the
identifier that we have known in the past.  They are showing up in schemas with
privacy options included. [Dave Waltermire]: Ok. [Kathleen Moriarty]: So on a
local network, the local network might decide it is fine with the use of
identifiers, but it may not. [Dave Waltermire]: Right, and I think this
further justifies our initial thoughts that we need some type of controlled
vocabulary approach because this situation is going to continue to evolve and
we may need to characterize different kinds of attributes and things will need
to be added dynamically to whatever models we end up developing.  So, it seems
to me a reasonable way to account for that.  Maybe something like an IANA
registry would benefit us. [Adam Montville]: Just a quick time check as we
still need to go through requirements. [Dave Waltermire]: I am happy to stop
here.  This has been a good discussion and we have some actions that we need to
take.  So, if we could defer further discussion to the list then we can get
started on requirements.
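
Following up on the MAC address discussion above, the sketch below checks the
two flag bits in the first octet of a 48-bit MAC address: the individual/group
bit and the universal/local bit that marks a locally administered (typically
software-assigned) address.  This is a minimal illustration only and not part
of any SACM specification; the example addresses are made up.

   # Minimal sketch: inspect the flag bits in the first octet of a 48-bit MAC
   # address.  Bit 0 (least significant) of the first octet is the
   # individual/group bit; bit 1 is the universal/local bit, which is set for
   # locally administered (typically software-assigned) addresses.

   def mac_flags(mac: str) -> dict:
       first_octet = int(mac.replace("-", ":").split(":")[0], 16)
       return {
           "locally_administered": bool(first_octet & 0x02),
           "group_address": bool(first_octet & 0x01),
       }

   # A locally administered address (for example, one assigned by a hypervisor
   # or a privacy scheme) would generally score lower on the persistence and
   # immutability properties discussed above.
   print(mac_flags("00:1B:44:11:3A:B7"))   # universally administered example
   print(mac_flags("02:00:5E:10:00:01"))   # locally administered example
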
================================================
SACM Requirements (Nancy Cam-Winget)
================================================
[Nancy Cam-Winget]: So one logistical question that I have, and part of it is
that I just have not had time to play with GitHub, once I was responding to all
of the issues I couldn’t figure out how to go back and look at the threads that
I had already responded to.  Do you know? [Adam
Montville]: I do not know, but, that is something that we will have to look at.
[Dan Romascanu]: This sounds like an Aziz question, but, he is not on the call
so send to the list and copy him to make sure he pays attention. [Nancy
Cam-Winget]: Ok, I will shoot him and Jim Schaad a message, because the other
thing that I noted was that I thought part of the reason why we used GitHub was
to put the proposed changes to the actual draft text in GitHub, and I couldn’t
figure out how to do that.  Those are logistical questions that would be good
for the group to sort out moving forward, along with not being able to see the
original messages when I respond to the issues, which I wanted to improve as I
started addressing comments on the Architecture draft.  Given that, I did post
the updated draft version -05
so I don’t know how many on the call have had time to look at it since I posted
it not that long ago.  If you go onto the next slide Adam.  So, I received a
really good set of comments from Jim Schaad and Chris Inacio, and some follow
up from Lisa Lorenzin as well.  So, if you guys can go in GitHub, you will see
all of the issues that Jim opened and then Chris basically sent a long list on
the SACM mailing list that I responded to so as far as I can tell all of them
but one is closed by Jim.  I haven’t heard back from Chris yet.  On to the next
slide, so there is one last editorial issue from Jim’s point of view.  In the
process of going through all of the comments from Chris and Jim and the follow
up from Lisa, I thought it would be good for us to review some of the bigger
ones that require the whole group to be aware of them.  The first of the two
biggest ones that I put on this slide is that we had a long thread with respect
to how many information models there are, and I couldn’t quite tell from the
SACM mailing list whether or not we reached consensus that there would be one
and only one.  The interpretation that I had taken from the charter and the
thread, and that I had been going along with, is that the information model is
the abstract definition of the types of attributes, and potentially the types
of operations as well, to allow us to do the collection, guidance, and
evaluation of posture information, and that would provide the guidance by which
we could then instantiate one or more data models.  So, let me stop there
and see if there are any comments or questions on that one. [Henk Birkholz]:
One question from me is in some side comments I tried to be funny and was
answering that some operations may be part of the architecture.  I saw that you
were adopting that again and I wasn’t sure what the result of that was. [Nancy
Cam-Winget]: That’s why I am posting it here because if I didn’t make it clear
in the requirements I need to make that language stronger, but, in both the
requirements and architecture I have been going with the assumption that there
would be one and only one SACM information model, but then data models would be
proposed to the SACM group, and it would be up to the group whether we state
applicability for or standardize on one or more data models. [Dan Romascanu]: So, I
believe that we have had this discussion a couple of years ago and maybe some
of us have forgotten it and new people joined, but, we are using the
terminology that has been defined in RFC 3444 or something around it, which is
about information models and data models.  It kind of defines the information
model as the abstract schema, which tries to be protocol independent and
defines the semantics of the objects, while whatever goes on the wire is the
information model instantiated by the data models, which are language
specific.  Now, from a point of view of the architecture, the assumption that
we made is that we meant to define one information model because we want for
any given information element to have one definition to work with.  That’s why
we are creating a consistent dictionary; that is why we are working on one
schema.  Now, this may then be instantiated in multiple data models, and those
data models may be in some cases language specific, in some cases specific to
classes of applications, or they may be broken up.  Data models at the end of
the day are things such as YANG modules, SMI MIB modules, or schemas, and you
break them into multiple modules for reasons of readability and modularity. 
So, that is why
we took this approach and I am yet to understand what would be the problem with
this approach. [Dave Waltermire]: I have been kind of on the fence on this
topic which is why I haven’t dropped in on the conversation, but, one of the
things that came up on our Endpoint Identity Design Team discussions maybe a
month or two ago was a lot of discussion around capturing provenance
information around certain endpoint identifying attributes.  As an example, we
started looking at W3C work in that area and we recognized that it was a pretty
big space and it would take some time to develop some information model around
provenance and I think that is a really good example in this case.  Although
provenance is important to be able to characterize an endpoint identity and to
really make that identity and how it originated understandable, it may not
necessarily be at the core of the work that we are trying to do to enable
endpoint assessment.  I liked what you said on the list and that maybe what we
are looking at is an umbrella information model that may point to other
hierarchical information models that may address more specific concepts. [Dan
Romascanu]: Exactly.  So, for example, to take the example that you have just
given provenance is one dimension, but, it may not be the core dimension in
describing the identity or in other words you can work with a subset of objects
that do not necessarily reflect provenance, or it may not apply to all classes. 
So, there is certainly a hierarchical and modular approach, but when I say one
information model, I am saying that we must not have multiple definitions for
the same concept. [Nancy Cam-Winget]: Yeah, I think you said it well initially
when we went through the clarification of information model versus data model. 
I thought we went through the exercise of clarifying it, and the information
model is meant to be that abstraction that does provide the
semantic interpretation and meaning of the attributes.  So, I agree with you if
that is the case, which I thought it was, then there should only be one. 
Otherwise, we would get into confusion if we ended up with multiple meanings.
[Dan Romascanu]: That’s the case I believe. [Dave Waltermire]: I think in doing
that, we have to be very disciplined as to what we choose to go into detail
about in that information model, within that abstraction, versus what we leave
as future work.  I think there is a risk that we indefinitely work on an
information model without making much forward progress. [Dan Romascanu]: Yes. [Nancy
Cam-Winget]: I agree. [Ira McDonald]: It’s ok to extend an information model. 
For instance, Dave, with modular work on provenance, and it’s ok, after doing a
data model, or two, or three (let’s be realistic), to come back and say there
are fundamental operations or semantic attributes that we should add to the
information model; that kind of recursion does normally occur in a working
group over a span of years.  But I think it is also important, Dave, to avoid
rat-holing on provenance, for instance, although we recognize its importance to
validity and verification, because otherwise we could spend the next two years
doing an information model while other people are doing private data models,
which is not helping. [Dan Romascanu]: No, I agree with such a thing, and I
will actually say we should even go one step further.  I think it is important,
as we are writing our first generation of the information model, to write, if
not in parallel then immediately after, at least one data model to verify the
assumptions, implemented in real work using a data modeling language, in order
to have a sense of whether what we defined at the construct level works. [Dave Waltermire]: I
am in favor of doing that. [Nancy Cam-Winget]: So Dan and Adam, how do you want
me to proceed?  Jim pretty much said that based on the clarification he was
fine and he closed the issue, but I still felt that I needed to bring it up.  So, are
we ok then as a group?  Or, do you want to go back from the mailing list and
get consensus? [Dan Romascanu]: We haven’t got to our way forward slide, but, I
believe the milestones call for a WGLC in May, so I would try to close as many
of the open issues as possible, or be prepared to close them and reopen them at
last call.  That is why we have last call: people may kind of push back against
the concepts or find themselves on the wrong side, but they can certainly do it
during last call.  People can participate in the discussion and they can also
join the discussion, so I would go toward closing the issue and being ready to
reopen it during last call. [Nancy Cam-Winget]: The good news is that Jim has
closed out that issue so I think we are clean.  The next one that I want to
bring up was raised kind of through different comments, both by Jim and by
Chris.  In the architecture, we talk about the communication between the
broker, the controller, the provider, and the consumer, and all of the
communication that goes between them.  I think that I described those as
interfaces in the architecture draft.  In the requirements draft, I think there
were one or two places where, as in the architecture, I called out the
interfaces, but then I actually have explicit sections for transport and
operations, which seemed to be confusing to them, in part because, one, we
don’t have these terms in the terminology draft, and two, I don’t know if we
needed to, so I pose it to the group.  My interpretation of an interface is the
way in which two architectural components communicate, and that communication
means transport to me, that is, the network layer transport, and then we would
use that network layer transport for transporting data and operating on the
data. So,
that is where I make the distinction in the requirements and I have that
explicitly as one of the comments in the next couple of slides.  So, I wanted
to kick this back up to the group to get some general guidance.  And again, Jim
was fine with the clarifications that I made both through the responses and the
draft -05.  So for him, this should be closed, but, I thought I would raise it
up with the group as well. [Dan Romascanu]: My only question is, in the
documents we have right now, where would you find our meaning or definition of
transport?  Is it in the architecture draft or in the terminology? [Nancy
Cam-Winget]: It is not in the terminology.  So, since I am going to work on the
architecture draft next, I can make sure that it is clear there. [Dan
Romascanu]: Yeah, we should probably be clear, because the risk here is
actually having somebody do a general area review for drafts getting a working
group last call, and someone that is not as familiar encountering a
familiar term that is used in a non-familiar manner. [Nancy Cam-Winget]: Well,
right.  I was trying to stay consistent, from my view with my thinking, with
the IETF understanding of these terms.  You are absolutely right though.  I
need to go back and look at the architecture draft and make sure they have been
made clear. [Dan Romascanu]: Exactly. [Nancy Cam-Winget]: Let’s go on to the
specific issues.  I sprinkled a bunch of editor notes in draft -05 that I would
like to clean up and now that I know Jim is going to close out all of them
except for one, I can go ahead and close out some of the editor notes as well. 
So, specifically, there were some questions and I put some of the suggested
actions on the right column.  For the remaining issues that we kind of went
back and forth on and Lisa also provided some feedback, I don’t know that they
are major, but, I thought that it would be good for me to review them here.  If
I could go through them individually.  In Section 2, we lay out the potential
tasks that applications acting as providers, consumers, or both would have to
go through.  The first task that we actually list is defining the asset, but in
essence, Jim kind of put a question as to whether this isn’t really included as
part of the definition of the information model and should be removed.  So, my
suggested proposal, out of a couple of options, is to remove it and to make
clear in the introductory paragraph, prior to enumerating the tasks, that the
assets and the composition of an asset are defined in the information model. [Ira McDonald]: I much prefer that
Nancy. [Nancy Cam-Winget]: Ok. [Ira McDonald]: Otherwise, I think we have a
constant tug between the information model and the architecture and the
requirements. [Nancy Cam-Winget]: Yeah, Jim is right in that it makes the tasks
seem to be more fluid. [Ira McDonald]: Yeah.  Right.  More dynamic. [Nancy
Cam-Winget]: Yes.  Ok.  Going once, going twice.  Alright, the second task is
to actually map the assets, and Jim posed the question of whether there should
be a subtask to actually create that mapping.  Then, thinking about it and
reading the task list again, it felt to me like that is what the task was
about: to actually create that mapping.  Given that he has closed the issue,
it is too bad that
Jim is not here.  I am not sure that I need to do anything for this one, but,
if anyone has looked at it and has feedback, you should let me know. Going
once, going twice.  So, I think I will let this one be as well.  On the third
row, for the first general requirement, there is a paraphrase of the
requirement, which goes to the query language specification.  We list what we
mean by the query language, and one of the phrases we note is "as well as the
expression of the specific path to follow."  Jim noted that, from an IETF
perspective, that potentially means routability.  So, he questioned whether or
not that was really in scope for SACM to sort out.  I don’t know; I think this
is something that the group needs to agree on.  Again, he closed it, but since
he mentioned it I left the phrase in the draft.  If it is in scope, then we
really need to put in another requirement on the operation of the data to
ensure that routability is there.  Otherwise, I should just remove that phrase.
[Henk Birkholz]: Routability is not about layer 3, yes?  It is routability in
another context? [Nancy Cam-Winget]: It could be layer 3 or above. [Henk
Birkholz]: I think the expression of a specific path to follow is referring to
a graph model and that may be the context of the expression and I am not really
sure it is about routing. [Ira McDonald]: But, as it is written, routability is
inherently ambiguous.  Nancy, I suggest, and others should chime in, that
routing paths and the internal behavior of routers and forwarding metal boxes
are right out of scope for SACM.  We have NETCONF and Border Gateway
Protocol, and all kinds of protocols for configuring routers. [Nancy
Cam-Winget]: Ira, my personal opinion is not only that I agree with that, but,
Henk, to your point, how we traverse information should also be out of scope;
perhaps that is left to the specific data model and how they choose to
implement their own traversal schemes. [Henk Birkholz]: Ok, I just wanted to
make sure that this is really about routing. [Nancy Cam-Winget]: Yeah. [Dan
Romascanu]: It is more about transport to be precise, but yes, your observation
is usually part of the protocol.  For example, NETCONF, in order to be capable
of ensuring reachability of a service that may be placed behind a firewall,
implemented something like a mechanism that can be added later onto the base
protocol. [Nancy Cam-Winget]: Dan, to me, that seems specific to the
data model and operations on the data model. [Dan Romascanu]: It is specific to
the protocol that carries the data model, yes?  Some protocols mix the two,
with the data model within the protocol.  Others actually separate, for
modularity, the data model from the protocol itself.  In practice, the YANG
data model right now is supported by NETCONF, but there is a lighter version
of the protocol designed for RESTful entities.  That would be two protocols
that can work with the same data modeling language and the same set of data
models, because of the modular design that has been the approach from the start.
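
(A minimal sketch of the modularity Dan describes: the same YANG-modeled data,
here the standard ietf-interfaces module, retrieved once over NETCONF and once
over a RESTCONF-style RESTful interface.  The device address, credentials, and
library choices are assumptions for illustration only.)

    # Minimal sketch: one YANG data model (ietf-interfaces), two protocols.
    # Device address, credentials, and library choices are assumptions.
    from ncclient import manager   # NETCONF client
    import requests                # plain HTTPS client for the RESTful variant

    DEVICE = "192.0.2.1"           # documentation address, not a real device
    AUTH = ("admin", "admin")

    # 1. NETCONF: XML-encoded view of the running configuration, filtered
    #    to the <interfaces> subtree defined by the ietf-interfaces model.
    with manager.connect(host=DEVICE, port=830, username=AUTH[0],
                         password=AUTH[1], hostkey_verify=False) as m:
        print(m.get_config(source="running",
                           filter=("subtree", "<interfaces/>")))

    # 2. RESTCONF-style access: the same model exposed as a resource URL,
    #    returned as JSON-encoded YANG data over plain HTTPS.
    reply = requests.get(
        f"https://{DEVICE}/restconf/data/ietf-interfaces:interfaces",
        headers={"Accept": "application/yang-data+json"},
        auth=AUTH, verify=False)
    print(reply.json())
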
[Nancy Cam-Winget]: In the interest of time, I am going to give myself to the
end of the day, or maybe tomorrow, to get a draft -06 out to clarify some of
these, and hopefully that draft, Dan and Adam, should be ready for last call,
but it is up to you guys.  I will take a stab at either removing that phrase
or clarifying the intent behind it.  Alright? [Dan Romascanu]: Right, and you
know it is only the first working group last call, so you should assume that
you may not reach the perfect document in the first round. [Nancy Cam-Winget]:
I am guaranteed to get lots of feedback, right, because we are going to go
through a lot of review. [Dan Romascanu]: So, if you can get version -06 out
by the end of the week or the beginning of next week, then Adam and I will
make a sanity check, and if it is sane enough we will send it to working group
last call. [Nancy Cam-Winget]: Sounds good. Ok, in the interest
of time, let me accelerate if I can.  So, there were questions where I tend to
agree with Jim: a lot of the drafts that we put out in the IETF presume that
we are defining enough specificity to ensure interoperability, so he
questioned whether we really needed to make this an explicit requirement or
not.  This was G-002.  We have gone back and forth; Lisa has asked to keep it,
so I have kept it for now, and in the interest of time, unless I hear
otherwise, we can go to last call and I can keep it in there.  On the next
row, G-006, we seek data integrity, but in the description I only speak of it
in terms of the transport, and again, transport to me meant the network layer
transport.  Again, Dan, that goes to your point that there are some instances
in which the data protocol is decoupled from the data transport and the
network transport, so the question is whether we need to do something.  I may
defer this until we get more comments, and again I am giving myself a pass
since Jim has already closed this, but we can leave this open as we
go into the next version. [Ira McDonald]: A quick comment.  I don’t know what
the right word is, but I suggest that we need a word other than transport,
because to other IETF participants that means the whole stack below the
application-layer operation, whether it is a fat stack running on top of HTTP
or a thin stack running at layer 2 or layer 3.  The operations ride right on
top of that; they are transported operations, and in either case we don’t
really care.  We mean everything below the application operation, the SACM
operation. [Nancy Cam-Winget]: I will try to find a better way to clarify it.
[Ira McDonald]: I don’t have an idea for a different word to use besides
transport, but we wrestle with this problem in TCG regularly.
[Nancy Cam-Winget]: Ok.
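
(A minimal sketch of the distinction Ira is drawing, under the assumption that
"transport" here means everything below the SACM operation; the class and
method names are invented for illustration and are not from any SACM draft.)

    # Illustrative only: the SACM-level operation is the same no matter what
    # stack carries it; everything below the operation is "transport".
    from abc import ABC, abstractmethod

    class Transport(ABC):
        """Everything below the SACM operation: a fat stack on top of HTTP,
        TLS over TCP, or a thin layer-2 binding; the operation does not care."""
        @abstractmethod
        def send(self, payload: bytes) -> bytes: ...

    class TlsTransport(Transport):
        def send(self, payload: bytes) -> bytes:
            raise NotImplementedError  # open a TLS session, exchange payload

    class Layer2Transport(Transport):
        def send(self, payload: bytes) -> bytes:
            raise NotImplementedError  # frame the payload directly at layer 2

    def sacm_operation(transport: Transport, request: bytes) -> bytes:
        """The application-layer (SACM) operation, identical over any transport."""
        return transport.send(request)
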
[Henk Birkholz]: One could use the word extradition somehow, because
components acquire data from other SACM components in order to build a chain
in a process, and maybe something in that area could be applicable. [Nancy
Cam-Winget]: I didn’t quite understand, Henk; are you suggesting that we say
anything post-acquisition? [Henk Birkholz]: It’s an extradition procedure:
either it is requested by another component or it is initiated by guidance.
Effectively, components acquire relevant data from other SACM components.  I
think that is the transport everybody is talking about here.  This extradition
process should have its data integrity ensured. [Nancy Cam-Winget]: I am not
sure that I quite
understand that.  There are protocols, and the protocols can be layered on the
network transport or they can be intertwined with it, right?  So, if they are
decoupled, then the question is: is it sufficient that the protocol rely on
the network transport integrity only?  Or do we want to impose that, if the
protocol is separate from the network layer transport, it is also allowed to
have its own integrity security mechanisms?  So, I will noodle over this.  I
notice that I only have five minutes left, but I just wanted to go through
this table and let the group know that these were the last issues I wanted to
raise to the group.  I will try to work on the clarification for some of the
suggestions for the next draft, and then you can take a closer look and
comment on the next revision.
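
(A minimal sketch of the second option Nancy describes: integrity carried by
the data protocol itself rather than inherited from the network transport.
The key handling and message layout are assumptions for illustration only.)

    # Illustrative only: message-level integrity that survives any transport.
    # Key distribution and message framing are hypothetical placeholders.
    import hmac, hashlib, json

    SHARED_KEY = b"example-key-provisioned-out-of-band"

    def protect(record: dict) -> bytes:
        """Serialize a posture record and attach an HMAC over the payload."""
        payload = json.dumps(record, sort_keys=True)
        tag = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return json.dumps({"payload": payload, "hmac": tag}).encode()

    def verify(message: bytes) -> dict:
        """Check the HMAC before trusting the payload, whatever carried it."""
        wrapper = json.loads(message)
        expected = hmac.new(SHARED_KEY, wrapper["payload"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, wrapper["hmac"]):
            raise ValueError("integrity check failed")
        return json.loads(wrapper["payload"])

    # Whether this travels over TLS, plain TCP, or a message bus, the receiver
    # can verify integrity at the data layer:
    print(verify(protect({"endpoint": "host-1", "os-version": "10.4"})))
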
So, let me go to the next one, G-009 and G-010.  G-009 speaks to general
discovery, and the intent there is that the description of that discovery
refers to the discovery of the specific architectural component, whether a
consumer, data broker, or the control plane, and the capabilities within the
endpoint, since an endpoint could be playing multiple SACM components, if you
will.  G-010 speaks to the discovery of the actual target endpoint, so the
quickest thing for me to do now is to clarify G-010 to say target endpoint
instead of just endpoint.  The question that Jim posed was whether that
requirement should be a subset of G-009, and I think for now I may just leave
them as separate so that I don’t have to battle the requirement numbering and
so on.
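
(A small sketch of the distinction between G-009 and G-010 as described above:
discovering which SACM roles and capabilities a component plays versus
discovering the target endpoints being assessed.  Structures and field names
are assumptions for illustration only.)

    # Illustrative only: component discovery (G-009) vs. target endpoint
    # discovery (G-010).  All names and structures are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class SacmComponent:
        """One endpoint can play several SACM roles at once (G-009)."""
        name: str
        roles: set = field(default_factory=set)         # e.g. provider, broker
        capabilities: set = field(default_factory=set)

    @dataclass
    class TargetEndpoint:
        """An endpoint whose posture is being assessed (G-010)."""
        identifier: str
        addresses: list = field(default_factory=list)

    # Discovering components and their capabilities ...
    components = [
        SacmComponent("collector-1", {"provider"}, {"sw-inventory"}),
        SacmComponent("gateway-1", {"provider", "consumer", "broker"}, {"relay"}),
    ]
    # ... is a separate question from discovering target endpoints.
    targets = [TargetEndpoint("host-42", ["192.0.2.42"])]

    print([c.name for c in components if "provider" in c.roles],
          [t.identifier for t in targets])
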
G-012: we already spoke to interface versus transport versus data operations
and protocol, so I will work on that one as well.  The question is whether or
not we need to include it.  I will make sure that we have those clarifications
in the architecture.  The question to the group is whether we should also
include this in the terminology draft.  Next slide please.  The last two.  In
Section 2.4,
DM-010.  I think some of these data model definitions and operations are
already described in the information model, in the first requirement of the
information model, but I recall that I was asked to make them explicit because
the information model speaks to it abstractly while the data model speaks to
the actual instantiation and use of that data model.  For now, I left it, and
in a future draft it would be great to get feedback as to whether or not you
really care about removing it.  Jim was fine with my response, so I am going to
let it be.  Then, in Section 2.6, which is the whole of the transport layer
section: when I wrote it, the description of the requirement was meant to
speak to the network layer, so I need more feedback.  I will work on
clarifying that the intent was the network transport and not necessarily the
data transport, as the data transport and protocol have their own section,
which I titled operations on the data model.  Those were the remaining notes.
Dan, my intent was to remove all of the editor notes for the next draft,
presuming that those editor notes needed to be removed so that we could go to
working group
last call. [Dan Romascanu]: The editor notes may be removed, and you may want
to include something like a "changes from -05 to -06" section where you
describe the principal changes and where the issues were addressed.  That is
how you keep track of the history; at the end of the process, meaning before
submission to the IESG and in any case before publication, it will be removed.
[Nancy Cam-Winget]: Ok.
[Dan Romascanu]: We don’t need the editor note history in the document any
longer.  Adam, would you go through the way forward? [Adam Montville]: Yes, we
can do that now. [Dan Romascanu]: The main question is basically future
meetings; another interim meeting five or six weeks from now is probably
something that would be useful.  If we go to WGLC, which will be in something
like three weeks, will you be able to address the last call comments?
[Nancy Cam-Winget]: I will be able to work over email, but, I am flipping too
many time zones between now and then. [Dan Romascanu]: What about 6/24 or 6/25?
[Nancy Cam-Winget]: I don’t know yet.  I am trying to firm up where I will be
that week. [Dan Romascanu]: I believe the submission deadline for IETF 93 is
July 6th so what about Monday 6/29 or Tuesday 6/30? [Nancy Cam-Winget]: That’s
good for me as well. [Adam Montville]: That is all good with me. [Dan
Romascanu]: For IETF 93, I have asked for two meetings like we did at the last
IETF meeting.  Please be prepared for a Monday morning or Tuesday morning
session and a Wednesday second session.  Plan your trips accordingly.  I will
send an email for a call for comments for the two submissions that we discussed
at the beginning of the call and of course we will hopefully have material to
discuss at the interim meetings. [Adam Montville]: Great. [Dan Romascanu]: The
Endpoint Identity Design Team is meeting every Friday, right? [Adam Montville]:
Yes, until we don’t need to meet anymore.  Then, we can meet about something
else. [Dan Romascanu]: Ok.  Thank you everyone. [Adam Montville]: That
concludes the meeting.  Thanks everyone.