
Minutes interim-2019-icnrg-03: Sun 09:30
minutes-interim-2019-icnrg-03-201907210930-00

Meeting Minutes Information-Centric Networking (icnrg) RG
Title Minutes interim-2019-icnrg-03: Sun 09:30
State Active
Last updated 2019-07-22

ICNRG Interim meeting - Sunday 21 July 2019
=============================

Note taker: Ken Calvert

Names:
    CT = Christian Tschudin
    DO = Dave Oran
    BO = Borje Ohlman
    MM = Marc Mosko
    LZ = Lixia Zhang
    ES = Eve Schooler
    TS = Thomas Schmidt
    GW = Greg White
    JS = Jan Seedorf
    RL = Ruidong Li
    DK = Dirk Kutscher

CT's presentation: "Push It"
---------------------

Discussion:
DO: One of the tricky things in systems I've built is history pruning. Have
you thought about that? It requires knowing the set of participants at all
times, so you know what's safe to prune. This seems to have no information
about who is participating.
CT: Yes, the model is a sliding window. It might be application-specific; you
need some knowledge about what is important.
DO: Meta-comment: most realistic apps will want that. It would be nice if the
state tracking needed to do that were part of the log.
LZ: On the last slide, re: the "NDN mantra" about caching. A better way to
state it is the following: by fetching securely named data, NDN makes
network-based storage reasonable. It's really that NDN gave the ability to
fully utilize in-network storage, because data could come from anywhere.
CT: Yes, whereas I am forced to have storage here.
DO: Follow-on from Lixia's comment. It can produce a lot of confusion to say
"in-network storage". What you really mean is "storage of logs by people who
are not participants", because "in-network" has lots of interpretations: in
the router, in a dumb switch, in a repo. E.g., it should live topologically
distributed enough to be robust against partitions, earthquakes, etc.
CT: You are carving out something that is networking and something other, but
I am trying to squash it all in.
LZ: Cf. Dave Clark's book. It quotes his SIGCOMM '88 paper, a specific
comment regarding datagrams. The confusion with virtual circuits was big -
people thought you designed networks to support virtual circuits because
that's what applications want. DARPA went with datagrams because it matched
the network and brought advantages for the network service - more resilient,
etc. When I saw this debate about push/pull, I thought maybe a similar
statement could be made. There was work that inspired Van to put everything
together in this architecture; some of it was IP multicast. What I'm trying
to say is that push/pull seem to me to be higher-level concepts.
TS (?BO?): I don't see security mentioned. App-level security with push?
CT: Every log entry is signed. You do have to watch for forks.
LZ: Pub-sub is really application-level semantics.
DK: Clarification question - you started by talking about application
requirements. When you want to do push, append-only logs, etc., would you say
it's a mistake to use NDN as the network layer and use PSync, etc.? Do you
need to do something at the network layer?
CT: I would like to do a routing layout... we have long-lived interests. In
the end, if we want to achieve the same effect with the same state, there
shouldn't be a difference.
DO: Some folks think long-lived interests are an exceedingly bad idea.
MM: Define what you mean by push and pull. Multicast is sometimes called
push, but I had to subscribe to it.
CT: Yes, we have to be careful.
MM: How often you have to detect changes does introduce latency.
[Discussion about cost of recovery, especially when there are packet losses.]
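The log model CT describes - every entry signed, forks detectable, pruning by
sliding window - can be sketched roughly as follows. This is an illustrative
sketch only: the hash chaining and HMAC signing here are assumptions, not the
scheme from the talk.

```python
import hashlib
import hmac
from collections import deque

class SlidingWindowLog:
    """Append-only log: each entry is chained to its predecessor and
    signed; entries older than the window are pruned (CT's model)."""

    def __init__(self, key: bytes, window: int):
        self.key = key
        self.window = window
        self.entries = deque()           # (seq, prev_digest, payload, sig)
        self.seq = 0
        self.prev_digest = b"\x00" * 32  # genesis marker

    def append(self, payload: bytes) -> None:
        digest = hashlib.sha256(self.prev_digest + payload).digest()
        sig = hmac.new(self.key, digest, hashlib.sha256).digest()
        self.entries.append((self.seq, self.prev_digest, payload, sig))
        self.prev_digest = digest
        self.seq += 1
        # Sliding-window pruning: application decides what is safe to drop.
        while len(self.entries) > self.window:
            self.entries.popleft()

    def verify(self) -> bool:
        """Check signatures and chaining of the retained window; a fork
        would show up as a chaining mismatch."""
        prev = None
        for seq, prev_digest, payload, sig in self.entries:
            digest = hashlib.sha256(prev_digest + payload).digest()
            expected = hmac.new(self.key, digest, hashlib.sha256).digest()
            if not hmac.compare_digest(sig, expected):
                return False
            if prev is not None and prev_digest != prev:
                return False
            prev = digest
        return True
```

As DO notes, pruning safely in general needs knowledge of the participant set;
the window here simply encodes an application-specific retention policy.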

======================

MM's presentation: Selectors in CCNx 1.0
---------------------------------

Discussion:
DO: Recursion: how do I discover discovery agents? You need something in the
architecture to enable you to find the root discovery agent for that part of
the namespace.
[Discussion about getting back a list of names that satisfy the query]
DO: When we talk about name resolution services, they tend to have a global
aspect. This seems more limited - you already need to know something about
the portion of the namespace you are going to search.
BO: What's the benefit?
DO: Limits scope... DoS attacks against ANY...
ES: Some time ago I talked about a use case with sensors... ubiquitous
witness. I proposed that we need fuzzy names - more upper layers, not L3. Is
there a way to bound/scope these queries? How to do congestion control if
multiple agents respond?
MM: One difference is that you are not asking for content, you are asking for
a table of contents. You'll still get the in-network filtering - one response
per interface. The IoT example is a bit weird because the data may be smaller
than the query response about the data.
CT: What is the delta [...] is this a strict subset? What about recursion?
MM: The only way you can get recursion is if it gets to ...
CT: Would NFN make this obsolete?
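MM's distinction - asking for a table of contents rather than the content
itself - can be sketched as a prefix enumeration against a toy repository.
The `Repo` class, its API, and the sensor names below are all invented for
illustration; they are not from the draft.

```python
class Repo:
    """Toy name repository: a discovery query for a prefix returns the
    list of names under it (a 'table of contents'), not the content."""

    def __init__(self):
        self.objects = {}  # name -> content

    def put(self, name: str, content: bytes) -> None:
        self.objects[name] = content

    def discover(self, prefix: str) -> list:
        # One response enumerating matching names, instead of fetching
        # each content object. For tiny IoT data, this listing may be
        # larger than the data itself (MM's point above).
        return sorted(n for n in self.objects if n.startswith(prefix + "/"))

repo = Repo()
repo.put("/sensors/room1/temp", b"21.5")
repo.put("/sensors/room1/humidity", b"40")
repo.put("/sensors/room2/temp", b"19.0")
names = repo.discover("/sensors/room1")  # two names under /sensors/room1
```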

=======================

GW's presentation "IPoC: IP over CCN for seamless 5G mobility"
--------------------------------------------------

"NOT the intent to map IP semantics into the named data space." Just making use
of mobility functionality.

DK (chairs' perspective on where this draft sits): We got positive feedback
on adoption of the previous version of the draft. The chairs would now like
to adopt the draft, with the usual reconfirmation on the mailing list. Any
opinions/concerns?
BO: Security considerations need to be added. (Not blocking.)
ES: I sometimes drop in on the DMM WG - is this the same part of the
architecture space that hICN is targeting?
GW: I think so.
ES: You still get the benefits of CCN; the question is really about targeted
usage.
GW: Yes.
ES: It would be interesting to hear a discussion of the pros and cons of what
you are doing, since it is one of five proposals being discussed in both IETF
and 3GPP.
GW: ICNRG has a draft on 5G as well...
DO (suddenly confused): I think these things have nothing to do with each
other. hICN is trying to use some elements of the network layer to provide an
ICN service. This is the opposite: using ICN to provide an IP service. This
is carrying IP end-to-end, on an ICN underlay.
ES: Are they really solving the LTE-EPC problem to get rid of GTP tunnels?
DK: IPoC makes GTP tunnels obsolete; hICN is orthogonal to that.
ES: Both are making mobility actually seamless; both are aiming at the same
high-level problem.
DK: IMO it could be interesting to do a survey for some kind of application
venue.

DK: OK, moving forward with adoption, to be confirmed (NAK protocol) on the
mailing list.

[Recap of how IPoC works]

JS: is GTP payload encrypted? [Yes] Is the payload encrypted here?
GW: could be, draft doesn't discuss now.

CT: You have counter corrections that are relative. Is there an absolute
correction, in case there are losses?
GW: The gateway is trying to maintain the client's interests... there is no
absolute number, because that would require reliable delivery.
CT: Isn't it about expressing the rate? You say "I want that amount of
interests coming to me..." and then you can improve very quickly. You are
limited to that game of many small steps - is that by design?
GW: When the client receives a Content Object, it always responds with an
interest. The +1/-1 enables you to move that up or down in a simple way.
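The +1/-1 adjustment GW describes can be modeled as follows: the client
answers every Content Object with interests, and a relative delta carried in
the CO grows or shrinks the pool of outstanding interests by one step. This
is a toy model under stated assumptions; the class name, fields, and delta
encoding are invented, not taken from the IPoC draft.

```python
class IPoCClient:
    """Toy model of IPoC's relative interest-count adjustment."""

    def __init__(self, initial: int = 4):
        # Interests currently outstanding at the gateway.
        self.outstanding = initial

    def on_content_object(self, delta: int) -> int:
        """Handle a CO carrying a relative correction (+1, 0, or -1).
        Returns how many interests the client sends in response."""
        # The CO consumed one outstanding interest; replace it, plus the
        # relative correction. No absolute count is carried, since that
        # would require reliable delivery (GW's point above).
        to_send = max(0, 1 + delta)
        self.outstanding += to_send - 1
        return to_send
```

As CT observes, convergence proceeds by many small steps: each CO can only
nudge the pool up or down by one.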

MM: Are you doing anything with standards organizations now?
GW: Not right now. As it develops, we'll see.
MM: The "Data Aware Networking" folks at ITU have morphed into ICN. Now there
are documents in the 5G world. This is similar to work Chris Wood and I did
for CCN; we were looking at it in proxy situations...

==========================

RL's presentation on Hop-by-Hop Authentication in NDN/CCN
-------------------------------------------------

[no discussion]

==========================

TS's presentation on QoS for ICN in the IoT
----------------------------------

The fun thing is that these very simple QoS mechanisms have effects not known
from IP, because IP doesn't have these tradeoffs among different resources.

AA: priority forwarding for interests vs data?
TS: prompt class = both.
AA: But interests should not need to wait.
DO: In an environment like this, it's a bad idea to forward interests for
which data may not come back.
AA: We need to distinguish between interest and data packets. If you have
already spent resources to send the interest, you should prioritize the data,
because there may be other resources downstream.

DO: Conventional wisdom is that it's a badly-designed system if you run out
of memory before you run out of bandwidth. Carofiglio et al. showed you don't
need a BDP's worth to avoid running out of PIT space. So does this setup have
a higher link speed than you could actually use before running out of PIT
space?
TS: There is a linear relation between link bandwidth and memory...
DO: Conventional wisdom is square root...
TS: The link is 802.15.4, so low bandwidth. The point is that you have a
relatively high probability of losing packets on the link, for several
reasons: no PIT slot, no content store (thus no recovery). The point made is
that QoS mechanisms can coordinate these resources easily. If you believe you
are prone to retransmissions, put it in the CS.

MM: You are extracting traffic classes from the name, not marking packets.
TS: True, but not relevant.
MM: How would you do this going forward - get the mapping to all nodes?
TS: For M2M applications you could just distribute the table; it should be
rather static.
DO: If that table gets big, it may become a limitation. The app designer may
be constrained in namespace design. That is coming up in other venues... we
should consider this.
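The name-to-class mapping TS describes (a mostly static table distributed to
all nodes) amounts to a longest-prefix match of the name against the table.
A minimal sketch, assuming invented prefixes and class names - the actual
classes in the talk (e.g. "prompt") may differ in detail:

```python
# Hypothetical prefix -> QoS class table, distributed to all nodes.
CLASS_TABLE = {
    "/factory/alarms":  "prompt",    # prioritized interests and data
    "/factory/sensors": "reliable",  # kept in the CS for retransmission
    "/factory":         "regular",
}

def traffic_class(name: str, default: str = "regular") -> str:
    """Longest-prefix match of a name against the class table."""
    best, best_len = default, -1
    for prefix, cls in CLASS_TABLE.items():
        if (name == prefix or name.startswith(prefix + "/")) \
                and len(prefix) > best_len:
            best, best_len = cls, len(prefix)
    return best
```

DO's caveat applies directly: if this table grows large, distributing it
constrains both the nodes and the application's namespace design.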

DK: This is providing tools for managing these resources. You have shown that
here, in this very simple constrained situation, there really is a visible
effect from coordinating the management.
TS: Yes, as long as everything is over-provisioned.

DO: think the question of separating cache-filling from cache-eviction is going
to be very interesting. Filling is driven by promptness, eviction driven by
importance.

DK: [Connection to other QoS docs. Ways to generalize, dynamic classes, etc.]
This is very promising.
DO: There's a time horizon to QoS - once queues are empty and you're not
going to get another arriving interest, there's no reason to have hard state.
There are advantages to soft state - learn from the initial Interest/Data
exchange, then time it out. It may be fairly cheap to do a whole lot better
than this static table.

=============================

Announcements:
-------------

    AA: ACM conference call for posters/demos. Also student travel grants.
    AA: NDN Community Meeting at NIST in Gaithersburg, Maryland, USA.
    DK: Regular meeting Tuesday. ANRW tomorrow.
    BO: ACM ICN Conference soliciting proposals for 2020.