ICNRG Meeting 2020-12-04
ICNRG Online Meeting – 2020-12-01, 15:00 to 18:00 UTC
Recap of Recent FLIC discussion (Christian Tschudin)
Christian Tschudin reviewed the current state of the FLIC draft; see the status slides: https://datatracker.ietf.org/meeting/interim-2020-icnrg-04/materials/slides-interim-2020-icnrg-04-sessa-recap-of-recent-flic-discussion-00
- Chris Wood: general agreement on direction
- DaveO: wants to wrap this up for RG last call by mid-January
- DaveO: solicit more input and help on the spec from more RG participants
Data-Centric Ecosystems for Large-Scale Data-Intensive Science (Edmund Yeh)
- Q(DaveO): what was the bottleneck at 6.7 Gbps?
- A(Edmund): The forwarder, which was running single-threaded.
- Q(Christian): Elastic or inelastic traffic?
- A: Will look at congestion control - applications are elastic. Also a computation network - how to place computations and deliver data to match
- Q(Ken): what’s the structure of the data - individual files?
- A: There are data sets, files, data blocks, and events, organized hierarchically. The question is at what granularity you want to do the caching. The popularity distribution falls off with data granularity, so you can track a few data blocks and still do well. On names, there is already a single, nice hierarchical naming scheme (partly easy because everything comes from CERN). It is easy to translate from HEP to NDN names. The genomics case is a lot harder: data is not just static but also dynamic. How do you do discovery of new data sets? Susmit has been collaborating in the genomics area.
- Q(Eve): what kinds of computation are you placing in the network?
- A: The vast majority of raw data is thrown away, so there is lots of initial processing. Lots of different algorithms are used to find interesting events in the data. Need to schedule computation time, then pull data, etc. Filtering, learning, and inference are all being run by different people situated at different places on the network. Focusing now on the data delivery part.
from Dave Oran to Everyone: 4:34 PM
q: what was the bottleneck when getting 6.7 Gbps?
from Junxiao Shi to Everyone: 4:45 PM
SC19 was using a previous version of NDN-DPDK, which is not as fast as the version benchmarked in the ICN 2020 publication. Also, SC19 was using only one thread.
from Colin Perkins to Everyone: 4:45 PM
Nice talk
from Ken Calvert to Everyone: 4:46 PM
What is the structure of the data? single files? And are you doing anything interesting with names?
from Ken Calvert to Everyone: 4:49 PM
Thanks! Excellent talk.
from Eve Schooler to Everyone: 4:50 PM
Can you share more about what kinds of computation you place in or throughout the network?
This is the paper on NDN-DPDK in case people haven’t seen it: https://dl.acm.org/doi/10.1145/3405656.3418715
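The easy HEP-to-NDN name translation mentioned in the answer above could be sketched roughly as follows. The `/ndn/hep` prefix, the dataset/file/block path layout, and the `hep_to_ndn_name` helper are all illustrative assumptions for this note, not part of any actual catalog or codebase.

```python
# Hypothetical sketch: map a CERN-style hierarchical dataset path onto an
# NDN-style name. Granularity (here, block level) matches the caching
# discussion above; all names and the prefix are illustrative assumptions.

def hep_to_ndn_name(dataset: str, filename: str, block: int) -> str:
    """Build an NDN-style name: /ndn/hep/<dataset...>/<file>/block=<n>."""
    components = ["ndn", "hep"] + dataset.strip("/").split("/")
    components += [filename, f"block={block}"]
    return "/" + "/".join(components)

print(hep_to_ndn_name("cms/Run2018D/DoubleMuon", "events_001.root", 42))
# -> /ndn/hep/cms/Run2018D/DoubleMuon/events_001.root/block=42
```

Because the HEP catalog is already hierarchical, the translation is essentially a prefix rewrite; the harder genomics case (dynamic data, discovery of new data sets) would need more than this.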
Broker-based Pub/Sub System for NDN (Namseok Ko)
- Q(DaveO): How is deletion accomplished? By republishing topic manifest without the data you want to delete? Do you use manifest versions for this?
- A: Re-advertisement; subscribers then search the topic tree again.
- Q(DaveO): How are subscribers notified there is new data in the brokers?
- A: check via polling
- Q(Christian) Brokers introduce a centralistic and thus fragile element into the system. Can each node be a broker i.e., make it a peer-to-peer system? Would this be able to survive a network partition?
- A: Yes, each node could be a broker
- Q(Dirk): Why is this more scalable than distributed dataset synchronization à la psync?
- A: PSync puts Bloom filters in the name; the question is whether the name length can be controlled in PSync to scale with the size of the data sets while constraining false positives.
- Q(Bastiaan): on slide 17, it seems that it is requesting a manifest, needs to parse the manifest and then needs to request the individual segments, right? How is that defined as subscribing? Does it then automatically receive subsequent data?
- A: ? (did not catch it)…
from Dave Oran to Everyone: 5:14 PM
q: How is deletion accomplished? By republishing the topic manifest without the data you want to delete? Do you use manifest versions for this?
from Dirk Kutscher to Everyone: 5:14 PM
Q: How are subscribers notified there is new data in the brokers?
from Christian Tschudin to Everyone: 5:14 PM
Brokers introduce a centralistic and thus fragile element into the system. Can each node be a broker, i.e., make it a peer-to-peer system? Would this be able to survive a network partition?
from Dirk Kutscher to Everyone: 5:15 PM
Q: Why is this more scalable than distributed dataset synchronization à la PSync?
from Bastiaan Wissingh to Everyone: 5:16 PM
Q: on slide 17, it seems that it is requesting a manifest, needs to parse the manifest and then needs to request the individual segments, right? How is that defined as subscribing? Does it then automatically receive subsequent data?
Dirk: Thanks for a nice talk coming from an inconvenient TZ!
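The subscribe flow Bastiaan asked about (request the topic manifest, parse it, then request the individual segments) combined with the polling answer could be sketched as below. `poll_topic`, the `fetch` callable, and the manifest format are hypothetical stand-ins for the actual broker protocol, not taken from the presented system.

```python
# Illustrative sketch, assuming a dict-like manifest {"segments": [...]}:
# each polling round re-fetches the topic manifest, diffs it against what
# was already retrieved, and requests only the new segments. Deletion by
# republishing the manifest without an entry means the removed segment
# simply stops appearing in later polls.

def poll_topic(fetch, topic: str, seen: set) -> list:
    """One polling round: fetch new segments, return their names."""
    manifest = fetch(f"{topic}/manifest")        # e.g. {"segments": [...]}
    new_segments = [s for s in manifest["segments"] if s not in seen]
    for seg in new_segments:
        fetch(seg)                               # request each new segment
        seen.add(seg)
    return new_segments

# Toy in-memory "broker" for demonstration.
store = {"/topic/a/manifest": {"segments": ["/topic/a/seg1", "/topic/a/seg2"]},
         "/topic/a/seg1": b"...", "/topic/a/seg2": b"..."}
seen: set = set()
print(poll_topic(store.__getitem__, "/topic/a", seen))
# -> ['/topic/a/seg1', '/topic/a/seg2']
print(poll_topic(store.__getitem__, "/topic/a", seen))   # nothing new
# -> []
```

This also makes Christian's point concrete: every subscriber depends on the broker holding the manifest, which is what a peer-to-peer variant would have to distribute.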
NDN-based Ethereum Blockchain (Quang Tung Thai)
- Q(Dirk): It would be helpful to write up the design as a paper; some think that gossip-based protocols for blockchain are chatty compared to an ICN approach.
- A: Paper in prep
- Junxiao Shi: There are two Go client libraries, very similar.
Producer Anonymity based on Onion Routing (Toru Hasegawa)
- Q(DaveO): is the onion name the same for everybody? If so, can an adversary learn anything by seeing requests for same onion name?
- A: Yes, the same onion name is used for everyone. Even so, adversaries have no way to link the onion name to the corresponding producer (i.e., adversaries cannot identify who publishes the content).
- Q(DaveO): how do you expect the consumers to learn the onion names?
- A: We need some out-of-band mechanism, like a DB for onion names, as hidden services in IP do.
- Q(Dirk): Have you considered CCNx key exchange for setting up the “connections” between ARs?
- A: will look into it
A Data-centric View on the Web of Things (Cenk Gündoğan)
- The original paper: https://dl.acm.org/doi/10.1145/3405656.3418718
- and the video: https://www.youtube.com/watch?v=S2x5UU4jVzA&feature=youtu.be
- Q(Dirk): Are we comparing apples to apples in the CoAP network, where the proxy chain is pre-configured, whereas in NDN the forwarding plane would be dynamic?
- A: For the paper the chain was pre-configured, but one could use a discovery protocol (which exists in CoAP) to make forwarding more dynamic.
- Q(Dave): to what extent do the immutability properties we hold dear in ICN affect the design here?
- A: Some OSCORE properties are there specifically for mutable content.
- more… everything we've used for OSCORE is RFC-compliant. The designers didn't really have the idea of proxies on every hop - which is more like ICN - but it may work out well.
- (Thomas) One interesting perspective here is from the converse side: the CoAP world is actually adopting building blocks that were designed and discussed in the ICN world. Interesting impact.
- (Dirk) Yes, only took them 10 years to adopt object security. ;-)
- (Thomas) well, they did!