
Joachim Fabini. Objective and subjective network quality
slides-interim-2021-mnqeuws-01-sessa-joachim-fabini-objective-and-subjective-network-quality-00

Meeting Slides: IAB Workshop on Measuring Network Quality for End-Users (mnqeuws)
Date and time: 2021-09-14 14:00

Call for Papers:
https://www.iab.org/activities/workshops/network-quality/

Position Paper: Network Quality from an End User Perspective
Author: Joachim Fabini
Affiliation: Institute of Telecommunications, TU Wien
Contact:  Joachim.Fabini@tuwien.ac.at

When designing and implementing today's protocols and networks, the focus is
commonly on resource sharing and overall network capacity optimization (in
other words: revenue maximization) rather than on determinism from an end
user's perspective. Quantifying the measured network quality from a user's
perspective, or predicting it for a future point in time, is challenged by a
huge number of influencing parameters. Capturing these influencing parameters
for a live network is impossible, which is why RFC 7312 [1], an update to the
IETF's IP Performance Metrics (IPPM) framework, questions the concepts of
measurement repeatability and continuity as impractical.

The list of requirements for objective network quality measurements varies
depending on the perspective. First and foremost, these measurements must yield
results that (a) are representative of and meaningful to the user, (b) allow
users to infer the expected future network quality, and (c) are "fair", meaning
that metrics and methodologies should reflect the user's perspective, match her
perception, and not exhibit bias toward specific operators and/or technologies
deployed in the networks. These high-level requirements can be mapped to a
series of low-level technical requirements that have a high potential to
conflict with each other.

At a technical level, the main challenge in quantifying network quality is that
the network abstraction of a stateless copper wire no longer holds true
([2], [3]). The OSI layer model enables interoperability between various
network technologies at the cost of increased complexity. Protocols at various
layers now include redundant functionality that may even cause conflicts and
performance penalties. For instance, retransmission on loss or on detection of
network congestion may be replicated at the physical, transport, and
application layers. Network links along the path allocate resources to users
based on parameters that users cannot influence. Therefore, singular events
may lead to cascading actions: an application that detects end-to-end
congestion and lowers its sending rate may trigger the cellular access link
scheduler to de-allocate resources for the user's access link below the level
that was needed to handle the congestion. In the worst case, aggressive network
timing (a result of overall network capacity optimization) combined with
vertical OSI layer interaction can trigger network oscillations. The
complexity can be increased at will by adding further uncertainty factors such
as the number of active users in a cell, multi-path aggregation for
heterogeneous network technologies at various layers (for instance MPTCP or
SCTP at the transport layer), SDN, server virtualization, etc. And the number
of parameters grows beyond what is manageable once theoretical radio
provisioning models meet practical radio coverage (potentially inside
buildings) and user mobility.
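
To make the cascading-action argument concrete, the following toy simulation
(all constants are hypothetical and not modeled on any real scheduler or
congestion controller) couples an application-level rate adaptation loop with
an access-link scheduler that sizes its next grant from the load it just
observed. Because each control loop reacts only to the other's last decision,
every ramp-up hits a grant sized for the backed-off rate:

    CELL_CAPACITY = 10.0   # Mbit/s: assumed radio capacity available to this user
    STEP_UP = 1.2          # multiplicative rate increase while no congestion is seen
    STEP_DOWN = 0.5        # multiplicative rate decrease on congestion

    def simulate(steps=12, demand=8.0, grant=8.0):
        """Yield (congested, demand, grant) for each scheduling interval."""
        for _ in range(steps):
            congested = demand > grant        # sender sees loss / queuing delay
            yield congested, demand, grant
            observed = min(demand, grant)     # load that actually crossed the link
            # end-to-end congestion control reacts to what the sender experienced ...
            demand = demand * (STEP_DOWN if congested else STEP_UP)
            # ... while the scheduler independently sizes the next grant from the
            # (possibly backed-off) load it just observed
            grant = min(CELL_CAPACITY, observed)

    if __name__ == "__main__":
        for interval, (congested, d, g) in enumerate(simulate()):
            state = "congested" if congested else "ok"
            print(f"interval {interval:2d}: demand {d:5.2f} Mbit/s, "
                  f"grant {g:5.2f} Mbit/s ({state})")

In this toy model the demand and the grant alternate between congested and
idle intervals and drift downward together, a miniature version of the
oscillation and resource de-allocation cascade described above.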

A fundamental dilemma with respect to objective network quality assessment
becomes obvious when reviewing early implementations of mobile cellular
networks. Years ago, some operators deployed transparent compression at the
link level in their networks. In particular, mobile cellular access links
compressed user data transparently at link ingress and decompressed it at link
egress. This raises the question of how to conduct fair network quality
measurements and comparisons, as the technology exhibits substantial bias
depending on the actual data used for the measurements. For textual user data,
the compression results in an n-fold increase in transfer capacity from a user
perspective. For binary data, the unconditional compression may even cause
performance (delay) penalties. An objective quality measurement for such a use
case is virtually impossible without testing with the user's actual traffic.
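
The bias is easy to reproduce. The sketch below uses zlib merely as a stand-in
for whatever proprietary scheme a link might deploy, and a hypothetical
HTTP-like text payload; it compares the effective capacity factor a
transparently compressing link would give to compressible text versus
incompressible binary data such as encrypted traffic:

    import os
    import zlib

    def effective_capacity_factor(payload: bytes) -> float:
        """How many times the nominal link rate the user effectively gets
        if the link compresses this payload transparently."""
        compressed = zlib.compress(payload, 6)
        return len(payload) / len(compressed)

    # Hypothetical payloads: repetitive text vs. random (encrypted-like) bytes.
    text_payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 200
    binary_payload = os.urandom(len(text_payload))

    print(f"text payload:   x{effective_capacity_factor(text_payload):.1f} nominal capacity")
    print(f"binary payload: x{effective_capacity_factor(binary_payload):.1f} nominal capacity")

With these payloads the text case gains a multiple of the nominal capacity,
while the random binary case even ends up slightly below 1.0, matching the
delay penalty mentioned above.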

Structuring these observations, network quality from a user perspective depends
first and foremost on:
(1) the past and momentary traffic (amount of data as well as packet content)
generated by the applications that are active on the user's device(s),
(2) the network technologies and protocols in use at the end-user device and on
the (potentially redundant/aggregated) end-to-end network path(s), and
(3) the network conditions and parameters (including users in the cell,
competing traffic, timing, network configuration, roaming agreements, radio
provisioning, mobility, etc.).

In isolated systems and static scenarios it may be feasible to boil network
quality down to quantifiable metrics like one-way delay, one-way loss, and
link capacity. However, in today's networks all of these metrics depend on the
traffic that a user and her active applications generate. Deployed
applications, application use, and the resulting user traffic differ across
societies, cultures, geographies, age groups, tariffs, etc. It is practically
impossible to define a "prototypical traffic" or a mixture of traffic patterns
with local or global validity. If done rigorously (the ivory-tower way), it
would end up with dozens of "typical" user or application profiles. Worse, even
the target audience (users interested in their network quality) typically
cannot state the profile of their own traffic, as it is hidden within
applications and lower layers. And finally, to complicate matters, even
specific applications may select or switch the protocols they use at run time,
potentially transparently from the user's perspective. For instance, a web
browser may use either HTTP on top of TLS/TCP or QUIC as transport. A metric
and methodology that claims to map performance to one value cannot capture
this indeterminacy and is, therefore, highly oversold.
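
A small numerical sketch illustrates the point. The networks, metrics, and
weights below are entirely made up; the only claim is structural: any single
weighted score must assume some traffic profile, and different profiles can
rank the same networks in opposite order:

    # Hypothetical per-network measurement results.
    networks = {
        "network A": {"delay_ms": 15.0, "loss_pct": 0.1, "capacity_mbps": 20.0},
        "network B": {"delay_ms": 60.0, "loss_pct": 0.1, "capacity_mbps": 300.0},
    }

    # Per-profile weights: how strongly each metric matters to that traffic mix.
    profiles = {
        "interactive/gaming": {"delay": 0.8, "loss": 0.1, "capacity": 0.1},
        "bulk download":      {"delay": 0.1, "loss": 0.1, "capacity": 0.8},
    }

    def score(metrics: dict, weights: dict) -> float:
        """Higher is better; delay and loss are penalties, capacity is a reward."""
        return (weights["capacity"] * metrics["capacity_mbps"]
                - weights["delay"] * metrics["delay_ms"]
                - weights["loss"] * metrics["loss_pct"] * 100)

    for profile, weights in profiles.items():
        ranking = sorted(networks, key=lambda n: score(networks[n], weights),
                         reverse=True)
        print(f"{profile}: best network is {ranking[0]}")

With these numbers, the low-delay, low-capacity network wins for the
interactive profile, while the high-capacity network wins for bulk downloads;
a single published score would have to silently pick one of these answers.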

Therefore, quantifying network quality by one single value seems doomed to
fail - in particular whenever this value aims at predicting the expected
network quality for the future without knowing the user's application
requirements and specific traffic. Accurate knowledge of a specific user's
traffic - for instance through passive measurements - may support a
post-analysis of the network quality ("my network quality for the past session
was x"). But even this solution is highly intrusive and, therefore,
questionable: the detailed analysis requires user traffic and data to be
collected and analyzed, which raises substantial concerns with respect to bias
(opt-in can be expected to be used mainly by technically inclined users) and
privacy (GDPR), while its representativeness remains limited to the specific
data set.

Conclusion:
From a technological point of view there are too many uncertainty factors that
exert heavy bias on network quality (at OSI layers 1-7, plus the user). This
makes it impossible to map network quality to one easily comprehensible,
objective, unbiased, representative, predictive value or scale. The underlying
reason is that the copper wire abstraction used in most user experience models
no longer holds true: networks react to the user's traffic and depend on a huge
set of uncertainty parameters. Objective and subjective network quality (mapped
to technical network parameters like delay, capacity, loss, ...) therefore
depend to a large extent on the user's traffic and data.

One potential solution could be to define a framework (mechanisms and
protocols) that (A) supports users in monitoring and evaluating their effective
traffic in order to map it to an abstract, privacy-preserving user traffic
profile, (B) adds application-level and network-provider interfaces for
monitoring events in order to include them in the analysis, and (C) considers
potential privacy concerns and GDPR regulations in its design. Since human
visual perception is excellent, one option could be to map multiple benchmark
results onto one multidimensional diagram representing network quality. The
remaining question is whether these mechanisms can be designed and implemented
such that tests can be easily run and the results comprehended by the broad
audience of (potentially) non-technical Internet users. Should we target an
LMAP 2.0 that focuses on the end-user perspective instead of the operator
perspective? Or should LMAP's complexity and its, in my opinion, limited
acceptance serve as a warning?
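
As a very rough illustration of item (A), the sketch below shows one possible
shape for such an abstract, privacy-preserving traffic profile. Every name and
traffic class in it is an assumption made for illustration; the point is only
that coarse per-class aggregates, without payloads, addresses, or per-flow
timestamps, could already suffice to weight per-class benchmark results for a
specific user:

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class ClassAggregate:
        bytes_up: int = 0
        bytes_down: int = 0
        packets: int = 0

    @dataclass
    class TrafficProfile:
        """Coarse, privacy-preserving summary of one measurement period."""
        # keys are abstract traffic classes, e.g. "interactive", "streaming", "bulk"
        classes: Dict[str, ClassAggregate] = field(default_factory=dict)

        def record(self, traffic_class: str, bytes_up: int,
                   bytes_down: int, packets: int) -> None:
            agg = self.classes.setdefault(traffic_class, ClassAggregate())
            agg.bytes_up += bytes_up
            agg.bytes_down += bytes_down
            agg.packets += packets

        def shares(self) -> Dict[str, float]:
            """Relative volume share per class, the only detail a benchmark
            would need in order to weight its per-class results."""
            total = sum(a.bytes_up + a.bytes_down for a in self.classes.values()) or 1
            return {c: (a.bytes_up + a.bytes_down) / total
                    for c, a in self.classes.items()}

    profile = TrafficProfile()
    profile.record("interactive", bytes_up=2_000_000, bytes_down=5_000_000, packets=40_000)
    profile.record("streaming", bytes_up=500_000, bytes_down=800_000_000, packets=600_000)
    print(profile.shares())

Whether such coarse aggregates are both privacy-preserving enough to satisfy
GDPR and expressive enough to be representative is exactly the open design
question raised above.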

[1] J. Fabini and A. Morton, "Advanced Stream and Sampling Framework for IPPM",
RFC 7312, 2014.
[2] J. Fabini, "Access network measurements", Lightning talk, IRTF Workshop:
Research and Applications of Internet Measurements (RAIM) 2015, Yokohama,
Japan. https://irtf.org/raim-2015-slides/fman/fabini.pdf
[3] J. Fabini, "Delay measurement tools: RDM", Lightning talk, IRTF Workshop:
Research and Applications of Internet Measurements (RAIM) 2015, Yokohama,
Japan. https://irtf.org/raim-2015-slides/mpt/fabini.pdf