Minutes IETF116: rasprg: Thu 04:00
minutes-116-rasprg-202303300400-00

Meeting Minutes Research and Analysis of Standard-Setting Processes Proposed Research Group (rasprg) RG
Date and time 2023-03-30 04:00
Title Minutes IETF116: rasprg: Thu 04:00
State Active
Last updated 2023-03-29

RASPRG Meeting at IETF-116

Date-time: 2023-03-30, 13:00 to 14:30 JST (04:00 to 05:30 UTC)
Notetaker(s): Caspar Schutijser, Michael B

Talk: The Hard Work of the Hum: using ethnography to study power and politics in the IETF

Speaker: Corinne Cath

In this talk, Dr. Corinne Cath will explain how she used ethnography -- a
qualitative method that studies people in their natural environment -- to
study the IETF. She will outline the basics of ethnography, how to apply
it to the high-tech and elite nature of Internet standards bodies, and
what unique data about politics and power can be drawn from
understanding the IETF's running code through the people that craft it.
This talk draws on her Ph.D. research on the IETF's culture and
practices, University of Oxford (2016-2021).

Discussion

DKG (Daniel Gillmor?): (joking comment about how a hum should/could
be run) a hum should have more than two options

Susan Hares: How critical was the hum to your research? I have been
in the IETF for over 30 years, and as a WG chair for most of that time,
we do not try to judge consensus on the hum of the room, but from the
mailing list.
Corinne Cath: For that reason, I did not focus on it so much. But
it's something you don't know about until you come here to see it.
Susan Hares: Good to discuss offline.
Corinne Cath: This is a really good point. This is an organisation that
prides itself on abrasiveness, which you can trace back to masculine ???

Stéphane Bortzmeyer: We're not supposed to benchmark the IETF against
other SDOs. But I'm interested in whether these findings are specific to the IETF.
Unofficial collaboration is often more important than official.
Corinne Cath: This is a really good point. This is an organisation
that prides itself on abrasiveness. Some of the informal practices
within the IETF are exclusionary because they are masculine practices.
This makes the organisation very unattractive for participants who don't
identify as male.

Alastair Woodman: I'm not surprised by your findings. I gave feedback
to you on this. Most people who attend the IETF can afford to follow
the IETF around because companies pay for it. Most civil society can't
afford to, and the work here is arcane, so why would they bother? The
fact that the IETF has a charter that says the opposite is something that
people probably laugh about in private. It's something that could be
taken up to IETF seniors to change from the top. Or change how the
'engine room' works, but that's how the latter works: companies spend
lots of money to send people here to ship product. It doesn't make sense
to build something proprietary because consumers won't buy it. Were you
expecting anything different?
Corinne Cath: This presentation speaks differently to two different
audiences. Nowhere apart from my PhD will you find someone saying that
the IETF is procedurally open but in practice quite thorny. Civil society
participants come here with misaligned expectations, thinking that they
can participate equally, when that's not the case.

Dan Harkins: What other standards bodies did you investigate, for
how they reach consensus?
Corinne Cath: Mostly the IETF, but also IEEE, ICANN, CENELEC, ...
Dan Harkins: You mentioned that the hum is bad for the reasons you
stated. But it prevents the block voting you see in other SDOs.

Talk: Data-driven Reviewer Recommendations

Speaker: Stephen McQuistin

Discussion

Corinne Cath: How can this account for social dynamics? In
academia, you can specifically name people who can't review your work
due to professional or personal tension between you and the reviewer.

Susan Hares: Corinne mentioned something very practical. I tried
out your tool on one of my drafts, which is contentious. It turned up
someone like that. It also turned up people who were contributors. I
look forward to using your tool in my WG.

DKG (Daniel Gillmor?): Neat tool. Are you going to build a tool that
gives a reviewer the ability to find a draft that I as the reviewer
should look at?
Stephen McQuistin: We've got the data, we can do that.

Mallory Knodel: That was very similar to my question. About mailing
list data: do you have data on who is saying what? Even humans have
a hard time figuring out what is a quote and what isn't.
Stephen McQuistin: Yes, we worked hard to account for this.
Mallory Knodel: Some people review a lot of drafts. And we tell new
people to review drafts. Will it make it harder for those people to
become reviewers?
Stephen McQuistin: Yes that's a really good point [missed the rest
of the answer]

Alexander Railean: Can you tell us more details about the nature of the
categories of errors that were found? Are we talking about typos or
style, or "did you misplace the minus sign and the rocket blew up"?
Stephen McQuistin: [missed this] [me too]

Stephen McQuistin: In response to Corinne, that's a good point. In
general, we think it's better to be driven by data than things other
than data.

Talk: The Expanding Universe of BigBang

Speaker: Sebastian Benthall

Discussion

No questions or comments

Talk: Some Research and Methodologies from IETF Data

Speaker: Priyanka Sinha

Discussion

Corinne Cath: A practical/methodological question. A problem these tools
always run into is that the things you measure are in the eye of the
beholder. Example: gender. The result can be very binary ("you are male
or female") but that may not be the reality. Also, whether something is
offensive is highly cultural.
Priyanka Sinha: This was mentioned at my defence. I am not labelling
people as male/female. I am using unsupervised learning, and it will
cluster people itself, without me labelling them. I wouldn't say that
that's bias-free, but it's better than a coarse gender attribute as a
feature. For toxicity, it's a current research problem. I haven't made
any contributions there yet.

Susan Hares: Wonderful work. Two questions. First, having been in
some of those WGs, I think you might have some skewing by the fact that
some people are chairs. I'm not sure if you took that into
consideration. Secondly, how did you get around legal problems in usage
or classification of this data? When I did some of this work, it was
recommended that I didn't go into some of this work. Are the constraints
legal or not?
Priyanka Sinha: For the skew part, that's a technical thing. I
haven't handled the skew from WG chairs. But on the other hand, at the
lower end of participation, I haven't considered people who have sent
only one email in 10 years. On the data, I have had a lot of problems
and issues. I am very hopeful of being part of the IETF and RASPRG; as
long as it's in the public interest, I can continue this research.

Ignacio Castro: When you do toxicity analysis, the models are trained
on public data. E.g. technical terms like 'kill switch' might be
construed as very negative.

Talk: Large Language Models in Standards Discourse Analysis

Speaker: 'Effy' Xue Li

Discussion

No time for questions

RASPRG going forward