
Indicators of Compromise (IoCs) and Their Role in Attack Defence
draft-ietf-opsec-indicators-of-compromise-04

Yes: Warren Kumari
No Objection: John Scudder, (Alvaro Retana), (Andrew Alston)

Note: This ballot was opened for revision 03 and is now closed.

Paul Wouters
Yes
Comment (2023-01-18 for -03) Sent
Thanks for this document.

I just have a few nits/comments:

Why is figure 1 a pyramid? I understand the "more" and "less"
modifiers to "precise", "fragile" and "pain", but I don't understand
why one layer sits on top of another smaller layer. Especially seeing
domain names at 25M and SHA256 hashes at 5M, yet with domain names
forming a smaller layer on top of SHA256 hashes.
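(As an aside on why hash IoCs sit at the "fragile" end of the pyramid: a one-byte change to a sample yields an entirely different SHA-256, so the hash IoC stops matching. A minimal illustrative sketch, not text from the draft:)

```python
import hashlib

sample = b"MZ\x90\x00" + b"\x00" * 60   # toy stand-in for a malware binary
variant = bytearray(sample)
variant[10] ^= 0x01                      # attacker flips a single byte

h1 = hashlib.sha256(sample).hexdigest()
h2 = hashlib.sha256(bytes(variant)).hexdigest()
print(h1 == h2)  # False: the hash IoC no longer matches the variant
```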

        This outlined recent exploitation of Fortinet [FORTINET]

I would change "Fortinet" to something not identifiable to a single
vendor. No need to shame them for a decade while this RFC is current.

        28 million of them were for domain generation algorithms (DGAs)

It wasn't for the algorithms but for the domains generated by such algorithms.
Perhaps:  28 million of these were for domains generated by known domain
generation algorithms (DGAs)

        Other IoCs, like Server Name Indicator values in TLS or the
        server certificate information, also provide IoC protections.

This sentence is in a section that talks about DNS and blocking at an
upstream DNS layer, but that does not apply to SNI/TLS. Perhaps this needs
its own section, or should be moved elsewhere? As written, it appears SNI
can be blocked by a DNS server.

        Note too that DNS goes through firewalls, proxies and possibly
        to a DNS filtering service; it doesn't have to be unencrypted,
        but these appliances must be able to decrypt it to do anything
        useful with it, like blocking queries for known bad URIs.

I find this paragraph weak and alluding to 'DoH is evil'. In an enterprise,
surely one can force DNS through one's own DNS infrastructure, and HTTPS
through one's own middleware proxy, and one would certainly want to block
any direct encrypted connections to the outside world, regardless of
whether the encrypted payload is DNS or something else.
Warren Kumari
Yes
Éric Vyncke
(was Discuss) Yes
Comment (2023-02-03) Sent
Thank you for addressing my previous DISCUSS point (see https://mailarchive.ietf.org/arch/msg/opsec/GHbje1_9SRFgd5F_TmBO6qygDeg/) and Dave Thaler's int-dir review ones.

More important: thanks to the authors and the OPSEC WG for this useful and easy to read document.

Regards

-éric
Erik Kline
No Objection
Comment (2023-01-16 for -03) Not sent
# Internet AD comments for draft-ietf-opsec-indicators-of-compromise-03
CC @ekline

## Nits

### S2

* "at a network, endpoint or application levels" ->
  "at the network, endpoint and application levels", perhaps?

### S3.2.3

* "Whomever they are" -> "Whoever they are", I believe?
Francesca Palombini
No Objection
Comment (2023-01-19 for -03) Not sent
Thank you for this document.

Many thanks to Rich Salz for his ART ART review: https://mailarchive.ietf.org/arch/msg/art/LgfN3Yv1TahOJOsicEZ6Mfp_jBY/. Authors: please consider addressing Rich's comments (which are minor) before publication.
John Scudder
No Objection
Roman Danyliw
No Objection
Comment (2023-01-18 for -03) Sent
Thank you to Kathleen Moriarty for the SECDIR review.

** Abstract
.  It
   highlights the need for IoCs to be detectable in implementations of
   Internet protocols, tools, and technologies - both for the IoCs'
   initial discovery and their use in detection - and provides a
   foundation for new approaches to operational challenges in network
   security.

What “new approaches” are being suggested?  It wasn't clear from the body of the text.
 
** Section 1.
   intrusion set (a
   collection of indicators for a specific attack)

This definition is not consistent with the use of the term as I know it.  In my experience an intrusion set is a set of activity attributed to an actor.  It may entail multiple campaigns by a threat actor, and consist of many attacks, TTPs and intrusions.  APT33 is an example of an intrusion set.

** Section 1.  Editorial. s/amount intelligence practitioners/cyber intelligence practitioners/

** Section 2.  Editorial.
   used in malware strains to
   generate domain names periodically.  Adversaries may use DGAs to
   dynamically identify a destination for C2 traffic, rather than
   relying on a list of static IP addresses or domains that can be
   blocked more easily.

-- Isn’t the key idea that these domain names are algorithmically generated on a periodic basis?
-- Don’t adversaries compute, rather than identify, the C2 destination?
-- Be clearer on the value proposition of dynamic generation vs hard-coded IPs

NEW
used in malware strains to periodically generate domain names algorithmically.  This malware uses a DGA to compute a destination for C2 traffic, rather than relying on a pre-assigned list of static IP addresses or domains that can be blocked more easily if extracted from the malware.
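(For illustration only, not text proposed for the draft: a toy DGA of the kind described derives domains from the current date, so malware and its operator can compute the same rendezvous domains independently, with no hard-coded list to extract and block. The `.example` suffix and SHA-256 derivation are assumptions of this sketch.)

```python
import hashlib
from datetime import date

def toy_dga(day: date, count: int = 3) -> list:
    """Toy domain generation algorithm: derive pseudo-random
    domain names from the date, as real DGA malware does."""
    domains = []
    for i in range(count):
        # Deterministic seed from the date and a counter.
        seed = f"{day.isoformat()}-{i}".encode()
        label = hashlib.sha256(seed).hexdigest()[:12]
        domains.append(label + ".example")
    return domains

# Malware and its C2 operator compute the same list independently.
print(toy_dga(date(2023, 1, 18)))
```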

** Section 2.  Kill chains need not be restricted to the seven phases defined in the original Lockheed model.

** Section 3.2.1  Editorial.
   IoCs are often discovered initially through manual investigation or
   automated analysis.  

Aren’t manual and automated the only two options?  Perhaps s/IoCs are often discovered/IoCs are discovered/

** Section 3.2.1.
   They can be discovered in a range of sources,
   including in networks and at endpoints

What is “in networks” in this context?  Does it mean by monitoring the network?

** Section 3.2.1.
   Identifying a particular protocol run related to an
   attack
What is a “protocol run”? Is that a given session of a given protocol?

** Section 3.2.1

   Identifying a particular protocol run related to an
   attack is of limited benefit if indicators cannot be extracted and
   subsequently associated with a later related run of the same, or a
   different, protocol.  

-- Is this text assuming that the indicators to identify the flow need to come from the network?  Couldn’t one have reverse-engineered a malware sample, with that being the basis of the IoC to watch for?

-- Wouldn’t there be some residual value in identifying known attack traffic as a one-off, if nothing more than to timestamp the activity of the threat actor?

** Section 3.2.3.  In addition to ISACs, the term ISAO is also used (at least in the US)
OLD
   often
   dubbed Information Sharing and Analysis Centres (ISACs)
NEW
   often
   dubbed Information Sharing and Analysis Centres (ISACs) or Information Sharing and Analysis Organizations (ISAOs)

** Section 3.2.3.  s/intel feeds/intelligence feeds/

** Section 3.2.3. s/international Computer Emergency Response Teams (CERTs)/internal Computer Security Incident Response Teams (CSIRTs)/

** Section 3.2.3
   Whomever
   they are, sharers commonly indicate the extent to which receivers may
   further distribute IoCs using the Traffic Light Protocol [TLP].

Perhaps weaken the claim that TLP is the common way to pass redistribution guidance, unless there is a strong citation to support it.

** Section 3.2.4
   For IoCs to provide defence-in-depth (see Section 6.1), which is one
   of their key strengths, and so cope with different points of failure,
   they should be deployed in controls monitoring networks and endpoints
   through solutions that have sufficient privilege to act on them.


I’m having trouble unpacking this sentence.

-- Even with the text in Section 6.1, I don’t follow how IoCs provide defense in depth.  It’s the underlying technology/controls performing mitigation that provide this defense.

-- what is a “controls monitoring networks”?  

-- could more be said about the reference “solutions”

** Section 3.2.4
   While IoCs may be manually assessed after
   discovery or receipt, significant advantage may be gained by
   automatically ingesting, processing, assessing, and deploying IoCs
   from logs or intel feeds to the appropriate security controls.

True in certain cases.  Section 3.2.2. appropriately warned that IoCs are of different quality and that one might need to ascribe different confidence to them.  Recommend propagating or citing that caution.

** Section 3.2.4.

   IoCs can be particularly effective when deployed in security controls
   with the broadest impact.

-- Could this principle be further explained?  What I got from the subsequent text was that a managed configuration by a vendor (instead of the end-user) is particularly effective.

-- It would be useful to explicitly say the obvious which is that “IoC can be particularly effective _at mitigating malicious activity_”

** Section 3.2.5.

   Security controls with deployed IoCs monitor their relevant control
   space and trigger a generic or specific reaction upon detection of
   the IoC in monitored logs.

Is it just “logs” being monitored by security controls?  Couldn’t a network tap/interface be used too?

** Section 4.1.1.  Editorial. This section has significant similarity with Section 6.1.  Consider if this related material can be integrated or streamlined.

** Section 4.1.1.  Editorial.

   Anti-Virus (AV) and Endpoint Detection and
   Response (EDR) products deploy IoCs via catalogues or libraries to
   all supported client endpoints

Is it “all supported client endpoints” or “client endpoints”?  What does “all” add?

** Section 4.1.1.

   Some types of IoC may be present
   across all those controls while others may be deployed only in
   certain layers.  

What is a layer?  Is that layer in a protocol stack or a "defense in depth" layer?

** Section 4.1.1.  I don’t understand how the two examples in this section illuminate the thesis of the opening paragraph that almost all modern cyber defense tools rely on indicators.

** Section 4.1.1.  What is “estate-wide patching”?  Is that the same as “enterprise-wide”?

** Section 4.1.2.  With respect, the thesis of this section is rather simplistic and fails to capture the complexity and expertise required to field IoCs.  No argument that a small manufacturer may be a target.  However, there is a degree of expertise and time required to be able to load and curate these IoCs.  In particular, I am challenged by the following sentence, “IoCs are inexpensive, scalable, and easy to deploy, making their use
particularly beneficial for smaller entities ...”  My experience is that small businesses struggle even with these activities.

IMO, the thesis (mentioned later in the text) should be that the development of IoCs can be left to better-resourced organizations.  Organizations without the ability to do so could still benefit from the shared threat intelligence.

Additionally:
   One reason for this is that use of IoCs does not require the same
   intensive training as needed for more subjective controls, such as
   those based on manual analysis of machine learning events which
   require further manual analysis to verify if malicious.  

-- what are “subjective controls”?  Is the provided example of a “machine learning event” the output of such a system?

** Section 4.1.4.  This section has high overlap with Section 3.2.3.  

-- Can they be streamlined?  

-- Can the standards for sharing indicators be made consistent?

-- (author conflict of interest) Consider if you want to list IETF’s own indicator sharing format, RFC7970/RFC8727

** Section 4.1.4

   Quick and easy sharing of IoCs gives blanket coverage for
   organisations and allows widespread mitigation in a timely fashion -
   they can be shared with systems administrators, from small to large
   organisations and from large teams to single individuals, allowing
   them all to implement defences on their networks.

Isn’t this text conveying the same idea as was said in the section right before it (Section 4.1.3)?

** Section 4.1.5  Isn’t the thesis of automatic deployment of indicators already stated in Section 3.2.4?

** Section 4.1.5

   While it is still necessary to invest effort both to enable efficient
   IoC deployment, and to eliminate false positives when widely
   deploying IoCs, the cost and effort involved can be far smaller than
   the work entailed in reliably manually updating all endpoint and
   network devices.

What is the false positive being referenced here?  Is it false positive matches against the IoC?  If so, how is that related to manually updated endpoints?  

** Section 4.1.7.  No disagreement on the need for context.  However, I’m confused about how this text is an “opportunity” and the new material it is adding.  In my experience with the classes of organizations named as distributing IoCs in Section 3.2.3. (i.e., ISACs, ISAO, CSIRTS, national cyber centers), context is “table stakes” for sharing.  How does a receiving party know how to act on the IoC otherwise?  

** Section 5.1.1

   Malicious IP addresses and domain names can also be
   changed between campaigns, but this happens less frequently due to
   the greater pain of managing infrastructure compared to altering
   files, and so IP addresses and domain names provide a less fragile
   detection capability.

Please soften this claim or cite a reference.  How often an infrastructure changes between campaigns can vary widely between threat actors.

** Section 5.1.2
   To be used in attack defence, IoCs must first be discovered through
   proactive hunting or reactive investigation.  

Couldn’t they also be shared with an organization?

** Section 5.3.

   Self-censoring by sharers appears more prevalent and more extensive
   when sharing IoCs into groups with more members, into groups with a
   broader range of perceived member expertise (particularly the further
   the lower bound extends below the sharer's perceived own expertise),
   and into groups that do not maintain strong intermember trust.  

Is there a citable basis for these assertions?

** Section 5.3.

   Research
   opportunities exist to determine how IoC sharing groups' requirements
   for trust and members' interaction strategies vary and whether
   sharing can be optimised or incentivised, such as by using game
   theoretic approaches.

IMO, this seems asymmetric to call out.  In almost every section there would be the opportunity for research.

** Section 5.4.  

   The adoption of automation can also enable faster and easier
   correlation of IoC detections across log sources, time, and space.

-- Does “log sources” also mean network monitoring?
-- what is “space” in this context? Is it the same part of the network?

** Section 6.1.  The new “best practice” in this section isn’t clear.  “Defense-in-Depth” has been previously mentioned.

** Section 6.1.  Editorial.

   If an attack happens, then you hope an endpoint solution will pick it
   up.  

Consider less colloquial language.

** Section 6.1.  It isn’t clear to me how the example of NCSC’s PDNS service demonstrated defense in depth.  What I read into it was a successful, managed security offering.  Where was the “depth”?

** Section 6.1.

  but if the IoC is on PDNS, a consistent defence is
   maintained. This offers protection, regardless of whether the
   context is a BYOD environment

In a BYOD environment, why is consistent defense ensured?  There is no assurance that the device will be using the PDNS.

** Section 6.2.  It seems odd to nest the Security Considerations under best practices, especially since it is recommending speculative, not-yet-performed research.  Additionally, per the “privacy-preserving” research, the privacy concerns noted in Section 5.3 don’t seem clear enough to action.
Robert Wilton Former IESG member
Yes
Yes (2023-01-09 for -03) Sent
Hi,

Thanks for this informative read.

When sharing IoCs, is there ever a concern that the attackers themselves may make use of an IoC feed, particularly one that is generated in a machine readable format, to automatically modify their attacks to mitigate the defenses?  Are steps taken to mitigate this, or is this not really a practical concern at this time?

Regards,
Rob
Alvaro Retana Former IESG member
No Objection
No Objection (for -03) Not sent
Andrew Alston Former IESG member
No Objection
No Objection (for -03) Not sent
Lars Eggert Former IESG member
No Objection
No Objection (2023-01-16 for -03) Sent
# GEN AD review of draft-ietf-opsec-indicators-of-compromise-03

CC @larseggert

Thanks to Vijay Gurbani for the General Area Review Team (Gen-ART) review
(https://mailarchive.ietf.org/arch/msg/gen-art/f4qDRffPWyGDKXuxNbrb5UVwU38).

## Comments

### Inclusive language

Found terminology that should be reviewed for inclusivity; see
https://www.rfc-editor.org/part2/#inclusive_language for background and more
guidance:

 * Term `master`; alternatives might be `active`, `central`, `initiator`,
   `leader`, `main`, `orchestrator`, `parent`, `primary`, `server`

## Nits

All comments below are about very minor potential issues that you may choose to
address in some way - or ignore - as you see fit. Some were flagged by
automated tools (via https://github.com/larseggert/ietf-reviewtool), so there
will likely be some false positives. There is no need to let me know what you
did with these suggestions.

### URLs

These URLs in the document did not return content:

 * https://cert.europa.eu/static/WhitePapers/UPDATED-CERT-EU_Security_Whitepaper_2014-007_Kerberos_Golden_Ticket_Protection_v1_4.pdf

### Grammar/style

#### Section 1, paragraph 1
```
nce: the activity of providing cyber security to an environment through the
                               ^^^^^^^^^^^^^^
```
The word "cybersecurity" is spelled as one.

#### Section 2, paragraph 5
```
twork defenders (blue teams) to pro-actively block malicious traffic or code
                                ^^^^^^^^^^^^
```
This word is normally spelled as one.

#### Section 3.2.2, paragraph 1
```
roups to national governmental cyber security organisations and internationa
                               ^^^^^^^^^^^^^^
```
The word "cybersecurity" is spelled as one.

#### Section 3.2.7, paragraph 1
```
rce malware can be deployed by many different actors, each using their own T
                               ^^^^^^^^^^^^^^
```
Consider using "many".

#### Section 4.1.1, paragraph 3
```
security controls monitoring numerous different types of activity within net
                             ^^^^^^^^^^^^^^^^^^
```
Consider using "numerous".

#### Section 5.1.3, paragraph 1
```
the ongoing legitimate use. In a similar manner, a file hash representing an
                            ^^^^^^^^^^^^^^^^^^^
```
Consider replacing this phrase with the adverb "similarly" to avoid wordiness.

#### Section 5.2.1, paragraph 2
```
member expertise (particularly the further the lower bound extends below the
                                   ^^^^^^^
```
It appears that a comma is missing.

#### Section 5.2.1, paragraph 2
```
rust. Trust within such groups appears often strongest where members: intera
                               ^^^^^^^^^^^^^
```
The adverb "often" is usually put before the verb "appears".

#### Section 5.2.2, paragraph 2
```
uational awareness is much more time consuming. A third important considerati
                                ^^^^^^^^^^^^^^
```
This word is normally spelled with a hyphen.

#### Section 5.2.2, paragraph 3
```
C, as anything more granular is time consuming and complicated to manage. In
                                ^^^^^^^^^^^^^^
```
This word is normally spelled with a hyphen.

#### Section 5.3, paragraph 2
```
of call for protection from intrusion but endpoint solutions aren't a panacea
                                     ^^^^
```
Use a comma before "but" if it connects two independent clauses (unless they
are closely connected and short).

#### Section 6.1, paragraph 4
```
out of scope for this draft. Note too that DNS goes through firewalls, proxie
                                  ^^^^^^^^
```
Did you mean "to that"?

## Notes

This review is in the ["IETF Comments" Markdown format][ICMF]. You can use the
[`ietf-comments` tool][ICT] to automatically convert this review into
individual GitHub issues. Review generated by the [`ietf-reviewtool`][IRT].

[ICMF]: https://github.com/mnot/ietf-comments/blob/main/format.md
[ICT]: https://github.com/mnot/ietf-comments
[IRT]: https://github.com/larseggert/ietf-reviewtool