Tuesday, May 4, 1999
12:00-1:30 p.m.
3085 EU II

In attendance:
Jason Schatz (JS), Jeff Rowe (JR), Chris Wee (CW), David Klotz (DK) and Karl Levitt (KL)


News Items
IDIP Paper
Security Metric
Handout Notes
  • News Items
    1. We're free until September – demo is off
    2. KL: New proposal from Boeing – cash in hand. Trust management among different administrative domains. Formal trust model.
  • IDIP Paper for Jeff and Jason
    1. JR – Global aggregation, little local things you can use. Current IDIP subnets separated by IDIP routers
    2. Start with an IDIP-based paper, but include simple ideas that make it better. The way it’s done is terrible. CW: Protocol is flawed. JS: Implementation is terrible.
      1. JR: Loops in topology – infinite loops in broadcast.
      2. How do you get an intelligent global response using only local info? What traffic have neighbors seen and told you about?
      3. JS: IDIP blocks all along the attack path – there is enough information to block near the attacker, based on topographical distance (number of hops). You end up with a linear set of subnets and routers, because we only deal with routers along the path of attack. Draw a simple diagram.
        1. CW: Data collection and response tied to same topology. Why not respond along a different path?
        2. JR: Only relying on local data/local response from neighbors. Boeing hasn’t pushed idea to see what you can do with information from neighbors only. In the aggregate, get optimized response.
          1. If you’ve seen the traffic, don’t block. Aggregate response to worm – isolate source of worm – end up with boundary.
        3. KL: Focus is local info; key is what we can do?
          1. JR: Global response with only info from neighbor. KL: Isolated with minimum number of routers. JR: Depends on router metric. It costs so much to block here.
          2. DK: Global cost – getting duplicates – local cost – not getting full value.
        4. JR: We’ve got 4-5 additions to IDIP to include. More attractive than discovery coordinator. Only using local information, get some total aggregation within the domain. Does work for worm case.
      4. KL: Submit it to the Applications conference or IEEE Network – describe IDIP with embellishments – cost model etc. Don’t throw away IDIP – it's worth half a paper
    3. CW: Analogous to routing based on information from and through neighbors. The first algorithm had global views of routing through flooding.
      1. JR: Basic architecture is same, metric is different – network throughput.
      2. CW: Original algorithm only dealt with connectivity. Dealing with lack of connectivity here. The assumption is that attacks are worms or connections based. It won't work for the Melissa virus. I’m not tied to being dependent on local information. Depending on non-local information, there is an authentication problem. You need an efficient way of getting global view.
        1. DK: One advantage is that we know that this will scale. Might be able to prove that it’s effective.
        2. CW: If attacker can always go around block…
          1. JS: Not necessarily, not dynamically rerouting depending on protocols.
          2. JR: It's a question of speed. Not stopping future attack, just current one.
          3. CW: Lots of variations – maybe if you stick with local information and control, when attack is rerouted, then you’ve got same system and it’ll block that. Attack can proceed, but not once all blocks are set up.
          4. JR: Section on building up global network.
    4. IDIP's winning points.
      1. Academic interest – finding spoofers, have no central control, blocking nearest attacker.
      2. JR: Works for denial of service.
      3. KL: Works for containment, blocking future paths, taxonomy of responses. It shows, depending on speed, if we’re faster than protocols and routers.
      4. JR: Concentrate on simple extensions to IDIP.
      5. KL: Write to Dan, telling him we’re doing this.
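The hop-count rule discussed above – block nearest the attacker, and "if you've seen the traffic, don't block" (defer to a neighbor closer to the source) – could be sketched roughly as below. This is a hypothetical illustration using only local information from neighbors, not IDIP's actual protocol; the report fields and the chain example are invented.

```python
# Hypothetical sketch of the local-information blocking rule from the
# discussion, NOT the actual IDIP protocol. Each router knows only its
# own hop distance to the suspected source plus reports from immediate
# neighbors; all field names are invented for illustration.

def should_block(my_hops, saw_traffic, neighbor_reports):
    """Block only if we saw the attack traffic and no neighbor closer
    to the source (fewer hops) also saw it -- i.e., we sit on the
    boundary nearest the attacker."""
    if not saw_traffic:
        return False  # not on the attack path at all
    closer_neighbor_saw = any(
        r["saw_traffic"] and r["hops"] < my_hops
        for r in neighbor_reports
    )
    # "If you've seen the traffic, don't block": defer to a closer router.
    return not closer_neighbor_saw

# Linear chain of routers along the attack path, 1 to 3 hops out:
# only the router nearest the attacker ends up blocking.
chain = [
    (1, [{"hops": 2, "saw_traffic": True}]),
    (2, [{"hops": 1, "saw_traffic": True}, {"hops": 3, "saw_traffic": True}]),
    (3, [{"hops": 2, "saw_traffic": True}, {"hops": 4, "saw_traffic": True}]),
]
blockers = [h for h, reports in chain if should_block(h, True, reports)]
```

In this toy chain only the 1-hop router blocks, giving the boundary around the source described for the worm case, without any discovery coordinator or global view.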
  • CW: Security Metrics and Costs (see handout notes below).
    1. JR: Fill up disk very fast.
      1. CW: Slow connection to site; infinite disk capacity.
      2. KL: The purpose is to imagine what an intruder would want to attack. JS: A metric of attractiveness?
    2. JS: Spiteful model of security – protect everything including things that are not important to us. CW: Intruder will pick site that is easiest to break into, based on vulnerabilities and data intruder wants.
    3. CW: How do we evaluate a site's risk of intrusion – its likelihood of being attacked? JS: Lots of Linux boxes broken into. CW: List what an attacker might have in resources. If there's no way of exploiting the vulnerability, who cares? Minors can attack and not be prosecuted. Senior attackers write scripts for junior attackers to implement.
    4. KL: Cost of defending against an attack; attackers also put resources into attacking. CW: Regulation at a future date – government standards could set a minimal level of protection or require minimum insurance to operate a website – security and insurance.
    5. JR: Can you rank sites? CW: Site A has better info, more vulnerabilities. The metric is totally useless as is – how do you collect and quantify the information? 3s and 4s get attacked instead of 6s and 7s.
      1. CW: Economist Krugman – loss of info as you formalize a model – the maps of Africa example. As mapping techniques improved, cartographers got the coastline right, but the interior of Africa became a void because they weren't willing to put inaccurate information on the map. This will suffer the same problem. If we assemble the metric, it's going to be totally wrong; we need to figure out which parameters are wrong – get feedback. DK: Run experiments, get results.
      2. CW: Government sites have different risk assessments from commercial sites.
        1. Jane Win, Professor of Law: a government or military site has no downside to failing to secure the site. A commercial entity can be sued over a breach in security.
          1. JR: The University isn't on the list – the University is worried about its reputation and bad press. CW: They lose a dollar amount, but aren't concerned about that.
          2. JR: Mrak policy doesn't have influence on how we set up. CW: You used to be able to set up any website on campus; not any more. JS: The University goes far beyond business on security – less liability.
          3. Other organizations worried about reputation: CIA, newspapers, banks. CW: SCC wants to publish on paper before putting it on the wire – the opposite of how things have been progressing. KL: Run a scanner, tools – maybe inadequate. CW: Scanners won't report everything – just some defenses and vulnerabilities – and won't tell you the value of services/data.
          4. CW: What other factors would affect me? DK: Visibility. NY Times more likely to be attacked because of visibility. JS: Sociology.
    [Handout Notes]

    A Simple Security Metric
    Assessing a site's risk of intrusion
    Christopher Wee
    29 April 1999

    A first-order security metric is a weighted sum of:

    1. value of data contained at the site
    2. value of services provided at the site (how to deal with denial of service?)
    3. value of computational resources at the site (e.g., disk, CPU)
    4. security defenses & vulnerabilities
    5. risk of getting caught breaking into the site (the site's policy and procedures determine whether intrusions or attempted intrusions are monitored, how they are responded to, and whether they are prosecuted)
    6. applicable laws that pertain to breaking into the site (if there was an open invitation, is this value zero?)
    7. the bandwidth to the site, and hence the speed at which an attack may be mounted (we can affect this by using a delayed response)
    But an attacker does not decide to attack a site simply by quantifying the security of a site. Instead, the attacker also has finite resources to expend on attacking sites (time, connectivity, tools that exploit the vulnerabilities) and definite needs that must match the site in question.

    I assume that an intruder will evaluate several sites and attack the one with the lowest metric (hence the highest risk of intrusion). So a site's risk of intrusion evolves and changes depending on how well other like-sites (by industry or services offered) are protected.
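The weighted sum and the "attacker picks the lowest-metric site" assumption could be sketched as below. The factor names, weights, and scores are all invented for illustration; the signs merely encode that value and bandwidth make a site more attractive (lower metric) while defenses and the risk of getting caught make it less so.

```python
# Illustrative sketch of the first-order metric: a weighted sum over a
# few of the handout's factors. Weights and scores are made up; a
# lower metric means a more attractive target, i.e., higher risk.

WEIGHTS = {
    "data_value": -1.0,             # valuable data attracts attackers
    "service_value": -1.0,
    "compute_value": -0.5,          # disk, CPU
    "defenses": 2.0,                # defenses & few vulnerabilities deter
    "risk_of_getting_caught": 1.5,  # monitoring, prosecution, laws
    "bandwidth": -0.5,              # fast link -> fast attack
}

def security_metric(site):
    """First-order metric: weighted sum of the site's factor scores."""
    return sum(w * site[factor] for factor, w in WEIGHTS.items())

def likely_target(sites):
    """Handout assumption: the attacker evaluates several like-sites
    and attacks the one with the lowest metric."""
    return min(sites, key=lambda name: security_metric(sites[name]))

sites = {
    "site_a": {"data_value": 8, "service_value": 5, "compute_value": 3,
               "defenses": 2, "risk_of_getting_caught": 1, "bandwidth": 5},
    "site_b": {"data_value": 3, "service_value": 3, "compute_value": 3,
               "defenses": 5, "risk_of_getting_caught": 4, "bandwidth": 2},
}
```

With these made-up numbers site_a scores well below site_b, so site_a is the likely target; hardening site_b further would not reduce site_a's exposure, which matches the point that a site's risk depends on how well its peers are protected.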

    Attacker's resources

    1. time
    2. need for valuable data
    3. need for services (or for services to be denied)
    4. need for computational resources
    5. tools that exploit vulnerabilities
    6. legal standing (minor, repeat offender, etc.)
    7. authorization to enter the site
    8. connectivity and bandwidth (higher in the U.S. and Western Europe than in Africa, most parts of Asia, Russia and India)
    9. number of sources from which to mount the attack
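The list above implies that the metric is relative to a particular attacker: a site's value terms only count if the attacker needs them, and a vulnerability only matters if the attacker holds a tool that exploits it ("if no way of exploiting the vulnerability, who cares?"). A rough sketch of that matching, with all site and attacker data invented:

```python
# Hypothetical sketch of matching attacker resources against a site.
# Site values count only if this attacker needs them; a vulnerability
# counts only if this attacker has a tool for it. All names, including
# the vulnerability identifiers, are invented for illustration.

def attacker_view(site, attacker):
    """Lower score = more attractive to THIS attacker."""
    score = float(site["defenses"])
    # Value terms the attacker actually needs make the site attractive.
    score -= sum(site["values"].get(need, 0) for need in attacker["needs"])
    # Only exploitable vulnerabilities matter.
    score -= sum(1 for v in site["vulnerabilities"] if v in attacker["tools"])
    return score

site = {"defenses": 4,
        "values": {"data": 5, "cpu": 2},
        "vulnerabilities": {"vuln_x", "vuln_y"}}

script_kiddie = {"needs": {"cpu"}, "tools": {"vuln_x"}}
data_thief = {"needs": {"data"}, "tools": {"vuln_x", "vuln_y"}}
```

The same site scores 1.0 for the script kiddie but -3.0 for the data thief, so "risk of intrusion" only makes sense relative to a population of attackers and their finite resources.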

    What is the motivation for a security metric?

    A cost model for:

    1. how to determine the value of the factors
    2. how to determine the weighting used by intruders

    Can we judge several sites?
    1. military and government sites (CDC, Army, CIA, NSA)
    2. defense contractors (e.g., Boeing)
    3. large e-commerce sites (Yahoo, eBay, Amazon, Disney, Warner Brothers)
    4. infrastructure sites (Network Solutions, AOL)
    5. certificate authorities (VeriSign)
    6. financial institutions (BankAmerica, the Securities and Exchange Commission)
    7. portals (AOL)
    8. Internet service providers (EarthLink, AOL)
    Assume that the security defenses of a site may be completely described by an external probe. This is not a good assumption because:

    1. one cannot judge the security procedures and policy from the outside view
    2. one cannot determine what defenses are deployed inside the site
    3. one cannot tell whether the site is being monitored (some monitoring may be visible, e.g., responses to tickling certain ports and services)
    The defenses of a site may instead be reported in an audit of the site by internal auditors.

    The current cost model ignores all but the value of the services of the particular site. Perhaps for the military environment, many of these factors are static (e.g., prosecution is mandated, connectivity is high and bandwidth is not an issue).