Thursday, February 5, 2009

NETWORK MANAGEMENT AND INFORMATION SYSTEMS - Access Control


January-2004 [12]
3.
a) Explain briefly about Mandatory Access Control and Discretionary Access Control.


Mandatory Access Control (MAC) is an access policy determined by the system rather than by the owner of an object: every subject and object carries a security label, and the system enforces fixed rules over those labels. Some MAC frameworks allow new access control modules to be loaded, implementing new security policies; some provide protection for only a narrow subset of the system, hardening a particular service.
Discretionary access control (DAC) means that each object has an owner, and the owner of the object gets to choose its access control policy. Many objects in Windows use this security model, including printers, services, and file shares. All secure kernel objects also use this model, including processes, threads, memory sections, synchronization objects such as mutexes and events, and named pipes.
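As a rough illustration (a hypothetical Python sketch, not from the source answer), the difference is in who decides: under DAC the owner edits the object's ACL, while under MAC the system compares fixed labels and the owner has no say.

    # Hypothetical sketch: DAC lets the owner grant rights;
    # MAC compares system-assigned labels that owners cannot change.
    LEVELS = {"PUBLIC": 0, "SECRET": 1}

    class DacObject:
        def __init__(self, owner):
            self.owner = owner
            self.acl = {owner: {"read", "write"}}  # the owner controls this list

        def grant(self, requester, user, right):
            if requester != self.owner:
                raise PermissionError("only the owner may change the ACL")
            self.acl.setdefault(user, set()).add(right)

        def allowed(self, user, right):
            return right in self.acl.get(user, set())

    def mac_read_allowed(subject_label, object_label):
        # System-enforced rule: the subject's clearance must dominate
        # the object's label; no user can override it.
        return LEVELS[subject_label] >= LEVELS[object_label]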

b) Describe briefly the Bell-La Padula model and its limitations. [6]

The Bell-LaPadula model is designed to facilitate information sharing in a secure manner across
information domains. Within the model, a hierarchy of levels is used to determine appropriate access rights. For example, using conventional DND document labeling standards, SECRET is treated as above CONFIDENTIAL. The Bell-LaPadula model uses axioms of “read-down” and “write-up”. Therefore, assuming appropriate need-to-know, an individual in a SECRET domain is authorized to “read down” into the CONFIDENTIAL domain, since personnel with sufficient clearance for SECRET are also cleared for CONFIDENTIAL. However, the user in the SECRET domain may never be authorized to “write down”, because the clearance in the CONFIDENTIAL domain is not sufficient to handle SECRET information.
Similarly, an individual in a SECRET domain is not authorized to “read up” from a TOP SECRET domain, because the SECRET domain does not include a sufficient clearance. However, an individual in the SECRET domain may be authorized to “write up” to the TOP SECRET domain, because all personnel in the TOP SECRET domain inherently have sufficient clearance to read the lower-domain information.
Limitations
• Restricted to Confidentiality.
• No policies for changing access rights; a complete general downgrade is secure; intended for systems with static security levels.
• Contains covert channels: a low subject can detect the existence of high objects when it is denied access.
• Sometimes, it is not sufficient to hide only the contents of objects. Their existence may have to be hidden, as well.

July-2004 [11]
1.
e) What are access control lists and capability lists? In what ways do they differ in their organization? [4]

An access control list is a file attribute that contains the basic and extended permissions that control access to the file. It corresponds to a column of the access matrix: all the access rights to one object, stored with that object.
One way to partition the matrix is by rows, so that all the access rights of one user are kept together. These are stored in a data structure called a capability list, which lists all the access rights, or capabilities, that a user has. The capability lists for our example are:
Fred --> /dev/console(RW) --> fred/prog.c(RW) --> fred/letter(RW) --> /usr/ucb/vi(X)
Jane --> /dev/console(RW) --> fred/prog.c(R) --> fred/letter() --> /usr/ucb/vi(X)
When a process tries to gain access to an object, the operating system can check the appropriate capability list.
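For illustration, the two capability lists above can be written as a per-user mapping from object to permitted operations (a Python sketch using the names from the example):

    caps = {
        "Fred": {"/dev/console": {"R", "W"}, "fred/prog.c": {"R", "W"},
                 "fred/letter": {"R", "W"}, "/usr/ucb/vi": {"X"}},
        "Jane": {"/dev/console": {"R", "W"}, "fred/prog.c": {"R"},
                 "fred/letter": set(), "/usr/ucb/vi": {"X"}},
    }

    def allowed(user, obj, op):
        # Consult the user's capability list for the requested operation.
        return op in caps.get(user, {}).get(obj, set())

    assert allowed("Fred", "fred/letter", "W")
    assert not allowed("Jane", "fred/letter", "R")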
Comparing ACLs and Capabilities

Two topics come up repeatedly on the EROS mailing lists:
1. How are capabilities and access control lists different?
2. Is one better than the other? If so, why?
They come up so frequently because people use these questions as a way to sharpen their understanding of the issues.
One thing that I learned in my own attempts to answer these questions is that the arguments are actually quite complex, and that the first mistake is to oversimplify them when you are trying to get a handle on them. You can simplify the explanation later; first be sure what it is you are trying to explain.
Since it's very easy to miss important details, this note tries to give my current answer to the first question. It describes how capabilities and access control lists actually work in practice, and therefore how they differ. It may tell you more than you feel you wish to know, but it is as accurate as I can make it without appealing to mathematics.
I clearly have an opinion on the second question, or I wouldn't have built EROS. This note tries not to be partisan, because it is better to understand the basis of the discussion before debating the merits of the outcome.
1. Access Control Lists
An ACL system has at least five namespaces whose relationships need to be considered:
1. The namespace of file names: /tmp/foo
2. The namespace of unique object identifiers: (dev 22, inode 36, type file)
3. The namespace of user identities (uid 52476)
4. For each object type (file, disk, terminal, ...), the namespace of operations that object can perform.
5. The namespace of process identifiers (process 719)
In an access list system, it is assumed that there are two global mappings:
principal: process identity -> user identity
fs_lookup: file name -> object identity
That is, every process has an assigned user identity and every file name can be translated into a unique object identifier. Hanging off of every unique object is a further mapping:
acl: (object identity, user identity) -> operation(s)
Given a process p that wishes to perform an operation op on an object object, the protection mechanism in an access list system is to test the following predicate:
op in acl(object, principal(p))
In the special case of the "open" call, this test is modified to be:
op in acl(fs_lookup(filename), principal(p))
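Rendered as a small Python sketch (illustrative only; the mappings principal, fs_lookup, and acl are the ones defined above, filled with made-up identifiers):

    principal = {719: "uid-52476"}                    # process identity -> user identity
    fs_lookup = {"/tmp/foo": "dev22-inode36"}         # file name -> object identity
    acl = {("dev22-inode36", "uid-52476"): {"read"}}  # (object, user) -> operations

    def acl_check(p, obj, op):
        return op in acl.get((obj, principal[p]), set())

    def open_check(p, filename, op):
        # The special-cased "open" call from the text.
        return acl_check(p, fs_lookup[filename], op)

    assert open_check(719, "/tmp/foo", "read")
    assert not open_check(719, "/tmp/foo", "write")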
2. Capability Systems
A capability system has at least four namespaces whose relationships need to be considered:
1. The namespace of unique object identifiers: (dev 22, inode 36, type file)
2. For each object type (file, disk, terminal, ...), the namespace of operations that object can perform.
3. The namespace of process identifiers (process 719)
4. The namespace of capabilities (object 10, operation set S)
In a capability system, it is assumed that there is one local mapping for each process:
cap: (process identity, index) -> capability
That is, every process has a list of capabilities. Each capability names an object and also names a set of legal operations on that object.
There are also two "accessor" functions:
obj: capability -> object identity
ops: capability -> operations
Given a process p that wishes to perform an operation op on an object object, the process must first possess a capability naming that object. That is, it must possess a capability at some index i such that
object == obj(caps(p,i))
To perform an operation, the process names the "index" i of that capability to be invoked from the per-process list. The protection mechanism in a capability system is to test the following predicate:
op in ops(caps(p,i))
Capability systems typically do not have a distinguished "open" call.
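The corresponding sketch for a capability system (again illustrative; caps, obj, and ops mirror the definitions above, and note the extra index i):

    # Each process holds an indexed list of capabilities:
    # a capability names an object and a set of legal operations.
    caps = {719: [("dev22-inode36", {"read", "write"})]}

    def obj(capability):   # accessor: capability -> object identity
        return capability[0]

    def ops(capability):   # accessor: capability -> operation set
        return capability[1]

    def cap_check(p, i, op):
        # The process names the index i of the capability it invokes;
        # no notion of "principal" appears anywhere.
        return op in ops(caps[p][i])

    assert cap_check(719, 0, "write")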
3. Some Differences
This section is incomplete.
Simply comparing the predicates shows that there is a significant difference between the two systems:
ACL: op in acl(object, principal(p))
Capability: op in ops(caps(p,i))
An obvious difference is that the capability model makes no reference to any notion of "principal".
Another obvious difference is that the capability model has a parameter "i". This allows the process to specify which authority it wants to exercise, which is why only the capability model can solve the confused deputy problem.
Access Rights and Persistence
What happens when the computer shuts down and all of the processes disappear?
In an access control list system, this is not a problem, because the login sessions disappear too. The user identity for a process is derived from who starts it, which is in turn derived from the login session. There is no need to record permissions on a per-process basis.
In a capability system, there is a definite problem. Solutions vary. Some systems provide a means to "pickle" a process or associate an initial capability list with each login. EROS makes all processes persistent.
Least Privilege
Capability systems allow a finer grain of protection. Each process has an exactly specified set of access rights. In contrast, access control list systems are coarser: every process executed by "fred" has the same rights. If you could always trust your programs, the coarser protection is fine. In an era where computer viruses are front page news, it is clearly necessary to be able to run some programs with finer restrictions than others.
Revocation
In an access control list, you can remove a user from the list, and that user can no longer gain access to the object. In a capability system, there is no equivalent operation. This is (arguably) a problem. Users come and go on projects, and you'd like to be able to remove them when they should no longer have access to the project data. There are mechanisms to manage this in capability systems, but they are cumbersome.
Rights Transfer
In general, an access control list does not (in theory) allow rights transfer. If "fred" obtains access to an object, he cannot give this access to "mary" by transferring the object descriptor after the object has been opened. I say "in theory" because fred can still proxy for mary.
In a capability system, capabilities can be transferred. If a process running on behalf of fred can speak to a process running on behalf of mary, then capabilities may be transferred between the two processes. This can be useful: it allows you to hand a capability to a particular document to an editor. It can also be dangerous: if the program you are running is malicious, it can leak your authority.
4. The Equivalence Fallacy
There is an old claim that started appearing very early in papers on protection. The claim is:
Capabilities and access control lists are equivalent. They are simply two different ways to look at a Lampson access matrix. Any protection state that can be captured with one can be captured with the other.
People who have heard of capabilities almost universally believe this claim. Unfortunately, the claim is untrue. Worse, it obscures understanding of protection.
By way of debunking it, let me first explain what is meant by this statement. Then let's look at why it is incorrect.
The Lampson Access Matrix
The Lampson access matrix provides a way to look at the protection state of a system. It describes the access rights of the system at some instant in time. Each subject (a subject is always a process) in the system has a row in the table and each object (which may itself also be a process) has a column. The entries in the table describe what access rights subject S has on object O:
      O1    O2    O3
S1    r     r,w   x
S2    r     r,w
The idea behind the claim is that if you look at a row of the access matrix, you are looking at a capability list, and if you look at a column of the access matrix, you are looking at an access control list:
(The original figure repeated the matrix above twice: once with a row highlighted as the capability view, and once with a column highlighted as the ACL view.)
Unfortunately, this is wrong.
A Problem of Terminology
In the early security literature there was some sloppy use of the term "subject." In some papers the term "subject" was used to mean "process," while in others it was used to mean "principal" (i.e., a user). If we take subject to mean "principal," then a row of the matrix is not a capability list; capabilities do not have anything to do with principals. If we take subject to mean "process," then a column of the matrix is not an access control list; ACLs do not refer to processes.
I have pointed this out to theorists who work on formal verification, and I have seen some good ones wave their hands and say "That's not a problem -- just expand all the processes, discard the notion of user, and it all works just fine, and the two models both fit in the matrix."
This is true in some mathematical sense. The problem is that after you do this you haven't got access control lists any more. Access control lists are specified in terms of users, not processes. One can argue (and I do) that specifying things in terms of processes is the right thing to do, but once you do this expansion you have lost a level of indirection that was crucial in understanding how access control lists work. There are useful properties you can prove about a system with all processes expanded that are not true if the user identity indirection is retained. It all depends on what operations are permitted, which brings me to the second, more serious problem:
A Problem of Evolution
While the terminology problem is fatal, there is a more subtle and more damaging error in the claim: it is a static view of a dynamic system.
If you freeze a computer system in mid-execution, you can draw an access matrix that describes its current protection state, and you can ask if that state is okay. This is a very useful thing to be able to do. In practice, however, we aren't so much interested in what the instantaneous state of a system is as in how that state can evolve.
At a very high level of abstraction, proofs about security mechanisms all work the same way:
1. First you define what a "safe" state is. That is, you specify what it means for the policy to be enforced.
2. Second you establish an initial, "safe" condition. This is done by setting up the initial access matrix that you want. Since you are in control of how the access matrix is initialized, you should be able to establish just about any condition you would like.
The operative word is "should". In some protection systems, however, there are constraints on what can legally be placed in the matrix. A classical access control list system, for example, requires that every process owned by the same subject must have the same permissions. This means that if P1 and P2 are both owned by S1, their rows must be identical.
Unfortunately, it turns out that this is a fairly damaging restriction. It can make setting up the desired initial conditions extremely difficult (sometimes impossible), and it can make the verification of security policies mathematically undecidable -- which means you can't prove the security policy.
3. Third, you specify what the rules are for how the system operates. What are the steps that the machine will perform? How does each step affect the access matrix (if at all)?
4. Finally, you prove that if you start from the specified initial condition, and you take a sequence of steps, you always end up in a "safe" state. Actually, you have to show that this is true for all possible sequences of steps.
2.c) Discuss the no-read-up and no-write-down security policies and the tranquility principle in the Bell-LaPadula security model. [7]


The Bell-LaPadula model focuses on data confidentiality and access to classified information, in contrast to the Biba Integrity Model which describes rules for the protection of data integrity.
In this formal model, the entities in an information system are divided into subjects and objects. The notion of a "secure state" is defined, and it is proven that each state transition preserves security by moving from secure state to secure state, thereby inductively proving that the system satisfies the security objectives of the model. The Bell-LaPadula model is built on the concept of a state machine with a set of allowable states in a system. The transition from one state to another state is defined by transition functions.
A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments, making up the security level) to determine if the subject is authorized for the specific access mode. The clearance/classification scheme is expressed in terms of a lattice. The model defines two mandatory access control (MAC) rules and one discretionary access control (DAC) rule with three security properties:
1. The Simple Security Property states that a subject at a given security level may not read an object at a higher security level (no read-up).
2. The *-property (read star-property) states that a subject at a given security level must not write to any object at a lower security level (no write-down). The *-property is also known as the Confinement property.
3. The Discretionary Security Property uses an access matrix to specify the discretionary access control.
The transfer of information from a high-sensitivity paragraph to a lower-sensitivity document may happen in the Bell-LaPadula model via the concept of trusted subjects. Trusted Subjects are not restricted by the *-property. Untrusted subjects are. Trusted Subjects must be shown to be trustworthy with regard to the security policy.
This security model is directed toward access control and is characterized by the phrase: no read up, no write down. Compare the Biba model, the Clark-Wilson model and the Chinese Wall.
With Bell-LaPadula, users can create content only at or above their own security level (Secret researchers can create Secret or Top-Secret files but may not create Public files): no write-down. Conversely, users can view content only at or below their own security level (Secret researchers can view Public or Secret files, but may not view Top-Secret files): no read-up.
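A minimal Python sketch of the two mandatory rules, using the levels from the example (illustrative only, not a full BLP state machine):

    LEVELS = {"PUBLIC": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def may_read(subject, obj):
        # Simple Security Property: no read-up.
        return LEVELS[subject] >= LEVELS[obj]

    def may_write(subject, obj):
        # *-property: no write-down.
        return LEVELS[subject] <= LEVELS[obj]

    # A Secret researcher can read Public but not Top Secret files,
    # and can create Secret or Top Secret files but not Public ones:
    assert may_read("SECRET", "PUBLIC") and not may_read("SECRET", "TOP SECRET")
    assert may_write("SECRET", "TOP SECRET") and not may_write("SECRET", "PUBLIC")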
The Bell-LaPadula model explicitly defined its scope. It did not treat the following extensively:
• Covert channels. Passing information via pre-arranged actions was described briefly.
• Networks of systems. Later modeling work did address this topic.
• Policies outside multilevel security. Work in the early 1990s showed that MLS is one version of boolean policies, as are all other published policies.
Strong * Property
The Strong * Property is an alternative to the *-property in which subjects may write to objects with only a matching security level. Thus, the write up operation permitted in the usual *-property is not present, only a write to same operation. The Strong * Property is usually discussed in the context of multilevel database management systems and is motivated by integrity concerns.[5]
This Strong * Property was anticipated in the Biba model, where it was shown that strong integrity in combination with the Bell-LaPadula model resulted in reading and writing at a single level.
Tranquility principle
The tranquility principle of the Bell-LaPadula model states that the classification of a subject or object does not change while it is being referenced.
There are two forms to the tranquility principle:
1) The "principle of strong tranquility" states that security levels do not change during the normal operation of the system.
2) The "principle of weak tranquility" states that security levels do not change in a way that violates the rules of a given security policy.
Another interpretation of the tranquility principles is that they both apply only to the period of time during which an operation involving an object or subject is occurring. That is, the strong tranquility principle means that an object's security level/label will not change during an operation (such as read or write); the weak tranquility principle means that an object's security level/label may change in a way that does not violate the security policy during an operation.
Limitations
• Restricted to Confidentiality.
• No policies for changing access rights; a complete general downgrade is secure; intended for systems with static security levels.
• Contains covert channels: a low subject can detect the existence of high objects when it is denied access.
• Sometimes, it is not sufficient to hide only the contents of objects. Their existence may have to be hidden, as well.
January-2005 [4]
1.b) With respect to an operating system, what is the primary security benefit of access control lists? [4]

Access Control Lists (ACLs) allow you to control what clients can access on your server. Directives in an ACL file can:
o Screen out certain hosts to either allow or deny access to part of your server;
o Set up password authentication so that only users who supply a valid login and password may access part of the server;
o Delegate access control authority for a part of the server (such as a virtual host's URL space, or individual users' directories) to another ACL file.

January-2006 [11]
2.
c) What is the importance of "no read up" plus "no write down" rule for a multilevel security system? [3]
January-2007 [6]
6.
c) Why are each initiator and each target assigned to one or more security groups in an access control scheme based on security labels? [6]
7.a) Mandatory Access Control (MAC) is an access policy determined by the system, not the owner. Is it true or false? Justify. [3]

Mandatory Access Control
Mandatory access control (MAC) is an access policy determined by the system, not the owner. MAC is used in multilevel systems that process highly sensitive data, such as classified government and military information. A multilevel system is a single computer system that handles multiple classification levels between subjects and objects.
• Sensitivity labels: In a MAC-based system, all subjects and objects must have labels assigned to them. A subject's sensitivity label specifies its level of trust. An object's sensitivity label specifies the level of trust required for access. In order to access a given object, the subject must have a sensitivity level equal to or higher than the requested object.
• Data import and export: Controlling the import of information from other systems and export to other systems (including printers) is a critical function of MAC-based systems, which must ensure that sensitivity labels are properly maintained and implemented so that sensitive information is appropriately protected at all times.
Two methods are commonly used for applying mandatory access control:
• Rule-based access controls: This type of control further defines specific conditions for access to a requested object. All MAC-based systems implement a simple form of rule-based access control to determine whether access should be granted or denied by matching:
o An object's sensitivity label
o A subject's sensitivity label
• Lattice-based access controls: These can be used for complex access control decisions involving multiple objects and/or subjects. A lattice model is a mathematical structure that defines greatest lower-bound and least upper-bound values for a pair of elements, such as a subject and an object.
Few systems implement MAC. XTS-400 is an example of one that does.
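As a sketch of the lattice idea (assuming the common representation in which a label is a classification plus a set of compartments), dominance, least upper bound, and greatest lower bound can be computed directly:

    LEVELS = {"PUBLIC": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

    def dominates(a, b):
        # (classification, compartments) a dominates b iff its level is at
        # least as high and its compartment set is a superset.
        (ca, sa), (cb, sb) = a, b
        return LEVELS[ca] >= LEVELS[cb] and sa >= sb

    def lub(a, b):
        # Least upper bound: higher classification, union of compartments.
        (ca, sa), (cb, sb) = a, b
        return (ca if LEVELS[ca] >= LEVELS[cb] else cb, sa | sb)

    def glb(a, b):
        # Greatest lower bound: lower classification, intersection.
        (ca, sa), (cb, sb) = a, b
        return (ca if LEVELS[ca] <= LEVELS[cb] else cb, sa & sb)

    subject = ("SECRET", {"crypto"})
    obj = ("CONFIDENTIAL", {"crypto"})
    assert dominates(subject, obj)    # read access would be allowed
    assert lub(subject, obj) == ("SECRET", {"crypto"})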
4. Security Policy Design

January-2004 [10]
1.
b) Explain what a challenge-response system is. [4]
A challenge-response system is a program that replies to an e-mail message from an unknown sender by subjecting the sender to a test (called a CAPTCHA) designed to differentiate humans from automated senders. The system ensures that messages from people can get through and the automated mass mailings of spammers will be rejected. Once a sender has passed the test, the sender is added to the recipient's whitelist of permitted senders that won't have to prove themselves each time they send a message.
Challenge-response systems take a number of different approaches to the task of separating humans from machines. Typically, when a message is received, the system sends a reply that includes a URL linking the user to a Web site. At the Web site, the user is asked to perform some task that, while easy for a human, is beyond the capabilities of an automated spamming program. The system might ask for the answer to a simple question, for example, or require the user to copy distorted letters or numbers displayed in an image.
Companies that provide free e-mail accounts often use a challenge-response system to ensure that their accounts aren't given out to spammers' programs. According to Carnegie Mellon's CAPTCHA Project, computerized programs can create thousands of new e-mail accounts per second, each of which can be used to send out reams of spam.

3. What are the essential components of a corporate security policy? [6]
3.c) The Three Components of an Effective Security Policy
While an information security policy is commonly referred to in the singular, an actual policy includes a suite of living documents: the security policy document, a standards document set and a procedures document set. While the policy itself gets the most attention, it often is the shortest document, sometimes taking up only two full pages.
An information security policy makes up for its brevity with the importance of its content. There are usually four key elements to a successful security policy document: to whom and what the policy applies, the need for adherence, a general description, and consequences of nonadherence. These four tenets of the policy provide the foundation for the remaining documents. Once this document is finished, it must be approved by the most senior manager in the organization and then made available to all employees.
The standards document set defines what needs to be done to implement security, including descriptions of required security controls and how those controls apply to the corporate environment. The document set should address a variety of security issues, including, but not limited to, the following: roles and responsibilities of security personnel, protection against malicious code, information and software exchange, user responsibilities, mobile computing, and access control. In addition to the common security concerns, the standards document set outlines compliance issues, government regulations and industry standards.
Much like the security policy document, the information security standards document does not usually need to be changed. Only if new systems, applications or regulations are introduced will the document set need to be modified.
The procedures document set makes up the final component of the corporate information security policy suite. This document should be the biggest of the three components, and it will also be the most flexible. This document set specifically outlines how security controls will be implemented and managed. The procedures should map to the accompanying standards, since a given standard may require many tasks to be completed to achieve full compliance. This document provides many of the details that can make or break an effective information security policy.
Making the Policy Count—Enforcement
Once the hard work of creating an information security policy and getting it approved is finally done, the enforcement of the policy begins. All the effort put into creating the policy is of little worth unless the policy is followed by the corporation and sufficiently enforced. A compliance program or a policy assessment can be instrumental in assisting an organization’s attempts to enforce an information security policy.
A policy compliance review reveals whether a designed security control is employed and used correctly. Policy compliance reviews differ from traditional vulnerability assessments in many ways. For example, IT and security auditors should handle policy compliance reviews, while security operations personnel should handle vulnerability assessments. Also, policy compliance reviews are used to determine compliance of systems and applications under the new policy, while vulnerability assessments pinpoint specifically the vulnerabilities in systems and applications. Finally, policy compliance reviews use standards and regulations such as ISO 17799 and HIPAA as baselines for measurement, while vulnerability assessments traditionally use security incident and other vulnerability databases for tracking.
Together, policy compliance reviews and vulnerability assessments are critical first-line tactics to proactively defend against escalating security threats. Policy compliance reviews ensure that policy objectives are being met, and vulnerability assessments contribute to overall resiliency by identifying vulnerabilities.
Security compliance tools are available to help corporate systems comply with information security policies and regulatory standards. These compliance tools also aid in discovering, containing and fixing unpatched vulnerabilities. The tools are able to define a policy online in a database and automatically measure compliance across the network. In some cases, policy compliance data can be correlated with other security event data from a wide range of other security sources, including antivirus software, firewalls, intrusion detection systems and vulnerability assessment products.
The Policy Serves as the Foundation
With security threats inundating IT administrators and government regulations forcing corporate compliance, organizations can streamline their security efforts by creating and enforcing strong information security policies. As Internet threats increase and as government regulations become more stringent, the importance of solid security policies will also increase.
July-2004 [7]
1.c) A Data entry firm experiences on an average a loss of 10 files of 1000 bytes each per day due to power failures. The loss probability is 0.9. The cost of keying in a character is Rs. 0.005. At what cost burden the firm should consider putting in a loss prevention mechanism? [4]
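The source gives no worked answer; a sketch of the expected-loss calculation using the stated figures (and assuming a 365-day year):

    files_per_day = 10
    chars_per_file = 1000
    cost_per_char = 0.005     # Rs. per character re-keyed
    loss_probability = 0.9

    daily_expected_loss = files_per_day * chars_per_file * cost_per_char * loss_probability
    print(daily_expected_loss)          # Rs. 45 per day
    print(daily_expected_loss * 365)    # about Rs. 16,425 per year

On these figures, a loss-prevention mechanism is worth considering only while its cost burden stays below roughly Rs. 45 per day (about Rs. 16,425 per year).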
2.
b) What is the basic purpose of a security model for computer systems? [3]
A computer security model is a scheme for specifying and enforcing security policies. A security model may be founded upon a formal model of access rights, a model of computation, a model of distributed computing, or no particular theoretical grounding at all.

January-2006 [10]
1.d) State four primary functions of CERT. [4]
7. Write short notes on any three:
iii) Risk Assessment (RA) [6]

The Risk Assessment (RA) Policy document establishes the activities that need to be carried out by each Business Unit, Technology Unit, and Corporate Units (departments) within the organization. All departments must utilize this methodology to identify current risks and threats to the business and implement measures to eliminate or reduce those potential risks.
A risk assessment is performed in four distinct steps: data compilation and evaluation, exposure assessment, toxicity assessment, and risk characterization.
Step 1: Data Compilation and Evaluation
Objective: To verify that the data are appropriate for use and are considered to be representative of current conditions.
(Original flowchart: compile all available data; sort by environmental medium; evaluate data relative to established criteria.)
Step 2: Exposure Assessment
Objective: To estimate the type and magnitude of exposures from the chemicals of potential concern that are present at or migrating from a site/facility.
• Characterization of the Exposure Setting
o Characterizing the physical environment
o Identifying potential land-use scenarios
• Identification of Exposure Pathways (the original figure here showed the components of an exposure pathway)
• Quantification of Exposure
Step 3: Toxicity Assessment
• Hazard identification—determines whether exposure to a chemical can increase the incidence of a particular adverse health effect and the likelihood of occurrence in humans.
• Dose-response assessment—presents the relationship between the magnitude of exposure and adverse effects.
Step 4: Risk Characterization
• Review toxicity and exposure assessment output
• Quantify risks
• Combine risks across all pathways
• Assess and present uncertainties
• Consider site-specific human studies, if available
• Summarize and present baseline risk assessment characterization results


July-2006 [10]
1.
g) What are main services provided by Computer security incident response teams? [4]

A Computer Security Incident Response Team (CSIRT) is a service organization that is responsible for receiving, reviewing, and responding to computer security incident reports and activity. Their services are usually performed for a defined constituency that could be a parent entity such as a corporate, governmental, or educational organization; a region or country; a research network; or a paid client.
A CSIRT can be a formalized team or an ad hoc team. A formalized team performs incident response work as its major job function. An ad hoc team is called together during an ongoing computer security incident or to respond to an incident when the need arises.
A CSIRT may perform both reactive and proactive functions to help protect and secure the critical assets of an organization. There is no single standard set of functions or services that a CSIRT provides. Each team chooses its services based on the needs of its constituency. For a discussion of the wide range of services that a CSIRT can choose to provide, please see section 2.3 of the Handbook for CSIRTs.
Whatever services a CSIRT chooses to provide, its goals must be based on the business goals of the constituent or parent organizations. Protecting critical assets is key to the success of both an organization and its CSIRT. The CSIRT must enable and support the critical business processes and systems of its constituency.
A CSIRT is similar to a fire department. Just as a fire department "puts out a fire" that has been reported, a CSIRT helps organizations contain and recover from computer security breaches and threats. The process by which a CSIRT does this is called incident handling. But just as a fire department performs fire education and safety training as a proactive service, a CSIRT can also provide proactive services. These types of services may include security awareness training, intrusion detection, penetration testing, documentation, or even program development. These proactive services can help an organization not only prevent computer security incidents but also decrease the response time involved when an incident occurs.

5.
b) What are the procedures involved in Quantitative Risk Assessment? How is the Annualized Loss Expectancy (ALE) calculated? [6]
Quantitative Risk Assessment (QRA) is a formalised specialist method for calculating numerical individual, environmental, employee and public risk level values for comparison with regulatory risk criteria.

Satisfactory demonstration of acceptable risk levels is often a requirement for approval of major hazard plant construction plans, including transmission pipelines, offshore platforms and LNG storage and import sites.

Each demonstration must be reviewed periodically to show that risks are controlled to an acceptable level according to applicable legislation and internal company governance requirements.
The Annualized Loss Expectancy (ALE) is the monetary loss that can be expected for an asset due to a risk over a one-year period. It is defined as:
ALE = SLE * ARO
where SLE is the Single Loss Expectancy and ARO is the Annualized Rate of Occurrence.
An important feature of the Annualized Loss Expectancy is that it can be used directly in a cost-benefit analysis. If a threat or risk has an ALE of $5,000, then it may not be worth spending $10,000 per year on a security measure which will eliminate it.
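A one-function Python sketch of the formula and the cost-benefit test just described (values taken from the example above):

    def ale(sle, aro):
        # Annualized Loss Expectancy = Single Loss Expectancy
        #                              * Annualized Rate of Occurrence
        return sle * aro

    threat_ale = ale(sle=5000, aro=1.0)   # the $5,000-per-year example
    countermeasure_cost = 10000           # per year
    worth_it = countermeasure_cost < threat_ale
    print(worth_it)   # False: the control costs more than the loss it prevents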
One thing to remember when using the ALE value is that, when the Annualized Rate of Occurrence is of the order of one loss per year, there can be considerable variance in the actual loss. For example, suppose the ARO is 0.5 and the SLE is $10,000. The Annualized Loss Expectancy is then $5,000, a figure we may be comfortable with. Using the Poisson distribution we can calculate the probability of a specific number of losses occurring in a given year:
Number of Losses in Year    Probability    Annual Loss
0                           0.6065         $0
1                           0.3033         $10,000
2                           0.0758         $20,000
≥3                          0.0144         ≥$30,000
We can see from this table that the probability of a loss of $20,000 is 0.0758, and that the probability of losses being $30,000 or more is approximately 0.0144. Depending upon our tolerance to risk and our organization's ability to withstand higher value losses, we may consider that a security measure which costs $10,000 per year to implement is worthwhile, even though it is more than the expected losses due to the threat.
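The table above can be reproduced with the Poisson probability mass function (a short sketch, standard library only):

    import math

    ARO = 0.5      # expected number of losses per year (the Poisson mean)
    SLE = 10000    # dollars per loss

    def poisson(k, lam):
        return math.exp(-lam) * lam ** k / math.factorial(k)

    for k in range(3):
        print(k, round(poisson(k, ARO), 4), k * SLE)
    tail = 1 - sum(poisson(k, ARO) for k in range(3))
    print(">=3", round(tail, 4))   # probability of $30,000 or more in losses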

7.c) What is network management performance? What are the factors that affect the performance of network? [6]

Network performance management is the discipline of optimizing how networks function, trying to deliver the lowest latency, highest capacity, and maximum reliability despite intermittent failures and limited bandwidth.
Factors affecting network performance
Unfortunately, not all networks are the same. As data is broken into component parts (often known as frames, packets, or segments) for transmission, several factors can affect their delivery.
• Latency: It can take a long time for a packet to be delivered across intervening networks. In reliable protocols where a receiver acknowledges delivery of each chunk of data, it is possible to measure this as round-trip time.
• Packet loss: In some cases, intermediate devices in a network will lose packets. This may be due to errors, to overloading of the intermediate network, or to intentional discarding of traffic in order to enforce a particular service level.
• Retransmission: When packets are lost in a reliable network, they are retransmitted. This incurs two delays: First, the delay from re-sending the data; and second, the delay resulting from waiting until the data is received in the correct order before forwarding it up the protocol stack.
• Throughput: The amount of traffic a network can carry is measured as throughput, usually in terms such as kilobits per second. Throughput is analogous to the number of lanes on a highway, whereas latency is analogous to its speed limit.
These factors, and others (such as the performance of the network signaling on the end nodes, compression, encryption, concurrency, and so on) all affect the effective performance of a network. In some cases, the network may not work at all; in others, it may be slow or unusable. And because applications run over these networks, application performance suffers. Various intelligent solutions are available to ensure that traffic over the network is effectively managed to optimize performance for all users. See Traffic Shaping.

January-2008 [14]
1.
b) What are the different types of messages defined in SNMP? [4]
What are the Basic Components of SNMP?

An SNMP-managed network consists of three key components: managed devices, agents, and network management systems (NMS).
• Managed devices
– Contain an SNMP agent and reside on a managed network.
– Collect and store management information and make it available to NMS by using SNMP.
– Include routers, access servers, switches, bridges, hubs, hosts, or printers.
• Agent—A network-management software module, such as the Cisco IOS software, that resides in a managed device. An agent has local knowledge of management information and makes that information available by using SNMP.
• Network Management Systems (NMS)—Run applications that monitor and control managed devices. NMSs provide the resources required for network management. In the case study, the NMS applications are:
– UCD-SNMP
– MRTG
– HPOV
– CW2000 RME
Figure 1 illustrates the relationship between the managed devices, the agent, and the NMS.
Figure 1: An SNMP-Managed Network
About Basic SNMP Message Types and Commands
There are three basic SNMP message types:
• Get—NMS-initiated requests used by an NMS to monitor managed devices. The NMS examines different variables that are maintained by managed devices.
• Set—NMS-initiated commands used by an NMS to control managed devices. The NMS changes the values of variables stored within managed devices.
• Trap—Agent-initiated messages sent from a managed device to report events to the NMS.
The Cisco IOS generates SNMP traps for many distinct network conditions. Through SNMP traps, the Network Operations Center (NOC) is notified of network events, such as:
– Link up/down changes
– Configuration changes
– Temperature thresholds
– CPU overloads


Note: For a list of Cisco-supported SNMP traps, go to http://www.cisco.com/public/mibs/traps/

Figure 2: SNMP Event Interactions Between the NMS and the Agent
What are SNMP MIBs?
A Management Information Base (MIB):
• Presents a collection of information that is organized hierarchically.
• Is accessed by using a network-management protocol, such as SNMP.
• References managed objects and object identifiers.
Managed object—A characteristic of a managed device. Managed objects reference one or more object instances (variables). Two types of managed objects exist:
• Scalar objects—Define a single object instance.
• Tabular objects—Define multiple-related object instances that are grouped together in MIB tables.
Object identifier (or object ID)—Identifies a managed object in the MIB hierarchy. The MIB hierarchy is depicted as a tree with a nameless root. The levels of the tree are assigned by different organizations and vendors.
Figure 3: The MIB Tree and Its Various Hierarchies
As shown in Figure 3, top-level MIB object IDs belong to different standards organizations, while low-level object IDs are allocated by associated organizations. Vendors define private branches that include managed objects for their products. Nonstandard MIBs are typically in the experimental branch.
A managed object has these unique identities:
• The object name—For example, iso.identified-organization.dod.internet.private.enterprise.cisco.temporary variables.AppleTalk.atInput
or
• The equivalent object descriptor—For example, 1.3.6.1.4.1.9.3.3.1.
SNMP must account for and adjust to incompatibilities between managed devices. Different computers use different data-representation techniques, which can compromise the ability of SNMP to exchange information between managed devices.
What is SNMPv1?
SNMPv1 is the initial implementation of the SNMP protocol and is described in RFC 1157 (http://www.ietf.org/rfc/rfc1157).
SNMPv1:
• Functions within the specifications of the Structure of Management Information (SMI).
• Operates over protocols such as User Datagram Protocol (UDP), Internet Protocol (IP), OSI Connectionless Network Service (CLNS), AppleTalk Datagram-Delivery Protocol (DDP), and Novell Internet Packet Exchange (IPX).
• Is the de facto network-management protocol in the Internet community.
The SMI defines the rules for describing management information by using Abstract Syntax Notation One (ASN.1). The SNMPv1 SMI is defined in RFC 1155 (http://www.ietf.org/rfc/rfc1155). The SMI makes three specifications:
• ASN.1 data types
• SMI-specific data types
• SNMP MIB tables
SNMPv1 and ASN.1 Data Types
The SNMPv1 SMI specifies that all managed objects must have a subset of associated ASN.1 data types. Three ASN.1 data types are required:
• Name—Serves as the object identifier (object ID).
• Syntax—Defines the data type of the object (for example, integer or string). The SMI uses a subset of the ASN.1 syntax definitions.
• Encoding—Describes how information associated with a managed object is formatted as a series of data items for transmission over the network.
SNMPv1 and SMI-Specific Data Types
The SNMPv1 SMI specifies the use of many SMI-specific data types, which are divided into two categories:
• Simple data types—Including these three types:
– Integers—A signed integer in the range of -2,147,483,648 to 2,147,483,647.
– Octet strings—Ordered sequences of zero to 65,535 octets.
– Object IDs—Come from the set of all object identifiers allocated according to the rules specified in ASN.1.
• Application-wide data types—Including these seven types:
– Network addresses—Represent addresses from a protocol family. SNMPv1 supports only 32-bit IP addresses.
– Counters—Nonnegative integers that increase until they reach a maximum value; then, the integers return to zero. In SNMPv1, a 32-bit counter size is specified.
– Gauges—Nonnegative integers that can increase or decrease but retain the maximum value reached.
– Time ticks—Hundredths of a second since some event.
– Opaques—An arbitrary encoding that is used to pass arbitrary information strings that do not conform to the strict data typing used by the SMI.
– Integers—Signed integer-valued information. This data type redefines the integer data type, which has arbitrary precision in ASN.1 but bounded precision in the SMI.
– Unsigned integers—Unsigned integer-valued information that is useful when values are always nonnegative. This data type redefines the integer data type, which has arbitrary precision in ASN.1 but bounded precision in the SMI.
The SNMPv1 SMI defines structured tables that are used to group the instances of a tabular object (an object that contains multiple variables). Tables contain zero or more rows that are indexed to allow SNMP to retrieve or alter an entire row with a single Get, GetNext, or Set command.
SNMPv1 Protocol Operations
SNMP is a simple request-response protocol. The NMS issues a request, and managed devices return responses. This behavior is implemented by using one of four protocol operations:
• Get—Used by the NMS to retrieve the value of one or more object instances from an agent. If the agent responding to the Get operation cannot provide values for all the object instances in a list, the agent does not provide any values.
• GetNext—Used by the NMS to retrieve the value of the next object instance in a table or list within an agent.
• Set—Used by the NMS to set the values of object instances within an agent.
• Trap—Used by agents to asynchronously inform the NMS of a significant event.
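A toy Python model of these four operations (purely illustrative: it mimics the request-response semantics, not the real SNMP wire format, and uses string order as a stand-in for true OID order):

    class ToyAgent:
        def __init__(self, mib, notify):
            self.mib = dict(mib)    # OID -> value, e.g. {"1.3.6.1.2.1.1.5.0": "router1"}
            self.notify = notify    # NMS callback used for traps

        def get(self, *oids):
            # SNMPv1 Get is all-or-nothing: no values if any OID is missing.
            if not all(o in self.mib for o in oids):
                return None
            return {o: self.mib[o] for o in oids}

        def get_next(self, oid):
            # Return the next object instance in (simplified) OID order.
            for candidate in sorted(self.mib):
                if candidate > oid:
                    return candidate, self.mib[candidate]
            return None

        def set(self, oid, value):
            self.mib[oid] = value

        def trap(self, event):
            # Agent-initiated, asynchronous notification to the NMS.
            self.notify(event)

    agent = ToyAgent({"1.3.6.1.2.1.1.5.0": "router1"}, notify=print)
    print(agent.get("1.3.6.1.2.1.1.5.0"))
    agent.trap("link down on interface 2")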
What is SNMPv2?
SNMPv2 is an improved version of SNMPv1. Originally, SNMPv2 was published as a set of proposed Internet standards in 1993; currently, it is a Draft Standard. As with SNMPv1, SNMPv2 functions within the specifications of the SMI. SNMPv2 offers many improvements to SNMPv1, including additional protocol operations.
SNMPv2 and SMI
The SMI defines the rules for describing management information by using ASN.1.
RFC 1902 (http://www.ietf.org/rfc/rfc1902) describes the SNMPv2 SMI and enhances the SNMPv1 SMI-specific data types by including:
• Bit strings—Comprise zero or more named bits that specify a value.
• Network addresses—Represent an address from a protocol family. SNMPv1 supports 32-bit IP addresses, but SNMPv2 can support other types of addresses too.
• Counters—Non-negative integers that increase until they reach a maximum value; then, the integers return to zero. In SNMPv1, a 32-bit counter size is specified. In SNMPv2, 32-bit and 64-bit counters are defined.
SMI Information Modules
The SNMPv2 SMI specifies information modules, which include a group of related definitions. Three types of SMI information modules exist:
• MIB modules—Contain definitions of interrelated managed objects.
• Compliance statements—Provide a systematic way to describe a group of managed objects that must conform to a standard.
• Capability statements—Used to indicate the precise level of support that an agent claims with respect to a MIB group. An NMS can adjust its behavior towards agents according to the capability statements associated with each agent.
SNMPv2 Protocol Operations
The Get, GetNext, and Set operations used in SNMPv1 are exactly the same as those used in SNMPv2. SNMPv2, however, adds and enhances protocol operations. The SNMPv2 trap operation, for example, serves the same function as the one used in SNMPv1. However, a different message format is used.
SNMPv2 also defines two new protocol operations:
• GetBulk—Used by the NMS to efficiently retrieve large blocks of data, such as multiple rows in a table. GetBulk fills a response message with as much of the requested data as will fit; if the agent responding to a GetBulk operation cannot provide values for all the variables in a list, it provides partial results.
• Inform—Allows one NMS to send trap information to another NMS and receive a response.
About SNMP Management
SNMP is a distributed-management protocol. A system can operate exclusively as an NMS or an agent, or a system can perform the functions of both.
When a system operates as both an NMS and an agent, another NMS can require the system to:
• Query managed devices and provide a summary of the information learned.
• Report locally stored management information.
About SNMP Security
SNMP lacks authentication capabilities, which results in a variety of security threats:
• Masquerading—An unauthorized entity attempting to perform management operations by assuming the identity of an authorized management entity.
• Modification of information—An unauthorized entity attempting to alter a message generated by an authorized entity, so the message results in unauthorized accounting management or configuration management operations.
• Message sequence and timing modifications—Occurs when an unauthorized entity reorders, delays, or copies and later replays a message generated by an authorized entity.
• Disclosure—Results when an unauthorized entity extracts values stored in managed objects. The entity can also learn of notifiable events by monitoring exchanges between managers and agents.

3.
b) What is a SNMP? Explain the SNMP model of a managed network with block diagram showing all the components. [10]
SNMP is used in network management systems to monitor network-attached devices for conditions that warrant administrative attention. It consists of a set of standards for network management, including an Application Layer protocol, a database schema, and a set of data objects.[1]
SNMP exposes management data in the form of variables on the managed systems, which describe the system configuration. These variables can then be queried (and sometimes set) by managing applications.
SNMP basic components
An SNMP-managed network consists of three key components:
• Managed devices
• Agents
• Network-management systems (NMSs)
A managed device is a network node that contains an SNMP agent and that resides on a managed network. Managed devices collect and store management information and make this information available to NMSs using SNMP. Managed devices, sometimes called network elements, can be any type of device including, but not limited to, routers, access servers, switches, bridges, hubs, IP telephones, computer hosts, and printers.
An agent is a network-management software module that resides in a managed device. An agent has local knowledge of management information and translates that information into a form compatible with SNMP.
A network management system (NMS) executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network.
The SNMP concepts
The SNMP model defines two entities, which work in a client-server mode.
The server part is called an agent and is located on the device to be supervised. The client part is the SNMP manager, in charge of data collection and display. (SNMPv3 drops the manager/agent vocabulary and calls both "entities.") The agent listens for requests coming from the manager on UDP port 161, while the manager listens for "trap" alarms coming from the agent on UDP port 162.

The SNMP manager
The SNMP manager should be installed on a powerful system connected to the enterprise network. Another common name for it is the management station.
Its job is to acquire, via SNMP requests, information about the devices connected to the network. The gathered information is then processed and displayed in tables, graphs, gauges, and histograms for easier interpretation by a human being.
The management station includes the following components:
The graphical user interface
Used to display collected data in a friendly manner.
The database
The database is used by the manager to store collected data.
The transport protocol
The protocol used to communicate between manager and agent.
The SNMP engine
This is the kernel of the application; it coordinates all the tasks like an orchestra conductor.
Agent management profiles
A set of rules that defines how to access the agents. These profiles help to build the topology map.
The SNMP agent
The agent is a mix of software and hardware, or software only, and is located in the device. Most network devices are equipped with one by default, and other systems running a standard operating system can behave like an agent by running a simple process. Windows platforms, Novell servers, and Unix and Linux systems all have their own agents. Most hubs and MAUs are also manageable.
Agents are composed of:
• A transport protocol stack—responsible for sending and receiving SNMP packets.
• An SNMP engine—processes requests and formats data.
• Management profiles—the rules that control access to MIB variables and determine which requests are authorized.
Objects and MIBs
OID
Each agent has a set of associated objects that could be interrogated by the management station. An object is the abstraction of a physical or logical device component. A table object is a set of objects grouped in a table.
Examples of physical elements in a device: power supply, fans, boards, probe …
Examples of logical elements: processes, buffers, files…
Objects are defined in MIB (Management Information Base) files. Agents compatible with SNMP version 1 should at least be able to support the objects defined in RFC 1155 and RFC 1213, which are in fact standard MIB files.
To simplify reading the objects contained in an agent, a standard hierarchical structure is used.
