Thursday, February 5, 2009

NETWORK MANAGEMENT AND INFORMATION SYSTEMS - Web Security & Application Security, Firewalls & Intrusion Detection Systems, and 11. Law & Investigation

9. Web Security & Application Security

January-2004 [10]
5.
a) Briefly explain how cookies pose a security threat. [4]

Cookies: websites store these on your PC to remember data about you, so companies can use the information to identify you on subsequent visits to their sites. It is best to disable cookies and enable them only if you visit a site that requires them.
A cookie is nothing more than a variable that can hold some value. For example, if I am surfing a website that contains regional content, at some point I will be asked what city I live in. If I type in the answer, then a cookie called "city" may be stored on my local computer. Now there is a variable called city holding the text "Victoria". This data is stored on my local computer, and the only website that may read it is the website that stored it.
Cookies can only be read by the same website that created the cookie. So, the website that creates the cookie is the website that reads the cookie, and websites cannot set up cookies for each other. For your information to get spread around, the site that collected the information (the website you typed answers into) has to misuse it. You will never hit a site by accident and have all your cookies read.
Can your cookies be stolen?
Yes, there are two ways to steal cookie information. Because cookies are stored on your computer as small text files, a person can sit down at your computer and copy the cookie files along with all the other data on your computer. Alternatively, you may unwittingly download and run a Trojan application.
If your corporate website uses cookies to differentiate between surfers, then a surfer could alter the cookies (because cookies are stored on their local machine, not on your server) and:
• pretend to be someone else,
• create errors on your server (by replacing variable data with server-side executable code).
To solve this problem many companies encrypt or sign their cookies, so that tampering with a cookie will just break the cookie. Your system will then ignore the cookie, or deny that person access.
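To make the tamper-protection idea concrete, here is a minimal sketch (in Python, with an illustrative secret and helper names that are not from any particular framework) of how a server might sign a cookie value with an HMAC so that any client-side tampering simply breaks the cookie:

import hmac, hashlib

SECRET_KEY = b"server-side-secret"  # illustrative; kept on the server, never sent to the browser

def make_cookie(value: str) -> str:
    # Append an HMAC of the value; a client cannot forge this without the key.
    sig = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{sig}"

def read_cookie(cookie: str) -> str | None:
    # Recompute the HMAC; a tampered value breaks the cookie and is rejected.
    value, _, sig = cookie.rpartition("|")
    expected = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(sig, expected) else None

city = make_cookie("Victoria")
print(read_cookie(city))                 # Victoria
print(read_cookie("Paris|" + "0" * 64))  # None - tampering detected, cookie ignored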
7. Write short notes on the following:
b) Secure Socket Layer. [6]

Secure Sockets Layer (SSL) technology protects your Web site and makes it easy for your Web site visitors to trust you in three essential ways:
1. An SSL Certificate enables encryption of sensitive information during online transactions.
2. Each SSL Certificate contains unique, authenticated information about the certificate owner.
3. A Certificate Authority verifies the identity of the certificate owner when the certificate is issued.
You need SSL if...
• you have an online store or accept online orders and credit cards
• you offer a login or sign in on your site
• you process sensitive data such as address, birth date, license, or ID numbers
• you need to comply with privacy and security requirements
• you value privacy and expect others to trust you.
How Encryption Works
Imagine sending mail through the postal system in a clear envelope. Anyone with access to it can see the data. If it looks valuable, they might take it or change it. An SSL Certificate establishes a private communication channel enabling encryption of the data during transmission. Encryption scrambles the data, essentially creating an envelope for message privacy.
Each SSL Certificate consists of a public key and a private key. The public key is used to encrypt information and the private key is used to decipher it. When a Web browser points to a secured domain, a Secure Sockets Layer handshake authenticates the server (Web site) and the client (Web browser). An encryption method is established with a unique session key and secure transmission can begin. True 128-bit SSL Certificates enable every site visitor to experience the strongest SSL encryption available to them.
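As a small sketch of the handshake just described (the hostname here is only an example), this Python snippet opens a TLS connection, during which the server is authenticated against its certificate and a unique session key is negotiated, then prints the agreed parameters:

import socket, ssl

ctx = ssl.create_default_context()  # loads trusted CA roots; verifies hostname and certificate

with socket.create_connection(("www.example.com", 443)) as sock:
    # The SSL/TLS handshake happens inside wrap_socket.
    with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
        print(tls.version())                 # e.g. TLSv1.3
        print(tls.cipher())                  # (cipher name, protocol, secret bits)
        print(tls.getpeercert()["subject"])  # the authenticated owner information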
How Authentication Works
Imagine receiving an envelope with no return address and a form asking for your bank account number. Every VeriSign® SSL Certificate is created for a particular server in a specific domain for a verified business entity. When the SSL handshake occurs, the browser requires authentication information from the server. By clicking the closed padlock in the browser window or certain SSL trust marks (such as the VeriSign Secured® Seal), the Web site visitor sees the authenticated organization name. In high-security browsers, the authenticated organization name is prominently displayed and the address bar turns green when an Extended Validation SSL Certificate is detected. If the information does not match or the certificate has expired, the browser displays an error message or warning.
Why Authentication Matters
Like a passport or a driver’s license, an SSL Certificate is issued by a trusted source, known as the Certificate Authority (CA). Many CAs simply verify the domain name and issue the certificate. VeriSign verifies the existence of your business, the ownership of your domain name, and your authority to apply for the certificate, a higher standard of authentication.
VeriSign Extended Validation (EV) SSL Certificates meet the highest standard in the Internet security industry for Web site authentication as required by CA/Browser Forum. EV SSL Certificates give high-security Web browsers information to clearly display a Web site’s organizational identity. The high-security Web browser’s address bar turns green and reveals the name of the organization that owns the SSL Certificate and the SSL Certificate Authority that issued it. Because VeriSign is the most recognized name in online security, VeriSign SSL Certificates with Extended Validation will give Web site visitors an easy and reliable way to establish trust online.
July-2004 [6]
6. a) What are the different levels in TCP/IP at which WEB security may be implemented? Illustrate with examples. [6]
Level 3: Basic IP/URL filtering on a router/firewall. Usually implemented as a whitelist, though it can be implemented as a blacklist too. Port filtering is another example of an L3 security measure.

Level 4: Application layer. This can be a lot of things, from only allowing web viewing through an authenticating, filtering proxy server to monitoring programs that plug into IE and block offensive content.
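To make the contrast concrete, here is a toy sketch (Python, with invented rule values): a Level 3 decision uses only addresses and ports taken from packet headers, while an application-layer decision needs the full request content:

BLOCKED_IPS = {"203.0.113.7"}           # Level 3: decided from packet headers alone
BLOCKED_PORTS = {23}                    # e.g. drop Telnet
BLOCKED_URL_WORDS = {"casino", "warez"} # Application layer: needs the full request

def l3_allows(src_ip: str, dst_port: int) -> bool:
    return src_ip not in BLOCKED_IPS and dst_port not in BLOCKED_PORTS

def app_allows(url: str) -> bool:
    return not any(word in url.lower() for word in BLOCKED_URL_WORDS)

print(l3_allows("203.0.113.7", 80))             # False: blocked by address
print(app_allows("http://example.com/casino"))  # False: blocked by content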


10. Firewalls & Intrusion Detection Systems

January-2004 [18]
1.
e) What is a firewall? State briefly how it works. [4]
1. A firewall protects networked computers from intentional hostile intrusion that could compromise confidentiality or result in data corruption or denial of service. It may be a hardware device (see Figure 1) or a software program (see Figure 2) running on a secure host computer. In either case, it must have at least two network interfaces, one for the network it is intended to protect, and one for the network it is exposed to.
A firewall sits at the junction point or gateway between the two networks, usually a private network and a public network such as the Internet. The earliest firewalls were simply routers. The term firewall comes from the fact that by segmenting a network into different physical subnetworks, they limited the damage that could spread from one subnet to another, just like fire doors or firewalls in a building.
Figure 1: Hardware Firewall - a hardware firewall providing protection to a local network
Figure 2: Computer with Firewall Software - a computer running firewall software to provide protection
2. There are two access-denial methodologies used by firewalls. A firewall may allow all traffic through unless it meets certain criteria, or it may deny all traffic unless it meets certain criteria (see Figure 3). The type of criteria used to determine whether traffic should be allowed through varies from one type of firewall to another. Firewalls may be concerned with the type of traffic, or with source or destination addresses and ports. They may also use complex rule bases that analyse the application data to determine if the traffic should be allowed through. How a firewall determines what traffic to let through depends on which network layer it operates at.
Figure 3: Basic Firewall Operation
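A minimal sketch (Python, with an invented rule format) of the two methodologies: the same first-match rule list behaves very differently depending on whether the default action, when no rule matches, is allow or deny:

import ipaddress

RULES = [
    # (source network, destination port, action) - "*" is a wildcard
    ("10.0.0.0/8", 25, "allow"),   # internal hosts may send mail
    ("*",          80, "allow"),   # anyone may reach the web server
]

def decide(src_ip: str, port: int, default: str) -> str:
    for net, rule_port, action in RULES:
        in_net = net == "*" or ipaddress.ip_address(src_ip) in ipaddress.ip_network(net)
        if in_net and port == rule_port:
            return action
    return default  # the default policy is the whole difference between the two designs

print(decide("198.51.100.9", 22, default="deny"))   # deny-all-unless-allowed: blocked
print(decide("198.51.100.9", 22, default="allow"))  # allow-all-unless-denied: let through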


g) With the possibility of inside attack, where should IDS devices be located? [4]
If inside attack is a possibility, IDS sensors should not be placed only at the perimeter (at network borders or in the DMZ, where they see external attacks) but also inside the network: at choke points on internal segments, in front of sensitive server groups, and as host-based agents on critical machines, so that attacks originating from within are also detected.
6.
c) What is the difference between IDS and Firewall? [4]
1. An intrusion detection system (IDS) generally detects unwanted manipulations of computer systems, while a firewall is a dedicated appliance, or software running on another computer, which inspects network traffic passing through it, and denies or permits passage based on a set of rules.
2. An IDS detects attempted attacks using signatures and patterns, much as an anti-virus application does.
3. An IDS basically works on signatures, just like anti-virus applications, while packet-filter firewalls work by looking at the headers of packets. Other firewall types include proxy firewalls, authentication firewalls and gateway firewalls.
A firewall performs inspection of the packet headers. If a packet type is not allowed (for example, an invalid destination port), the packet is dropped or reset (RST). Firewalls beyond home use perform stateful inspection, which means they maintain a table of all currently established sessions; if someone sends a packet that merely looks acceptable, such as one with source port 80, but that belongs to no established session, the firewall will still reject it.
An IDS (Intrusion Detection System) may only detect and warn you of a violation of your privacy. Although most block major attacks, some probes or other attacks may just be noted and allowed through. An example of an IDS is BlackICE.

A good firewall will block almost all attacks unless specified otherwise or designed otherwise. The only problem is, the firewall might not warn you of the attacks and may just block them. An example of a firewall is ZoneAlarm.
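To illustrate the stateful inspection mentioned above, here is a small sketch (Python, simplified to a bare session table) of why an unsolicited packet "from SRC port 80" is rejected even though the port itself looks acceptable:

# Entries: (inside_ip, inside_port, outside_ip, outside_port)
state_table = set()

def outbound(in_ip, in_port, out_ip, out_port):
    state_table.add((in_ip, in_port, out_ip, out_port))  # session established

def inbound_allowed(out_ip, out_port, in_ip, in_port) -> bool:
    # A reply must match an existing session; anything unsolicited is rejected.
    return (in_ip, in_port, out_ip, out_port) in state_table

outbound("192.168.1.5", 51000, "93.184.216.34", 80)
print(inbound_allowed("93.184.216.34", 80, "192.168.1.5", 51000))  # True: a reply
print(inbound_allowed("203.0.113.7", 80, "192.168.1.5", 51000))    # False: unsolicited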


7. Write short notes on the following:
c) Proxy Firewall. [6]
Proxy firewalls offer more security than other types of firewalls, but this is at the expense of speed and functionality, as they can limit which applications your network can support. So, why are they more secure? Unlike stateful firewalls, which allow or block network packets from passing to and from a protected network, traffic does not flow through a proxy. Instead, computers establish a connection to the proxy, which serves as an intermediary, and initiates a new network connection on behalf of the request. This prevents direct connections between systems on either side of the firewall and makes it harder for an attacker to discover where the network is, because they will never receive packets created directly by their target system.

Proxy firewalls also provide comprehensive, protocol-aware security analysis for the protocols they support. This allows them to make better security decisions than products that focus purely on packet header information.
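As a sketch of "no direct connection", here is a minimal Python TCP forwarder in the spirit of a proxy firewall; the target address is a placeholder, and a real proxy firewall would also parse and police the application protocol rather than blindly copy bytes:

import socket, threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until the sender closes; the two hosts never share a socket.
    while (data := src.recv(4096)):
        dst.sendall(data)
    dst.close()

def serve(listen_port: int, target: tuple[str, int]) -> None:
    srv = socket.socket()
    srv.bind(("", listen_port))
    srv.listen()
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection(target)  # the proxy's own, separate connection
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# serve(8080, ("internal-web-server.example", 80))  # placeholder hostname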



July-2004 [18]
5.
c) What are the different components of IDS? Explain the different types of IDS. [5]
An IDS can be composed of several components: sensors, which generate security events; a console, to monitor events and alerts and to control the sensors; and a central engine, which records the events logged by the sensors in a database and uses a system of rules to generate alerts from the security events received. There are several ways to categorize an IDS, depending on the type and location of the sensors and on the methodology the engine uses to generate alerts. In many simple IDS implementations all three components are combined in a single device or appliance.
Types of Intrusion Detection Systems

In a network-based intrusion detection system (NIDS), the sensors are located at choke points in the network to be monitored, often in the demilitarized zone (DMZ) or at network borders. The sensor captures all network traffic and analyzes the content of individual packets for malicious traffic. Protocol-based (PIDS) and application protocol-based (APIDS) systems are used to monitor transport and application protocols for illegal or inappropriate traffic or constructs of a language (say, SQL). In a host-based system, the sensor usually consists of a software agent which monitors all activity of the host on which it is installed. Hybrids of these systems also exist.

* A network intrusion detection system is an independent platform which identifies intrusions by examining network traffic and monitors multiple hosts. Network Intrusion Detection Systems gain access to network traffic by connecting to a hub, network switch configured for port mirroring, or network tap. An example of a NIDS is Snort.

* A protocol-based intrusion detection system consists of a system or agent that would typically sit at the front end of a server, monitoring and analyzing the communication protocol between a connected device (a user/PC or system) and the server. For a web server this would typically monitor the HTTPS protocol stream and understand the HTTP protocol relative to the web server/system it is trying to protect. Where HTTPS is in use, this system would need to reside in the "shim" or interface where HTTPS is decrypted, immediately before the traffic enters the web presentation layer.

* An application protocol-based intrusion detection system consists of a system or agent that would typically sit within a group of servers, monitoring and analyzing the communication on application-specific protocols. For example, in a web server with a database this would monitor the SQL protocol specific to the middleware/business logic as it transacts with the database.

* A host-based intrusion detection system consists of an agent on a host which identifies intrusions by analyzing system calls, application logs, file-system modifications (binaries, password files, capability/acl databases) and other host activities and state. An example of a HIDS is OSSEC.

* A hybrid intrusion detection system combines two or more approaches. Host agent data is combined with network information to form a comprehensive view of the network. An example of a Hybrid IDS is Prelude.
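A toy sketch (Python, with invented signatures; real Snort rules are far richer) of the signature matching that network IDS engines and anti-virus products share:

# Scan payloads for known byte patterns and report the matching alerts.
SIGNATURES = {
    b"/etc/passwd":      "attempted sensitive-file access",
    b"' OR '1'='1":      "SQL injection probe",
    b"\x90\x90\x90\x90": "possible NOP sled (buffer overflow)",
}

def inspect(payload: bytes) -> list[str]:
    return [alert for pattern, alert in SIGNATURES.items() if pattern in payload]

print(inspect(b"GET /cgi-bin/view?file=/etc/passwd HTTP/1.0"))
# ['attempted sensitive-file access']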


7.
a) What is the basic purpose of a firewall? Briefly discuss the different types of firewalls. [8]
A firewall is a hardware or software system that prevents unauthorized access to or from a network. Firewalls can be implemented in hardware, software, or a combination of both. They are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet. All data entering or leaving the intranet passes through the firewall, which examines each packet and blocks those that do not meet the specified security criteria.

Generally, firewalls are configured to protect against unauthenticated interactive logins from the outside world. This helps prevent hackers from logging into machines on your network. More sophisticated firewalls block traffic from the outside to the inside, but permit users on the inside to communicate a little more freely with the outside.

This information was excerpted from Firewall.cx creator Chris Partsenidis' tip Introduction to firewalls.

Introduction to types of firewalls

The NIST guidelines break firewalls down into five basic types:

* Packet filters
* Stateful Inspection
* Proxies
* Dynamic
* Kernel

In reality, these divisions are not quite that simple, as most modern firewalls have a mix of abilities that place them in more than one of the categories shown above. The NIST document provides more detail on each of these categories.

To simplify the most commonly used firewalls, expert Chris Partsenidis breaks them down into two categories: application firewalls and network layer firewalls. The International Organization for Standardization (ISO) Open Systems Interconnection (OSI) model for networking defines seven layers, where each layer provides services that higher-level layers depend on. The important thing to recognize is that the lower-level the forwarding mechanism, the less examination the firewall can perform.

To see a more in-depth description of the OSI layers, see Michael Gregg's OSI -- Securing the stack tip series.

Network layer firewalls

Network layer firewalls generally make their decisions based on the source address, destination address and ports in individual IP packets. A simple router is the traditional network layer firewall, since it is not able to make particularly complicated decisions about what a packet is actually talking to or where it actually came from. Modern network layer firewalls have become increasingly more sophisticated, and now maintain internal information about the state of connections passing through them at any time.

One important difference about many network layer firewalls is that they route traffic directly through them, so to use one you need either a validly assigned IP address block or a private Internet address block. Network layer firewalls tend to be very fast and almost transparent to their users.

This information was excerpted from Chris Partsenidis' tip Introduction to firewalls.

Application layer firewalls

Run-of-the-mill network firewalls can't properly defend applications. As Michael Cobb explains, application-layer firewalls offer Layer 7 security on a more granular level, and may even help organizations to get more out of existing network devices.

Cobb explains fully in his article "Defending Layer 7: A look inside application-layer firewalls."
Application layer firewalls generally are hosts running proxy servers, which permit no traffic directly between networks, and which perform elaborate logging and examination of traffic passing through them. Since proxy applications are simply software running on the firewall, it is a good place to do lots of logging and access control. Application layer firewalls can be used as network address translators, since traffic goes in one side and out the other, after having passed through an application that effectively masks the origin of the initiating connection.

Having an application in the way may in some cases impact performance and make the firewall less transparent. Early application layer firewalls were not particularly transparent to end users and could require some training. However, more modern application layer firewalls are often totally transparent. Application layer firewalls tend to provide more detailed audit reports and to enforce more conservative security models than network layer firewalls.

Mike Chapple explains how a carefully deployed application firewall can plug a critical hole in an enterprise's defenses in this tip: "Building application firewall rule bases."

The future of firewalls sits somewhere between network layer firewalls and application layer firewalls. It is likely that network layer firewalls will become increasingly aware of the information going through them, and application layer firewalls will become more and more transparent. The end result will be a kind of fast packet-screening system that logs and checks data as it passes through.

This information was excerpted from Chris Partsenidis' tip Introduction to firewalls.

Proxy firewalls

Proxy firewalls offer more security than other types of firewalls at the expense of speed and functionality; see the short note on proxy firewalls under question 7(c) of the January-2004 paper above for a fuller discussion.

b) Present and discuss the screened subnet architecture of firewalls. [5]

Screened Subnet Architecture
The screened subnet architecture, described in Chapter 6, and shown in Figure 24.1, is probably the most common do-it-yourself firewall architecture. This architecture provides good security (including multiple layers of redundancy) at what most sites feel is a reasonable cost.
Figure 24.1. Screened subnet architecture




There are two-router and single-router variations of the screened subnet architecture. Basically, you can use either a pair of two-interface routers or a single three-interface router. The single-router screened subnet architecture works about as well as the two-router screened subnet architecture and is often somewhat cheaper. However, you need to use a router that can handle both inbound and outbound packet filtering on each interface. (See the discussion of this point in Chapter 8.) We're going to use a two-router architecture as our example in this section because it is conceptually simpler.

This type of firewall includes the following components, presented originally in Chapter 6:


Perimeter network

Isolates your bastion host from your internal network, so a security breach on the bastion host won't immediately affect your internal network.

Exterior router

Connects your site to the outside world. If possible, the exterior router also provides at least some protection for the bastion host, interior router, and internal network. (This isn't always possible because some sites use exterior routers that are managed by their network service providers and are therefore beyond the site's control.)

Interior router

Protects the internal network from the world and from the site's own bastion host.

Bastion host

Serves as the site's main point of contact with the outside world. (It should be set up according to the guidelines in Chapter 10.)
The screened subnet architecture adds an extra layer of security to the screened host architecture by adding a perimeter network that further isolates the internal network from the Internet.

Why do this? By their nature, bastion hosts are the most vulnerable machines on your network. Despite your best efforts to protect them, they are the machines most likely to be attacked, because they're the machines that can be attacked. If, as in a screened host architecture, your internal network is wide open to attack from your bastion host, then your bastion host is a very tempting target. There are no other defenses between it and your other internal machines (besides whatever host security they may have, which is usually very little). If someone successfully breaks into the bastion host in a screened host architecture, he's hit the jackpot.

By isolating the bastion host on a perimeter network, you can reduce the impact of a break-in on the bastion host. It is no longer an instantaneous jackpot; it gives an intruder some access, but not all.

With the simplest type of screened subnet architecture, there are two screening routers, each connected to the perimeter net. One sits between the perimeter net and the internal network, and the other sits between the perimeter net and the external network (usually the Internet). To break into the internal network with this type of architecture, an attacker would have to get past both routers. Even if the attacker somehow broke in to the bastion host, he'd still have to get past the interior router. There is no single vulnerable point that will compromise the internal network.

Some sites go so far as to create a layered series of perimeter nets between the outside world and their interior network. Less trusted and more vulnerable services are placed on the outer perimeter nets, farthest from the interior network. The idea is that an attacker who breaks into a machine on an outer perimeter net will have a harder time successfully attacking internal machines because of the additional layers of security between the outer perimeter and the internal network. This is only true if there is actually some meaning to the different layers, however; if the filtering systems between each layer allow the same things between all layers, the additional layers don't provide any additional security.

Figure 4.5 shows a possible firewall configuration that uses the screened subnet architecture. The next few sections describe the components in this type of architecture.
Figure 4.5: Screened subnet architecture (using two routers)
4.2.3.1 Perimeter network

The perimeter network is another layer of security, an additional network between the external network and your protected internal network. If an attacker successfully breaks into the outer reaches of your firewall, the perimeter net offers an additional layer of protection between that attacker and your internal systems.

Here's an example of why a perimeter network can be helpful. In many network setups, it's possible for any machine on a given network to see the traffic for every machine on that network. This is true for most Ethernet-based networks (and Ethernet is by far the most common local area networking technology in use today); it is also true for several other popular technologies, such as token ring and FDDI. Snoopers may succeed in picking up passwords by watching for those used during Telnet, FTP, and rlogin sessions. Even if passwords aren't compromised, snoopers can still peek at the contents of sensitive files people may be accessing, interesting email they may be reading, and so on; the snooper can essentially "watch over the shoulder" of anyone using the network.

With a perimeter network, if someone breaks into a bastion host on the perimeter net, he'll be able to snoop only on traffic on that net. All the traffic on the perimeter net should be either to or from the bastion host, or to or from the Internet. Because no strictly internal traffic (that is, traffic between two internal hosts, which is presumably sensitive or proprietary) passes over the perimeter net, internal traffic will be safe from prying eyes if the bastion host is compromised.

Obviously, traffic to and from the bastion host, or the external world, will still be visible. Part of the work in designing a firewall is ensuring that this traffic is not itself confidential enough that reading it will compromise your site as a whole. (This is discussed in Chapter 5.)
4.2.3.2 Bastion host

With the screened subnet architecture, you attach a bastion host (or hosts) to the perimeter net; this host is the main point of contact for incoming connections from the outside world, for example:

* For incoming email (SMTP) sessions to deliver electronic mail to the site
* For incoming FTP connections to the site's anonymous FTP server
* For incoming domain name service (DNS) queries about the site

and so on.

Outbound services (from internal clients to servers on the Internet) are handled in either of these ways:

* Set up packet filtering on both the exterior and interior routers to allow internal clients to access external servers directly.
* Set up proxy servers to run on the bastion host (if your firewall uses proxy software) to allow internal clients to access external servers indirectly. You would also set up packet filtering to allow the internal clients to talk to the proxy servers on the bastion host and vice versa, but to prohibit direct communications between internal clients and the outside world.

In either case, the packet filtering allows the bastion host to connect to, and accept connections from, hosts on the Internet; which hosts, and for what services, are dictated by the site's security policy.

Much of what the bastion host does is act as proxy server for various services, either by running specialized proxy server software for particular protocols (such as HTTP or FTP), or by running standard servers for self-proxying protocols (such as SMTP).

Chapter 5 describes how to secure the bastion host, and Chapter 8 describes how to configure individual services to work with the firewall.
4.2.3.3 Interior router

The interior router (sometimes called the choke router in firewalls literature) protects the internal network both from the Internet and from the perimeter net.

The interior router does most of the packet filtering for your firewall. It allows selected services outbound from the internal net to the Internet. These services are the services your site can safely support and safely provide using packet filtering rather than proxies. (Your site needs to establish its own definition of what "safe" means. You'll have to consider your own needs, capabilities, and constraints; there is no one answer for all sites.) The services you allow might include outgoing Telnet, FTP, WAIS, Archie, Gopher, and others, as appropriate for your own needs and concerns. (For detailed information on how you can use packet filtering to control these services, see Chapter 6.)

The services the interior router allows between your bastion host (on the perimeter net itself) and your internal net are not necessarily the same services the interior router allows between the Internet and your internal net. The reason for limiting the services between the bastion host and the internal network is to reduce the number of machines (and the number of services on those machines) that can be attacked from the bastion host, should it be compromised.

You should limit the services allowed between the bastion host and the internal net to just those that are actually needed, such as SMTP (so the bastion host can forward incoming email), DNS (so the bastion host can answer questions from internal machines, or ask them, depending on your configuration), and so on. You should further limit services, to the extent possible, by allowing them only to or from particular internal hosts; for example, SMTP might be limited only to connections between the bastion host and your internal mail server or servers. Pay careful attention to the security of those remaining internal hosts and services that can be contacted by the bastion host, because those hosts and services will be what an attacker goes after - indeed, will be all the attacker can go after - if the attacker manages to break in to your bastion host.
4.2.3.4 Exterior router

In theory, the exterior router (sometimes called the access router in firewalls literature) protects both the perimeter net and the internal net from the Internet. In practice, exterior routers tend to allow almost anything outbound from the perimeter net, and they generally do very little packet filtering. The packet filtering rules to protect internal machines would need to be essentially the same on both the interior router and the exterior router; if there's an error in the rules that allows access to an attacker, the error will probably be present on both routers.

Frequently, the exterior router is provided by an external group (for example, your Internet provider), and your access to it may be limited. An external group that's maintaining a router will probably be willing to put in a few general packet filtering rules, but won't want to maintain a complicated or frequently changing rule set. You also may not trust them as much as you trust your own routers. If the router breaks and they install a new one, are they going to remember to reinstall the filters? Are they even going to bother to mention that they replaced the router so that you know to check?

The only packet filtering rules that are really special on the exterior router are those that protect the machines on the perimeter net (that is, the bastion hosts and the internal router). Generally, however, not much protection is necessary, because the hosts on the perimeter net are protected primarily through host security (although redundancy never hurts).

The rest of the rules that you could put on the exterior router are duplicates of the rules on the interior router. These are the rules that prevent insecure traffic from going between internal hosts and the Internet. To support proxy services, where the interior router will let the internal hosts send some protocols as long as they are talking to the bastion host, the exterior router could let those protocols through as long as they are coming from the bastion host. These rules are desirable for an extra level of security, but they're theoretically blocking only packets that can't exist because they've already been blocked by the interior router. If they do exist, either the interior router has failed, or somebody has connected an unexpected host to the perimeter network.

So, what does the exterior router actually need to do? One of the security tasks that the exterior router can usefully perform - a task that usually can't easily be done anywhere else - is the blocking of any incoming packets from the Internet that have forged source addresses. Such packets claim to have come from within the internal network, but actually are coming in from the Internet.

The interior router could do this, but it can't tell if packets that claim to be from the perimeter net are forged. While the perimeter net shouldn't have anything fully trusted on it, it's still going to be more trusted than the external universe; being able to forge packets from it will give an attacker most of the benefits of compromising the bastion host. The exterior router is at a clearer boundary. The interior router also can't protect the systems on the perimeter net against forged packets. (We'll discuss forged packets in greater detail in Chapter 6.)




January-2005 [24]
2.
b) Briefly describe the steps for recovering from a system compromise in which an intruder or attacker has gained access to the system. [6]
1. The first step in recovering any system from a compromise is to physically remove any network cables. The reason is that if a system is under external control, an attacker could be monitoring what is happening on the machine and, if aware of your actions, could take drastic action to conceal their activity, such as formatting a drive. Note, however, that once the network cable is unplugged you may lose information about the attacker, as you will no longer see active network connections.
2. Next, you should take a notebook (a paper one, not electronic), as this will be used to take notes in. Write down any important details about the system, starting with the time and date, the IP address and name of the machine, the timezone the machine's clock is set to, whether the clock was accurate, the patches that were installed on it, user accounts, how the problem was found, etc. If anything during the course of your investigation seems pertinent, jot it down. It will be a handy reference for the future.
3. It may be difficult to regain control of a seriously compromised Windows system that has many resource-consuming programs running at start-up, but simply restarting in safe mode will stop a large number of Run-key-based malware programs from loading at boot, giving some control back to the user for clean-up tasks.
One final point: your local security contact or CERT team will almost certainly be interested in your findings. Very often an attacker will automate an attack and will almost certainly be targeting other machines in your network. Providing details to your security contacts will enable them to disseminate your findings to other people who may be in a similar situation. And of course, your findings may turn up here!
6. File System
There are well-known tricks for hiding malware on Windows systems; these include manipulation of the file system. So, be prepared to find files in %systemroot%\recycled (or any drive\recycled). The recycled folder is system-hidden, so it will not show up by default and isn't searched through by default. %systemroot%\recycler also exists on many systems (containing the individual SID-identified recycle bins) and should also be checked.

By sorting the system directories by date, one can quickly see when the majority of the OS was installed, then the various service packs/patches, and the sub-directories that update themselves frequently, like catroot2 and drivers. The most recently dated items are often the ones you should concentrate on looking in.

Intruders have a high propensity to call files and folders by legitimate-looking names. Do not be surprised to see nvsvc32.exe or serv1ces.exe in the system32 folder. The aim is obfuscation, and it goes hand in hand with hiding their automatic start-up services.

Another place to look for things starting up is the registry, specifically any of the keys under:

HKEY_LOCAL_MACHINE, HKEY_CURRENT_USER or HKEY_USERS\.DEFAULT

\Software\Microsoft\Windows\CurrentVersion\Run
\Software\Microsoft\Windows\CurrentVersion\RunOnce
\Software\Microsoft\Windows\CurrentVersion\RunOnceEx
\Software\Microsoft\Windows\CurrentVersion\RunServices
\Software\Microsoft\Windows\CurrentVersion\RunServicesOnce
\Software\Microsoft\WindowsNT\CurrentVersion\Winlogon
Another problem is viruses and trojans that put themselves in HKEY_CLASSES_ROOT\*, attaching themselves to all file extensions.
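A quick way to review the Run keys listed above is a short script. This sketch uses Python's standard winreg module (Windows only) and simply prints every auto-start value it finds; extend RUN_KEYS with the other keys named above as needed:

import winreg

RUN_KEYS = [
    r"Software\Microsoft\Windows\CurrentVersion\Run",
    r"Software\Microsoft\Windows\CurrentVersion\RunOnce",
]

for hive, hive_name in [(winreg.HKEY_LOCAL_MACHINE, "HKLM"),
                        (winreg.HKEY_CURRENT_USER, "HKCU")]:
    for path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # key absent on this system
        i = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, i)
            except OSError:
                break  # no more values under this key
            print(f"{hive_name}\\{path}: {name} = {value}")
            i += 1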
It is not unusual to obfuscate malware by using alternate data streams: the hiding of one file in the data stream of another. The method can be used to hide very large files, and any user can manipulate the system in this way. For example:

rundll32 c:\winnt\system32:malware.dll

This indicates that the system will start rundll32 (an exe which will execute a .dll file as an executable) on malware.dll. The second colon indicates that the file is actually stored in an alternate data stream. The tool lads (http://www.heysoft.de) will list alternate data streams to help find the files involved.
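The point is easy to demonstrate: on an NTFS volume, ordinary file operations can write and read a named stream, and the hidden data never shows up in a normal directory listing. A minimal sketch (Python, Windows/NTFS only, invented filenames):

with open("innocent.txt", "w") as f:
    f.write("nothing to see here")

with open("innocent.txt:hidden.dll", "w") as f:  # the colon names a stream
    f.write("payload hidden in the stream")

with open("innocent.txt:hidden.dll") as f:
    print(f.read())  # the data is there, but dir shows only innocent.txt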
Do not rely on the extension a file is given; for example, a .dll file may in fact just be a plain-text .ini file with a different extension. For the same reason, it is important not to double-click on a file to open it: it may be called .txt but actually be a .exe. Instead, the best way to look at the file is with a hex editor or, failing that, to right-click on the file, choose 'Open with' and select 'Notepad' on a Windows system.
Another problem is a legitimate-sounding process running out of an unusual directory, such as:

C:\winnt\microsoftdrivers\etc\lsass.exe

This process is actually a known backdoor, irc.ratsou.b:
http://securityresponse.symantec.com/avcenter/venc/data/backdoor.irc.ratsou.b.html
A useful guide to editing the registry is available at:
http://msdn.microsoft.com/library/en-us/dnexnt01/html/ewn0201.as
This article explains in a sensible, clear way what the registry is and how it works. Even if you believe you understand the registry, it's a good idea to read this article anyway.
The other place you should look for unauthorized programs is the Services control panel. This can be found by going to the Control Panel and selecting 'Administrative Tools', then 'Services'. A useful list of known services for XP and 2000 is available at:
http://www.blackviper.com/WIN2K/servicecfg.htm
http://www.blackviper.com/WinXP/servicecfg.htm
Please be aware, however, that common anti-virus programs, video drivers, and other programs make legitimate use of running as a service, so don't be alarmed if you see more services running than you expect, though of course each of these should be investigated thoroughly. A good approach is to 'google' for the process; more often than not someone else has found the same service and explained exactly what it is.
Do not rely on anti-virus products alone to detect malware, for a number of reasons. Firstly, malware continually evolves, and you may have something on the machine which has yet to be included in your anti-virus product's database. Secondly, a number of infections have ways of turning off virus protection, so the scanner may not show up anything. Finally, a number of the programs used in a compromise are legitimate but used in an illegitimate way. For example, an FTP server is a normal application, but it can also be installed by intruders to serve out 'warez'; neither use will be flagged by the virus checker, as it looks at the application, not at how it is being used.
Following on from that, Google (www.google.com) is an excellent resource for tracking rogue programs. If you find any programs that look suspicious, simply search for the program's name, and you will very probably turn up some very useful information.
Finding the malware directory is the first task; this will (hopefully) give you a number of .ini files which show you what is running and where, and which list lots of other software that is run. Use the tools from your CD to try to find the directory: if there is something listening on a port, TCPView should show you the full path to the directory, although this can be confused by reserved names. Reserved-name directories, such as 'com1', 'lpt1' and 'con2', are hidden from Windows and MS-DOS. Quite often the intruder will include a large number of spaces after the directory name, e.g. 'lpt1   ' will display the same as 'lpt1' but is in fact a different directory altogether. These can be difficult to navigate to and harder to remove.
There is an excellent Microsoft article on removing files with reserved names available at:
http://support.microsoft.com/?kbid=320081
Infections from viruses or spyware may also hijack the hosts file on a Windows machine. When a Windows machine resolves a hostname into an IP address, it first looks at the hosts file located in %windir%\system32\drivers\etc\hosts. If there is no entry for the host there, it forwards the request on to DNS. If, however, the infection modifies the hosts file to read:

127.0.0.1 www.google.co.uk

it would render the machine unable to connect to www.google.co.uk. This has significant impact if a false entry for windowsupdate is added. Cleaning up this sort of problem is very easy - just remove the errant entries in the hosts file - but be aware that it is a symptom of some other infection, rather than the infection itself.
If the infected host still exhibits resolution problems, it is worth checking that the machine has the correct DNS entries, both in networking properties and in the registry; if the virus writer controls a DNS server outside your network, they can rewrite the DNS entries on the local machine and have it resolve all hostnames through their own DNS, at which point they can map any hostname to any IP address they choose.
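A small checker for this kind of hijack is easy to sketch; here in Python, with an illustrative watch list of hostnames, flagging any watched name pinned to an address in the hosts file:

WATCH = {"windowsupdate.com", "www.google.co.uk", "update.microsoft.com"}
HOSTS = r"C:\Windows\system32\drivers\etc\hosts"

with open(HOSTS) as f:
    for line in f:
        line = line.split("#")[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            if any(watched in name for watched in WATCH):
                print(f"suspicious: {name} pinned to {ip}")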
Batch Files (files ending in .bat)
The current trend in compromises is very rarely against single boxes; they are more often against dozens of machines (within your campus) and hundreds or thousands across the Internet. For this reason the act of compromising a machine is automated as far as possible. Sometimes during an investigation you can get lucky and find the batch file used to install all of the attacker's software.
These batch files can be called anything - all the intruder needs to do is run them. Examples we have seen are 'licenses.bat', 'secure.bat' and 'securing.bat'. The .bat files can be very simple - from adding registry entries to quite complex scripts which affect the very set-up of Windows and its security.
If you have the date and time of the compromise, you can search for .bat files created within that timescale. Below is an example of the sort of thing you may find in one of these batch files (let's call it 'hacked.bat'). The information is based on a real compromise, but the filenames have been changed (these are generic; you don't want to get caught up in searching for specific names - remember the intruders can call their files whatever they want).
So, hacked.bat starts with:

cd "%windir%\system32"

Whatever else happens in this file, it will be relative to that directory - possibly a good place to look for malware. It is a legitimate directory, so be careful what files you delete! (It's always a good idea to save the files off to another directory for checking.)
The next few lines read:

dtreg -AddKey \HKLM\SYSTEM\RAdmin
dtreg -AddKey \HKLM\SYSTEM\RAdmin\v2.0
dtreg -AddKey \HKLM\SYSTEM\RAdmin\v2.0\Server
dtreg -AddKey \HKLM\SYSTEM\RAdmin\v2.0\Server\Parameters

This is a manipulation of the registry - they are adding keys for the radmin program, so that when they actually install it there are no problems with registry errors. If you don't use radmin, you may want to delete these keys. The next lines populate the keys:
dtreg -Set REG_BINARY \HKLM\SYSTEM\RAdmin\v2.0\Server\Parameters\DisableTrayIcon=01000000
dtreg -Set REG_BINARY \HKLM\SYSTEM\RAdmin\v2.0\Server\Parameters\Port=e5080000
These set the port and make sure that the tray icon has been disabled - that would be too easy to spot! If you can decode the port, you can match it up to the TCPView output and confirm that you have the right target. Being able to get traffic data for that port would be really useful in finding other machines compromised in the same way.
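Decoding that port value is a one-liner, since the REG_BINARY data is a little-endian integer; a quick Python check:

port = int.from_bytes(bytes.fromhex("e5080000"), "little")
print(port)  # 2277 - the port to look for in TCPView or netstat output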
dtreg -Set REG_EXPAND_SZ "\HKLM\SYSTEM\CurrentControlSet\Services\pnpext\ImagePath=%windir%\system32\mybackdoor.exe /service"

This line is the big one. It sets a registry entry, as a service, which starts the file 'mybackdoor.exe' out of the system32 directory. The following line defines the 'pnpext' service:

serv.exe INSTALL pnpext /n:"Universal Serial Bus Control Protocol" /b:%windir%\system32\mybackdoor.exe /u:LocalSystem /s:AUTO

serv.exe is a way to install a service onto the machine: the '/n' switch gives the name of the service (once you see this, go check the Services control panel), '/b' gives the full path and name of the service binary, '/u' sets the privilege the service runs at, and '/s' tells Windows when to start the service - in this case automatically whenever Windows starts up.
The final lines of the file start the services and any other applications the intruder wants to run.
As we said before, the batch file might be more complex than this, or be split into separate files. So you may find a securing batch file which has entries such as:

net share /delete C$ /y >>del.log
net share /delete D$ /y >>del.log

which delete the hidden Windows shares (and pipe the results to 'del.log'). Once in the machine, the intruders don't want anyone else breaking in and taking it away from them!
Finding these batch files can be a real benefit, as they list exactly what you need to clean the backdoor from the machine. Unfortunately, they are often deleted.
Using Built-in Tools
Many of the built-in tools on Windows machines are also quite useful. For instance, running a command prompt (Start -> Run -> cmd.exe) on XP and running the command

netstat -ano

shows PIDs (process identifiers), which can then be used to map ports to process names. One of the best places to look for help on the available utilities and their usage is the Microsoft Knowledge Base:
http://support.microsoft.com/default.aspx
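If you need to do this for many ports, the mapping is easy to script; a sketch (Python, assuming the usual five-column netstat -ano output: Proto, Local, Foreign, State, PID) that lists listening TCP ports with their owning PIDs:

import subprocess

out = subprocess.run(["netstat", "-ano"], capture_output=True, text=True).stdout
for line in out.splitlines():
    parts = line.split()
    if len(parts) == 5 and parts[0] == "TCP" and parts[3] == "LISTENING":
        local, pid = parts[1], parts[4]
        port = local.rsplit(":", 1)[1]  # strip the address, keep the port
        print(f"port {port} is owned by PID {pid}")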
Checking System Files
One excellent way of checking MS Windows files on newer versions of Windows (Windows 2000 and Windows XP) is to run 'sigverif'. To run it, click Start, click Run, type sigverif, and then click OK. Click the Advanced option, select "Look for other files that are not digitally signed", and then select c:\Windows or c:\winnt, depending on the version of Windows.
This tool checks the digital signatures on all the system files and will alert you to any that aren't correct or aren't signed. Be aware, however, that this program can produce very verbose output, as it will, for example, inform you that a log file is not signed.
Tools
The following tools are considered essential by the authors for tracking down system-activity anomalies. Remember, the existing utilities on the victim machine may well have been trojaned. It is advisable to create a CD with these tools on it; the CD can then be taken to a machine and used locally. You are well advised to check the files' md5sums (or similar) and that the tools run on the version of Windows you are aiming to investigate. Many of these utilities need administrative access to run, and most will provide more information if run as an administrator.
SQL Critical Update Kit (http://www.microsoft.com/SQL/downloads/securitytools.asp)

If you receive a report that you are scanning for port 1434, and that you should check your system for signs of compromise, it is extremely likely that you have been infected with the SQL Slammer worm (also called Sapphire). This tool will identify vulnerable systems and also patch them as needed. Once fixed, you *must* reboot to clear the problem.
TCPView (http://www.sysinternals.com/ntw2k/source/tcpview.shtml)

TCPView is a Windows program that will show you detailed listings of all TCP and UDP endpoints on your system, including the local and remote addresses and state of TCP connections. On Windows NT, 2000 and XP, TCPView also reports the name of the process that owns the endpoint. TCPView provides a more informative and conveniently presented subset of the netstat program that ships with Windows. Please note there is one small issue with this program: when it is run from a floppy it does not display process names.
TDIMon (http://www.sysinternals.com/ntw2k/freeware/tdimon.shtml)

TDIMon is an application that lets you monitor TCP and UDP activity on your local system. It is the most powerful tool available for tracking down network-related configuration problems and analyzing application network usage.
Filemon (http://www.sysinternals.com/ntw2k/source/filemon.shtml)

FileMon monitors and displays file system activity on a system in real time. Its advanced capabilities make it a powerful tool for exploring the way Windows works, seeing how applications use files and DLLs, or tracking down problems in system or application file configurations. Filemon's timestamping feature will show you precisely when every open, read, write or delete happens, and its status column tells you the outcome.
Deleted File Analysis Utility (http://www.execsoft.com/freeware/undelete/download.asp)

This freeware can directly view your hard drive partition and list all deleted files that have not yet been completely overwritten. It runs on Windows NT, Windows 2000 and Windows XP.
DumpSec (http://www.systemtools.com/somarsoft/)

SomarSoft's DumpSec is a security auditing program for NT/2000. It dumps the permissions (DACLs) and audit settings (SACLs) for the file system, registry, printers and shares in a concise, readable format, so that holes in system security are readily apparent. DumpSec also dumps user, group and replication information.
DumpReg (http://www.systemtools.com/somarsoft/)

DumpReg is a program for Windows NT and Windows 95 that dumps the registry, making it easy to find keys and values containing a string. For Windows NT, the registry entries can be sorted by reverse order of last-modified time, making it easy to see changes made by recently installed software, for example.
Fport (http://www.foundstone.com/knowledge/proddesc/fport.html)

Fport reports all open TCP/IP and UDP ports and maps them to the owning application. This is the same information you would see using the 'netstat -an' command, but Fport also maps those ports to running processes with the PID, process name and path. Fport can be used to quickly identify unknown open ports and their associated applications.
MBSA (http://www.microsoft.com/)

MBSA scans for common security misconfigurations in Windows, Internet Information Services (IIS), SQL Server, Internet Explorer, and Microsoft Office. MBSA also scans for missing security updates in Windows, IIS, SQL Server, Internet Explorer, Windows Media Player, Exchange Server, Microsoft Data Access Components (MDAC), Microsoft XML (MSXML), Microsoft virtual machine (VM), Content Management Server, Commerce Server, BizTalk Server, Host Integration Server, and Office (local scans only). A graphical user interface (GUI) and command-line interface are available in version 1.2. MBSA version 1.1 replaced the stand-alone HFNetChk tool and fully exposes all HFNetChk switches in the MBSA command-line interface (Mbsacli.exe).
Spybot Search & Destroy (http://www.safer-networking.org/)

Spybot - Search & Destroy can detect and remove spyware of different kinds from your computer. Spyware is a relatively new kind of threat that common anti-virus applications do not yet cover. If you see new toolbars in Internet Explorer that you didn't intentionally install, if your browser crashes, or if your browser's start page has changed without your knowledge, you most probably have spyware. But even if you don't see anything, you may be infected, because more and more spyware is emerging that silently tracks your surfing behavior to create a marketing profile of you that will be sold to advertising companies.
Autoruns (http://www.sysinternals.com/ntw2k/freeware/autoruns.shtml)

This applet shows you what programs are configured to run during system boot-up or login. These programs include the ones in your startup folder and in Run, RunOnce, and other registry keys. You'll probably be surprised at how many executables are launched automatically. Autoruns works on Windows 9x and Windows NT/2K/XP. It provides a safer way to look at the myriad run keys and startup folders without directing users to use Regedit.
Ad-aware (http://www.lavasoftusa.com/software/adaware/)

One of the first applications built to find and remove adware and spyware, Ad-aware has a well-justified good reputation. It does an excellent job of finding and removing most adware and spyware components, although for a seriously infected machine you will have to restart and rescan.
Investigating Kernel Rootkits
The use of kernel-level rootkits is becoming far more widespread. Once on a machine, the hacker will try everything they can to stay there. This document has already looked at obfuscation techniques and batch files that 'secure' the machine; the next step is to make the system lie to you. This is currently the most successful way to hide a compromise: the intruder will break into the machine, secure it, install the rootkit and then install the services they require. The rootkit will then protect those services, making sure you don't find and remove them.
A remote administration application such as "VNC" or "radmin" is exactly that, an application. A rootkit, on the other hand, patches the already existing paths within the target operating system. One of the most popular rootkits for Windows systems is the "Hacker Defender" toolkit. This installs itself as a service, and thus is quite straightforward to identify if you follow the correct procedures.
One of the easiest ways to detect whether a rootkit backdoor is installed on a system is to use tools such as TCPView or netstat on the suspect machine, and then to correlate these results with a network scan of the system from another, clean machine, using a utility such as the excellent nmap (www.insecure.org/nmap/). If the clean machine reports an extra open port, it is almost certain that the suspect machine has a rootkit installed.
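The correlation step itself is trivial to script once you have the two lists; a sketch (Python, with placeholder port sets standing in for the parsed netstat/TCPView and nmap output):

local_ports = {135, 139, 445}          # what the suspect host admits to locally
scanned_ports = {135, 139, 445, 2277}  # what a clean machine sees with nmap

hidden = scanned_ports - local_ports
if hidden:
    print(f"ports open on the wire but hidden locally: {sorted(hidden)}")
    print("almost certainly a rootkit is filtering the local view")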
There are currently only a small number of applications which can help discover the presence of rootkits. This document outlines some of them, but will not give a preference: these tools will likely mature faster than this document will be updated. Along with the other tools detailed in this document, keeping a selection of rootkit detectors on a CD would be good practice.
RKDetect (http://www.security.nnov.ru/files/rkdetect.zip)

RKDetect runs remotely, enumerating services through WMI (user level) and the Services Control Manager (kernel level). The tool then compares the results and displays any differences. This method allows you to find the hidden services that start the rootkit. Process Explorer and TCPView (both from Sysinternals) should also be used in conjunction with RKDetect. It is recommended that you use the sc.exe from the Windows resource kit rather than the one supplied by the RKDetect authors. The Windows resource kits can be downloaded from one of the following locations:
http://www.microsoft.com/windows2000/techinfo/reskit/tools/default.asp
http://go.microsoft.com/fwlink/?LinkId=4544
http://www.microsoft.com/ntworkstation/downloads/Recommended/Featured/NTKit.asp
To actually run the script:

cscript rkdetect.vbs

Example:

C:\detector>cscript rkdetect.vbs 192.168.0.100
Microsoft (R) Windows Script Host Version 5.6
Copyright (C) Microsoft Corporation 1996-2001. All rights reserved.

Query services by WMI...
Detected 79 services
Query services by SC...
Detected 80 services
Finding hidden services...
Possible rootkit found: HXD Service 100
Done

C:\detector>
RKDetector (http://bagpuss.swan.ac.uk/comms/RKDetectorv0[1].62.zip)

RKDetector runs on the local machine and attempts to provide information about hidden processes and services. Once it identifies the hidden processes, RKDetector will try to kill those hidden tasks and then scan the service database in order to detect hidden services and hidden registry keys (Run, RunOnce). RKDetector also contains an MD5 database of common rootkits against which it will compare its output. To actually run the tool:

c:\rkdetector.exe
. .. ...: Rootkit Detector Professional 2004 v0.62 :... .. .
Rootkit Detector Professional 2004
Programmed by Andres Tarasco Acuna
Copyright (c) 2004 - 3wdesign Security
Url: http://www.3wdesign.es
-Gathering Service list Information... ( Found: 271 services )
-Gathering process List Information... ( Found: 30 process )
-Searching for Hidden process Handles. ( Found: 0 Hidden Process )
-Checking Visible Process.............
-Searching again for Hidden Services..
-Gathering Service list Information... ( Found: 0 Hidden Services)
-Searching for wrong Service Paths.... ( Found: 1 wrong Services )
Blacklight, F-Secure
(
http://www.f-secure.com/blacklight/
)
The rootkit detector Blacklight, from F-Secure, is currently in beta form, so is likely to change at
any time. It also doubles as an eliminator - if it finds a rootkit, it may be able to remove it from
the system. It is currently a free download, which requires administrator privileges to run. Once
past the licensing agreement, a window will offer to perform a scan of the machine - you also have
an option to show all running processes. Once the scan is complete, a summary will be presented
showing whether it has found anything, and the software will allow you to move on to the cleaning process.
Rootkitrevealer, Sysinternals
(
http://www.sysinternals.com
)
Rootkitrevealer is produced by SysInternals, whose tools feature often in this document. Again it is a
free download, requiring administrator privileges to run (strictly speaking, the help file identifies the
permissions it requires, and administrator gets these permissions by default). Once again it works
from within Windows, and presents a small window which displays options and scan results.
Rootkitrevealer will not clean the machine; it does, however, scan the hard drive and the registry for
possibly problematic files and entries. These are then highlighted for the user to take action, if required.
This has its own benefits and problems. Using PsExec, Rootkitrevealer can also be run against a
remote system.
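As a rough sketch, a remote scan might look like the following; the exact switches vary between PsExec and RootkitRevealer versions, so treat them as illustrative:
psexec \\192.168.0.100 -c rootkitrevealer.exe -a c:\rkr_results.log
Here -c copies the executable to the remote system, and -a asks RootkitRevealer to scan automatically and exit, writing its findings to the named log file.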
Unhackme
(
http://www.greatis.com/unhackme/
)
Unhackme can be downloaded for free as an evaluation version - the paid-for version comes
with free support and updates. Unlike other rootkit detectors, Unhackme requires installation on the
machine, which in turn requires administrator privileges. It does come with a 'monitor' which will
check your machine every minute (default setting). Once in the application, it has a very simple
interface which will allow you to scan the system, get help etc. The software will also act as a rootkit
cleaner.
As it requires installation, this may be of more use to people wanting to keep their system secure,
rather than those responding to incidents.
RegdatXP
(
http://people.freenet.de/h.ulbrich/
)
This isn't strictly a rootkit detector - it is actually a raw registry editor. This means it can be used to
load the existing registry hives on a machine (files like ntuser.dat and UsrClass.dat). It has good
searching tools, so admins can look for autoruns, suspicious registry keys etc. This has benefits over
signature-based detection, although it requires a greater degree of time and effort. It bypasses the
problems that arise when a rootkit prevents the inbuilt RegEdit from working correctly. The software is
shareware.
Removing a Rootkit
It should be noted that all of these tools suffer from false negatives, so further testing and
examination of the machine should be undertaken. Once you have a better idea of the rootkit involved,
you may want to try to disable it - boot Windows into the Recovery Console:
• Insert the Windows OS Installation CD into the Drive.
• Boot from the CD
• Choose ‘R’ to enter the Recovery Console
• Choose the Windows installation you want to Clean from the list presented to you.
• Enter the Administrator Password.
Once in the recovery console, you have a few commands for this, including:
listsvc - lists services that can be enabled or disabled
enable - enables a service, with a service start type:
• SERVICE_DISABLED
• SERVICE_BOOT_START
• SERVICE_SYSTEM_START
• SERVICE_AUTO_START
• SERVICE_DEMAND_START
disable - disables a service, but prints out the previous start type, which
should be recorded in case you need to re-enable the service.
More info on the XP Recovery Console can be found here
http://support.microsoft.com/default.aspx?scid=kb;EN-US;314058
Use listsvc to find any undesirable services and make a note of them. HackerDefender is usually called
something along the lines of 'HackerDefender' if the attacker is careless; however, it could also be
renamed to something that sounds like an “official” service.
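A hypothetical Recovery Console session might look like this (the service name HackerDefender100 is an assumption for illustration):
C:\WINDOWS> listsvc
(scroll through the list and note any suspicious service names)
C:\WINDOWS> disable HackerDefender100
(the previous start type, e.g. SERVICE_AUTO_START, is printed - record it in case the service must be re-enabled)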
Once these have been disabled you can reboot safely into full Windows without HackerDefender
starting up.
After the reboot, search the registry for the name of the service that you disabled in the previous
section. This should lead you to the executable for HackerDefender and, more importantly, its .ini file
(not necessarily named with a .ini extension).
Open the .ini file and in there you should find a number of files, ports and services that
HackerDefender is defending. Systematically find each of these services in the registry and delete
them (they will probably appear more than once); likewise, find all of the referenced files and delete
them as well.
The .ini file can be obfuscated in a variety of ways; the fragment below shows an obfuscated entry,
where padding characters such as <, > and / are ignored by the rootkit's parser:

[H<<>a/"ble]
raddrv.dll

Final point - these tools cannot be used to prove that there is no rootkit on the machine; they are
limited in what they can find.
One of the biggest problems with these tools occurs when a piece of malware that runs as
a service has been detected and removed by a virus scanner (which won't fix the registry entries);
the detector will then alert the user that the leftover entry is a component of a rootkit.
F-Secure (and to a lesser degree Sophos) seems quite good at identifying most of the malware
executables once you have killed the offending service, but they leave the registry in a bit of a
mess with regard to service entries.
These 'bad' service entries can also occur for older legitimate software that doesn't uninstall or reinstall
itself properly. The problem isn't so much with the utilities as with the older software not conforming
to the Microsoft rules about how software should be installed or upgraded.

e) Is a firewall sufficient to secure network or do we need anything else? [4]
The firewall is an integral part of any security program, but it is not a security program in and of itself. Security involves data integrity (has it been modified?), service or application integrity (is the service available, and is it performing to spec?), data confidentiality (has anyone seen it?) and authentication (are they really who they say they are?). Firewalls only address the issues of data integrity, confidentiality and authentication of data that is behind the firewall. Any data that transits outside the firewall is subject to factors out of the control of the firewall. It is therefore necessary for an organization to have a well planned and strictly implemented security program that includes but is not limited to firewall protection.



f) How can an intrusion detection system actively respond to an attack? [4]
Active Response is a mechanism in intrusion detection systems (IDS) that gives the IDS the capability to respond to an attack once it has been detected. There are two methods the IDS can use to circumvent an attack: the first is session disruption, and the second is filter rule manipulation. The specific features vary with each IDS product, and each countermeasure method possesses its own strengths and weaknesses.

Method 1- Session disruption

Session disruption is the most popular method of circumvention because of the ease of its implementation. Depending on the type of session established, UDP or TCP, an IDS that is configured for session disruption can reset or knock down the established connection. This does not prevent the attacker from launching additional attacks, but it does prevent the attacker from causing any further damage in conjunction with the "broken" session. When using the session disruption method, if an attacker launches subsequent attacks, the IDS must continually attempt to close every initiated attack session.

With session disruption, the IDS uses different methods for breaking the connection depending on the type of traffic it sees. If an attacker uses a TCP session, it is reset by an RST packet that the IDS sends to one or both hosts in the session. In the case of UDP, a session can be broken by sending various ICMP packets to the host from the IDS box.

Why might the IDS send RSTs to the attacker and victim host?

An IDS might send a TCP RST packet to both attacker and victim after detecting malicious traffic, such as an established SubSeven connection.

There are a few IDS systems that provide session disruption, but for discussion I will focus on Snort, which is a lightweight network intrusion detection system that runs on different platforms. When Snort is configured with the Flexresp feature, it provides session disruption. Flexresp is a feature that allows Snort to automatically respond to an attack if the corresponding option is specified in the Snort rule. In order to enable active response on Unix, Snort must be compiled with Flexresp enabled as shown below.

./configure --enable-flexresp

When installing on a Win32 system, Flexresp is enabled by selecting the Snort +FlexResp option as shown in Fig 1.1 below.

Fig. 1.1



Below in Fig 1.2 is an example of a Snort rule configured to respond to an attack

Fig 1.2

Each rule consists of a Rule Header followed by Rule Options:
alert tcp $EXTERNAL_NET any -> $HTTP_SERVERS 80 (msg:"WEB-IIS CodeRed v2 root.exe access"; flags: A+; uricontent:"scripts/root.exe?"; nocase;resp:rst_snd;)

alert udp $EXTERNAL_NET any -> $HOME_NET any (msg:"SCAN Webtrends Scanner UDP Probe"; content: "|0A 68 65 6C 70 0A 71 75 69 74 0A|"; resp:icmp_port,icmp_host;)



Rules define what traffic Snort considers hostile and consist of two parts, the Rule Header and the Rule Options. The Rule Header contains an action field, protocol field, source IP and port fields, a direction-of-traffic field, and destination IP and port fields, which together define who is involved. The Rule Options define what packet attributes to inspect when deciding whether traffic is hostile.

When we examine the Rule Header of the first rule in Fig 1.2, we see that Snort will alert us to any TCP session connecting to the web server at port 80. Let's look at the second part of the rule, the Rule Options. Snort inspects packets that meet the Rule Header requirements for the TCP ACK flag (plus any other TCP flags that are set), and searches the payload for the character string scripts/root.exe. The resp:rst_snd value sends a forged packet with the TCP reset flag set to the sender.

The second rule matches any Webtrends UDP scan with the character string "0A 68 65 6C 70 0A 71 75 69 74 0A" in the payload. If a packet meets both the header rule and the option rule, the Flexresp values icmp_port,icmp_host tell Snort to send ICMP port unreachable and host unreachable packets to knock down the connection.

Why does Snort send ICMP packets in response to a UDP stimulus?

ICMP packets are sent to a host initiating a UDP connection to inform the sender that a requested port/host is unavailable. The reason ICMP packets are sent in response to a UDP stimulus is that UDP has no capability to report errors, so ICMP is used to assist. Snort uses this normal process to send a spoofed ICMP packet to the host initiating the connection, in an attempt to fool that host into thinking the target is unavailable.

Session disruptions in action

Snort Rule

alert tcp 192.168.1.1 any -> $HOME_NET 135 (msg:"Block host"; flags:S+; resp:rst_snd;)

This rule was created to reset any TCP session initiated by host 192.168.1.1 with the SYN TCP flag (and any other TCP flags) set.

The traffic below was generated in my lab between two machines. The targeted PC is configured with Snort 1.8.3 for Win32 systems and runs Windows 2000 Professional. The attacking host is a Red Hat Linux 7.0 machine. Nmap was used to port scan the target machine by typing nmap -p 135 -sS 192.168.1.2, which triggered the alert.

Tcpdump snip

08:17:23.477034 Attacker.4634 > Target.135: S 3719449388:3719449388(0) win 5840 (DF)

08:17:23.477203 Target.135 > Attacker.4634: R 0:0(0) ack 3719449389 win 0

08:17:23.477275 Attacker.4635 > Target.135: S 3715810638:3715810638(0) win 5840 (DF)

08:17:23.477346 Target.135 > Attacker.4635: R 0:0(0) ack 3715810639 win 0

There are a few techniques an attacker can use to bypass a session-disruption-enabled IDS. An attacker with basic knowledge of TCP/IP can defeat this mechanism, as stated in a paper by Jason Larsen and Jed Haile on understanding IDS active response mechanisms, where they describe techniques that can defeat session disruption. One of the methods they discuss is trying to make the host disregard the TCP reset packet sent by the IDS. The session disruption bypass techniques take advantage of the time it takes for the IDS to examine network traffic, detect an exploit and respond to the attack. The TCP stack, and the way it receives data, is also used to circumvent session disruption.

An attacker could also hit the IDS itself with a denial of service in an attempt to crash the machine or starve it of its resources, rendering session disruption useless. Any evasion technique in which an attacker prevents the IDS from matching the rule would also work: session disruption is only useful when the IDS can identify the traffic.

Method 2- Filter rule manipulation

The second countermeasure is filter rule manipulation. This mechanism works by modifying the access control list (ACL) on a firewall or router. Filter rule manipulation blocks the IP address of the attacker, preventing any further attacks. This option should be used with extreme care, because an attacker can use it to DoS the network: if an attacker spoofed the IP address of a business partner, the IDS would see the attack, respond, and block your partner's access.
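As an illustration, the kind of entry an IDS might push to a Cisco router to shun an attacker could look like the following sketch (the address and access-list number are assumptions):
access-list 101 deny ip host 192.168.1.1 any
access-list 101 permit ip any any
The deny entry is placed ahead of the permit so that all subsequent packets from the blocked address are dropped at the perimeter - which is exactly why a spoofed source address is so damaging here.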

There are a few IDS products that provide filter rule manipulation. RealSecure has the ability to modify Checkpoint firewall rules. Cisco Intrusion Detection System (IDS), formerly known as Cisco NetRanger, is a hardware-based IDS that can respond to an attack by adding an access control list entry to a router.

Snort can provide filter rule manipulation on Win32 systems when used with IDScenter, a tool for managing Snort, together with BlackICE Defender, a personal desktop firewall that only protects the machine it is installed on. After an alert is triggered, IDScenter blocks the attacker by modifying the firewall.ini access lists used by BlackICE Defender. This can be accomplished by checking the IDScenter "Auto block" box as shown in Fig 1.3 below.

Fig 1.3



Then you provide the path to the firewall.ini file used by BlackICE, which is C:\Program Files\Network ICE\BlackICE by default.

This method can be evaded by tricking a user behind the firewall into installing a backdoor, for example via email. Once the backdoor is installed, the attacker can remotely administer the PC and launch attacks from within. The Larsen and Haile paper on IDS active response mechanisms also mentions launching an attack with a spoofed address of a popular website such as CNN.com, AOL.com or Ebay.com. This would block traffic from those sites from entering your network. Users would call the helpdesk about not being able to access the sites and demand a resolution, and the likely result would be the disabling of the rule manipulation feature, allowing the attacker to attack without being blocked.

Conclusion

Active response mechanisms are effective tools when used within their limitations. When used in conjunction with other network security devices, they enhance network security. Session disruption should not be configured to respond to every alert, only to serious attacks such as denial of service. Rule manipulation should be used with care because of the side effects it can cause when turned on in a network. Active response is by no means foolproof.


3.
b) What other countermeasures besides IDS are there in a network? What are the different types of Intrusion Detection Systems? [6]
Firewalls
Most people think of the firewall as their first line of defense. This means if intruders figure out how
to bypass it (easy, especially since most intrusions are committed by employees inside the firewall),
they will have free run of the network. A better approach is to think of it as the last line of defense:
you should be pretty sure machines are configured right and intrusion detection is operating, and
then place the firewall up just to avoid the wannabe script-kiddies. Note that almost any router these
days can be configured with some firewall filtering. While firewalls protect external access, they
leave the network unprotected from internal intrusions. It has been estimated that 80% of losses due
to "hackers" have been internal attacks.
Authentication
You should run scanners that automate the finding of open accounts. You should automatically
enforce strict password policies (7-character minimum, including numbers, dual case and
punctuation) using crack or built-in policy checkers (native in WinNT, an add-on for UNIX). You can also
consider single sign-on products and integrating as many password systems as you can, such as
RADIUS/TACACS integration with UNIX or NT (for dial-up style login), or integrating UNIX and WinNT
authentication (with existing tools, or the new Kerberos support in Windows 2000). These authentication
systems will also help you remove "clear-text" passwords from protocols such as Telnet, FTP, IMAP,
POP, etc.
VPNs (Virtual Private Networks)
VPNs create a secure connection over the Internet for remote access (e.g. for telecommuters).
Example #1: Microsoft includes a technology called PPTP (Point-to-Point Tunneling Protocol) built into Windows.
This gives a machine two IP addresses, one on the Internet, and a virtual one on the corporate
network. Example #2: IPsec enhances the traditional IP protocol with security. While VPN vendors
claim their products "enhance security", the reality is that they can decrease corporate security. While the
pipe itself is secure (authenticated, encrypted), either end of the pipe is wide open. A home
machine compromised with a backdoor rootkit allows a hacker to subvert the VPN connection, allowing
full, undetectable access to the other side of the firewall.
Encryption
Encryption is becoming increasingly popular. You have your choice of e-mail encryption (PGP,
S/MIME), file encryption (PGP again), or file system encryption (BestCrypt, PGP again).
Lures/honeypots
These are programs that pretend to be a service but do not advertise themselves. A honeypot can be something
as simple as one of the many BackOrifice emulators (such as NFR's Back Officer Friendly), or as
complex as an entire subnet of bogus systems installed for that purpose.


c) What are Intrusion Prevention Systems? Explain. [6]
An intrusion prevention system is a network security device that monitors network and/or system activities for malicious or unwanted behavior and can react, in real-time, to block or prevent those activities. Network-based IPS, for example, will operate in-line to monitor all network traffic for malicious code or attacks. When an attack is detected, it can drop the offending packets while still allowing all other traffic to pass. Intrusion prevention technology is considered by some to be an extension of intrusion detection (IDS) technology.
Intrusion prevention actually works, giving you time and protection to test and apply patches without frantic effort. Protecting the vulnerability works better than protecting against specific attacks, because it protects against unknown and changing attacks on the same vulnerability.
• Performance can scale to gigabit speeds even though all packets must be analyzed.
• Internal deployment of IPS can protect against walk-in worms, where people bring infected laptops into the building or network.
• IPS can also block peer-to-peer traffic, avoiding security risks and embarrassment.
• Management of IPS is surprisingly easy.

5.
c) What is Demilitarized Zone? Explain with a diagram. [6]
In a DMZ configuration, most computers on the LAN run behind a firewall connected to a public network like the Internet. One or more computers also run outside the firewall, in the DMZ. Those computers on the outside intercept traffic and broker requests for the rest of the LAN, adding an extra layer of protection for computers behind the firewall.
Traditional DMZs allow computers behind the firewall to initiate requests outbound to the DMZ. Computers in the DMZ in turn respond, forward or re-issue requests out to the Internet or other public network, as proxy servers do. (Many DMZ implementations, in fact, simply utilize a proxy server or servers as the computers within the DMZ.) The LAN firewall, though, prevents computers in the DMZ from initiating inbound requests.
DMZ is a commonly-touted feature of home broadband routers. However, in most instances these features are not true DMZs. Broadband routers often implement a DMZ simply through additional firewall rules, meaning that incoming requests reach the firewall directly. In a true DMZ, incoming requests must first pass through a DMZ computer before reaching the firewall.
A single firewall with at least 3 network interfaces can be used to create a network architecture containing a DMZ. The external network is formed from the ISP to the firewall on the first network interface, the internal network is formed from the second network interface, and the DMZ is formed from the third network interface. The firewall becomes a single point of failure for the network and must be able to handle all of the traffic going to the DMZ as well as the internal network.
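The following iptables fragment sketches the three-interface layout just described; the interface names and the assumption that the DMZ hosts a web server are illustrative:
# eth0 = external (ISP), eth1 = internal LAN, eth2 = DMZ
iptables -P FORWARD DROP                                           # deny everything by default
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT # allow return traffic
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT                      # LAN may initiate out to the Internet
iptables -A FORWARD -i eth0 -o eth2 -p tcp --dport 80 -j ACCEPT    # Internet may reach the DMZ web server
No rule allows the DMZ to initiate connections into the internal network, so the default DROP policy enforces the containment described above.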

7. Write short notes on the following:
b) Reverse Proxy [6]
A reverse proxy or surrogate is a proxy server that is installed within the neighborhood of one or more servers. Typically, reverse proxies are used in front of Web servers. All connections coming from the Internet addressed to one of the Web servers are routed through the proxy server, which may either deal with the request itself or pass the request wholly or partially to the main web servers.
There are several reasons for installing reverse proxy servers:
• Security: the proxy server may provide an additional layer of defense by separating or masquerading the type of server that is behind the reverse proxy. This configuration may protect the servers further up the chain - mainly through obfuscation.
• Encryption / SSL acceleration: when secure websites are created, the SSL encryption is sometimes not done by the Web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware.
• Load distribution: the reverse proxy can distribute the load to several servers, each server serving its own application area. In the case of reverse proxying in the neighborhood of Web servers, the reverse proxy may have to rewrite the URLs in each webpage (translation from externally known URLs to the internal locations).
• Caching static content: A reverse proxy can offload the Web servers by caching static content, such as images. Proxy caches of this sort can often satisfy a considerable amount of website requests, greatly reducing the load on the central web server. Sometimes referred to as a Web accelerator.
• Compression: the proxy server can optimize and compress the content to speed up the load time.
• Spoon feeding: a dynamically generated page can be produced all at once and served to the reverse-proxy, which can then return it to the client a little bit at a time. The program that generates the page is not forced to remain open and tying up server resources during the possibly extended time the client requires to complete the transfer
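As a minimal sketch, a reverse proxy of this kind can be configured with Apache's mod_proxy; the internal hostname is an assumption:
ProxyRequests Off
ProxyPass / http://internal-web.example.local/
ProxyPassReverse / http://internal-web.example.local/
ProxyPass forwards incoming requests to the internal server, while ProxyPassReverse rewrites the Location headers in responses so internal URLs are never exposed to the client.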
January-2006 [17]
1.
f) What is an application level firewall and why is it necessary? [4]
An application-level firewall is a firewall where one application-level (i.e., not kernel) process is used to forward each session that an internal user makes to a network resource on the public network.
Application-level firewalls are considered to be the most secure type of firewall, but they incur a significant performance penalty. The penalty arises because a new process must be started each time a user starts a new session -- for instance, by following a URL to a new World Wide Web site.

5.
a) In most of the campus/corporate networks, we find firewalls preceded by a router, but not the reverse. Can you explain why this has become almost a de-facto standard?
Quite simply, routers are faster than firewalls. A router is a relatively simple networking device designed solely to get packets from point A to point B. In terms of unit cost, it's generally much cheaper for a router to handle a packet than for a firewall to analyze it. Additionally, there are a lot of "junk" packets out there on the Internet, as a result of port scanning and other malicious activity.
With those facts in mind, most organizations choose to use a router as the first perimeter defense, implementing a simple rule set that blocks all unwanted traffic. For example, if the only acceptable inbound traffic is HTTPS and VPN activity, you could write a simple router rule set that allows those two ports (to any address) and blocks everything else. The firewall would then be responsible for more granular filtering, determining which specific hosts may receive HTTPS and/or VPN traffic, for example, and performing advanced analysis, such as stateful inspection and/or application-layer filtering.
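A border-router rule set of the kind described might look like this Cisco-style sketch (the list number and choice of IPsec ports are assumptions):
access-list 110 remark permit only HTTPS and IPsec VPN traffic
access-list 110 permit tcp any any eq 443
access-list 110 permit udp any any eq 500
access-list 110 permit esp any any
access-list 110 deny ip any any
The firewall behind the router then makes the more granular decision of which specific hosts may actually receive that HTTPS and VPN traffic.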
It's possible, however, to bypass this norm. One approach that I've seen attempted in smaller organizations is to use only a firewall, dropping the router entirely. In that scenario, the firewall performs routing functions for the network. The primary benefit to such an approach is that it simplifies the environment, providing only one device that must be managed. It's not, however, a scalable design, as the cost quickly becomes prohibitive as network throughput rises.





6. b) What are the three classes of intruders? Discuss any three metrics used in profile-based anomaly detection. Explain the architecture of a distributed intrusion detection system (with a suitable diagram) and name the various components. [10]
Intruders
The intruder is also referred to as a hacker or cracker. There are three classes of intruders:
• Masquerader: An individual who is not authorized to use the computer and who penetrates a system's access controls to exploit a legitimate user's account.
• Misfeasor: A legitimate user who accesses data, programs, or resources for which such access is not authorized.
• Clandestine user: An individual who seizes supervisory control of the system and uses this control to evade auditing and access controls or to suppress audit collection.
There are four general categories of attack:
• Interruption: An asset of the system is destroyed or becomes unavailable or unusable. This is an attack on availability; an example is destruction of a piece of hardware.
• Interception: An unauthorized party gains access to an asset. This is an attack on confidentiality. Examples include wiretapping to capture data in a network.
• Modification: An unauthorized party not only gains access to but tampers with an asset. This is an attack on integrity. Examples include changing values in a data file or altering a program so that it performs differently.
• Fabrication: An unauthorized party inserts counterfeit objects into the system. This is an attack on authenticity.

These attacks can also be classified as passive and active attacks. Active attacks are more damaging, and it is quite difficult to prevent active attacks absolutely, because doing so would require complete protection of all communications facilities and paths at all times.


July-2006 [34]
1.
c) How do two filtering routers make the screened subnet firewall the most secure configuration? [4]
3.
a) What are the basic techniques that are used by firewalls to control access and enforce the site’s security policy? [12]
The firewall design policy defines the rules used to implement the service access policy. Firewalls generally implement one of two basic design policies:
• Permit any service unless it is expressly denied.
• Deny any service unless it is expressly permitted.
A firewall that implements the first policy allows all services to pass into the site by default, with the exception of those services that the service access policy has identified as disallowed. A firewall that implements the second policy denies all services by default, but passes those services that have been identified as allowed. This second policy follows the classic access model used in all areas of information security.
The first policy is less desirable, since it offers more avenues for getting around the firewall. The second policy is stronger and safer, but it is more difficult to implement and may impact users more, because some services may have to be blocked or restricted more heavily. The effectiveness of the firewall system in protecting the network depends on the type of implementation, the use of proper firewall procedures, and the service access policy. The service access policy is the most significant component; the other components are used to implement and enforce it.
How firewalls tackle the security issues:
• Setting restrictions on packets traversing the firewall, based on protocol type, destination address, source address, port number, time of day, URLs, etc.
• Hiding the internal network numbering scheme (port address translation, network address translation).
• HTTP content filtering: Java, ActiveX, URL and keyword content.
• Scanning for viruses on incoming data streams.
The firewall design policy is generally either to deny all services except those that are explicitly permitted, or to permit all services except those that are explicitly denied. The former is more secure and is therefore preferred, but it is also more stringent and causes fewer services to be permitted by the service access policy. The firewall design policy should start with the most secure option, i.e., deny all services except those that are explicitly permitted. The following should be documented for an efficient firewall policy to be set up:
• Which Internet services the organization plans to use, e.g. TELNET, WWW, mail, NFS.
• Where the services will be used, e.g. on a local basis, across the Internet, dial-in from home, or from remote organizations.
• Additional needs such as encryption or dial-in support.
• The risks associated with providing these services and access.
• The cost, in terms of controls and impact on network usability, of providing protection.
• Assumptions about security versus usability: whether security wins out if a particular service is too risky or too expensive to secure.
Such a policy is framed in terms of the following parameters.
Assurance:
The firewall policy and configuration should be accurately documented, and the firewall devices must be subject to regular monitoring and yearly audits.
Identification and authentication:
Strong authentication systems are used for the incoming user connections from the Internet like one time passwords, challenge-response, and use of certificates. The administrative accounts also use encrypted login sessions or one-time password mechanisms.
Accountability and Audit:
• Firewall devices and proxy machines should be securely installed.
• All unnecessary services should be stopped in the operating system.
• The firewall logs should be archived on a dedicated server for at least one year and should be detailed in nature.
• Logs shall be automatically analyzed, with critical errors generating alarms.
Access Control:
• All Internet access from the corporate network must occur over proxies situated in a firewall.
• Classified content should not be sent out by mail or FTP.
• Default configuration: unless otherwise specified, services are forbidden (a sketch of such a default-deny rule set follows this list).
• All users are allowed to exchange e-mail with the Internet.
• R&D department users are allowed to use the World Wide Web and FTP (over proxies). Other users require authorization.
• Users may not provide services to the Internet.
• Research and development departments requiring full Internet access for experimental services should not install these services on the corporate network, but on a separate network outside the firewall.
• Users should not be able to log on directly to firewall machines.
• Internet access to illicit material should be prevented where possible.
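The default-deny stance above maps directly onto a packet filter; a minimal iptables sketch, where the single permitted service (SMTP for the e-mail exchange allowed to all users) is an assumption for the example:
iptables -P INPUT DROP                                             # forbid everything by default
iptables -P FORWARD DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow replies to outbound traffic
iptables -A INPUT -p tcp --dport 25 -j ACCEPT                      # explicitly permitted service: SMTP
Every service the policy permits becomes one explicit ACCEPT rule; everything else is dropped by the default policy.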

b) Which type of firewall does act as a relay of application level traffic? Explain, how it is better from other types of firewalls. [6]
Application-Level Gateway (A Proxy Server):
■ Acts as a relay of application-level traffic.
■ Does not support a particular application (feature), if the
gateways is not coded for this service

o Also called proxy server
o Acts as a relay of application-level traffic

Application-level Gateway
• Advantages:
o Higher security than packet filters
o Only need to scrutinize a few allowable applications
o Easy to log and audit all incoming traffic
• Disadvantages:
o Additional processing overhead on each connection (gateway as splice point)

5.
a) What are some of the attacks that can be made on packet filtering routers and their appropriate counter measures? [12]
IP address spoofing: The intruder transmits packets from the outside with a source IP address field containing the address of an internal host. The attacker hopes that the use of a spoofed address will allow penetration of systems that employ simple source address security, in which packets from specific trusted internal hosts are accepted. The countermeasure is to discard packets with an inside source address if the packet arrives on an external interface.
Source routing attacks: The source station specifies the route that a packet should take as it crosses the Internet, in the hope that this will bypass security measures that do not analyze the source routing information. The countermeasure is to discard all packets that use this option.
Tiny fragment attacks: The intruder uses the IP fragmentation option to create extremely small fragments and force the TCP header information into a separate packet fragment. This attack is designed to circumvent filtering decisions that depend on the first fragment of a packet; all subsequent fragments of that packet are filtered out solely on the basis that they are part of the packet whose first fragment was rejected. The attacker hopes that the filtering router examines only the first fragment and that the remaining fragments are passed through. A tiny fragment attack can be defeated by enforcing a rule that the first fragment of a packet must contain a predefined minimum amount of the transport header. If the first fragment is rejected, the filter can remember the packet and discard all subsequent fragments.
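The anti-spoofing countermeasure above reduces to a simple ingress rule. A sketch in iptables form, assuming eth0 is the external interface and 10.0.0.0/8 is the internal address range:
iptables -A INPUT -i eth0 -s 10.0.0.0/8 -j DROP      # internal source address arriving from outside: spoofed
iptables -A FORWARD -i eth0 -s 10.0.0.0/8 -j DROP
Legitimate internal traffic can never arrive on the external interface, so anything matching these rules is discarded unconditionally.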


c) Why can IP spoofing not be prevented by using Packet Filter Firewall Technique?
[5]
In IP spoofing, an attacker gains unauthorized access to a computer or a network by making it appear that a malicious message has come from a trusted machine by "spoofing" the IP address of that machine. A packet filter firewall makes its decisions from individual packet headers and has no way of verifying that the claimed source address is genuine; a spoofed packet whose source address matches a trusted host therefore satisfies the filter rules and is passed through.
July-2007 [18]
1.
g) A firewall’s basic task is to control traffic between computer networks with different zones of trust. What are the main categories of firewall with reference to the layers where the traffic can be intercepted? Define each category with example. [4]
With reference to the layers where the traffic can be intercepted, three main categories of firewalls exist:

• Network layer firewalls. An example would be iptables.
• Application layer firewalls. An example would be TCP Wrappers.
• Application firewalls. An example would be restricting FTP services through the /etc/ftpaccess file.




These network-layer and application-layer types of firewall may overlap, even though the personal firewall does not serve a network; indeed, single systems have implemented both together.
What is a Network-layer Firewall?

Network layer firewalls operate at a (relatively) low level of the TCP/IP protocol stack as IP-packet filters, not allowing packets to pass through the firewall unless they match the rules. The firewall administrator may define the rules; or default built-in rules may apply (as in some inflexible firewall systems). A more permissive setup could allow any packet to pass the filter as long as it does not match one or more "negative-rules", or "deny rules". Today network firewalls are built into most computer operating systems and network appliances. Modern firewalls can filter traffic based on many packet attributes like source IP address, source port, destination IP address or port, destination service like WWW or FTP. They can filter based on protocols, TTL values, netblock of originator, domain name of the source, and many other attributes.

What is an Application-layer Firewall?

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an application. They block other packets (usually dropping them without acknowledgement to the sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines.
By inspecting all packets for improper content, firewalls can even prevent the spread of the likes of viruses. In practice, however, this becomes so complex and so difficult to attempt (given the variety of applications and the diversity of content each may allow in its packet traffic) that comprehensive firewall design does not generally attempt this approach. The XML firewall exemplifies a more recent kind of application-layer firewall.
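As a small illustration of application-layer control, TCP Wrappers (given above as an example) is driven by two plain-text files; here we assume SSH should be reachable only from the local subnet:
/etc/hosts.allow:
sshd: 192.168.1.0/255.255.255.0
/etc/hosts.deny:
ALL: ALL
Connections are checked against hosts.allow first and then hosts.deny, so this pair admits SSH from the LAN and refuses every other wrapped service.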
What are Proxies?
A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, whilst blocking other packets. Proxies make tampering with an internal system from the external network more difficult and misuse of one internal system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application proxy remains intact and properly configured). Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to pass packets to a target network.

2.
c) What is Intrusion Detection System (IDS)? Briefly explain network based IDS and host based IDS. [6]
An intrusion detection system (IDS) monitors network traffic for suspicious activity and alerts the system or network administrator. In some cases the IDS may also respond to anomalous or malicious traffic by taking action such as blocking the user or source IP address from accessing the network.
IDS come in a variety of "flavors" and approach the goal of detecting suspicious traffic in different ways. There are network-based (NIDS) and host-based (HIDS) intrusion detection systems. There are IDS that detect based on looking for specific signatures of known threats - similar to the way antivirus software typically detects and protects against malware - and there are IDS that detect based on comparing traffic patterns against a baseline and looking for anomalies. There are IDS that simply monitor and alert, and there are IDS that perform an action or actions in response to a detected threat. We'll cover each of these briefly.
NIDS
Network Intrusion Detection Systems are placed at a strategic point or points within the network to monitor traffic to and from all devices on the network. Ideally you would scan all inbound and outbound traffic; however, doing so might create a bottleneck that would impair the overall speed of the network.
HIDS
Host Intrusion Detection Systems are run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from the device only, and will alert the user or administrator if suspicious activity is detected.
Signature Based
A signature based IDS will monitor packets on the network and compare them against a database of signatures or attributes from known malicious threats. This is similar to the way most antivirus software detects malware. The issue is that there will be a lag between a new threat being discovered in the wild and the signature for detecting that threat being applied to your IDS. During that lag time your IDS would be unable to detect the new threat.
Anomaly Based
An IDS which is anomaly-based will monitor network traffic and compare it against an established baseline. The baseline identifies what is "normal" for that network - what sort of bandwidth is generally used, what protocols are used, which ports and devices generally connect to each other - and the IDS alerts the administrator or user when traffic is detected which is anomalous, or significantly different, from the baseline.
5.
c) A firewall is an Information Technology (IT) security device which is configured to permit, deny or proxy data connections as set and configured by the organization’s security policy. What is a stateless and what is a stateful firewall? Explain. [8]
A firewall can be described as being either Stateful, or Stateless.
STATELESS
Stateless firewalls watch network traffic, and restrict or block packets based on source and destination addresses or other static values. They are not 'aware' of traffic patterns or data flows. A stateless firewall uses simple rule-sets that do not account for the possibility that a packet might be received by the firewall 'pretending' to be something you asked for.
STATEFUL
Stateful firewalls can watch traffic streams from end to end. They are aware of communication paths and can implement various IP Security (IPsec) functions such as tunnels and encryption. In technical terms, this means that a stateful firewall can tell what stage a TCP connection is in (open, open sent, synchronized, synchronization acknowledged or established), whether the MTU has changed, whether packets have fragmented, etc.
Neither is really superior, and there are good arguments for both types of firewall. Stateless firewalls are typically faster and perform better under heavier traffic loads.
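The difference shows up directly in rule sets. In iptables terms (illustrative only), a stateless filter that wants to admit replies to outbound web traffic must trust a static header value, whereas a stateful filter keys on the connection itself:
iptables -A INPUT -p tcp --sport 80 -j ACCEPT                      # stateless: trust anything claiming source port 80
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # stateful: only packets of connections we initiated
The first rule admits any packet with source port 80, including forged ones 'pretending' to be something you asked for; the second admits a packet only if it belongs to a tracked connection.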


January-2008 [10]
2.
a) What do you understand by a firewall? What is the packet filter? Explain the application level gateway mechanism in firewall to protect the vulnerable network. [10]
A firewall is a collection of devices (hardware and software) placed between two networks that collectively have the following properties:
• All traffic from inside to outside, and vice versa, must pass through the firewall.
• Only authorized traffic, as defined by the local security policy, will be allowed to pass.
• The firewall itself is immune to penetration.
A router can control (or filter) network traffic at the packet level. Packets are allowed or denied based on the source/destination address and the port number. This is called packet filtering. Application-layer firewalls, which should always be used in conjunction with packet filters, act as intermediaries between the two networks. A user must first connect to the firewall, and then to the desired external network. Due to the critical location of this machine (bridging the two networks) it must be carefully and skillfully constructed. This machine becomes the strong point, or bastion, facing the external network. For a user to gain outward access, the bastion host must have the desired application to execute. If the bastion machine lacks an application (perhaps a web browser, or FTP client) then the user is denied use of that application. This is called application-layer firewalling.
The Nature of Security
Security is used in at least two senses:
• a condition in which harm does not arise, despite the occurrence of threatening events; and
• a set of safeguards designed to achieve that condition.
Threatening events can be analysed into the following kinds:
• natural threats, commonly referred to in the insurance industry as Acts of God or Nature, e.g. fire, flood, lightning strike, tidal wave, earthquake, volcanic eruption;
• accidental threats:
o by humans who are directly involved, e.g. dropping something, tripping over a power-cord, failing to perform a manual procedure correctly, mis-coding information, mis-keying, failing to perform a back-up;
o by other humans, e.g. skiers causing an avalanche, transmission failure due to a back-hoe cutting a power cable, the insolvency, bankruptcy, or withdrawal of support by a key supplier;
o by machines and machine-designers, e.g. disk head-crash, electricity failure, software bug, air-conditioning failure; and
• intentional threats:
o by humans who are directly involved, e.g. sabotage, intentional capture of incorrect data, unjustified amendment or deletion of data, theft of backups, extortion, vandalism;
o by other humans, e.g. graffiti, vandalism, release of malicious software ('malware'), riot, terrorism, warfare.
The Nature of Information Security
For an information system to be secure, it must have a number of properties:
• service integrity. This is a property of an information system whereby its availability, reliability, completeness and promptness are assured;
• data integrity. This is a property whereby records are authentic, reliable, complete, unaltered and useable, and the processes that operate on them are reliable, compliant with regulatory requirements, comprehensive, systematic, and prevent unauthorized access, destruction, alteration or removal of records (AS 4390, 1996). These requirements apply to machine-readable databases, files and archives, and to manual records;
• data secrecy. This is a property of an information system whereby information is available only to those people authorised to receive it. Many sources discuss secrecy as though it were only an issue during the transmission of data, but it is just as vital in the context of data storage and data use. This property is often misleadingly referred to in the technical literature as `confidentiality', and even more misleadingly as `privacy'. I explain the appropriate usages for these terms elsewhere;
• authentication. Authentication is a property of an information system whereby assertions are checked. Forms of assertion that are subjected to authentication include:
o `data authentication', whereby captured data's authenticity, accuracy, timeliness, completeness and other quality aspects are checked;
o `identity authentication', whereby an entity's claim as to its identity is checked. This applies to all of the following:
 the identity of a person;
 the identity of an organisational entity;
 the identity of a software agent; and
 the identity of a device; and
o `attribute authentication', whereby an entity's claim to have a particular attribute is checked, typically by inspecting a `credential'. Of especial relevance in advanced electronic communications are claims of being an authorised agent, i.e. an assertion by a person, a software agent or a device that it represents an organisation or a person; and
• non-repudiation. This is a property of an information system whereby an entity is unable to convincingly deny an action it has taken.
There is a strong tendency in the information systems security literature to focus on the security of data communications. But security is important throughout the information life-cycle, i.e. during the collection, storage, processing, use and disclosure phases, as well as transmission. Each of the properties of a secure system identified above needs to be applied to all of the information life-cycle phases.
Attributes Of Information Security : Confidentiality, Integrity, Availability
"the right information to the right people at the right time".

These can be remembered by the mnemonic “CIA”
• Confidentiality
Confidentiality is the principle that information will not be disclosed to unauthorized subjects. Unauthorized Network sniffing is an example of a violation of confidentiality.
• Integrity
Integrity is trust that can be placed in the information. Data integrity is having assurance that the information has not been altered between its transmission and its reception. Data integrity can be compromised when information has been corrupted, willfully or accidentally, before it is read by its intended recipient.
• Availability
Availability means that information or resources are available when required. Most often this means that the resources are available at a rate which is fast enough for the wider system to perform its task as intended. It is certainly possible that confidentiality and integrity are protected, but an attacker causes resources to become less available than required, or not available at all. See Denial of Service.
Threats and Vulnerabilities: A threat is an unwanted (deliberate or accidental) event that may result in harm to an asset. Often, a threat exploits one or more known vulnerabilities. A threat could be the perception of insecurity; see also risk. A threat is also an explicit or implicit message from one person to another that the first will cause something bad to happen to the other, often unless certain demands are met. Often a weapon is used. Examples are robbery, kidnapping, hijacking, extortion and blackmail.
Threats are defined as specific activities that will damage the system, its facilities, or its patrons. Threats include any actions which detract from overall security. They range from the extreme of terrorist-initiated bombs or hostage-taking to more common events such as theft of services, pickpocketing and vandalism (damage). Those responsible for identifying and assessing threats and vulnerabilities must not only measure the degree of potential danger, but the chances of that particular danger actually occurring.

Vulnerability is defined as the susceptibility of the system to a particular type of security hazard. Vulnerabilities can be corrected, but risk analysis must be undertaken to determine which vulnerabilities take the highest priority.
Definitions: Risk = Threat × Vulnerability
• Being “at risk" is being exposed to threats.
• Risks are subjective -- the potential to incur consequences of harm or loss of target assets.
• A Risk Factor is the likelihood of resources being attacked.
• Threats are dangerous actions that can cause harm. The degree of threat depends on the attacker's Skills, Knowledge, Resources, Authority, and Motives.
• Vulnerabilities are weaknesses in victims that allow a threat to become effective.
Threats to Electronic Data
The list below is organized by intrusion/attack modality and dimension of involvement, with example attack types.
Interference (Active):
• Spam with junk mail (such as chain letters) to inform or annoy.
• Denial of Service: overwhelm (a la IP SYN flooding, mail bombing, or Smurf with ICMP Echo Requests); take advantage of software bugs (a la buffer overflow, Ping of Death, LAND).
• Bacteria: corrupt live data or destroy the boot sector; make backup data unrecoverable.
Interference (Passive):
• Worms are self-propagating malicious code that executes unauthorized computer instructions. They can infect any component (boot record, registry, .exe & .com program files, macro scripts, etc.). They do not destroy data.
• Viruses are worms that harm data.
• Rabbits: runaway applications that consume all resources (memory on machines or bandwidth on networks).
Interception of message stream (Active):
• Connection/session hijacking: an active Telnet session is seized.
• Spoofing: altering the DNS namespace to set up Web page redirection.
Interception of message stream (Passive):
• Eavesdropping with a wiretap: capture data in transit, using a packet sniffer (protocol analyzer) for network traffic analysis (see patterns in text flow, packet size, etc.).
• Compromised key: disseminate sensitive information for illicit gain or to embarrass organizations and individuals. [Sircam]
Impersonation (Active):
• IP address spoofing: a rogue site intercepts authenticated communications between legitimate users and presents altered content as legitimate.
• Man-in-the-middle spoofing: captured packets are tampered with and reinserted into an active session pipe.
• Crack (decrypt passwords and ciphertext by brute force or other means).
• Replay: reusing a captured authenticator.
• DDoS (Distributed Denial of Service) attack.
• DNS name server cache loading.
Impersonation (Passive):
• Trap doors (such as Sub7, the NetBus patch, or Back Orifice) to bypass normal security and allow unauthorized/undetected entry.
• Trojan horses inserted to reconfigure network settings or grant root access and permissions to an attacker.

Vulnerabilities, Threats, and Safeguards
A computer vulnerability is a weakness in an operating system, application code, or configuration that makes it possible for threats to exploit the system (or underlying network) thereby creating negative impact or damage.
Threats are entities that act upon vulnerabilities for the purpose of exploiting them. A threat may be an unauthorized user such as a hacker, or even a system administrator trying to obtain access above and beyond their authorized level of privilege. Errant application or system processes can also act as threats and could possibly erase valuable data if files and directories are not set with the correct permissions. Today's threats can prevent organizations from accomplishing their mission by causing significant downtime, altering information and inserting fraudulent information in its place, or removing and destroying information altogether. While it is clearly illegal to destroy data that does not belong to you, this has not stopped hackers from taking part in these irreverent and disruptive crimes.
Logic Bomb
- In a computer program, a logic bomb, also called slag code, is programming code, inserted surreptitiously or intentionally, that is designed to execute (or "explode") under circumstances such as the lapse of a certain amount of time or the failure of a program user to respond to a program command. It is in effect a delayed-action computer virus or Trojan horse. A logic bomb, when "exploded," may be designed to display or print a spurious message, delete or corrupt data, or have other undesirable effects.
Some logic bombs can be detected and eliminated before they execute through a periodic scan of all computer files, including compressed files, with an up-to-date anti-virus program. For best results, the auto-protect and e-mail screening functions of the anti-virus program should be activated by the computer user whenever the machine is online. In a network, each computer should be individually protected, in addition to whatever protection is provided by the network administrator. Unfortunately, even this precaution does not guarantee 100-percent system immunity.
Risks from viruses, Trojans and worms
Viruses, Trojan horses and worms are all computer programs that can infect computers.
Viruses and worms spread across computers and networks by making copies of themselves, usually without the knowledge of the computer user.
A Trojan horse is a program that appears to be legitimate but actually contains another program or block of undesired malicious, destructive code, disguised and hidden in a block of desirable code. Trojans can be used to infect a computer with a virus.
A back-door Trojan is a program that allows a remote user or hacker to bypass the normal access controls of a computer and gives them unauthorised control over it. Typically a virus is used to place the back-door Trojan onto a computer, and once the computer is online, the person who sent the Trojan can run programs on the infected computer, access personal files, and modify and upload files.
Risks to e-commerce systems
While some viruses are merely irritants, others can have extremely harmful effects. Some of the threats that they pose to e-commerce systems include:
• corrupting or deleting data on the hard disk of your server
• stealing confidential data by enabling hackers to record user keystrokes
• enabling hackers to hijack your system and use it for their own purposes
• using your computer for malicious purposes, such as carrying out a denial-of-service (DoS) attack on another website
• harming customer and trading partner relationships by forwarding viruses to them from your own system
What is a computer virus?
Computer viruses are software programs deliberately designed to interfere with computer operation, record, corrupt, or delete data, or spread themselves to other computers and throughout the Internet, often slowing things down and causing other problems in the process.

Malicious Programs
• Need a host program:
- trap doors
- logic bombs
- Trojan horses
- viruses
• Independent:
- bacteria
- worms
Trap Doors : A secret entry point to a program or system that lets someone get in without going through the usual security access procedures.
A trap door typically recognizes some special sequence of inputs or a special user ID (a schematic sketch follows).
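To make the idea concrete, here is a minimal, purely illustrative Python sketch (not from the original notes): a hypothetical authenticate function in which a hard-coded user ID bypasses the normal credential check. All names and values here are invented for illustration.

import hashlib
import hmac

# Toy password store: username -> salted SHA-256 digest (illustrative only).
_STORE = {"alice": hashlib.sha256(b"salt" + b"secret").hexdigest()}

def check_password_store(username, password):
    # Normal path: verify the supplied password against the stored digest.
    expected = _STORE.get(username)
    if expected is None:
        return False
    digest = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    return hmac.compare_digest(digest, expected)

def authenticate(username, password):
    # Trap door: a hypothetical hard-coded user ID skips the check entirely.
    if username == "maint_backdoor":
        return True
    return check_password_store(username, password)

Because the bypass lives in ordinary application code, it is invisible to access-control configuration and is typically found only by code review or auditing.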
Logic Bomb : Embedded in some legitimate program. “Explode” when certain conditions are met.
Trojan Horses : Hidden in an apparently useful host program. Perform some unwanted/harmful function when the host program is executed.
Worms and Bacteria :
Worms : Use network connections to spread from system to system
Bacteria : No explicit damage; they just replicate
Types of Viruses :
Parasitic virus : search and infect executable files
Memory-resident virus : lodges in main memory and infects running programs
Boot sector virus : spreads whenever the system is booted
Stealth virus
Back Door :
- A back door is a means of access to a computer program that bypasses security mechanisms. A programmer may sometimes install a back door so that the program can be accessed for troubleshooting or other purposes. However, attackers often use back doors that they detect or install themselves, as part of an exploit. In some cases, a worm is designed to take advantage of a back door created by an earlier attack. For example, Nimda gained entrance through a back door left by Code Red.
Definitions
• Security tools and toolkits are designed to be used by security professionals to protect their sites. These can also be used by unauthorized individuals to probe for weaknesses. Many of the programs that fall in the malware categories below have benevolent uses. For example, worms can be used to distribute computation on idle processors; back doors are useful for debugging programs; and viruses can be written to update source code and patch bugs. The purpose, not the approach, makes a program malicious.
• Back doors, sometimes called trap doors, allow unauthorized access to your system.
• Logic bombs are programmed threats that lie dormant for an extended period of time until they are triggered; at this point, they perform a function that is not the intended function of the program in which they are contained. Logic bombs usually are embedded in programs by software developers who have legitimate access to the system.
• Viruses are "programs" that modify other programs on a computer, inserting copies of themselves. Viruses are not distinct programs - they cannot run on their own, and need to have some host program, of which they are a part, executed to activate them.
• Worms are programs that propagate from computer to computer on a network, without necessarily modifying other programs on the target machines. Worms can run independently and travel from machine to machine across network connections; worms may have portions of themselves running on many different machines. Worms do not change other programs, although they may carry other code that does (for example, a true virus).
• Trojan horses are programs that appear to have one function but actually perform another function. Trojan horses are named after the Trojan horse of myth. Analogous to their namesake, modern-day Trojan horses resemble a program that the user wishes to run - a game, a spreadsheet, or an editor. While the program appears to be doing what the user wants, it is also doing something else unrelated to its advertised purpose, and without the user's knowledge.
• Bacteria, or rabbit programs, make copies of themselves to overwhelm a computer system's resources. Bacteria do not explicitly damage any files. Their sole purpose is to replicate themselves. A typical bacteria program may do nothing more than execute two copies of itself simultaneously on multiprogramming systems, or perhaps create two new files, each of which is a copy of the original source file of the bacteria program. Both of those programs then may copy themselves twice, and so on. Bacteria reproduce exponentially, eventually taking up all the processor capacity, memory, or disk space, denying the user access to those resources.
• A dropper is a program that is not a virus, nor is it infected with a virus, but when run it installs a virus into memory, on to the disk, or into a file. Droppers have been written sometimes as a convenient carrier for a virus, and sometimes as an act of sabotage. Some anti-virus programs try to detect droppers.
Virus Varieties
Stealth Virus
A stealth virus has code in it that seeks to conceal itself from discovery or defends itself against attempts to analyze or remove it. The stealth virus adds itself to a file or boot sector but, when you examine it, the file appears normal and unchanged. The stealth virus performs this trickery by staying in memory after it is executed. From there, it monitors and intercepts your system calls. When the system seeks to open an infected file, the stealth virus displays the uninfected version, thus hiding itself.
Macro Viruses
Macro languages are (often) equal in power to ordinary programming languages such as C. A program written in a macro language is interpreted by the application. Macro languages are conceptually no different from so-called scripting languages: GNU Emacs uses Lisp, and most Microsoft applications use Visual Basic Script as their macro language. The typical use of a macro in applications, such as MS Word, is to extend the features of the application. Some of these macros, known as auto-execute macros, are executed in response to some event, such as opening a file, closing a file, starting an application, or even pressing a certain key. A macro virus is a piece of self-replicating code inserted into an auto-execute macro. Once such a macro is running, it copies itself to other documents, deletes files, and so on. Another type of hazardous macro is one named for an existing command of the application. For example, if a macro named FileSave exists in the "normal.dot" template of MS Word, that macro is executed whenever you choose the Save command on the File menu. Unfortunately, there is often no way to disable such features.
Unix/Linux Viruses
The most famous of the security incidents in the last decade was the Internet Worm incident, which began from a Unix system. Unix systems were long considered virus-immune, but this is not so. Several Linux viruses have been discovered. The Staog virus first appeared in 1996 and was written in assembly language by the VLAD virus writing group, the same group responsible for creating the first Windows 95 virus, called Boza.
Like the Boza virus, the Staog virus is a proof-of-concept virus to demonstrate the potential of Linux virus writing without actually causing any real damage. Still, with the Staog assembly language source code floating around the Internet, other virus writers are likely to study and modify the code to create new strains of Linux viruses in the future.
The second known Linux virus is called the Bliss virus. Unlike the Staog virus, the Bliss virus not only can spread in the wild, but also possesses a potentially dangerous payload that could wipe out data.
While neither virus is a serious threat to Linux systems, Linux and other Unix systems will not remain virus-free. Fortunately, Linux virus writing is more difficult than macro virus writing for Windows, so the greatest virus threat still remains with Windows.
The Information Security Process
(1) Scope Definition
A Security Strategy and Plan needs to be sculpted to the context. The first step in the process is the definition of its scope, with reference to the following:
• the set of stakeholders. A checklist is provided in AS/NZS 4360 (1999, p.28), and an improved one at Appendix 2;
• the proxies that represent those stakeholders;
• the interests of those stakeholders;
• the degree of importance of security in the organisation's business strategy, e.g. in relation to business continuity, and the accessibility and/or secrecy of various categories of data;
• the degree of importance of public visibility of assurance of the system's security; and
• legal requirements to which the organisation and its stakeholders are subject, including contracts with customers and other parties, data protection and privacy statutes, intellectual property laws, occupational health and safety, the laws of evidence, and common law obligations such as the duty of confidence, and the duty of care inherent in the tort of negligence.
It is highly desirable that the scope definition be formalised, and that relevant executives be exposed to it, and commit to it. It then sets the framework within which the subsequent phases unfold.
(2) Threat Assessment
A stocktake needs to be undertaken of the information and processes involved, their sensitivity from the perspectives of the various stakeholders, and their attractiveness to other parties. This needs to be followed by analysis of the nature, source and situation of threats.
Threats are of a variety of kinds, including access to data by unauthorised persons, disclosure of data to others, its alteration, and its destruction.
The sources of the threats include several categories of entities:
• a person who has authorisation to access the data, but for a purpose different from that for which they use it;
• an intruder, who has no authorisation to access the data, including:
o an interceptor of data during a transmission; and
o a 'cracker' who gains access to data within storage; and
• an unauthorised recipient of data from an intruder.
Categorisations of intentional threats to facilities are to be found in Neumann (1995, reproduced in Appendix 3) and Anderson (2001).
The situations of the threats include several categories of locations:
• within manual processes, content and data storage;
• within the physical premises housing facilities connected with the system; and
• within the organisation's computing and communications facilities, including:
o data storage, including:
- permanent storage, such as hard disk, including high-level cache;
- transient storage, such as RAM, including low-level cache and video RAM;
- archival storage;
o software that:
- receives data;
- stores data (e.g. a file-handler or database manager);
- renders data (e.g. a viewer or player);
- despatches data; and
- enables access to the data, in any of the above storage media (e.g. disk utilities and screen-scrapers); and
o transmission, including via:
- discrete media (e.g. diskettes, CD-ROMs); and
- electronic transmission over local area and wide area networks;
• within other people's computing and communications facilities, e.g.:
o a workstation on a trusted network that is cracked by an intruder;
o a powerful computer that is cracked, and that is used to crack one or more passwords on the organisation's computers;
o one or more weakly protected machines that are cracked and then used to launch denial of service (DoS) or distributed denial of service (DDoS) attacks against the organisation's servers or networks;
• within supporting infrastructure, including electrical supplies, air-conditioning, and fire protection systems.
(3) Vulnerability Assessment
The existence of a threat does not necessarily mean that harm will arise. For example, it is not enough for there to be lightning in the vicinity. The lightning has to actually strike something that is relevant to the system. Further, there has to be some susceptibility within the system, such that the lightning strike can actually cause harm. The purpose of the Vulnerability Assessment is to identify all such susceptibilities to the identified threats, and the nature of the harm that could arise from them.
It is common for vulnerabilities to be countered by safeguards. For example, safeguards against lightning strikes on a facility include lightning rods on the building in which it is housed. Safeguards may also exist against threatening events occurring in situations remote to the system in question. For example, a lightning strike on a nearby electricity substation may result in a power surge, or a power outage, in the local facility. This may be safeguarded against by means of a surge protector and an Uninterruptible Power Supply (UPS).
Every safeguard creates a further round of vulnerabilities, including susceptibilities to threats that may not have been previously considered. For example, a UPS may fail because the batteries have gone flat and not been subjected to regular inspections, or because its operation is in fact dependent on the mains supply not failing too quickly, and has never been tested in such a way that that susceptibility has become evident.
(4) Risk Assessment
The term 'risk' is used in many different senses (including as a synonym for what was called above 'threat', and 'harm', and even 'vulnerability'!). But when security specialists use the word 'risk', they have a very specific meaning for it: a measure of the likelihood of harm arising from a threat.
Risk assessment builds on the preceding analyses of threats and vulnerabilities, by considering the likelihood of threatening events occurring and impinging on a vulnerability. More detailed discussion is to be found in AS/NZS 4360 (1999).
In most business contexts, the risk of each particular harmful outcome is not all that high. The costs of risk mitigation, on the other hand, may be very high. Examples of the kinds of costs involved include:
• the time of managers, for planning and control;
• the time of operational staff and computer time, for regular backups;
• the loss of service to clients during backup time;
• additional media, for storing software and data;
• the time of operational staff, for training;
• duplicated hardware and networks; and
• contracted support from alternative 'hot-sites' or 'warm-sites'.
Risks have varying degrees of likelihood, have varying impacts if they do happen, and it costs varying amounts of time and money in order to establish safeguards against the threatening events or against the harm arising from a threatening event.
The concept of 'absolute security' is a chimera; it is of the nature of security that risks have to be managed. It is therefore necessary to weigh up the threats, the risks, the harm arising, and the cost of safeguards. A balance must be found between predictable costs and uncertain benefits, in order to select a set of measures appropriate to the need.
The aim of risk assessment is therefore to determine the extent to which expenditure on safeguards is warranted in order to provide an appropriate level of protection against the identified threats.
(5) Risk Management Strategy and Security Plan
A range of alternative approaches can be adopted to each threat. These comprise:
• Proactive Strategies. These are:
o Avoidance, e.g. non-use of a risk-prone technology or procedure;
o Deterrence, e.g. signs, threats of dismissal, publicity for prosecutions;
o Prevention, e.g. surge protectors and backup power sources; quality equipment, media and software; physical and logical access control; staff training, assigned responsibilities and measures to sustain morale; staff termination procedures;
• Reactive Strategies. These are:
o Detection, e.g. fire and smoke detectors, logging, exception reporting;
o Recovery, e.g. investment in resources, procedures/documentation, staff training, and duplication including 'hot-sites' and 'warm-sites';
o Insurance, e.g. policies with insurance companies, fire extinguishing apparatus, mutual arrangements with other organisations, maintenance contracts with suppliers, escrow of third party software, inspection of escrow deposits;
• Non-Reactive Strategies. These are:
o Tolerance, i.e. 'it isn't worth the worry' / 'cop it sweet';
o Graceless Degradation, e.g. siting a nuclear energy company's headquarters adjacent to the power plant, on the grounds that if it goes, the organisation and its employees should go with it.
Devising a risk management strategy involves the following:
• selection of a mix of measures that reflects the outcomes of the preceding threat and risk assessments. The measures need to comprise:
o technical safeguards. These are variously of a preventative nature, support the detection of the occurrence of threatening events, enable the investigation of threatening events, and monitor the environment for signs of possible future threatening events. Categorisations of technical safeguards are to be found in AS/NZS 4444-1 in chapters 6-10. An outline is provided in Appendix 4; and
o policies and procedures. These are organisational features, in the form of structural arrangements, responsibility assignment, and process descriptions;
• formulation of a Security Plan, whereby the safeguards and the policies and procedures will be put into place;
• resourcing of the Security Plan;
• devising and implementing controls, to detect security incidents and investigate and address them, and to monitor whether all elements of the Security Plan are in place and functioning;
• embedding of audit processes, in order to periodically evaluate the safeguards, the policies and procedures, the actual practices that are occurring, and the implementation of the planned controls.
(6) Security Plan Implementation
The process of implementing the Security Plan must be subjected to strong project management. Policies need to be expressed and communicated. Manual procedures need to be variously modified and created, in order to comply with the strategy and policy. Safeguards need to be constructed, tested and cut over.
Critically, implementation of a Security Plan also requires the development of awareness among staff, education in the generalities, and training in the specifics of the attitudes and actions required of them. This commonly involves a change in organisational culture, which must be achieved, and then sustained.
(7) Security Audit
No strategy is complete without a mechanism whereby review is precipitated periodically, the need for adaptation detected, and appropriate actions taken.
To be effective, audit must be comprehensive, rather than being limited to specific aspects of security; and it must follow through the entire organisation and its activities rather than being restricted to examinations of technical safeguards. Needless to say, this is heavily dependent on real commitment to the security strategy by executives and managers.
Introduction to Security Policies
Policies allow organizations to set practices and procedures in place that will reduce the likelihood of an attack or an incident and will minimize the damage that such an incident can cause, should one occur.

What is a Policy?
The nicest definition for 'policy' that I could find is from the American Heritage Dictionary of the English language. It reads:
"A plan or course of action, as of a government, political party, or business, intended to influence and determine decisions, actions, and other matters"
In practical security terms, I define a policy as a published document (or set of documents) in which the organization's philosophy, strategy, policies and practices with regard to confidentiality, integrity and availability of information and information systems are laid out.
Thus, a policy is a set of mechanisms by means of which your information security objectives can be defined and attained. Let's take a moment to briefly examine each of these concepts. First, we have the information security objectives:

* Confidentiality is about ensuring that only the people who are authorized to have access to information are able to do so. It's about keeping valuable information only in the hands of those people who are intended to see it.
* Integrity is about maintaining the value and the state of information, which means that it is protected from unauthorized modification. Information only has value if we know that it's correct. A major objective of information security policies is thus to ensure that information is not modified or destroyed or subverted in any way.
* Availability is about ensuring that information and information systems are available and operational when they are needed. A major objective of an information security policy must be to ensure that information is always available to support critical business processing.
These objectives are globally recognized as being characteristic of any secure system.
Having broadly defined the reasons for implementing a security policy, we can now discuss the mechanisms through which these objectives can be achieved, namely:

Philosophy : This is the organization's approach towards information security, the framework, the guiding principles of the information security strategy. The security philosophy is a big umbrella under which all other security mechanisms should fall. It will explain to future generations why you did what you did.

Strategy : The strategy is the plan, or project plan, of the security philosophy: a measurable plan detailing how the organization intends to achieve the objectives that are laid out, either implicitly or explicitly, within the framework of the philosophy.

Policies : Policies are simply rules. They're the dos and the don'ts of information security, again, within the framework of the philosophy.
Practices : Practices simply define the how of the organization's policy. They are a practical guide regarding what to do and how to do it.
A security policy is a high-level management document to inform all users of the goals of and constraints on using a system.
A policy document is written in broad enough terms that it does not change frequently. The information security policy is the foundation upon which all protection efforts are built. The key role of a security policy is to act as a vehicle for informing people about the rules for accessing organizational technology and information assets.
A security policy must be understood by all as a directive from management.
Contents of any Security Policy:
1. Purpose
– Recognizing sensitive assets
– Clarifying security responsibilities
– Promoting awareness for existing employees
– Guiding new employees
2. People: A security policy addresses several different people with different expectations.
It identifies which people are responsible for implementing the security requirements.
Different users play different roles
• Personal User : Responsible for the security of their own machines.
• Project leader : Responsible for the security of the data and the entire system.
• Managers : Responsible for supervising the security implementations.
• Database Administrator : Responsible for the access to and integrity of data in the database.
• Personnel (HR) Staff : Responsible for arranging training and awareness programs.
3. Protected resources: Through risk analysis, we identify the assets to be protected. The policy must lay down the items it addresses, i.e., it must contain the list of items/assets to be protected.
4. The security policy should state what degree of protection should be provided to which kinds of resources.
5. The policy should also indicate who should have access to the protected items. It must also indicate how that access can be ensured and how unauthorized people will be denied access.
Characteristics of a Good Security Policy : A poorly written policy cannot guide developers and users in providing appropriate security mechanisms to protect important assets.
Certain characteristics make a policy a good one:
• Coverage : A security policy must be comprehensive. The policy must either apply to or explicitly exclude all possible situations. It must be general enough to apply to new cases that occur unexpectedly.
• Durability : A security policy must grow and adapt well. It must survive the system's growth and expansion without change. It must be flexible.
• Realism : It must be realistic. Implementation of the policy must be beneficial in terms of time, cost and convenience.
• Usefulness : It must be clear, direct and succinct. An obscure or incomplete security policy will not be easy to implement. It must be written in such a way that it can be understood and followed by everyone who must implement it or is affected by it.

Importance of security policies :
1. The development of a policy includes the ancillary benefit of making upper management aware of and involved in information security, which can only increase the level of security throughout the company.
2. They Provide a Paper Trail in Cases of Due Diligence : In many cases the only way you can prove due diligence in this regard is by referring to your published policies. Because policy reflects the philosophy and strategy of your company's management it is fair proof of the company's intention regarding information security.
3. They Exemplify an Organization's Commitment to Security : Because a policy is typically published, and because it represents an executive decision, a policy may be just what is needed to convince a potential client, merger partner or investor that you take security seriously. Increasingly, companies are requesting proof of sufficient levels of security from the parties they link with to do business.
4. They Form a Benchmark for Progress Measurement : Policy reflects the philosophy and strategy of management with regard to information security. As such it is the perfect standard against which technology and other security mechanisms can be measured. An information security policy thus serves as a measure by which responsible behavior can be tested and transgressions suitably punished.
5. They help ensure consistency : A well-implemented policy helps to ensure consistency in your security systems by giving a directive and clearly assigning responsibility and, equally important, by stipulating the consequences of failing to fulfill those responsibilities.
6. They Serve as a Guide to Information Security : A well-designed policy can become an IT administrator's Bible. An IP network security policy, for example, will ensure that machines are always installed in a part of the network that offers a level of security appropriate to the role of the machine and the information it hosts.
7. They Define Acceptable Use : By clearly defining what can and cannot be done by users, by pre-establishing security standards, and by ensuring that all users are educated to these standards, the company places the onus of responsibility on users, who can no longer plead ignorance in case of a transgression of the policy.
8. They Give Security Staff the Backing of Management : Often security staff face resentment and opposition from people in more senior positions than themselves. The policy, as a directive from top management, empowers security staff to enforce decisions that may not be popular amongst system users. Armed with a policy, your security administrators can do their jobs without having to continuously justify themselves.

Security Audit
The word "audit" can send shivers down the spine of the most battle-hardened executive. It means that an outside organization is going to conduct a formal written examination of one or more crucial components of the organization. Financial audits are the most common examinations a business manager encounters. This is a familiar area for most executives: they know that financial auditors are going to examine the financial records and how those records are used. They may even be familiar with physical security audits. However, they are unlikely to be acquainted with information security audits; that is, an audit of how the confidentiality, availability and integrity of an organization's information is assured. They should be. An information security audit is one of the best ways to determine the security of an organization's information without incurring the cost and other associated damages of a security incident.

What is a Security Audit?
You may see the phrase "penetration test" used interchangeably with the phrase "computer security audit". They are not the same thing. A penetration test (also known as a pen-test) is a very narrowly focused attempt to look for security holes in a critical resource, such as a firewall or Web server. Penetration testers may only be looking at one service on a network resource. They usually operate from outside the firewall with minimal inside information in order to more realistically simulate the means by which a hacker would attack the site.
On the other hand, a computer security audit is a systematic, measurable technical assessment of how the organization's security policy is employed at a specific site. Computer security auditors work with the full knowledge of the organization, at times with considerable inside information, in order to understand the resources to be audited.
Security audits do not take place in a vacuum; they are part of the on-going process of defining and maintaining effective security policies. This is not just a conference room activity. It involves everyone who uses any computer resources throughout the organization. Given the dynamic nature of computer configurations and information storage, some managers may wonder if there is truly any way to check the security ledgers, so to speak. Security audits provide such a tool, a fair and measurable way to examine how secure a site really is.
Computer security auditors perform their work through personal interviews, vulnerability scans, examination of operating system settings, analyses of network shares, and historical data. They are concerned primarily with how security policies - the foundation of any effective organizational security strategy - are actually used. There are a number of key questions that security audits should attempt to answer:
* Are passwords difficult to crack?
* Are there access control lists (ACLs) in place on network devices to control who has access to shared data?
* Are there audit logs to record who accesses data?
* Are the audit logs reviewed?
* Are the security settings for operating systems in accordance with accepted industry security practices?
* Have all unnecessary applications and computer services been eliminated for each system?
* Are these operating systems and commercial applications patched to current levels?
* How is backup media stored? Who has access to it? Is it up-to-date?
* Is there a disaster recovery plan? Have the participants and stakeholders ever rehearsed the disaster recovery plan?
* Are there adequate cryptographic tools in place to govern data encryption, and have these tools been properly configured?
* Have custom-built applications been written with security in mind?
* How have these custom applications been tested for security flaws?
* How are configuration and code changes documented at every level? How are these records reviewed and who conducts the review?
Security Policy, Procedures, and Practices
Security Policy
A security policy is concerned with the following issues:
• high-level description of the technical environment of the site, the legal environment (governing laws), the authority of the policy, and the basic philosophy to be used when interpreting the policy
• risk analysis that identifies the site's assets, the threats that exist against those assets, and the costs of asset loss
• guidelines for system administrators on how to manage systems
• definition of acceptable use for users
• guidelines for reacting to a site compromise (e.g., how to deal with the media and law enforcement, and whether to trace the intruder or shut down and rebuild the system)
Security-Related Procedures
Procedures are specific steps to follow that are based on the computer security policy. Procedures address such topics as retrieving programs from the network, connecting to the site's system from home or while traveling, using encryption, authentication for issuing accounts, configuration, and monitoring.
Security Practices
Checklists and general advice on good security practices are readily available. Below are examples of commonly recommended practices:
• Ensure all accounts have a password and that the passwords are difficult to guess. A one-time password system is preferable.
• Use tools such as MD5 checksums (8), a strong cryptographic technique, to ensure the integrity of system software on a regular basis (a minimal sketch follows this list).
• Use secure programming techniques when writing software. These can be found at security-related sites on the World Wide Web.
• Be vigilant in network use and configuration, making changes as vulnerabilities become known.
• Regularly check with vendors for the latest available fixes and keep systems current with upgrades and patches.
• Regularly check on-line security archives, such as those maintained by incident response teams, for security alerts and technical advice.
• Audit systems and networks, and regularly check logs. Many sites that suffer computer security incidents report that insufficient audit data is collected, so detecting and tracing an intrusion is difficult.
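As a concrete illustration of the checksum practice above, here is a minimal Python sketch (my own illustration, not from the original notes; the path /bin/ls is just an example): compute a digest of a system binary on a known-good system, store it, and periodically recompute and compare. In modern practice a stronger algorithm such as SHA-256 would be preferred over MD5.

import hashlib

def file_digest(path, algorithm="md5"):
    # Hash the file in chunks so large binaries do not exhaust memory.
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Record a baseline once, on a known-good system...
baseline = file_digest("/bin/ls")
# ...then periodically recompute and compare against the stored baseline.
if file_digest("/bin/ls") != baseline:
    print("WARNING: /bin/ls has changed since the baseline was recorded")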
How It Works
Usually, the task is performed randomly by an impartial third party to avoid personal involvement. It is important that you do not notify the party or facility being audited, since this could give them time to “fix the books.”
The auditor should have a written document from the owner/management company stating the purpose of the process (to ensure proper operating procedures are followed), an authorization for access to the information he will need, and a statement giving the manager permission to allow unlimited access to the physical plant and records. The auditor should provide the client a written description of the methods he will use, an estimate of the time frame necessary to complete the evaluation, and a disclosure statement ensuring the owner complete confidentiality of the audit results.
Once the audit begins, the auditor will follow the outlined procedures. He should not discuss any of the process with the manager other than what is necessary to perform the audit or discuss the results with anyone other than the client. He should also have detailed notes on any discovered discrepancies. The auditor should make it a point to ease the anxiety of the person being audited. However, if he concludes early in the process that unlawful acts are being committed, he should immediately notify the client to act on the discovery.
An auditor should never accuse a person of stealing, even if evidence is overwhelming. His job is merely to review the facility’s systems, not confront employees—that is the duty of the owner or supervisor. On the other hand, if results indicate an employee is honest and trustworthy and follows proper procedures, the auditor can let him know the results are good. This provides positive reinforcement and alleviates any anxiety the employee may have. The owner should acknowledge the positive feedback as well and perhaps reward staff with an appropriate token of appreciation.

















Computers use numbers to represent letters. Therefore, to encode a message, all we really need to do is encode the number that represents the message. To encode numbers, we have to come up with two formulae: one which will encode the number and one which will decode it. However, in the case of public key encryption, the encoding formula is given out freely to everyone. This means that the formulae should be chosen such that anyone with the encrypting formula cannot compute the decrypting formula. For example, if we had an encrypting formula such as
e = m+1
where e is the encrypted number and m is the original message, it would be very easy for anyone with this formula to come up with
m = e-1
to decrypt the number. For public key encryption, we use a more sophisticated formula.
e = m^k mod n
where e is the encrypted number and m is the original number. Then, for decrypting the number, we simply pass e into the formula
m = e^d mod n
to get back to m. What makes this encryption hard to break is the fact that it is nearly impossible to derive d by simply knowing n and k.
A Step-by-Step Approach to Creating a Public Key
First of all, we want to come up with numbers n and k to create our encryption formula. To do this:
Step 1: To create a public key, we start by getting two very large prime numbers, p and q.
Step 2: From these two numbers we can calculate n easily with the formula
n = p*q.
Now we have a range of numbers R = {1,2,3,...,n}. However, we can only encode numbers that are relatively prime to n. That is,
gcd(m,n) = 1
where m is the number we want to encode. There are φ(n) such numbers in R, where φ(n), the result of the Euler phi function, is simply an integer. For example, with n = 12, the members of R that are relatively prime to 12 are {1, 5, 7, 11}, so φ(12) = 4.
Step 3: Now we need some number k. We can choose nearly any k that we want; however, it must follow two constraints. First, k must be relatively prime to φ(n). That is,
gcd(k, φ(n)) = 1.
Second, k must be large enough so that
m^k > n
for any m that we are encoding. If
m^k < n,
then the encrypting function,
e = m^k mod n,
will simply return m^k. The code would not be secure in this case because anyone could take the kth root of e to get back to m. For example, say we are encoding the number 4, and n is 25. So, m = 4 and n = 25.
If we choose k = 2, then the encoded message is:
e = m^k mod n
= 4^2 mod 25
= 16 mod 25
= 16
This can be easily cracked since all that needs to be done is to take the square root of e to get back to m.
m = sqrt(e)
= sqrt(16)
= 4
On the other hand, if k = 3, then
e = m^k mod n
= 4^3 mod 25
= 64 mod 25
= 14
This can't be easily cracked, since if we take the cube root of 14 we don't get 4, but something closer to 2.41.
Thus, we have now computed n from two prime numbers p and q, and chosen k. We should not be able to derive our secret decoding number d directly from k and n, as this would essentially make our code breakable (recall that we are making k and n public so anyone can encode messages for us). We now have to compute d, another number, so that we can decrypt our messages. To compute d, we will need to use numbers that we already have, such as p, q, and φ(n), all of which should be kept secret.
Step 4: The property that we need d to exhibit is that
k*d mod φ(n) = 1.
This allows the decoding formula to undo the encryption performed by the encoding formula. To calculate d, we first compute the gcd of φ(n) and k using Euclid's algorithm. We then use back-substitution, working from the bottom up line by line, as in the back-substitution example below. We continue until we reach the point where we have
(i)(k)=(j)((n)) + 1
where i and j are integers. In the case of the example, i = -23, k = 13, j = -5, and φ(n) = 60, so we have
(-23)(13) = (-5)(60) + 1
Once we reach this point we have one last thing to check. If i is negative (as in our example), then our value for d is:
d = (n) + i = 60 + (-23) = 37
Otherwise we simply take d to be:
d = i.
The reason this value for d can decode our messages encrypted by k is shown by the Euler-Fermat theorem.

Back-Substitution in Euclidean Algorithm.

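The back-substitution above is also easy to automate. Here is a minimal Python sketch (my own illustration, not part of the original article) of the extended Euclidean algorithm; with k = 13 and φ(n) = 60 it returns d = 37, matching the worked example:

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def compute_d(k, phi_n):
    g, i, _ = egcd(k, phi_n)
    if g != 1:
        raise ValueError("k must be relatively prime to phi(n)")
    # If i is negative, adding phi(n) gives the positive d, as in the text.
    return i % phi_n

print(compute_d(13, 60))   # -> 37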
Now you should be able to choose two prime numbers, p and q, and a number k, and calculate n, φ(n), and d.

NOTE: Because we calculated d using φ(n), d remains secret: it is very hard to calculate d from k and n, the two public numbers, alone. Given only k and n, we would have to start by factoring n back into p and q. If we choose p and q to each be 100-digit prime numbers, which can be easily found, then n is a 200-digit number. Although it takes very little time to multiply p and q to get n, it takes on the order of 4 * 10^9 years on the fastest available machines to factor n back into p and q. For this reason alone, our public key code is secure.
We now have three numbers, n, k and d, which together can be used to encrypt and decrypt messages:
e = m^k mod n to encode m --> e
m = e^d mod n to decode e --> m.
From this point, we can forget about p, q, and φ(n), as they are no longer needed. We can also release n and k publicly so that people can encode messages to send to us. The only really important point is to keep d secret, so that only we can decode messages sent to us.
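Putting the pieces together, here is a toy end-to-end sketch in Python (my own illustration; real implementations use primes hundreds of digits long, plus padding). The primes p = 7 and q = 11 are assumed here because they give φ(n) = 60, matching the example above, along with k = 13 and d = 37:

p, q = 7, 11                      # assumed small primes, for illustration only
n = p * q                         # 77, made public
phi = (p - 1) * (q - 1)           # 60, kept secret
k, d = 13, 37                     # public exponent and the secret d from Step 4
assert (k * d) % phi == 1

m = 2                             # message: gcd(m, n) = 1 and m^k > n both hold
e = pow(m, k, n)                  # encode: e = m^k mod n  -> 30
assert pow(e, d, n) == m          # decode: m = e^d mod n recovers the message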


Security Policy
• Need
– Security measures are most effective when they flow from a well-designed and comprehensive security policy and are enforced by human engagement as well as by technical solutions.
– Vary widely from company to company.
– For a large organization, develop a suite of documents that cover the different aspects of the security issues concerning the organization.
Types of Security Policies

Corporate Security Policy
• Covers information security concepts at a high level, defines these concepts, describes why they are important, and details what the organization’s stand is on them.
• Read by managers and by technical custodians (particularly security technical custodians); these groups use the policy to gain a sense of the company’s overall security philosophy.
• Covers the following topics
• Security requirements, with regard to the views of the asset owners, in terms of confidentiality, integrity and availability (C, I and A).
• Organizational structure and responsibilities
• Integration of Security into SDLC
• Risk Management Strategies
• Incident handling
System Security Policy
• Developed for security of an IT System
• Normally consists of a Technical Policy and an End User Policy.
• From a Technical Custodian and End-User perspective the policies must be implemented by the application of appropriate safeguards to the systems and services to ensure that an adequate level of protection is achieved.
• Endorsed by senior management
• Following issues need to be considered
• Definition of the considered IT System and its boundary.
• Definition of the business objectives to be achieved by the System.
• Threats to the System and the information handled.
• Vulnerabilities afflicting the System.
• Cost of IT Security.
Technical Policy
• Technical custodians are responsible for implementing technical policies on systems they work with.
• They will be more detailed than the governing policy and will be system or issue specific.
• Flows from the governing policy. Not only does it address the issues mentioned in the governing policy, but specifics of the technical topic are also dealt with.
• They describe what must be done, but not how to do it.
End User Policy
• Security is also the responsibility of the end-user.
• Knowledge and awareness of information security can be created in end-users by providing them with all the information they need to comply with and implement the policy, thereby enabling them to carry out their responsibilities in a more secure manner.
• An effective way to do this would be by grouping all end-user policies together.
• Some of these policy statements may overlap with the technical policy, and care should be taken to keep them consistent with it.

Risk Analysis
Risk : It is a potential problem that the system or its users may experience.
Any event that has risk will have the following factors:
• A loss is associated with the event: the event must generate a negative effect, such as compromised security, lost time, or lost money.
• Likelihood that the event will occur: there is some probability that the risk will materialize.
• Risk Control : the degree to which the effects/outcomes can be reduced or eliminated.
A risk assessment/analysis is the process of identifying and prioritizing risks to the business. The assessment is crucial.
Without an assessment, it is impossible to design good security policies and procedures that will defend your company’s critical assets.
It is useful for generating and documenting thoughts about likely threats and possible countermeasures.
It supports rational decision making about security controls.
• Threat—A natural or man-made event that could have some type of negative impact on the organization.
• Vulnerability—A flaw, loophole, oversight, or error that can be exploited to violate system security policy.
• Controls—Mechanisms used to restrain, regulate, or reduce vulnerabilities. Controls can be corrective, detective, preventive, or deterrent.
Steps in Risk Analysis:
1. Identification of assets: We must assess what to protect and list it properly.
2. Determination of threats and vulnerabilities: We must identify what threats these assets are exposed to.
3. Estimation of likelihood of exploitation
4. Computation of Risk : We calculate the value of the assets and risks.
5. Survey applicable controls and their costs.
Threat Type | Threat | Exploit/Vulnerability | Exposed Risk
Human factor, internal threat | Intruder | No security guard or controlled entrance | Theft
Human factor, external threat | Hacker | Misconfigured firewall | Stolen credit card information
Human factor, internal threat | Current employee | Poor accountability; no audit policy | Loss of integrity; altered data
Natural | Fire | Insufficient fire control | Damage or loss of life
Natural | Hurricane | Insufficient preparation | Damage or loss of life
Malicious, external threat | Virus | Out-of-date antivirus software | Virus infection and loss of productivity
Technical, internal threat | Hard drive failure | No data backup | Data loss and unrecoverable downtime

2. Determination of threats and vulnerabilities : The next step is to determine the vulnerabilities of these assets.
We make this assessment by keeping in mind the security goals of confidentiality, integrity, and availability.
The threats and vulnerabilities of assets could lead to loss of these goals.
It is not possible to provide a complete list of threats and vulnerabilities as there is no easy procedure to list all the threats.
Assets and Attacks :
Asset | Secrecy | Integrity | Availability
Hardware | - | Overloaded; destroyed; tampered with | Failed; stolen; destroyed; unavailable
Software | Stolen; copied; pirated | Impaired by Trojan horse; modified; tampered with | Deleted; misplaced; usage expired
Data | Disclosed; accessed by outsider; inferred | Damaged (s/w error, h/w error, user error) | Deleted; misplaced; destroyed
People | - | - | Quit; retired; terminated; on vacation
Documentation | - | - | Lost; stolen; destroyed
Supplies | - | - | Lost; stolen; destroyed
3. Estimate likelihood of exploitation:
In the third step, we evaluate and assess the operations and assets that are most susceptible to attacks or disasters.
Likelihood of occurrence is related to existing controls and the likelihood that someone or something will evade the existing controls.
The objective of a vulnerability assessment is to examine systems for weaknesses that could be exploited, and to determine the chances of someone attacking any of those weaknesses.
Numerous types of vulnerabilities, both physical and electronic, are possible. Each should be examined.
Once a list of vulnerabilities per system is compiled, each vulnerability should be classified according to the probability that it could be exploited.
Tools such as the Internet Security Systems (ISS) scanner are used to evaluate electronic vulnerabilities: ISS scans systems for configurations and services, compares the results with a database of known exploits, and produces a report.
An alternative method is frequency probability, i.e., estimating the number of occurrences of an event in a given period of time.
Another method is subjective probability (the Delphi approach): experts make estimates based on their experience, after being given the necessary details such as the s/w and h/w architecture, conditions of use, users, etc.
4. Computation of Risk : Calculating how much loss can be expected from an incident.
A. Quantitative assessment:
It is an attempt to assign a monetary value to the assets and threats of risk analysis.
All elements (asset value, impact, threat frequency, safeguard effectiveness, safeguard costs, uncertainty and probability) are quantified.

The quantitative assessment process involves the following three steps:
1. Estimate potential losses (SLE)—This step involves determining the single loss expectancy (SLE). SLE is calculated as follows:
Single loss expectancy (SLE) = Asset value x Exposure factor
Items to consider when calculating the SLE include the physical destruction or theft of assets, the loss of data, and the theft of information.
The exposure factor is the measure, or percentage, of damage that a realized threat would have on a specific asset.
2. Conduct a threat analysis (ARO)—The purpose of a threat analysis is to determine the likelihood of an unwanted event.
The goal is to estimate the annual rate of occurrence (ARO).
Simply stated, how many times is this expected to happen in one year?
3. Determine annual loss expectancy (ALE)—
This third and final step of the quantitative assessment seeks to combine the potential loss and rate per year to determine the magnitude of the risk.
This is expressed as annual loss expectancy (ALE). ALE is calculated as follows:
Annualized loss expectancy (ALE) = Single loss expectancy (SLE) * Annualized rate of occurrence (ARO)
The annualized loss expectancy (ALE) is the monetary loss that can be expected for an asset, due to a risk, over a one-year period.
An important feature of the Annualized Loss Expectancy is that it can be used directly in a cost-benefit analysis.
If a threat or risk has an ALE of $5,000, then it may not be worth spending $10,000 per year on a security measure which will eliminate it.
For example, suppose the ARO is 0.5 and the SLE is $10,000. The Annualized Loss Expectancy is then $5,000 .
Using the Poisson distribution, with the ARO as the mean rate, we can calculate the probability of a specific number of losses occurring in a given year (a small worked sketch follows the list below):
P(x losses in a year) = (ARO^x * e^(-ARO)) / x!
When performing the calculations you should include all associated costs, such as:
• Lost productivity
• Cost of repair
• Value of the damaged equipment or lost data
• Cost to replace the equipment or reload the data
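To make the arithmetic concrete, here is a minimal Python sketch (my own illustration; the asset value and exposure factor are assumed so that SLE = $10,000 and ARO = 0.5, matching the example above):

import math

asset_value = 20000.0            # assumed for illustration
exposure_factor = 0.5            # assumed: half the asset's value is lost per incident
sle = asset_value * exposure_factor   # SLE = asset value x exposure factor -> 10000.0
aro = 0.5                             # expected number of incidents per year
ale = sle * aro                       # ALE = SLE x ARO -> 5000.0

def p_losses(x, aro):
    # Poisson probability of exactly x losses in a year, with mean ARO.
    return (aro ** x) * math.exp(-aro) / math.factorial(x)

print(ale)                # 5000.0
print(p_losses(0, aro))   # ~0.607: chance of no loss in a given year
print(p_losses(1, aro))   # ~0.303: chance of exactly one loss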
B. Qualitative Assessment:
In qualitative risk analysis, we rank the seriousness of threats and the sensitivity of assets into grades or classes, such as low, medium, and high. The rankings are subjective.
• Low—Minor inconvenience that could be tolerated for a short period of time.
• Medium—Could result in damage to the organization or cost a moderate amount of money to repair.
• High—Would result in loss of goodwill between the company and clients or employees. Could result in a legal action or fine, or cause the company to lose revenue or earnings.
Asset | Loss of Confidentiality | Loss of Integrity | Loss of Availability
Customer database | High | High | Medium
Internal documents | Medium | Medium | Low
Advertising literature | Low | Medium | Low
HR records | High | High | Medium
5. Handling Risk / Countermeasures
This step determines how identified threats can be addressed. Risk can be dealt with in four general ways, either individually or in combination.
• Risk reduction—Implement a countermeasure to alter or reduce the risk.
• Risk transference—Purchase insurance to transfer a portion or all of the potential cost of a loss to a third party.
• Risk acceptance—Deal with risk by accepting the potential cost and loss if the risk occurs.
• Risk rejection—Pretend that the risk doesn’t exist and ignore it. Although this is not a prudent course of action, it is one that some organizations choose to take.
Which is the best way to handle risk?
This depends on the cost of the countermeasure, the value of the asset, and the amount by which risk-reduction techniques reduce the total risk.
Companies usually choose the one that provides the greatest risk reduction while maintaining the lowest annual cost.
Threat x Vulnerability x Asset value = Total risk
Total risk - Countermeasures = Residual risk
No organization can ever be 100% secure. There will always be remaining risk.
The residual risk is the amount that is left after safeguards and controls have been put in place.
What’s cost-effective? The cost-effectiveness of a safeguard can be measured as follows:
• ALE before the safeguard - ALE after the safeguard = Value of the safeguard to the organization
This formula can be used to evaluate the cost-effectiveness of a safeguard or to compare various safeguards to determine which are most effective. The higher the resulting value, the more cost-effective the safeguard is (a small sketch follows).
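A small hypothetical comparison following the formulas above (all figures invented for illustration):

ale_before = 5000.0                       # ALE with no safeguard in place
candidates = {
    # safeguard name: (ALE after the safeguard, annual cost of the safeguard)
    "nightly backups": (500.0, 1200.0),
    "RAID upgrade": (1500.0, 400.0),
}
for name, (ale_after, annual_cost) in candidates.items():
    value = ale_before - ale_after        # value of the safeguard to the organization
    print(name, "value:", value, "net of cost:", value - annual_cost)

A safeguard whose annual cost exceeds its value would fail the cost-benefit test described above.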
Risk Analysis
Pros:
• Improves awareness: it educates users about the role security plays in protecting functions and data that are essential to user operations.
• Relates the security mission to management objectives: it makes management realize that security balances harm against control costs.
• Justifies expenditure for security.
Cons:
• Lack of accuracy.
• Hard to perform, as it requires creative thinking.
• False sense of precision and confidence: it might produce wrong estimates for risk, loss, etc.



Security Framework
Every enterprise prepares its security framework based on its own security needs, security commitments, etc.
Every organisation must implement and maintain a program to adequately secure its information and system assets.
The program must:
1) assure that systems and applications operate effectively and provide appropriate confidentiality, integrity, and availability;
2) protect information commensurate (proportionate) with the level of risk and magnitude of harm resulting from loss, misuse, unauthorized access, or modification.

The organisation must plan for security and ensure that appropriate officials are assigned security responsibility.

Framework provides a method to determine the current status of the security programs relative to existing policy and where necessary, establish a target for improvement.
It does not establish new security requirements.

The Framework is used to assess the status of security controls for a given asset or collection of assets.
The Framework comprises five levels to assess security programs and assist in prioritizing efforts for improvement.
The Framework provides a vehicle for consistent and effective measurement of the security status for a given asset.
The security status is measured by determining if specific security controls are documented, implemented, tested and reviewed, and incorporated into a cyclical review/improvement program, as well as whether unacceptable risks are identified and mitigated (Diminished).

Five Levels of Security Framework




The Framework is divided into five levels:
• Level 1: Reflects that an asset has a documented security policy.
A documented security policy is necessary to ensure adequate and cost-effective organizational and system security controls. A sound policy delineates the security management structure, clearly assigns security responsibilities, and lays the foundation necessary to reliably measure progress and compliance.
• Level 2: Reflects that the asset also has documented procedures and controls to implement the policy.
Well-documented and current security procedures are necessary to ensure that adequate and cost-effective security controls are implemented.
• Level 3: Indicates that the procedures and controls have been implemented.
• Level 4: Shows that the procedures and controls are tested and reviewed.
Routinely evaluating the adequacy and effectiveness of security policies, procedures, and controls ensures that effective corrective actions are taken to address identified weaknesses.
• Level 5: Indicates that the asset has procedures and controls fully integrated into a comprehensive program.
The five levels measure specific management, operational, and technical control objectives.

Each of the five levels contains criteria to determine if the level is adequately implemented. For example, in Level 1 all written policy should contain:
• the purpose and scope of the policy,
• who is responsible for implementing the policy, and
• the consequences and penalties for not following the policy.
The policy for an individual control must be reviewed to ascertain that the criteria for Level 1 are met.

Types of Information Security Controls
Controls can be Physical, Technical (Logical), or Administrative. Each of these can be further classified as:
• Preventive
• Detective



Physical Controls :
Preventive Physical Controls
– Backup files and Documentation
– Security Guards
– Badge Systems
– Double Door Systems
– Lock and Keys
– Biometric Access Controls
– Fire Controls
Detective Physical Controls
– Motion Detectors
– Smoke and Fire Detectors
– Closed Circuit Television Monitors
– Sensors and Alarms
Physical security controls cover the following broad areas:
• Physical Facility
• Geographic locations
• Supporting Facilities
Threats :
• Interruption in providing computer services
• Physical Damage
• Unauthorized disclosure of information
• Loss of control over System Integrity
• Physical theft
Factors affecting Physical Controls :
1. PHYSICAL ACCESS CONTROL
2. FIRE SAFETY FACTORS
3. SUPPORTING UTILITIES
4. STRUCTURAL COLLAPSE
5. PLUMBING LEAK
6. INTERCEPTION OF DATA
7. MOBILE AND PORTABLE SYSTEMS
Logical Security Control :
Password Guidelines:
• Passwords are to be assigned on an individual-employee basis wherever computerized records are accessed as part of the employee's responsibilities.
• Distribution of passwords should be handled with the strictest confidentiality.
• Passwords shall be changed on a regular basis (at least once every 60 days).
• Passwords that are obvious, such as nicknames and dates of birth, should not be allowable.
• Passwords should never be shared with another user. Employees are formally notified as to their role in protecting the security of the user ID and password.
• Passwords should have a minimum length of five characters.
• Passwords stored on a computer should be encrypted in storage.
• System software should enforce the changing of passwords and the minimum length and format.
• System software should disable the user identification code if more than three consecutive invalid passwords are given.
• System software should maintain a history of at least two previous passwords and prevent their reuse.
• Procedures for forgotten passwords should require that Support Services personally identify the user.
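A minimal sketch of how a few of these rules might be enforced in code; the five-character minimum comes from the guidelines above, while the "obvious" word list is only a placeholder:

#include <stdio.h>
#include <string.h>
#include <strings.h>   /* strcasecmp (POSIX) */

/* placeholder list of "obvious" passwords to reject */
static const char *obvious[] = { "password", "nickname", "12345", NULL };

int password_ok(const char *pw) {
    if (strlen(pw) < 5)                   /* minimum length of five */
        return 0;
    for (int i = 0; obvious[i]; i++)      /* reject obvious choices */
        if (strcasecmp(pw, obvious[i]) == 0)
            return 0;
    return 1;
}

int main(void) {
    printf("%d\n", password_ok("12345"));     /* 0: obvious     */
    printf("%d\n", password_ok("ab1"));       /* 0: too short   */
    printf("%d\n", password_ok("tr0ub4dor")); /* 1: acceptable  */
    return 0;
}

A real implementation would also enforce the 60-day change interval, the password history, and encrypted storage described above; those belong in the system software, not in a standalone check.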
Access Security:
Desktop administrators should ensure that workstations are configured consistent with the job function of the computer user.
• Limiting programs or utilities available to only those needed by the position.
• Increasing controls on key system directories.
• Increased levels of auditing.
• Limiting use of removable media, such as floppy disks.
Data and Software Availability
• Back up and store important records and programs on a regular schedule.
• Check data and software integrity against original files.
• Use the latest version of specific software when possible.
• Ensure that software patches and updates are applied in a timely fashion.
Confidential Information
• Encrypt sensitive and confidential information where appropriate.
• Monitor printers used to produce sensitive and confidential information.
• Local System Protection

Firewalls:
Firewalls are hardware devices or software that protect a system or systems from access or intrusion by outside or untrusted systems or users, especially malicious hackers.
A firewall should also keep a log of any such attempts. Much of the functionality of a firewall can be implemented through the enabling and disabling of selected system services, Operating System auditing and control of Access Control Lists (ACLs).

Viruses:
Computer viruses are self-propagating programs that infect other programs.
Viruses and worms may destroy programs and data as well as using the computer's memory and processing power.
Viruses, worms, and Trojan horses are of particular concern in networked and shared resource environments because the possible damage they can cause is greatly increased.

To decrease the risk of viruses and limit their spread:
– Check all software before installing it.
– Use virus-scanning software to detect and remove viruses.
– Ensure that virus-scanning software is kept updated, preferably automated.
– Immediately isolate any contaminated system.

Audit Trails :
• Audit trails maintain a record of system activity both by system and application processes and by user activity of systems and applications.
• In conjunction with appropriate tools and procedures, audit trails can assist in detecting security violations, performance problems, and flaws in applications.
• An audit trail is a series of records of computer events, about an operating system, an application, or user activities.
• Benefits and Objectives
• Individual Accountability
• Reconstruction of events
• Intrusion Detection
• Problem Analysis
Audit trails and logs:
• Keystroke monitoring
– View and record user’s and computer’s keystrokes during an interactive session.
– Can be used to monitor specific users.
– Effective against intruders.
– User identification
Types :
• System level audit trails
– Log attempts (success or failure).
– Log-on id, date, time for each attempt.
– Devices used and functions accessed.
– Log-off.
• Application level audit trails
– Monitor user activities
– File manipulation
– Confidentiality and integrity of data
– Sensitivity
• User level audit trails
– All commands directly initiated by the user.
– All identification and authentication attempts.
– Files and resources accessed.
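As a sketch, a user-level audit record of the kind listed above could be appended as a timestamped line (the field layout and file name here are illustrative, not a standard):

#include <stdio.h>
#include <time.h>

/* append one audit record: timestamp, user, action, outcome */
void audit_log(const char *user, const char *action, const char *outcome) {
    char stamp[32];
    time_t now = time(NULL);
    strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S", localtime(&now));

    FILE *f = fopen("audit.log", "a");   /* hypothetical log file */
    if (f) {
        fprintf(f, "%s user=%s action=%s outcome=%s\n",
                stamp, user, action, outcome);
        fclose(f);
    }
}

int main(void) {
    audit_log("alice", "logon", "success");
    audit_log("bob",   "open /etc/passwd", "denied");
    return 0;
}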

Review of audit trails :
• Audit Trail Review After an Event
• Periodic Review of Audit Trail Data.
• Real-Time Audit Analysis.

Security & Integrity Threats :
A business connected to the Internet is essentially connected to the public networks of the entire world, which exposes its infrastructure to possible exploitation by thousands of people in the outside, online, global community.
The following could be threats to security:
1. Internal Users: Threats from internal users can be classified as either malicious or inadvertent/experimentation.
2. External Threats: Potentially embarrassing threat to your business comes from outside; viruses, worms, denial of service, web-defacement, and hacker penetration from the Internet can lead to downtime and loss of reputation and business.
3. Theft: All computers (and their components) are valuable physical assets, ripe for theft. Theft leads to downtime, embarrassment, loss of business, and leakage of proprietary information.
4. Fraud: Fraud-related risks impact e-commerce businesses and must be addressed: bogus payment, and liability due to theft of customer payment data, such as credit-card details.
5. Human Error: This is perhaps the broadest yet mildest form of threat; lack of security awareness amongst employees can lead to leakage of proprietary data through personal emails, being locked out of network resources through lost or forgotten passwords, etc.
6. Proprietary Information: Your data is your lifeblood. Threats come from physical theft and accidental deletion or destruction (fire, flood).
7. Security Policy: An effective security policy will EXPLICITLY make clear the risks that a business has foreseen and how they must be addressed, while also setting IMPLICIT standards of practice that must be adhered to. We raise the issue of security policy here because the policy ITSELF must be created first of all, and it must address matters of misconduct, hacking, etc.

Data and infrastructure integrity
Whenever network security is compromised, whether due to a new worm attack or an intrusion from an inside source, the integrity of company data is in question. Many e-mail viruses modify or remove files from PC users’ disk drives; successful attacks against Web sites deface their content.
These business assets are too valuable to be left open to compromise, which is where Data Integrity Solutions come into play.
One of the primary applications of Data Integrity software is to monitor the integrity of other security products such as firewalls, intrusion detection systems and anti-virus scanners.
One of the first things attackers try to do is disable the security tools on the servers that they are attacking.
If an attacker can change a firewall rule to allow them to open a port, then that would allow the attacker to gain access through the firewall to other more critical servers or targets.
Some security products have configuration files that are stored in plain text that control how the product operates and functions.
It is important to monitor these files in order to detect any unauthorised changes that may allow an attacker to subvert (weaken) the tool.
Data Integrity tools can easily be configured to monitor these specific configuration files.
They also monitor the binary files of the security products themselves, to verify that they have not been replaced with malicious versions.
• Data integrity assurance software establishes the baseline by taking a ‘snapshot’ of data in its desired state.
• It detects and reports changes to the baseline, whether accidental or malicious, from outside or within.
• By immediately detecting changes from the baseline, Data Integrity software can trigger fast remediation, and avoid the necessity of having to rebuild servers or routers from scratch.
• In this way Data Integrity software provides the foundation for data security and ensures a safe, productive, stable IT environment.
These are used for:
• Intrusion detection
• File integrity assessment,
• Damage discovery
• Change/configuration management
• System auditing and
• Policy compliance
It monitors data at rest and identifies data changes, and then alerts the system manager to unauthorised changes or internal or external intrusions.
• Data Integrity software is used in many cases to complete a well-rounded security policy complementing other security tools that may lack integrity verification functionality.

How Data Integrity Solutions Complement Other Security Technologies
• Network-Based Intrusion Detection Systems: Examine network packets for suspicious patterns, based on a database of attack signatures.
• Host-Based Intrusion Detection Systems: Monitor system or application logs for evidence of attack, based on a database of attack signatures.
• Security Policy Implementation: Enables security managers to automate each step of the security policy from a central console. These tools enforce awareness and assess employee understanding of security policies.
• Network Vulnerability Assessment: Network-based scanners provide comprehensive views of all operating systems and services running and available on the network, detailed listings of all system-user accounts that can be discovered from standard network resources, and discover unknown or unauthorised devices on a network.
• Anti-Virus Scanners: Provide the ability to block viruses from getting to servers or workstations. These products should be deployed to prevent virus attacks from deleting and changing files. Many AV products use a signature database to scan against to detect viruses.
• By utilising Data Integrity software companies will not only mitigate security threats, but also create a more stable IT environment.
• By detecting unauthorised changes, companies can proactively increase the effectiveness of their change control and configuration management.
Data Integrity software provides the fundamental security layer that provides a high degree of confidence in the integrity of data assets and system infrastructure.
This foundation provides the means to detect and understand changes to systems and data over time, and better enforce the security and availability of those assets.

Security Awareness and Training
Security is not just technology. It is a process, a policy, and a culture. Companies spend on technology to keep the “bad guys” out, but spend little time building the foundations of a security program.
Usually companies don’t have relevant policies or culture of security awareness among managers or end users/employees.
Technology alone cannot prevent end users from disabling it, or from doing unintentional damage by opening strange email attachments or telling someone their password.
Security awareness and training are perhaps the most overlooked parts of your security management program.

Why is security awareness and training so important and what constitutes a security awareness and training program?
• Security Awareness and Training should be vital components in the battle against malicious attacks and serious accidents/threats.
• A Security Awareness Program involves defining your baseline (the policies), communicating them (awareness), and evaluating your success (compliance monitoring and vulnerability assessments).

Security Awareness refers to those practices, technologies and/or services used to promote user awareness, user training and user responsibility with regard to security risks, vulnerabilities, methods, and procedures related to information technology resources.
The goal of security awareness and training program should be to protect the confidentiality, integrity, and availability of the IT assets and data.

Phases of Your Security Awareness & Training Program
The National Institute of Standards and Technology (NIST) has defined four critical steps that a security awareness and training program should include:
• Design and planning of awareness and training program
• Development of your awareness and training materials
• Implementation of your awareness and training program
• Measuring effectiveness and updating your program

Security awareness and training program must advise the employees of the most important security policies and tell them where they can find an online copy of the complete set of security policies.

Purpose of security awareness and training
• To increase awareness on the importance of securing data/information efficiently.
• To understand the different types of threats, risks and vulnerabilities that exist and learn effective ways to mitigate the risks.
• To learn more about the different type of effective security management practices and tools that can be used to increase security.
• To create an in-depth culture of security among the employees.
Implementation :
• Delivering the Security Message
The security awareness and training program can be delivered through hardcopy memos, posters, and classes, or through online initiatives.
Most organizations today choose to deliver their security awareness and training program over their Web-based intranet.
Some courses include short quizzes that test basic security knowledge, in order to find out if the employees understand the training.
• Employees should be educated not to give their passwords out to anyone
• Employees need to understand what constitutes a safe password and a poor password
• Employees should be trained on how to update their anti-virus software
• Employees should also be educated about the dangers of opening up attachments.
• Employees must be educated to use certified and licensed software and equipment.
• Employees must be educated about whom they should contact or report to on suspecting a virus, an Internet-based attack, or any other problem.
• Employees must be educated on how to take backups of critical data/information.


Backups are vital for two reasons:
• If your site suffers serious damage and you have to restore your systems from scratch, you will need these backups.
• If you aren't sure of the extent of the damage, backups will help you to determine what changes were made to a system and when.
Even the best backup system won't work if the backup images aren't safeguarded. Don't rely solely on online backups; keep your media in a secure place, separate from the data they back up.
• They must be trained and educated about how to put Labels and diagrams on the system. Labels and diagrams are crucial in investigating and controlling a security incident.
System labels should indicate what a system is, what it does, what its physical configuration is (how much disk space, how much memory, etc.), and who is responsible for it. They should be attached firmly to the correct systems.
• Network diagrams should show how the various systems are connected, both physically and logically, as well as things like what kind of packet filtering is done where.
• Employees must be trained for
o Proper Information Management (Confidentiality, Data Protection & Privacy)
o Access Control
o Security incidents (hacker attacks, social engineering, theft, handling incidents)
o Data Storage & Device management
o Information Security in the Office
Techniques used for generating Security Awareness
• Motivational slogans, such as:
– Security is everyone's responsibility!
– SEC_RITY is not complete without U!
• Banners, videos
• Briefings, articles, newsletters, and magazines
• Exhibits (e.g., Computer Security Days, Agency Conferences)
• Monthly awareness newsletters providing updates on the latest news, with stories from users' own experiences.
Security is an on-going process. It is not a one-shot deal.
Security Awareness and Training Program must be Updated periodically with the changing requirements.


A security awareness and training program is no guarantee that the organization will not be attacked by cyber miscreants, but it will surely decrease the impact an attack will have, and the management team will be able to rest assured knowing that they at least made an attempt to institute awareness and training of the policies, procedures, and processes.



Incident Handling

• Incident Handling refers to those practices, technologies and/or services used to respond to suspected or known breaches to security safeguards.
• Once a suspected intrusion activity has been qualified as a security breach (i.e., incident), it is imperative that the incident be contained (restricted) as soon as possible, and then eradicated so that any damage and risk exposure are avoided or minimized.
• Security incidents refer to deliberate, malicious acts which may be technical (e.g., creation of viruses, system hacking) or non-technical (e.g., theft, property abuse, service disruption).
• An incident is defined as any activity that interrupts the normal activities of a system and that may trigger some level of crisis.
• An incident does not necessarily need to be based in technology to be effective in damaging an organization. But you can use technology to prevent , track and report incidents to proper authorities.
• If the incident is left "unchecked" (i.e., not contained), then the damage resulting from it continues to spread within, and across, the organization.
• Incident handling involves breach recognition, evidence integrity maintenance, damage recovery, investigation and prosecution.
An incident starts when somebody (a user, operator, system administrator, or network administrator) detects an intruder, an attacker, or some malicious activity.
Depending on the severity of the situation, the user responds by either informing the authorized person (a member of the Incident Response Team) or shutting down their personal machine if they suspect an attack.
Before responding to any incident, it is better to have a suitable Incident Response Plan, which will tell you primarily about two things: authority and communication. For each part of the incident response, the plan will say who's in charge, and who they're supposed to talk to.
• Incidents vary so much that the response plan mostly details who's going to make decisions, and who they're going to contact.
The response plan must clearly state the authorized person to be contacted or reported to in case of any incident.
There could be multiple methods of contacting them: home phone, work, cell, friends, etc.
It must also mention which secondary person should be contacted if the primary subject matter expert is not available.
The company must designate a few people that can assume leadership in a crisis and have the authority to make decisions.
Response plan needs to specify what kind of situation warrants disconnecting or shutting down, and who can make the decision to do that.
• In most security emergencies, the correct way to shut down the machine is to do an immediate but graceful shutdown (specially in multi-user system), with no explanations or warnings sent.
• If the intruder is actively destroying things, you want people to shut the machine down by the fastest method possible. Cutting off the power to the machine or the disk drive is completely appropriate, despite the damage it may cause.
The next priority is to start to fix what's gone wrong. Whatever corrective actions you're contemplating, think them through carefully.
Will they really solve the problem? Will they, in turn, cause other problems?

Make ‘Incident in Progress’ Notification for People Who Need to Know
The incident response plan needs to specify who you're going to notify, who's going to do the notification, when they're going to do it, and what method they're going to use.
You may need to notify:
• People within your own organization
• CERT(Computer Emergency Response Team) or other incident response teams
• Vendors and service providers
• People at other sites



Recovering from incident
Finally, you're at the point of actually dealing with the incident.
Different incidents require different amounts of recovery. Your response plan should provide some general guidelines.
Type of Recovery depends on the circumstances. Here are some possibilities:
• If the attacker didn't succeed in compromising your system, you may not need to do much. You may decide not to bother reacting to casual attempts.
You may also find that your incident was actually something perfectly innocent, and you don't need to do anything at all.
• If the attack was a particularly determined one, you may want to increase your monitoring (at least temporarily), and you'll probably want to inform other people to watch out for future attempts.
• If the attacker became an intruder, that is, he actually managed to get into your computers, you're going to need to at least plug the hole the intruder used, and check to make certain he hasn't damaged anything.
• At worst, you may need to rebuild your system from scratch. Sometimes you end up doing this because the intruder damaged things, purposefully or accidentally.
• Keeping Secured Checksums
Once you've had a break-in, you need to know what's been changed on your systems.
Install a cryptographic checksumming program (such as Tripwire), make checksums of important files, and store the checksums where an intruder can't modify them (which generally means somewhere off-line).
You may not need to checksum every system separately if they're all running the same release of the same operating system, although you should make sure that the checksum program is available on all your systems.
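A minimal sketch of the snapshot-and-compare idea; real tools like Tripwire use cryptographic digests, whereas the stand-in below uses the simple (non-cryptographic) FNV-1a hash purely to show the workflow:

#include <stdio.h>
#include <stdint.h>

/* FNV-1a: a simple non-cryptographic hash, standing in here for a
   real cryptographic digest as used by tools like Tripwire */
uint64_t fnv1a_file(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    uint64_t h = 1469598103934665603ULL;
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 1099511628211ULL;
    }
    fclose(f);
    return h;
}

int main(void) {
    const char *file = "/etc/passwd";        /* example file to watch        */
    uint64_t baseline = fnv1a_file(file);    /* 'snapshot' of desired state; */
                                             /* in practice, stored off-line */
    /* ... later, during incident response ... */
    if (fnv1a_file(file) != baseline)
        printf("%s has changed since the baseline!\n", file);
    else
        printf("%s matches the baseline.\n", file);
    return 0;
}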
• Keeping Activity Logs
An activity log is a record of any changes that have been made to a system, both before an incident and during the response to an incident.
• Testing the Reload of the Operating System
If a serious security incident occurs, you may need to restore your system from backups.
• Planning for Documentation
Your plan should include the basic instructions on what documentation methods you intend to use and where to find the supplies.
If you might pursue legal action, your plan should also include the instructions on dating, labeling, signing, and protecting the documentation.
• Periodic Review of Plans
However solid your security incident response plans may seem to be, make sure to go back and review them periodically. Changes - in requirements, priorities, personnel, systems, data, and other resources - are inevitable, and you need to be sure that your response plans keep up with these changes.
• The incident response plan is not the only thing that you need to have ready in advance. There are a number of practices and procedures that you need to set up so that you'll be able to respond quickly and effectively when an incident occurs.

Incident Response Team
We must prepare an Incident Response Team with people from all relevant fields: senior managers, technical staff, network engineers, system administrators, etc.
• They are responsible
– to decide whether a suspicious activity is a serious attack or not.
– To detect the problem/incident
– To decide what actions to be taken to recover.




Doing Drills
Don't assume that responding to a security incident will come naturally. Like everything else, such a response benefits from practice. Test your own organization's ability to respond to an incident by running occasional drills.
There are two basic types of drills:
• In a paper (or "tabletop") drill, you gather all the relevant people in a conference room (or over pizza at your local hangout), outline a hypothetical problem, and work through the consequences and recovery procedures. It's important to go through all the details, step by step, to expose any missing pieces or misunderstandings.
• In a live drill, you actually carry out a response and recovery procedure. A live drill can be performed, with appropriate notice to users, during scheduled system downtimes.



NETWORK ATTACKS
*Buffer overflow : In computer security and programming, a buffer overflow, or buffer overrun, is a programming error which may result in a memory access exception and program termination, or in the event of the user being malicious, a breach of system security.
A buffer overflow is an anomalous condition where a process attempts to store data beyond the boundaries of a fixed length buffer. The result is that the extra data overwrites adjacent memory locations. The overwritten data may include other buffers, variables and program flow data.
Buffer overflows may cause a process to crash or produce incorrect results. They can be triggered by inputs specifically designed to execute malicious code or to make the program operate in an unintended way. As such, buffer overflows cause many software vulnerabilities and form the basis of many exploits. Sufficient bounds checking by either the programmer or the compiler can prevent buffer overflows.
A buffer overflow occurs when data written to a buffer, due to insufficient bounds checking, corrupts data values in memory addresses adjacent to the allocated buffer. Most commonly this occurs when copying strings of characters from one buffer to another.
Basic example
In the following example, a program has defined two data items which are adjacent in memory: an 8-byte-long string buffer, A, and a two-byte integer, B. Initially, A contains nothing but zero bytes, and B contains the number 3. Characters are one byte wide.
A A A A A A A A B B
0 0 0 0 0 0 0 0 0 3
Now, the program attempts to store the character string "excessive" in the A buffer, followed by a zero byte to mark the end of the string. By not checking the length of the string, it overwrites the value of B:
A A A A A A A A B B
'e' 'x' 'c' 'e' 's' 's' 'i' 'v' 'e' 0
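In C, the scenario above can be reproduced like this (a sketch; note that strcpy performs no bounds checking, and the C standard does not actually guarantee that A and B are adjacent in memory, so the exact corruption is implementation-dependent):

#include <stdio.h>
#include <string.h>

int main(void) {
    char  A[8] = "";   /* 8-byte string buffer, zero-filled     */
    short B    = 3;    /* two-byte integer, possibly next to A  */

    /* UNSAFE: "excessive" is 9 characters plus a NUL terminator,
       so this write runs past the end of A (undefined behavior). */
    strcpy(A, "excessive");

    printf("B = %d\n", B);  /* B very likely no longer holds 3 */
    return 0;
}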
In addition to subroutine calls, an attacker must also understand enough assembly (the machine code used by processors like the Intel Pentium) to code the exploit itself. In a buffer overflow exploit, code gets written on the stack, beyond the return address and function call arguments, and the return address gets modified so that it will point to the beginning (approximately) of the code. Then, when the function call returns, the attacker’s code gets executed instead of normal program execution.
Buffer overflows on the stack :
In the following example, "X" is data that was on the stack when the program began executing; the program then called a function "Y", which required a small amount of storage of its own; and "Y" then called "Z", which required a large buffer:
Z Z Z Z Z Z Y X X X
If the function Z caused a buffer overflow, it could overwrite data that belonged to function Y or to the main program:
Z Z Z Z Z Z Y X X X
This is particularly serious because on most systems, the stack also holds the return address, that is, the location of the part of the program that was executing before the current function was called. When the function ends, the temporary storage is removed from the stack, and execution is transferred back to the return address. If, however, the return address has been overwritten by a buffer overflow, it will now point to some other location. In the case of an accidental buffer overflow as in the first example, this will almost certainly be an invalid location, not containing any program instructions, and the process will crash. However, a malicious attacker could tailor the return address to point to an arbitrary location such that it could compromise system security.
Protection against buffer overflows :
(i)Choice of programming language : The choice of programming language can have a profound effect on the occurrence of buffer overflows. C and C++ provide no protection against accessing or overwriting data in any part of memory through invalid pointers; more specifically, they do not check that data written to an array (the implementation of a buffer) is within the assumed boundaries of that array.
Many other programming languages provide runtime checking which might send a warning or raise an exception when C or C++ would overwrite data. Examples of such languages range broadly from Python to Ada, from Lisp to Modula-2, and from Smalltalk to OCaml. The Java and .NET bytecode environments also require bounds checking on all arrays. Nearly every interpreted language will protect against buffer overflows, signalling a well-defined error condition.
(ii) Use of safe libraries : Well-written and tested abstract data type libraries which centralize and automatically perform buffer management and include bounds checking can reduce the occurrence of buffer overflows. The two main building block data types in these languages in which buffer overflows commonly manifest are strings and arrays; libraries preventing buffer overflows in these data types provide the vast majority of the necessary coverage. Still, failure to use these safe libraries correctly can result in buffer overflows and other vulnerabilities; naturally, any bug in a library itself is a potential vulnerability.
(iii) Stack-smashing protection : Stack-smashing protection is used to detect the most common buffer overflows by checking that the stack has not been altered when a function returns. If it has been altered, the program exits with a segmentation fault. Three such systems are Libsafe,[4] and the StackGuard [5] and ProPolice[6] gcc patches. Stronger stack protection is possible by splitting the stack in two: one for data and one for function returns.
(iv) Executable space protection : Some operating systems now include features to prevent execution of code on the stack, such that an attempt to execute machine code in these regions will cause an exception. Implementations for Linux include PaX[7], Exec Shield[8], OpenBSD's W^X and Openwall. Preventing the execution of code on the stack or heap means that an attacker cannot provide arbitrary code that will reside on the stack or heap before being executed.
(v) Address space layout randomization : Address space layout randomization (ASLR) is a computer security feature which involves arranging the positions of key data areas, usually including the base of the executable and position of libraries, heap, and stack, randomly in a process' address space. Randomization of the virtual memory addresses at which functions and variables can be found can make exploitation of a buffer overflow more difficult, but not impossible.
(vi) Deep packet inspection : The use of deep packet inspection (DPI) can detect, at the network perimeter, remote attempts to exploit buffer overflows by use of attack signatures and heuristics. Packet scanning is not an effective method since it can only prevent known attacks. Attackers have begun to use alphanumeric, metamorphic, and self-modifying shellcodes to avoid detection by heuristic packet scans also.
*Internet protocol address spoofing : In computer networking, the term internet protocol address spoofing is the creation of IP packets with a forged (spoofed) source IP address. Since "IP address" is sometimes just referred to as an IP, IP spoofing is another name for this term.
The term spoofing is also sometimes used to refer to header forgery, the insertion of false or misleading information in email or netnews headers. Falsified headers are used to mislead the recipient, or network applications, as to the origin of a message. This is a common technique of spammers and sporgers, who wish to conceal the origin of their messages to avoid being tracked down. Less fraudulently, some netnews users place obviously false email addresses in their headers to avoid spam or other unwanted responses.
Spoofing is the creation of TCP/IP packets using somebody else's IP address. Routers use the "destination IP" address in order to forward packets through the Internet, but ignore the "source IP" address. That address is only used by the destination machine when it responds back to the source.
A common misconception is that "IP spoofing" can be used to hide your IP address while surfing the Internet, chatting on-line, sending e-mail, and so forth. This is generally not true. Forging the source IP address causes the responses to be misdirected, meaning you cannot create a normal network connection.
Examples of spoofing:
Non-Blind Spoofing : This type of attack takes place when the attacker is on the same subnet as the victim. The sequence and acknowledgement numbers can be sniffed, eliminating the potential difficulty of calculating them accurately. The biggest threat of spoofing in this instance would be session hijacking. This is accomplished by corrupting the datastream of an established connection, then re-establishing it based on correct sequence and acknowledgement numbers with the attack machine. Using this technique, an attacker could effectively bypass any authentication measures taken to build the connection.
Blind Spoofing : This is a more sophisticated attack, because the sequence and acknowledgement numbers are unreachable. In order to circumvent this, several packets are sent to the target machine in order to sample sequence numbers. While not the case today, machines in the past used basic techniques for generating sequence numbers. It was relatively easy to discover the exact formula by studying packets and TCP sessions. Today, most OSs implement random sequence number generation, making it difficult to predict them accurately. If, however, the sequence number was compromised, data could be sent to the target. Several years ago, many machines used host-based authentication services (i.e. Rlogin). A properly crafted attack could add the requisite data to a system (i.e. a new user account), blindly, enabling full access for the attacker who was impersonating a trusted host.
Man In the Middle Attack : Both types of spoofing are forms of a common security violation known as a man in the middle (MITM) attack. In these attacks, a malicious party intercepts a legitimate communication between two friendly parties. The malicious host then controls the flow of communication and can eliminate or alter the information sent by one of the original participants without the knowledge of either the original sender or the recipient. In this way, an attacker can fool a victim into disclosing confidential information by “spoofing” the identity of the original sender, who is presumably trusted by the recipient.
Routing Redirect : redirects routing information from the original host to the hacker's host (this is another form of man-in-the-middle attack).
Source Routing : Another variant of IP spoofing makes use of a rarely used IP option, "Source Routing". Source routing allows the originating host to specify the path (route) that the receiver should use to reply to it. An attacker may take advantage of this by specifying a route that by-passes the real host, and instead directs replies to a path it can monitor (e.g., to itself or a local subnet). Although simple, this attack may not be as successful now, as routers are commonly configured to drop packets with source routing enabled.
How is spoofing done?
The header of every IP packet contains its source address. This is normally the address that the packet was sent from. By forging the header, so it contains a different address, an attacker can make it appear that the packet was sent by a different machine. This can be a method of attack used by network intruders to defeat network security measures, such as authentication based on IP addresses.
This method of attack on a remote system can be extremely difficult, as it involves modifying thousands of packets at a time and cannot usually be done using a Microsoft Windows computer. IP spoofing involves modifying the packet header, which lists, among other things, the source IP, destination IP, a checksum value, and most importantly, the order value in which it was sent. As when a box sends packets into the Internet, packets sent may, and probably will arrive out of order, and must be put back together using the order sent value. IP spoofing involves solving the algorithm that is used to select the order sent values, and to modify them correctly. This poses a major problem because if one evaluates the algorithm in the wrong fashion, the IP spoof will be unsuccessful.
This type of attack is most effective where trust relationships exist between machines. For example, it is common on some corporate networks to have internal systems trust each other, so that a user can log in without a username or password provided they are connecting from another machine on the internal network (and so must already be logged in). By spoofing a connection from a trusted machine, an attacker may be able to access the target machine without authenticating.
Defending Against Spoofing :
(i) Filtering at the Router - Implementing ingress and egress filtering on your border routers is a great place to start your spoofing defense. You will need to implement an ACL (access control list) that blocks private IP addresses on your downstream interface. Additionally, this interface should not accept addresses with your internal range as the source, as this is a common spoofing technique used to circumvent firewalls. On the upstream interface, you should restrict source addresses outside of your valid range, which will prevent someone on your network from sending spoofed traffic to the Internet.
(ii) Encryption and Authentication - Implementing encryption and authentication will also reduce spoofing threats. Both of these features are included in Ipv6, which will eliminate current spoofing threats. Additionally, you should eliminate all host-based authentication measures, which are sometimes common for machines on the same subnet. Ensure that the proper authentication measures are in place and carried out over a secure (encrypted) channel.
*Session Hijacking : Session hijacking takes the act of spoofing one step further. It involves faking someone's identity in order to take over a connection that's already established. TCP session hijacking is when a hacker takes over a TCP session between two machines. Since most authentication only occurs at the start of a TCP session, this allows the hacker to gain access to a machine.
Session Hijacking - A method of attack which involves a third party intercepting communications in a session, or series of communications, and pretending to be one of the parties involved in the session.
A popular method is using source-routed IP packets. This allows a hacker at point A on the network to participate in a conversation between B and C by encouraging the IP packets to pass through its machine.
If source-routing is turned off, the hacker can use "blind" hijacking, whereby it guesses the responses of the two machines. Thus, the hacker can send a command, but can never see the response. However, a common command would be to set a password allowing access from somewhere else on the net.
A hacker can also be "inline" between B and C using a sniffing program to watch the conversation. This is known as a "man-in-the-middle attack".
A common component of such an attack is to execute a denial-of-service (DoS) attack against one end-point to stop it from responding. This attack can be either against the machine to force it to crash, or against the network connection to force heavy packet loss.
The difference between spoofing and hijacking :
-Spoofing is pretending to be someone else. This could be achieved by sniffing a logon/authentication process and replaying it to the server after the user has logged off. The server may then assume you are the user that the sniffed process actually belongs to.
-Hijacking is taking over an already established TCP session and injecting your own packets into that stream so that your commands are processed as the authentic owner of the session.
One problem with TCP is that it was not built with security in mind. Any TCP session is identified by the (client IP address + client port number) and (server IP address + server port number). Any packets that reach either machine that have those identifiers are assumed to be part of the existing session. So if an attacker can spoof those items, they can pass TCP packets to the client or server and have those packets processed as someone else!
One of the key features of TCP is reliability and ordered delivery of packets. To accomplish this, TCP uses acknowledgment (ACK) packets and sequence numbers. Manipulating these is the basis for TCP session hijacking.
To complete a hijack you must perform 3 actions.
1. Monitor or track a session
2. Desynchronize the session
3. Inject your own commands
To monitor a session, you simply sniff the traffic. How do we achieve the desynchronization of a session?
By sequence number prediction.
If we can predict the next sequence number to be used by a client (or server), we can use that sequence number before the client (or server) gets a chance to. Predicting the number may seem a difficult task to do as we have a possible 4 billion combinations, but remember that the ACK packet actually tells us the next expected sequence number. If we have access to the network and can sniff the TCP session, we don’t have to guess, we are told what sequence number to use! This is known as Local Session Hijacking.
If you do not have the ability to sniff the TCP session between client and server, you will have to attempt Blind Session Hijacking. This attack vector is much less reliable as you are transmitting data packets with guessed sequence numbers. 4 billion possible combinations then becomes a very big pool to choose from!
For now, observe what happens to these sequence numbers when the client starts sending data to the server. In order to keep the example simple, the client sends the character A in a single packet to the server.

Figure 2 Sending Data over TCP
The client sends the server the single character in a data packet with the sequence number x+1. The server acknowledges this packet by sending back to the client an ACK packet with number x+2 (x+1, plus 1 byte for the A character) as the next sequence number expected by the server.

Figure 3 Blind Injection
Here is how this might play out. The attacker sends a single Z character to the server with sequence number x+2. The server accepts it and sends the real client an ACK packet with acknowledgment number x+3 to confirm that it has received the Z character. When the client receives the ACK packet, it will be confused, either because it did not send any data or because the next expected sequence is incorrect. (Maybe the attacker sent something "nice" like "mv `which emacs` /vmunix && shutdown -r now" and not just a single character.) As you will see later, this confusion can cause a TCP ACK storm, which can disrupt a network. In any case, the attacker has now successfully hijacked this session.
*Sequence Guessing : Both the sender and the receiver exchange initial sequence numbers (ISN) during the connection setup phase. After a successful initial handshake, both the sender and the receiver know the sequence numbers that they have to use for communication. Since TCP allows for delayed segments, it must accept segments that are out of sequence, but within certain bounds, known as the receiver window size. The receiver window size is also exchanged during the initial handshake. TCP will discard all segments that do not have a sequence number within the computed bounds. The TCP sequence number is a 32-bit counter. In order to distinguish between different connections between the same sender and receiver, it is important that the sequence numbers do not start at 0 or any other fixed number each time a connection is opened. Hence, it is important that the first byte of data from the sender to the receiver uses a random sequence number. Current implementations increment the sequence number by a finite amount every second.
The sequence number used in TCP connections is a 32-bit number, so it would seem that the odds of guessing the correct ISN are exceedingly low. However, if the ISN for a connection is assigned in a predictable way, it becomes relatively easy to guess. The ISN for a connection is assigned from a global counter. This counter is incremented by 128 each second, and by 64 after each new connection (i.e., whenever an ISN is assigned). By first establishing a real connection to the victim, the attacker can determine the current state of the system's counter. The attacker then knows that the next ISN to be assigned by the victim is quite likely to be the previously determined ISN, plus 64. The attacker has an even higher chance of correctly guessing the ISN if he sends a number of spoofed IP frames, each with a different, but likely, ISN.
However, when the host receiving spoofed packets completes its part of the three-way handshake, it will send a SYN&ACK to the spoofed host. This host will reject the SYN&ACK, because it never started a connection—the host indicates this by sending a reset command (RST), and the attacker's connection will be aborted. To avoid this, the attacker can use the aforementioned SYN attack to swamp the host it is imitating. The SYN&ACK sent by the attacked host will then be ignored, along with any other packets sent while the host is flooded. The attacker then has free rein to finish his attack. Of course, if the impersonated host happens to be off-line (or was somehow forced off-line), the attacker need not worry about what the victim is sending out.
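As a toy illustration of why this legacy scheme is weak, the attacker's prediction is simple arithmetic (the observed ISN below is made up):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* ISN observed by first opening a legitimate connection */
    uint32_t observed_isn = 0x12345678u;
    uint32_t seconds_until_attack = 2;

    /* legacy scheme: counter grows by 128 per second,
       plus 64 for the new (spoofed) connection itself */
    uint32_t predicted_isn = observed_isn
                           + 128u * seconds_until_attack
                           + 64u;

    /* unsigned arithmetic wraps just like the 32-bit counter does */
    printf("predicted next ISN: 0x%08x\n", (unsigned)predicted_isn);
    return 0;
}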


Preventing Sequence Guessing : For host A to successfully impersonate host B in a connection with host C, A must be able to acquire the sequence number which C sends back (to B). There are a number of ways to get at this number, but one popular technique is to simply guess the number that will be used. If we could prevent people from being able to guess the next sequence number used for a connection based on the previous sequence number used in a connection, we would be able to nip this problem in the bud.
Bellovin goes on to suggest that we utilize a random number generator, which would hopefully make things harder to analyze. He further suggests that DES could be used in place of a random number generator, since it is well-studied, and the encryptions of an incrementing counter are quite varied and difficult to predict. Of course, all methods of generating pseudorandom numbers are subject to analysis; if nothing else, once the seed is discovered, the whole sequence is known. Whatever we choose as a solution, we should certainly avoid using a simple, easily predicted sequence number generator.
NETWORK SCANNING :
Scanning - The art of detecting which systems are alive and reachable via the Internet, and what services they offer, using techniques such as ping sweeps, port scans, and operating system identification, is called scanning.
The kind of information collected here has to do with the following:
1. TCP/UDP services running on each system identified.
2. System architecture (Sparc, Alpha, x86).
3. Specific IP addresses of systems reachable via the Internet.
4. Operating system type.

2.0 PING Sweeps
2.1 ICMP sweeps (ICMP ECHO requests)
We can use ICMP packets to determine whether a target IP address is alive or not, by simply sending an ICMP ECHO request (ICMP type 8) packet to the targeted system and waiting to see if an ICMP ECHO reply (ICMP type 0) is received. If an ICMP ECHO reply is received, it means that the target is alive; no response means the target is down. Querying multiple hosts using this method is referred to as a ping sweep. Ping sweeps are the most basic step in mapping out a network. This is an older approach to mapping, and the scan is fairly slow.
Some of the tools used for this kind of scan include –
UNIX:
• fping & gping
• nmap
Windows:
• Pinger from Rhino9
Pinger is one of the fastest ICMP sweep scanners. Its advantage lies in its ability to send multiple ICMP ECHO packets concurrently and wait for the response. It also allows you to resolve host names and save the output to a file.
Blocking ICMP sweeps is rather easy: simply do not allow ICMP ECHO requests into your network from the void. If you are still not convinced that you should block ICMP ECHO requests, bear in mind that one can also perform broadcast ICMPs.


2.4 TCP Sweeps
The TCP connection establishment process is called “the three-way handshake”, and is composed of three segments.
1. A client sends a SYN segment specifying the port number of a server that the client wants to connect to, and the client initial sequence number.
2. If the server’s service (or port) is active, the server will respond with its own SYN segment containing the server’s initial sequence number. The server will also acknowledge the client’s SYN by ACKing the client’s ISN+1. If the port is not active, the server will send a RESET segment, which will reset the connection.
3. The client will acknowledge the server’s SYN by ACKing the server’s ISN+1.

When will a RESET be sent? – Whenever an arriving segment does not appear correct to the referenced connection. Referenced connection means the connection specified by the destination IP address and port number, and the source IP address and the port number.
With the TCP Sweep technique, instead of sending ICMP ECHO request packets we send TCP ACK or TCP SYN packets (depending if we have root access or not) to the target network. The port number can be selected to meet our needs. Usually a good pick would be one of the following ports – 21 / 22 / 23 / 25 / 80 (especially if a firewall is protecting the targeted network). Receiving a response is a good indication that something is up there. The response depends on the target’s operating system, the nature of the packet sent and any firewalls, routers or packet-filtering devices used.
Bear in mind that firewalls can spoof a RESET packet for an IP address, so TCP Sweeps may not be reliable.
Nmap and Hping are tools that support TCP Sweep, both for the Unix platform. Hping even adds an additional option to fragment packets, which allows the TCP packet to pass through certain access control devices.

3.1 Port Scanning Types
3.1.1 TCP connect() scan
With this type of scan we use the basic TCP connection establishment mechanism. To open a connection to an interesting port on the targeted machine:
1. A SYN packet is sent to the interesting port on the target system.
2. Now we wait to see what type of packet is sent back from the target.
• If a SYN/ACK packet is received it usually means the port is in a LISTENING state.
• If a RST/ACK packet is received, it usually means the port is not LISTENING and the connection will RESET.
3. We finish the three-way handshake (if SYN/ACK packet was received) by sending an ACK.
4. A connection is terminated after the full connection establishment process has been completed.
This kind of scan is easily detected. Inspecting the target system's log will show a number of connections and error messages immediately after each one was initiated.
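A minimal sketch of a TCP connect() scan using POSIX sockets (192.0.2.1 is a documentation-only address; a real scanner would also set a connect timeout, and you should only scan hosts you are authorized to test):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* returns 1 if a full three-way handshake succeeds (port LISTENING) */
static int port_open(const char *ip, int port) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) return 0;

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);

    int ok = (connect(s, (struct sockaddr *)&sa, sizeof sa) == 0);
    close(s);                 /* terminate the connection */
    return ok;
}

int main(void) {
    const int ports[] = { 21, 22, 23, 25, 80 };   /* ports named earlier */
    for (int i = 0; i < 5; i++)
        printf("port %d: %s\n", ports[i],
               port_open("192.0.2.1", ports[i]) ? "LISTENING"
                                                : "closed/filtered");
    return 0;
}

Because each probe completes the full handshake, every attempt shows up in the target's logs, which is exactly why this scan is so easily detected.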
3.1.2 TCP SYN Scan (half open scanning)
This type of scan differs from the TCP connect() scan because we do not open a full TCP connection. You send a SYN packet to initiate the three-way handshake and wait for a response. If we receive a SYN/ACK, it indicates the port is LISTENING. If we receive an RST/ACK, it indicates a non-LISTENING port. If we do receive a SYN/ACK packet, we immediately tear down the connection by sending a RESET.
Because the TCP three-way handshake was not completed some of the sites will probably not log these scanning attempts.
3.1.3 Stealth Scan
Chris Klaus was one of the first people to write a paper about stealth scans. In a paper called “Stealth Scanning – Bypassing Firewalls/SATAN Detectors”, he describes a technique which intentionally violates the TCP three-way handshake. This technique is what people refer to today as “half-open” scanning.
Today, some people use the “stealth” term to mean NULL flags (no flags or code bits set).
“Stealth” can also be defined as a family of scanning techniques that do one of the following:
• Pass through filtering rules.
• Avoid being logged by the targeted system's logging mechanisms.
• Hide themselves within the usual site/network traffic.
DoS / DDoS Attacks : A Denial of Service (DoS) attack is one of the simplest and most common attacks today. DoS attacks are not targeted at stealing, modifying or destroying information, but at preventing legitimate users from using a service. A DoS attack comes in many forms, from simply cutting off the power to a system to flooding it with seemingly legitimate network traffic; anything that results in a denial of service counts. The public nature of the Internet makes it particularly vulnerable to DoS attacks. The DoS/DDoS attacks described below are all network-based DoS attacks. DoS/DDoS attacks are also active attacks, as the attacker actively attempts to change something, in this case the availability of a server or service.
A DoS attack is an assault on a network that floods it with so many additional requests that regular traffic is either slowed or completely interrupted. Unlike a virus or worm, which can cause severe damage to databases, a denial of service attack interrupts network service for some period.
A denial-of-service attack (also, DoS attack) is an attack on a computer system or network that causes a loss of service to users, typically the loss of network connectivity and services by consuming the bandwidth of the victim network or overloading the computational resources of the victim system.
A DoS attack can be perpetrated in a number of ways. There are three basic types of attack:
1. consumption of computational resources, such as bandwidth, disk space, or CPU time
2. disruption of configuration information, such as routing information
3. disruption of physical network components.
1.1 What is a DoS?
As the name implies, DoS is a Denial of Service to a victim trying to access a resource. In many cases it can be safely said that the attack requires a protocol flaw as well as some kind of network amplification.
Denial of Service is also an attack on a computer system or network that causes a loss of service to users, typically the loss of network connectivity and services through the consumption of the bandwidth of the victim network, or the overloading of the computational resources of the victim system (see the Wikipedia definition). The motivation for DoS attacks is not to break into a system. Instead, it is to deny the legitimate use of the system or network to others who need its services. One can say that this will typically happen through one of the following means:
1. Crashing the system.
2. Denying communication between systems.
3. Bringing the network or the system down, or having it operate at a reduced speed, which affects productivity.
4. Hanging the system, which is more dangerous than crashing it, since there is no automatic reboot. Productivity can be disrupted indefinitely.
DoS attacks can also be major components of other types of attacks.

TCP SYN Flood Attack : A common example of a DoS attack is the TCP SYN flood attack, in which the attacker exploits behavior inherent in the TCP protocol. A TCP session is established by using a three-way handshake mechanism, which allows the client and the host to synchronize the connection and agree upon the initial sequence numbers. When the client connects to the host, it sends a SYN request to establish and synchronize the connection. The host replies with a SYN/ACK, again to synchronize. Then the client acknowledges it received the SYN/ACK packet by sending an ACK. When the host receives the ACK the connection becomes OPEN, allowing traffic from both sides (full-duplex). The connection remains open until the client or the host issues a FIN or RST packet, or the connection times out. This process is illustrated below:
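The original figure is not reproduced here, but the same three-packet exchange can be sketched with Scapy (assumptions: Scapy, root privileges, and a placeholder lab host; note that the local kernel knows nothing about this hand-crafted connection and may itself answer the SYN/ACK with a RST):

    from scapy.all import IP, TCP, sr1, send

    host = "192.0.2.10"                                        # placeholder lab host

    syn = IP(dst=host) / TCP(dport=80, flags="S", seq=1000)    # 1. client sends SYN
    synack = sr1(syn, timeout=2, verbose=0)                    # 2. host replies SYN/ACK
    if synack is not None and (int(synack[TCP].flags) & 0x12) == 0x12:
        ack = IP(dst=host) / TCP(dport=80, flags="A",
                                 seq=synack.ack, ack=synack.seq + 1)
        send(ack, verbose=0)                                   # 3. client sends ACK
        print("handshake complete - connection OPEN")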

In a TCP SYN flood attack, the attacker creates half-open TCP connections by sending the initial SYN packet with a forged IP address, and never acknowledges the SYN/ACK from the host with an ACK. This will eventually lead to the host reaching a limit and no longer accepting connections from legitimate users either. Many routers and other network nodes today are able to detect SYN floods by monitoring the number of unacknowledged TCP sessions and killing them before the session queue is full. They can often be configured to set the maximum allowed number of half-open connections, and to limit the amount of time the host waits for the final acknowledgement. Without these preventive measures, the server could eventually run out of memory, causing it to crash entirely.
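On the defender's side, the number of half-open connections is easy to observe. The following Python sketch is Linux-only and assumes the standard /proc/net/tcp format, where the fourth column of each socket line is the TCP state and 03 means SYN_RECV; the threshold is an arbitrary illustrative value.

    # Count half-open (SYN_RECV) sockets on a Linux host.
    SYN_RECV = "03"        # TCP state code used in /proc/net/tcp
    THRESHOLD = 100        # arbitrary illustrative limit

    with open("/proc/net/tcp") as f:
        sockets = f.readlines()[1:]     # skip the header line

    half_open = sum(1 for line in sockets if line.split()[3] == SYN_RECV)
    print("half-open connections:", half_open)
    if half_open > THRESHOLD:
        print("warning: possible SYN flood in progress")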
UDP Flood Attacks : UDP is a connectionless protocol that doesn’t use a handshake mechanism to establish a connection. This makes it relatively easy to abuse for flood attacks. A common type of UDP flood attack, often referred to as a Pepsi attack, is an attack in which the attacker sends a large number of forged UDP packets to random diagnostic ports on a target host. The CPU time, memory, and bandwidth required to process these packets may cause the target to become unavailable for legitimate users. To minimize the risk of a UDP flood attack, disable all unused UDP services on hosts and block the unused UDP ports if you use a firewall to protect your network.
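Purely to illustrate the shape of the traffic (a single packet, and strictly for an isolated lab), a forged UDP datagram of the kind described above could be built with Scapy as follows; both addresses are placeholder documentation addresses, and a Pepsi-style flood is simply this packet repeated at a high rate:

    import random
    from scapy.all import IP, UDP, send

    pkt = (IP(src="198.51.100.7", dst="192.0.2.10")        # forged source address
           / UDP(sport=random.randint(1024, 65535),
                 dport=random.choice([7, 13, 19, 37]))     # echo/daytime/chargen/time
           / (b"x" * 64))                                  # filler payload
    send(pkt, verbose=0)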
Ping of Death Attacks : Another well-known DoS attack is the Ping of Death. It is targeted at hosts with a weak implementation of the TCP/IP stack. The attacker sends an ICMP Echo request packet with a size larger than 65,535 bytes, causing the buffer at the receiver to overflow when the packet is reassembled. This can cause the target system to crash and/or reboot. Especially older Windows versions (95/NT4), but also older Mac and Linux operating systems and other network devices such as routers, were vulnerable to the Ping of Death. Modern operating systems and network devices safely disregard these oversized packets. Older systems can usually be updated with a patch.
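The arithmetic behind the attack can be checked in a few lines; the fragment values below are illustrative, chosen so that the reassembled datagram exceeds the 16-bit IP length limit.

    # An IP datagram's total-length field is 16 bits: at most 65,535 bytes.
    MAX_IP_DATAGRAM = 2 ** 16 - 1          # 65,535 bytes

    offset_units = 8189                    # fragment offset counts 8-byte units
    last_fragment_payload = 1480           # illustrative final fragment size

    end_of_data = offset_units * 8 + last_fragment_payload
    print("reassembled size:", end_of_data, "bytes")
    print("overflows the reassembly buffer:", end_of_data > MAX_IP_DATAGRAM)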
Smurf Attacks : A nasty type of DoS attack is the Smurf attack, which is made possible mostly by badly configured network devices that respond to ICMP echoes sent to broadcast addresses. The attacker sends a large amount of ICMP traffic to a broadcast address and uses a victim’s IP address as the source IP, so the replies from all the devices that respond to the broadcast address will flood the victim. The nasty part of this attack is that the attacker can use a low-bandwidth connection to kill high-bandwidth connections. The amount of traffic sent by the attacker is multiplied by a factor equal to the number of hosts behind the router that reply to the ICMP echo packets.

The diagram above depicts a Smurf attack in progress. The attacker sends a stream of ICMP echo packets to the router at 128 Kbps. The attacker modifies the packets by changing the source IP to the IP address of the victim’s computer, so replies to the echo packets will be sent to that address. The destination address of the packets is a broadcast address of the so-called bounce site, in this case 129.64.255.255. If the router is (mis-)configured to forward these broadcasts to hosts on the other side of the router (by forwarding layer 3 broadcasts to the layer 2 broadcast address FF:FF:FF:FF:FF:FF), all these hosts will reply. In the above example that would mean 640 Kbps (5 x 128 Kbps) of ICMP replies will be sent to the victim’s system, which would effectively disable its 512 Kbps connection. Besides the target system, the intermediate router is also a victim, and so are the hosts in the bounce site. A similar attack that uses UDP echo packets instead of ICMP echo packets is called a Fraggle attack.
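The amplification factor in this scenario is easy to verify with a few lines of arithmetic; the numbers below are simply the ones used in the example above.

    attacker_stream_kbps = 128     # ICMP echo stream sent by the attacker
    responding_hosts = 5           # bounce-site hosts that answer the broadcast
    victim_link_kbps = 512

    reply_kbps = attacker_stream_kbps * responding_hosts   # 5 x 128 = 640 Kbps
    print("replies toward victim:", reply_kbps, "Kbps")
    print("victim's link saturated:", reply_kbps > victim_link_kbps)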
It is difficult to prevent Smurf attacks entirely because they are made possible by incorrectly configured networks belonging to third parties. The Smurf Amplifier Registry (SAR) at http://www.powertech.no/smurf/ and Netscan.org are publicly available databases that can be used to configure routers and firewalls to block ICMP traffic from these networks. The Smurf Amplifier Registry (SAR) can be downloaded in Cisco ACL format. If you use Cisco routers, make sure all interfaces are configured with the no ip directed-broadcast command (the default since IOS 12.0).
Teardrop Attacks : When data is sent across a TCP/IP network, it is fragmented into small fragments. The fragments carry an Offset field in their IP header that specifies where certain data starts and ends. In a Teardrop attack, the attacker sends fragments with invalid overlapping values in the Offset field, which may cause the target system to crash when it attempts to reassemble the data. Today’s implementations of the TCP/IP stack safely disregard such invalid packets.
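A sketch of what such malformed fragments look like (Scapy assumed, lab use only, placeholder target; a modern stack will simply discard them) is shown below. The second fragment's offset deliberately points back into data already covered by the first:

    from scapy.all import IP, send

    target = "192.0.2.10"                                         # placeholder lab host

    frags = [
        IP(dst=target, id=42, flags="MF", frag=0) / (b"A" * 24),  # covers bytes 0-23
        IP(dst=target, id=42, frag=1) / (b"B" * 24),              # offset 8: overlaps 8-31
    ]
    send(frags, verbose=0)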
Bonk Attacks : The Bonk attack is similar to a Teardrop attack. Instead of sending IP fragments with overlapping Offset values, it sends fragments with Offset values that are too large. As with the Teardrop attack, this may cause the target system to crash.
Land Attacks : During a Land attack, the attacker sends a forged TCP SYN packet with the same source and destination IP address. This confuses systems with outdated versions of the TCP/IP stack, because the target receives a TCP connection request apparently from itself. This may cause the target system to crash.
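The Land packet itself is essentially a one-liner; in this Scapy sketch (lab use only, placeholder address) the source and destination address and port are identical:

    from scapy.all import IP, TCP, send

    victim = "192.0.2.10"                  # placeholder lab host
    send(IP(src=victim, dst=victim) / TCP(sport=139, dport=139, flags="S"),
         verbose=0)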
Distributed Denial of Service (DDoS) Attacks : When an attacker attacks from multiple source systems, it is called a Distributed Denial of Service (DDoS) attack. If the attacker is able to organize a large number of users to connect to the same website at the same time, the web server, often configured to allow a maximum number of client connections, will deny further connections. Hence, a denial of service will occur. This is a common method used by ‘Hacktivists’. An organization like Greenpeace could organize such an attack against a Fortune 500 company’s website that sells fur, for example.
However, the attacker typically does not own these computers. The actual owners are usually not aware of their system being used in a DDoS attack. The attacker usually distributes Trojan Horses that contain malicious code that allows the attacker to control their system. Such malicious code is also referred to as a Backdoor. Once these Trojan Horses are executed, they may use email to inform the attacker that the system can be remotely controlled. The attacker will then install the tools required to perform the attack. Once the attacker controls enough systems, which are referred to as zombies or slaves, he or she can launch the attack. The following diagram depicts such a scenario:

In most cases, it is difficult or even impossible to prevent DDoS attacks entirely. Some routers, firewalls, and IDSs are able to detect DoS attacks and block suspicious connections to prevent a service from being overloaded. When you are the victim of an ongoing DDoS attack, you should contact your ISP to block the IP addresses that seem to be the source of the attack. However, the attacker may forge the source addresses, making it very difficult to trace the actual source(s) of the attack without extensive cooperation of your ISP.
4.0 Recovering
Unfortunately, responding to a DoS attack requires filtering, which is very reactive in nature. The method of filtering depends on the type and the source of the attack. As described above, some attacks’ unique identifiers are in the source IP, while mass worm attacks can be detected based on the payload. Traditional DoS attack techniques most often do not involve host compromise, so they are the easiest to respond to. If the source IP address or the pattern of the attack is identified, it is possible to filter the traffic at the router. However, recent developments in DoS attacks, such as the Code Red worm and NIMDA attacks, have changed that perspective, since these attacks also involve compromise of certain platforms of web hosts and generate various patterns of scanning and exploits. In normal circumstances, after an attack is filtered, there is a list of other activities which must be conducted to recover the network services. This is because filtering is only a temporary solution. Recovery and prevention steps are crucial to maintain the service. Recovery and rectification of the host often involves the following measures:
• Implement Access Control Lists (ACLs) to limit malicious traffic – this can be done only when the full pattern of the attack is identified, with its payload if any, by applying a specific “ban” on packets based on the pattern of the header or the payload. More information about DoS attacks and countermeasures using ACLs is available at http://www.cisco.com/warp/public/707/22.html .
• Content Filtering Devices or Proxy Servers can also be used to filter out DoS attacks that can be identified based on their unique payloads.
• Reformat and reinstall the operating system and relevant applications on the computer to ensure complete elimination of malicious codes.
• Removal of unnecessary listening services, since these services cause the device to respond to unnecessary requests which can trigger an attack.
• Upgrade software to the latest version, since DoS effects are often due to software vulnerabilities (e.g. buffer overflows) which cause malfunction or misbehavior of the service.
• Fine-tune the respective Internet applications to prevent the system from consuming too many simultaneous sessions. The AS/400, for example, has a DoS limit on the number of timeout requests, and HTTP servers have limits on the number of simultaneous sessions they can handle.
• Restrict access to listening services on the host using host ACLs; this can be done using tools such as TCP Wrappers on the respective hosts (a minimal analogue is sketched after this list).
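The host-ACL idea can be sketched in a few lines of Python. This is only a rough analogue of TCP Wrappers (which in practice is driven by /etc/hosts.allow and /etc/hosts.deny); the port and allowlist below are placeholders.

    import socket

    ALLOWED = {"192.0.2.5", "192.0.2.6"}   # placeholder allowlist

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 2525))            # placeholder service port
    srv.listen(5)
    while True:
        conn, (addr, _port) = srv.accept()
        if addr not in ALLOWED:
            conn.close()                   # refuse hosts not on the allowlist
            continue
        conn.sendall(b"hello\n")
        conn.close()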
In most situations, responding to DoS attacks which cause high resource and bandwidth utilization requires cooperation from the Internet Service Provider, which can provide the filtering mechanism at the upstream router. There the network bandwidth consumed by the attack is absorbed by the ISP, rather than by the small-bandwidth last mile and the multiple targets at the customer end.

5.0 Prevention
What makes DoS attacks so difficult to prevent is that they affect not only open services on devices but also closed ports; as long as the service request reaches the device, bandwidth utilization will be affected. Because the attack can be crafted in many forms and targeted at many services and devices, it is very difficult to prevent devices from being susceptible to such attacks.
Even a legitimate request packet can turn into malicious traffic if it creates a recursive effect such as opening multiple simultaneous connections. That is another reason why DoS is very difficult to prevent. However, like other network threats, there is no silver-bullet solution to the problem. Prevention of DoS requires a combination of the following actions:
• High redundancy and high availability network design
In order to prevent a network from falling victim to a DoS attack it is crucial to design the network so that there is no single point of failure. However, such high availability will incur additional cost, especially in maintaining dual connections to the Internet. It is also desirable that ISPs provide load balancing on the upstream router to load-share the redundant links.
• Perimeter Defense
Routers and firewalls should pass only legitimate packets through to the internal network. An example is limiting the internal web server from initiating port 80 connections destined to external hosts. Such filtering can prevent propagation of Code Red worm attacks, which cause a stream of scanning to various IP addresses on port 80. Preventing IP address spoofing using egress [2] and ingress filtering [1] is an example of filtering at the gateway or router level to prevent packet spoofing from internal hosts and to internal hosts, respectively. However, it will not prevent attacks from legitimate IP addresses within the network. Every interface on a router should prohibit packets that logically could not come from that network interface.
• Defense In-depth
Implementation of an Intrusion Detection System (IDS) will allow detection of "slave", "master" or "agent" machine communications. Action can then be taken to remove those infected hosts from the network. However, an IDS may be able to detect known attacks but not new variations of these attacks.
• Host Hardening
Hardening the respective devices on the network will help protect the hosts from DoS attacks. Host hardening involves upgrading the operating system, applying relevant patches for the operating system and required applications, closing irrelevant services, customizing and tightening configurations, applying access control lists on the required services, changing default passwords, and applying good password policies. Known buffer overflow attacks can be prevented by keeping the host up to date with patches or version upgrades.
• Malware Detection and Prevention
The hosts and the network must have antivirus software installed and scanning any new data that is introduced, while file integrity checkers are used to detect any unauthorized attempt to change the original data. This will prevent infection by malicious code and attempts to rootkit the host. A compromised host could become a handler for malicious users who wish to conduct DoS attacks.
Periodic Scanning : Periodic network vulnerability scanning will detect vulnerable hosts and new infections. It is necessary to conduct periodic vulnerability scanning since in any network there are always new production hosts going online, or new devices being connected to the network.
Policy Enforcement :
Last but not least is having strong policy enforcement on acceptable use and management of computing resources. It is also a daunting task to ensure that all in-house and outsourced code development applies good programming practices to avoid loopholes such as buffer overflows and DoS. Rigorous testing of preproduction systems is essential to avoid unwanted loopholes. Despite applying all these measures there is still no guarantee that one will be immune to DoS attacks, but they will mitigate the effect of DoS attacks. Moreover, applying the above recommendations will also mitigate other forms of malicious activity such as session hijacking, buffer overflow attacks and reconnaissance. It will not only prevent your network from becoming the target of DoS attacks, but also prevent it from becoming the launching pad for such attacks.

IPSEC
The language of the Internet is IP, the Internet Protocol. Everything can, and does, travel over IP. One thing IP does not provide, though, is security. IP packets can be forged, modified, and inspected en route. IPSec is a suite of protocols that seamlessly integrate security into IP and provide data source authentication, data integrity, confidentiality, and protection against replay attacks.
With IPSec, the power of the Internet can be exploited to its fullest potential.
• Communication is the lifeblood of business. Without a guarantee that a customer's order is authentic, it is difficult to bill for a service. Without a guarantee that confidential information will remain confidential, it is impossible for businesses to grow and partnerships to be formed.
• Unless there is a guarantee that records and information can remain confidential, the health care industry cannot utilize the Internet to expand its services and cut its costs.
• Personal services, such as home banking, securities trading, and insurance can be greatly simplified and expanded if these transactions can be done securely.
The growth of the Internet is truly dependent on security, and the only technique for Internet security that works with all forms of Internet traffic is IPSec. IPSec runs over the current version of IP, IPv4, and also the next generation of IP, IPv6. In addition, IPSec can protect any protocol that runs on top of IP such as TCP, UDP, and ICMP. IPSec is truly the most extensible and complete network security solution.
IPSec enables end-to-end security so that every single piece of information sent to or from a computer can be secured. It can also be deployed inside a network to form Virtual Private Networks (VPNs) where two distinct and disparate networks become one by connecting them with a tunnel secured by IPSec.
Thankfully, IPSec and IKE are constructed with partial defenses against denial of service attacks. These defenses do not defeat all denial of service attacks, but merely increase the cost and complexity to launch them.
IP Packets have no inherent security. It is relatively easy to forge the addresses of IP packets, modify the contents of IP packets, replay old packets, and inspect the contents of IP packets in transit. Therefore, there is no guarantee that IP datagrams received are (1) from the claimed sender (the source address in the IP header); (2) that they contain the original data that the sender placed in them; or (3) that the original data was not inspected by a third party while the packet was being sent from source to destination. IPSec is a method of protecting IP datagrams. This protection takes the form of data origin authentication, connectionless data integrity authentication, and data content confidentiality.
IPSec provides a standard, robust, and extensible mechanism in which to provide security to IP and upper-layer protocols (e.g., UDP or TCP). IPSec protects IP datagrams by defining a method of specifying the traffic to protect, how that traffic is to be protected, and to whom the traffic is sent. IPSec can protect packets between hosts, between network security gateways (e.g., routers or firewalls), or between hosts and security gateways.
The method of protecting IP datagrams or upper-layer protocols is by using one of the IPSec protocols, the Encapsulating Security Payload (ESP) or the Authentication Header (AH). AH provides proof-of-data origin on received packets, data integrity, and antireplay protection. ESP provides all that AH provides in addition to optional data confidentiality. One subtle difference between the two is the scope of coverage of authentication. It should be noted that the ultimate security provided by AH or ESP is dependent on the cryptographic algorithms applied by them.

The Architecture
Figure 3.1. IP packets protected by IPSec in transport mode and tunnel mode

Original IP packet:              [IP Header][TCP Header][Data]
Transport mode protected packet: [IP Header][IPSec Header][TCP Header][Data]
Tunnel mode protected packet:    [Outer IP Header][IPSec Header][Inner IP Header][TCP Header][Data]

Encapsulating Security Payload (ESP)
ESP is the IPSec protocol that provides confidentiality, data integrity, and data source authentication of IP packets, and also provides protection against replay attacks. It does so by inserting a new header—an ESP header—after an IP header (and any IP options) and before the data to be protected, either an upper-layer protocol or an entire IP datagram, and appending an ESP trailer. ESP is a new IP protocol and an ESP packet is identified by the protocol field of an IPv4 header and the "Next Header" field of an IPv6 header. If the value is 50 it's an ESP packet and immediately following the IP header is an ESP header. RFC2406 defines ESP. The ESP header is not encrypted but a portion of the ESP trailer is.
Figure 3.4. An ESP-protected IP packet

[IP Header][ESP Header][Protected data][ESP Trailer]

In the original figure, the protected data and part of the ESP trailer are marked as encrypted, and the span from the ESP header through the trailer is marked as authenticated.
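As a small sketch of the framing just described (an assumption: Scapy and its IPsec layer are installed; the address and SPI are placeholders, and no real Security Association or encryption is configured), stacking ESP on IP shows the protocol-50 encapsulation:

    from scapy.all import IP
    from scapy.layers.ipsec import ESP

    # Build a bare IP/ESP packet; Scapy's layer binding sets the IP
    # protocol field to 50, which is how receivers recognize ESP.
    pkt = IP(dst="192.0.2.10") / ESP(spi=0x1000, seq=1)
    print("IP protocol number:", pkt[IP].proto)
    print(pkt.summary())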

Authentication Header (AH)
Like ESP, AH provides data integrity, data source authentication, and protection against replay attacks. It does not provide confidentiality. Because of this the AH header is much simpler than ESP; it is merely a header and not a header plus trailer (figure 3.5). In addition, all of the fields in the AH header are in the clear.
Figure 3.5. An AH-protected IP packet

[IP Header][AH Header][Protected data]

In the original figure, the entire packet, including the IP header, is marked as authenticated.

IPSec Modes :
In this section, we describe how the IPSec protocols, AH and ESP, implement the tunnel and transport modes. There are four possible combinations of modes and protocol: AH in transport mode, AH in tunnel mode, ESP in transport mode, and ESP in tunnel mode. In practice, AH in tunnel mode is not used because it protects the same data that AH in transport mode protects. The AH and ESP headers do not change between tunnel and transport modes. The difference is more semantic in nature: what they are protecting, an entire IP packet or just an IP payload.
Transport Mode
In transport mode, AH and ESP protect the transport header and its payload. In this mode, AH and ESP intercept the packets flowing from the transport layer into the network layer and provide the configured security.
Let us consider an example. A and B are two hosts that have been configured so that all transport layer packets flowing between them are encrypted. In this case, transport mode of ESP is used. If the requirement is just to authenticate transport layer packets, then transport mode of AH is used.
When security is not enabled, transport layer packets such as TCP and UDP flow into the network layer, IP, which adds the IP header and calls into the data link layer. When security in the transport layer is enabled, the transport layer packets flow into the IPSec component. The IPSec component is implemented as part of the network layer (when integrated with the OS). The IPSec component adds the AH, ESP, or both headers, and invokes the part of the network layer that adds the network layer header.
When both AH and ESP are used in transport mode, ESP should be applied first. The reason becomes obvious on inspection: if the transport packet is first protected using AH and then using ESP, the data integrity is applicable only to the transport payload, as the ESP header is added later, as shown in Figure 4.6.
[IP Header][ESP Header][AH Header][TCP Header][Data]

Figure 4.6. Packet format with ESP and AH.
This is not desirable because the data integrity should be calculated over as much data as possible.
If the packet is protected using AH after it is protected using ESP, then the data integrity applies to the ESP payload that contains the transport payload as shown in Figure 4.7.
[IP Header][AH Header][ESP Header][TCP Payload]

Figure 4.7. Packet format with AH and ESP.
Tunnel Mode :
IPSec in tunnel mode is normally used when the ultimate destination of the packet is different from the security termination point. Tunnel mode is also used in cases when the security is provided by a device that did not originate the packets, as in the case of VPNs. It is also used when a router provides security services for packets it is forwarding.
In the case of tunnel mode, IPSec encapsulates an IP packet with IPSec headers and adds an outer IP Header as shown in Figure 4.9.
[Outer IP Header][ESP Header][Inner IP Header][Network payload]

Figure 4.9. IPSec tunnel mode packet format.
An IPSec tunneled mode packet has two IP headers—inner and outer. The inner header is constructed by the host and the outer header is added by the device that is providing the security services. This can be either the host or a router. There is nothing that precludes a host from providing tunneled mode security services end to end. However, in this case there is no advantage to using tunnel mode instead of transport mode. In fact, if the security services are provided end to end, transport mode is better because it does not add an extra IP header.
IPSec defines tunnel mode for both AH and ESP. IPSec also supports nested tunnels, where we tunnel a packet that is already tunneled.
The Internet Key Exchange : IKE, described in RFC2409, is a hybrid protocol. It is based on a framework defined by the Internet Security Association and Key Management Protocol (ISAKMP), defined in RFC2408, and implements parts of two key management protocols—Oakley and SKEME. In addition IKE defines two exchanges of its own.
Oakley is a protocol developed by Hilarie Orman, a cryptographer from the University of Arizona. It is a free-form protocol that allows each party to advance the state of the protocol at its own speed. From Oakley, IKE borrowed the idea of different modes, each producing a similar result—an authenticated key exchange— through the exchange of information. In Oakley, there was no definition of what information to exchange with each message. The modes were examples of how Oakley could be utilized to achieve a secure key exchange. IKE codified the modes into exchanges. By narrowing the flexibility of the Oakley model, IKE limits the wide range of possibilities that Oakley allows yet still provides multiple modes, albeit in a well-defined manner.
SKEME is another key exchange protocol, designed by cryptographer Hugo Krawczyk. SKEME defines a type of authenticated key exchange in which the parties use public key encryption to authenticate each other and "share" components of the exchange. Each side encrypts a random number in the public key of the peer and both random numbers (after decryption) contribute to the ultimate key. One can optionally do a Diffie-Hellman exchange along with the SKEME share technique for Perfect Forward Secrecy (PFS), or merely use another rapid exchange, which does not require public key operations, to refresh an existing key. IKE borrows this technique directly from SKEME for one of its authentication methods (authentication with public key encryption) and also borrows the notion of rapid key refreshment without PFS.
It is upon these three protocols—ISAKMP, Oakley, and SKEME—that IKE is based. It is a hybrid protocol; it uses the foundation of ISAKMP, the modes of Oakley, and the share and rekeying techniques of SKEME to define its own unique way of deriving authenticated keying material and negotiating shared policy.
IKE is a generic protocol that can establish security associations for multiple security services. IPSec is one such service.
IKE
ISAKMP doesn't define a key exchange. That is left to other protocols. For IPSec the defined key exchange is IKE—the Internet Key Exchange. IKE uses the language of ISAKMP to define a key exchange and a way to negotiate security services. IKE actually defines a number of exchanges and options that can be applied to the exchanges. The end result of an IKE exchange is an authenticated key and agreed-upon security services—in other words, an IPSec security association. But IKE is not only for IPSec. It is generic enough to negotiate security services for any other protocol that requires them, for instance routing protocols which require authenticated keys like RIP or OSPF.
IKE is a two-phase exchange. The first phase establishes the IKE security association and the second uses that security association to negotiate security associations for IPSec. Unlike ISAKMP, IKE defines the attributes of its security association. It doesn't define the attributes of any other security association, though. That is left up to a domain of interpretation (DOI). For IPSec there exists the Internet IP Security domain of interpretation. Other protocols are free to write their own DOI for IKE.
PPTP
WHAT IS PPTP?
• Point-to-Point Tunneling Protocol (PPTP) is a networking technology that supports multiprotocol virtual private networks (VPNs). It enables remote users on Microsoft Windows NT® Workstation, Windows® 95, and Windows 98 operating systems, and on other Point-to-Point Protocol (PPP)-enabled systems, to dial into a local Internet service provider and connect securely to their corporate network through the Internet.
What does PPTP do for me? What benefits does it deliver?
• PPTP enables a low-cost, private connection to a corporate network through the public Internet. This is particularly useful for people who work from home or people who travel and must access their corporate networks remotely to check e-mail or perform other activities. Rather than dialing a long distance number to remotely access a corporate network, with PPTP a user can dial a local phone number, using a V.34 modem or ISDN, to an Internet service provider’s point of presence. That PPTP session then provides a secure connection through the Internet back to the corporate network.
• PPTP is also easy and inexpensive to implement. Many organizations would like to out-source dial-up access to their corporate backbones in a manner that is cost effective, hassle-free, protocol-independent, secure, and that requires no changes to the network addressing that is in place today. Virtual WAN support using PPTP over IP backbones is one very effective way to do this.
• Finally, PPTP also enables dense communications front-ends to Windows NT Server for V.34 and ISDN.
WHO BENEFITS FROM USING PPTP?
• People who work in remote offices, from their homes, or on the road, who need access to their corporate LANs. LAN administrators benefit from the ease of implementation and the peace of mind PPTP’s security offers. In addition, LAN administrators benefit from the cost savings related to equipment purchase, carrier service, and ongoing administration/maintenance that PPTP enables. Internet Service Providers (ISPs) can use PPTP to provide an added-value service and a point of differentiation for their customers. Developers of firewalls and other Internet access products also benefit from PPTP’s additional, complementary security features.
HOW IS PPTP DIFFERENT FROM DIALING UP TO THE INTERNET TODAY TO ACCESS A CORPORATE NETWORK CONNECTED TO THE INTERNET?
• Today, businesses have to change their existing network addressing schemes to make this happen, especially if they've set up their network addresses without reserving those addresses. Also, today remote users cannot access heterogeneous corporate networks including IPX/SPX or NetBEUI. With PPTP, businesses can let their remote users access heterogeneous corporate networks without changing existing network addresses. By tunneling PPP, we also enable the corporate LAN administrators to expediently add/remove network access for their employees without incurring delays if this access was managed centrally by the ISP. In other words, the LAN administrator retains control of who is granted remote access to the corporate network and can administer this access efficiently.
DOES THIS PROTOCOL MAKE IT POSSIBLE FOR WINDOWS-BASED TUNNELS AND UNIX-BASED TUNNELS TO COMMUNICATE WITH EACH OTHER? WILL IT EVER?
• Any device that implements PPTP can communicate with any other PPTP server, whether it is UNIX, Windows NT, or others.
IS PPTP PROPRIETARY MICROSOFT TECHNOLOGY?
• No, PPTP is not proprietary. PPTP builds upon two fundamentally important Internet standards supported by the Internet Engineering Task Force (IETF): IP and PPP. In other words, it is a natural evolution from PPP and IP.

L2TP
Feature : The Layer 2 Tunnel Protocol (L2TP) is an emerging Internet Engineering Task Force (IETF) standard that combines the best features of two existing tunneling protocols: Cisco's Layer 2 Forwarding (L2F) and Microsoft's Point-to-Point Tunneling Protocol (PPTP). L2TP is an extension to the Point-to-Point Protocol (PPP), which is an important component for VPNs. VPNs allow users and telecommuters to connect to their corporate intranets or extranets. VPNs are cost-effective because users can connect to the Internet locally and tunnel back to connect to corporate resources. This not only reduces overhead costs associated with traditional remote access methods, but also improves flexibility and scalability.

Figure 1 - L2TP Architecture

Benefits
L2TP offers the following benefits:
• Vendor interoperability.
• Can be used as part of the wholesale access solution, which allows the telco or service provider to offer VPNs to Internet Service Providers (ISPs) and other service providers.
• Can be operated as a client-initiated VPN solution, where enterprise customers using a PC can initiate the L2TP tunnel themselves using third-party client software.
• All value-added features currently available with Cisco's L2F, such as load sharing and backup support, will be available in future IOS releases of L2TP.
• Supports Multihop, which enables Multichassis Multilink PPP in multiple home gateways. This allows you to stack home gateways so that they appear as a single entity.
List of Terms :
challenge handshake authentication protocol (CHAP)—A PPP cryptographic challenge/response authentication protocol in which the cleartext password is not passed over the line. This allows the secure exchange of a shared secret between the two endpoints of a connection.
control messages—Messages exchanged between LAC and LNS pairs, operating in-band within the tunnel protocol. Control messages govern aspects of the tunnel and of the sessions within the tunnel.
Integrated Services Digital Network (ISDN)—Communication protocols offered by telephone companies that permit telephone networks to carry data, voice, and other source traffic.
Layer 2 Tunnel Protocol (L2TP)—A Layer 2 tunneling protocol that is an extension to the PPP protocol used for Virtual Private Networks (VPNs). L2TP merges the best features of two existing tunneling protocols: Microsoft's PPTP and Cisco's L2F. It is the emerging IETF standard, currently being drafted by participants from Ascend, Cisco Systems, Copper Mountain Networks, IBM, Microsoft, and 3Com.
Link Control Protocol (LCP)—A protocol that establishes, configures, and tests data link connections used by PPP.
L2TP access concentrator (LAC)—An L2TP device that the client directly connects to and whereby PPP frames are tunneled to the L2TP network server (LNS). The LAC needs only implement the media over which L2TP is to operate to pass traffic to one or more LNSs. It may tunnel any protocol carried within PPP. The LAC is the initiator of incoming calls and the receiver of outgoing calls. Analogous to the Layer 2 Forwarding (L2F) network access server (NAS).
L2TP network server (LNS)—Termination point for an L2TP tunnel and access point where PPP frames are processed and passed to higher layer protocols. An LNS operates on any platform capable of PPP termination. The LNS handles the server side of the L2TP protocol. L2TP relies only on the single media over which L2TP tunnels arrive. The LNS may have a single LAN or WAN interface, yet still be able to terminate calls arriving at any of the LAC’s full range of PPP interfaces (asynchronous, synchronous, ISDN, V.120, etc.). The LNS is the initiator of outgoing calls and the receiver of incoming calls. Analogous to the Layer 2 Forwarding (L2F) home gateway (HGW).
Network Access Server (NAS)—A device providing temporary, on-demand network access to users. The access is point-to-point typically using PSTN or ISDN lines. A NAS may also serve as a LAC, LNS, or both. In Cisco's implementation for L2TP, the NAS serves as a LAC for incoming calls and serves as a LNS for outgoing calls. The NAS is synonymous with LAC.
Network Control Protocol (NCP)—A PPP protocol for the negotiation of OSI Layer 3 (the network layer) parameters.
Password Authentication Protocol (PAP)—A simple PPP authentication mechanism in which a cleartext username and password are transmitted to prove identity. PAP is not as secure as CHAP because the password is passed in "cleartext."
point-of-presence (POP)—The access point to a service provider's network.
Point-to-Point Protocol (PPP)—A protocol that encapsulates network layer protocol information over point-to-point links. The RFC for PPP is RFC 1661.
Point-to-Point Tunneling Protocol (PPTP)—Microsoft's Point to Point Tunneling Protocol. Some of the features in L2TP were derived from PPTP.
public switched telephone network (PSTN)—Telephone networks and services in place worldwide.
session—A single, tunneled PPP session. Also referred to as a call.
tunnel—A virtual pipe between the LAC and LNS that can carry multiple PPP sessions.
tunnel ID—A two-octet value that denotes a tunnel between a LAC and an LNS.
Virtual Private Dialup Networking (VPDN)—A system that permits dial-in networks to exist remotely to home networks, while giving the appearance of being directly connected. VPDNs use L2TP and L2F to terminate the Layer 2 and higher parts of the network connection at the LNS, instead of the LAC.

Functional Description :
Using L2TP tunneling, an Internet Service Provider (ISP) or other access service can create a virtual tunnel to link customers’ remote sites or remote users with corporate home networks. The L2TP access concentrator (LAC) located at the ISP’s point of presence (POP) exchanges PPP messages with remote users and communicates by way of L2TP requests and responses with the customer’s L2TP network server (LNS) to set up tunnels. L2TP passes protocol-level packets through the virtual tunnel between end points of a point-to-point connection. Frames from remote users are accepted by the ISP’s POP, stripped of any link framing or transparency bytes, encapsulated in L2TP, and forwarded over the appropriate tunnel. The customer’s home gateway accepts these L2TP frames, strips the L2TP encapsulation, and processes the incoming frames for the appropriate interface. Figure 2 shows the L2TP tunnel detail and how user "lsmith" connects to the LNS to access the designated corporate intranet.
Figure 2 - L2TP Tunnel Structure






Web Security : Web security is a set of procedures, practices, and technologies for protecting web servers, web users, and their surrounding organizations.

1.1.1 Why Worry about Web Security?
The World Wide Web is the fastest growing part of the Internet. Increasingly, it is also the part of the Internet that is most vulnerable to attack.
Web servers make an attractive target for attackers for many reasons:
Publicity
Web servers are an organization's public face to the Internet and the electronic world. A successful attack on a web server is a public event that may be seen by hundreds of thousands of people within a matter of hours. Attacks can be mounted for ideological or financial reasons; alternatively, they can simply be random acts of vandalism.
Commerce
Many web servers are involved with commerce and money. Indeed, the cryptographic protocols built into Netscape Navigator and other browsers were originally placed there to allow users to send credit card numbers over the Internet without fear of compromise. Web servers have thus become a repository for sensitive financial information, making them an attractive target for attackers. Of course, the commercial services on these servers also make them targets of interest.
Proprietary information
Organizations are using web technology as an easy way to distribute information both internally, to their own members, and externally, to partners around the world. This proprietary information is a target for competitors and enemies.
Network access
Because they are used by people both inside and outside an organization, web servers effectively bridge an organization's internal and external networks. Their position of privileged network connectivity makes web servers an ideal target for attack, as a compromised web server may be used to further attack computers within an organization.
Unfortunately, the power of web technology makes web servers and browsers especially vulnerable to attack
as well:
Server extensibility
By their very nature, web servers are designed to be extensible. This extensibility makes it possible to connect web servers with databases, legacy systems, and other programs running on an organization's network. If not properly implemented, modules that are added to a web server can compromise the security of the entire system.
Browser extensibility
In the same manner that servers can be extended, so can web clients. Today, technologies such as ActiveX, Java, JavaScript, VBScript, and helper applications can enrich the web experience with many new features that are not possible with the HTML language alone. Unfortunately, these technologies can also be subverted and employed against the browser's user - often without the user's knowledge.
Disruption of service
Because web technology is based on the TCP/IP family of protocols, it is subject to disruption of service: either accidentally or intentionally through denial-of-service attacks. People who use this technology must be aware of its failings and prepare for significant service disruptions.
Complicated support
Web browsers require external services such as DNS (Domain Name Service) and IP (Internet Protocol) routing to function properly. The robustness and dependability of those services may not be known and can be vulnerable to bugs, accidents, and subversion. Subverting a lower-level service can result in problems for the browsers as well.
Pace of development
The explosive growth of WWW and electronic commerce has been driven by (and drives) a frenetic pace of innovation and development. Vendors are releasing new software features and platforms, often with minimal (or no) consideration given to proper testing, design, or security. Market forces pressure users to adopt these new versions with new features to stay competitive. However, new software may not be compatible with old features or may contain new vulnerabilities unknown to the general population.

1.1.3 What's a "Secure Web Server" Anyway?
In recent years, the phrase "secure web server" has come to mean different things to different people:
• For the software vendors that sell them, a secure web server is a program that implements certain cryptographic protocols, so that information transferred between a web server and a web browser cannot be eavesdropped upon.
• For users, a secure web server is one that will safeguard any personal information that is received or collected. It's one that supports their privacy and won't subvert their browser to download viruses or other rogue programs onto their computer.
• For a company that runs one, a secure web server is resistant to a determined attack over the Internet or from corporate insiders.

1.2 The Web Security Problem
The web security problem consists of three major parts:
• Securing the web server and the data that is on it. You need to be sure that the server can continue its operation, the information on the server is not modified without authorization, and the information is only distributed to those individuals to whom you want it to be distributed.
• Securing information that travels between the web server and the user. You would like to assure that information the user supplies to the web server (usernames, passwords, financial information, etc.) cannot be read, modified, or destroyed by others. Many network technologies are especially susceptible to eavesdropping, because information is broadcast to every computer that is on the local area network.
• Securing the user's own computer. You would like to have a way of assuring users that information, data, or programs downloaded to their systems will not cause damage - otherwise, they will be reluctant to use the service. You would also like to have a way of assuring that information downloaded is controlled thereafter, in accordance with the user's license agreement and/or copyright.
Along with all of these considerations, we may also have other requirements. For instance, in some cases, we have the challenges of:
• Verifying the identity of the user to the server
• Verifying the identity of the server to the user
• Ensuring that messages get passed between client and server in a timely fashion, reliably, and without replay
• Logging and auditing information about the transaction for purposes of billing, conflict resolution, "nonrepudiation," and investigation of misuse
• Balancing the load among multiple servers

Internet character and vulnerability
Internet is 2-way network
• user must be able to get access to the server
• any access is a point of vulnerability
• "active" servers (like Web servers) deal in programs as a form of information and its handling -
hence more vulnerable than other electronic publishing systems such as teletext, fax-back
• extensible servers provide vulnerabilities
• extensible clients - browsers - ditto
• complexity of software and support services
• pace of change
Attacks
• disruption of service
• denial of service
• subversion - change of information
• damage to enterprise - reputation
• damage to enterprise - loss of money
• subversion of interaction - loss/change personal details, transaction, financial
Aspects of Security
• secure servers
• secure transmission
• secure clients and
• security of transaction between user and provider
The first three are sometimes lumped together under "secure Web servers", depending on the viewpoint.
Secure Web servers
Ensure continued operation of the server, no unauthorized/unexpected modification of the data in store, and no distribution of data to unauthorized clients.
Secure Web clients
Maintain the trust of the user in the service: no unexpected alteration of user programs, data, or the system, and no disruption of their continued operation.
Issues : Download of active content, User responsibilities, helper applications/plugins, user response to requests.
Plug-Ins: Despite the security caveat, helper applications are quite useful. They are so useful, in fact, that
Netscape developed a system called "plug-ins."
A plug-in is a module that is loaded directly into the address space of the web browser program and is automatically run when documents of a particular type are downloaded. By 1997, most popular helper applications, such as the Adobe Acrobat reader, Progressive Networks' RealAudio player, and Macromedia's Shockwave player, had been rewritten as Netscape plug-ins.
Plug-ins are fundamentally as risky as any other kind of downloaded machine code.

Security of active content

3.1 Java

The main ways in which Java achieves reliability are the following:
• Instead of forcing the programmer to manually manage memory with malloc( ) and free( ), Java
has a working garbage collection system. As a result, Java programmers don't need to worry about
memory leaks, or about the possibility that they are using memory in one part of an application that
is still in use by another.
• Java has built-in bounds checking on all strings and arrays. This eliminates buffer overruns, which
are another major source of C and C++ programming errors and security bugs.
• The Java language doesn't have pointers. That's good, because many C/C++ programmers don't understand the difference between a pointer to an object and the object itself.
• Java only has single inheritance, making Java class hierarchies easier to understand. And since Java
classes can implement multiple interfaces, the language supports many of the advantages of
multiple-inheritance languages.
• Java is strongly typed, so you don't have problems where one part of a program thinks that an
object has one type, and another part of a program thinks that an object has another type.
• Java has a sophisticated exception handling system.

3.1.3 Java Security
Java was designed to be a safe programming language, not necessarily a secure one: it eliminates many traditional sources of bugs, such as buffer overflows, but a safe programming language alone cannot protect users from programs that are intentionally malicious. To provide protection against these underlying attacks (and countless others), it's necessary to place limits on what downloaded programs can do.
Java employs a variety of techniques to limit what a downloaded program can do. The main ones are the Java sandbox, the SecurityManager class, the Bytecode Verifier, and the Java Class Loader. These processes are illustrated in Figure 3.2 and described in the following sections.
Figure 3.2. The Java sandbox, SecurityManager class, Bytecode Verifier, and Class Loader



3.1.3.2 Sandbox
Java programs are prohibited from directly manipulating a computer's hardware or making direct calls to the
computer's operating system. Instead, Java programs run on a virtual computer inside a restricted virtual
space.
Sun termed this approach to security the Java "sandbox," likening the Java execution environment to a place
where a child can build things and break things and generally not get hurt and not hurt the outside world.
3.1.3.3 SecurityManager class
The creators of Java believed that programs that are downloaded from an untrusted source, such as the
Internet, should run with fewer privileges than programs that are run directly from the user's hard disk. They
created a special class, called SecurityManager, which is designed to be called before any "dangerous"
operation is executed. The SecurityManager class determines whether the operation should be allowed or
not.
3.1.3.4 Class Loader
Because most of the security checks in the Java programming environment are written in the Java language
itself, it's important to ensure that a malicious piece of program code can't disable the checks. One way to
launch such an attack would be to have a malicious program disable the standard SecurityManager class or
replace it with a more permissive version. Such an attack could be carried out by a downloaded piece of
machine code or a Java applet that exploited a bug in the Java run-time system. To prevent this attack, the
Class Loader examines classes to make sure that they do not violate the run-time system.
3.1.3.5 Bytecode Verifier
To further protect the Java run-time security system, Java employs a Bytecode Verifier. The verifier is
supposed to ensure that the bytecode that is downloaded could only have been created by compiling a valid
Java program. For example, the Bytecode Verifier is supposed to assure that:
• The downloaded program doesn't forge pointers.
• The program doesn't violate access restrictions.
• The program doesn't violate the type of any objects.



3.1.5 Java Security Problems
Most of the security problems discovered by the Princeton team were implementation errors: bugs in the Java run-time system that could be patched. But some of the problems discovered were design flaws in the Java language itself.

3.1.5.1 Java implementation errors
There were three main classes of flaws:
• Bugs with the Java virtual machine that let programs violate Java's type system. Once the type system is violated, it is possible to convince the JVM to execute arbitrary machine code.
• Class library bugs, which allow hostile programs to learn "private" information about the user or, in the case of Sun's HotJava browser, edit the browser's settings.
• Fundamental design errors leading to web spoofing and other problems.
Most of the implementation errors discovered by the group were fixed shortly after they were reported.
3.1.5.2 Java design flaws
The most serious design flaw with the Java system identified by the SIP group is that Java's security model
was never formally specified. Quoting from the literature of computer science, they repeated, "A program that has not been specified cannot be incorrect, it can only be surprising." The group was forced to conclude that many of the apparent problems that they found weren't necessarily security problems because no formal
security model existed.
The second major problem with Java's security is that the security of the entire system depends on maintaining the integrity of the Java type system. Maintaining that integrity depends on the absolutely proper functioning of the SecurityManager class and the Bytecode Verifier. While the SecurityManager class was 500 lines long in the first set of Java implementations that were commercialized, the Bytecode Verifier was 3,500 lines. To make things worse, there was no clear theory or reason as to what makes Java bytecode correct and what makes it incorrect, other than the operational definition that "valid bytecode is bytecode that passes the Bytecode Verifier."
The SIP group also made recommendations for improving Java's security. These recommendations include:
• Public variables should not be writable across name spaces.
• Java's package mechanism should help enforce security policy.
• Java's bytecode should be simpler to check and formally verify. One way to do this would be to replace the current Java bytecode, which was designed to be small and portable but not designed to be formally verifiable, with a language that has the same semantics as the Java language itself. The SIP group proposes replacing or augmenting Java bytecode with abstract syntax trees.
3.1.5.3 The Java DNS policy dispute
In February 1996, Felten et al. reported that they had discovered a security flaw in the Java run-time system:
Java applets were vulnerable to DNS spoofing. There was no such security flaw in Java, Sun said. The problem was with the Princeton researchers' interpretation of Sun's security policy.

3.2 JavaScript
JavaScript is the native language of Netscape's web browser. For this reason, JavaScript has many functions
specifically designed to modify the appearance of web browsers: JavaScript can make visual elements of the
web browser appear or disappear at will. JavaScript can make messages appear in the status line of web browsers. Some of the earliest JavaScript applications displayed moving banners across the web browser's
status line.
Because JavaScript programs tend to be small functions that tie together HTML files, GIFs, and even other programs written in JavaScript, many people call JavaScript a "scripting language." But JavaScript is a full-fledged general-purpose programming language, exactly like every other programming language. You could write an accounts receivable system in it if you wanted to.
JavaScript programs should be inherently more secure than programs written in Java or other programming
languages for a number of reasons:
• There are no JavaScript methods for directly accessing the client computer's file system.
• There are no JavaScript methods for directly opening connections to other computers on the
network.
Nevertheless, security problems have been reported with JavaScript, because security is far more than protection against disclosure of information or modification of local files. To date, JavaScript problems have occurred in two main areas: denial-of-service attacks and privacy violations, both described below.
3.2.3 JavaScript and Privacy
• JavaScript could be used to create forms that automatically submitted themselves by email. This
allowed a malicious HTML page to forge email in the name of the person viewing the page.
• JavaScript programs had access to the user's browser "history" mechanism. This allowed a web site
to discover the URLs of all of the other web pages that you had visited during your session. This
feature could be combined with the previous feature to perform a form of automated eavesdropping.
• A JavaScript program running in one window could monitor the URLs of pages visited in other
windows.
3.3 Denial-of-Service Attacks
A denial-of-service attack is an attack in which a user (or a program) takes up so much of a shared resource
that none of the resource is left for other users or uses.
A simple attack consists of a call to the alert( ) method in a tight loop. Each time the loop is executed, a pop-up "alert" window appears on the user's screen. This attack succeeds because (currently) there is no limit to the number of times that a piece of JavaScript can call the alert( ) method.
Because they take up people's time, denial-of-service attacks can be costly to an organization as well. For
example, in November 1996 a request was placed on a small mailing list for a page containing "really nasty
HTML tables." Simson sent back in response the URL of a page containing 100 tables-within-tables. The page was "nasty" because it caused Netscape Navigator to crash. Within an hour, Simson started to receive email from people on the mailing list who had clicked on the URL and had their computers lock up. They complained that their computers had crashed and that they had lost work in the process. If 10 people working at the same organization had their computers crash, and if each lost an hour of work in the process (between the time spent redoing any unsaved work, the time spent rebooting the computer, and the time spent sending a nasty email in response), then simply publishing the URL on the mailing list cost the organization anywhere between $500 and $1000 in lost productivity.
3.4 JavaScript-Enabled Spoofing Attacks
Ed Felten at Princeton University notes that people are constantly making security-related decisions. To make
these decisions, people use contextual information provided by their computer. For example, when a user
dials in to a computer, the user knows to type her dial-in username and password. At the same time, most
users know to avoid typing their dial-up username and password into a chat room on America Online.
The combination of Java and JavaScript can be used to confuse the user. This can result in a user's making a
mistake and providing security-related information to the wrong party.

4. Downloading Machine Code with ActiveX and Plug-Ins
One of the most dangerous things that you can do with a computer that is connected to the Internet is to
download a program and run it. That's because most personal computer operating systems place no limits on
what a program can do once it starts running. When you download a program and run it, you are placing
yourself entirely in the hands of the program's author.
Most programs that you download will behave as expected. But they don't have to. Many programs have bugs in them: running them may cause your computer to crash. And some programs are malicious: they might
erase all of the information on your computer's disk. Or the program might seek out confidential information
stored on your computer and transmit it to a secret location on the Internet.

Plug-ins were introduced with Netscape Navigator as a simple way of extending browsers with executable
programs that are written by third parties and loaded directly into Netscape Navigator. One of the simplest
uses for plug-ins is to replace helper applications used by web browsers. Instead of requiring that data be
specially downloaded, saved in a file, and processed by a helper application, the data can be left in the
browser's memory pool and processed directly by the plug-in. But plug-ins are not limited to the display of
information. In the fall of 1996, Microsoft released a plug-in that replaced Netscape's Java virtual machine
with its own. And PGP, Inc., is developing a plug-in that adds PGP encryption to Netscape Communicator's
email package.
Two popular plug-ins are the Macromedia Shockwave plug-in, which can play animated sequences, and the
Adobe Acrobat plug-in, which lets Navigator display PDF files. Both of these plug-ins have been used since
shortly after the introduction of Netscape's plug-in architecture.
Plug-in Security Implications

When running network applications such as Netscape Navigator, it is important to understand the security implications of actions you request the application to perform. If you choose to download a plug-in, you should know that:
• Plug-ins have full access to all the data on your machine.
• Plug-ins are written and supplied by third parties.
To protect your machine and its documents, you should make certain you trust both the third party and the site providing the plug-in.
If you download from a site with poor security, a plug-in from even a trusted company could potentially be replaced by an alternative plug-in containing a virus or other unwanted behavior.

There are many ways that your computer might be damaged by a plug-in. For example:
• The plug-in might be a truly malicious plug-in, ready to damage your computer when you make the mistake of downloading it and running it.
• The plug-in might be a legitimate plug-in, but a copy might have been modified in some way to exhibit new dangerous behaviors. (Netscape Navigator 4.0 has provisions for digitally signing plug-ins; once modified, a digitally signed plug-in's signature will no longer verify.)
• There might not be a malicious byte of code in your plug-in's executable, but there might be a bug that can be misused by someone else against your best interests.
• The plug-in might implement a general-purpose programming language that can be misused by an attacker.

4.3 ActiveX
ActiveX is a collection of technologies, protocols, and APIs developed by Microsoft that are used for
downloading executable code over the Internet. The code is bundled into a single file called an ActiveX
control. The file has the extension OCX.
Microsoft has confusingly positioned ActiveX as an alternative to Java. ActiveX is more properly thought of as an alternative to Netscape's plug-ins. ActiveX controls are plug-ins that are automatically downloaded and installed as needed, then automatically deleted when no longer required. Adding to the confusion is the fact that ActiveX controls can be written in Java!
Despite the similarities between ActiveX controls and Netscape plug-ins, there are a few significant differences:
• Whereas plug-ins usually extend a web browser so that it can accommodate a new document type, most ActiveX controls used to date have brought a new functionality to a specific region of a web page.
• Traditionally, ActiveX controls are downloaded and run automatically, while plug-ins need to be manually installed.
• ActiveX controls can be digitally signed using Microsoft's Authenticode technology. Internet Explorer can be programmed to disregard any ActiveX control that isn't signed, to run only ActiveX controls that have been signed by specific publishers, or to accept ActiveX controls signed by any registered software publisher. Netscape Navigator 3.0 has no provisions for digitally signing plug-ins, although this capability should be in Navigator 4.0.
4.3.1 Kinds of ActiveX Controls
ActiveX controls can perform simple animations, or they can be exceedingly complex, implementing databases or spreadsheets. They can add menu items to the web browser, access information on the pasteboard, scan the user's hard drive, or even turn off the user's computer.
Fundamentally, there are two kinds of ActiveX controls:
• ActiveX controls that contain native machine code. These controls are written in languages such as C, C++, or Visual Basic. The control's source code is compiled into an executable that is downloaded to the web browser and executed on the client machine.
• ActiveX controls that contain Java bytecode. These are controls that are written in Java or another language that can be compiled into Java bytecode. These controls are downloaded to the web browser and executed on a virtual machine.
These two different kinds of ActiveX controls have fundamentally different security implications. In the first
case, ActiveX is simply a means to quickly download and run a native machine code program. It is the programmer's choice whether to follow the ActiveX APIs, to use the native operating system APIs, or to attempt direct manipulation of the computer's hardware. There is no way to easily audit the ActiveX control's functions on most PC operating systems.
ActiveX controls that are downloaded as Java bytecode, on the other hand, can be subject to all of the same
restrictions that normally apply to Java programs. These controls can be run by the browser within a Java
sandbox. Alternatively, a web browser can grant these controls specific privileges, such as the ability to write within a specific directory or to initiate network connections to a specific computer on the Internet. Perhaps most importantly, the actions of an ActiveX control written in Java can be audited - provided, of course, that the Java run-time environment being used allows such auditing.
What is Electronic Commerce?
Electronic Commerce means buying and selling products and services over the Internet and the World Wide Web. For merchants, electronic commerce means making your products and services available to the millions of potential customers all over the world who are already surfing the Web daily, and the millions more who will be logging on in the near future.
Types of Security: SSL and SET
What is SSL?
SSL, or Secure Socket Layer, is a security technology that encrypts an American Express Cardmember's information and purchase details, helping to ensure that information will not be accessible to anyone but the banking institution and you, the customer, during an electronic transaction.
What is SET?
SET, the Secure Electronic Transaction Protocol, is the latest development of the financial industry's efforts to develop a standard, universal way of conducting electronic commerce that will offer consumers and merchants an unprecedented level of security and assurance. SET will provide the same high levels of security as U.S. users of SSL enjoy today, with the added benefit of digital signature certificates to help authenticate the identity of all parties involved in a transaction. This new level of security is intended to eliminate the possibility of "spoofing" -- where unscrupulous individuals set up fake "electronic storefronts" -- and will dramatically reduce the potential for the use of stolen credit card numbers on the Internet. As the SET protocol is implemented in the coming months, it will mark the beginning of a new, safer era of electronic commerce.
Digital Certificates
What is a Digital Signature Certificate?
A digital signature certificate is an encrypted electronic document that contains information that verifies your company's identity, encoded in a highly secure format. Your server will send your digital signature certificate and electronic countersignature to your customers to authenticate your identity to them, and your customer's browser software sends you their digital signature certificate and electronic countersignature containing verification of their identity.
The following is an overview of SSL and SET, the two main standards for secure financial transactions over the Internet.
SSL (Secure Sockets Layer)
At the present time, almost all online credit card orders involve the Secure Sockets Layer (SSL) protocol, because both the Netscape Navigator and Microsoft Internet Explorer web browsers feature built-in support for SSL online transactions. (By way of contrast, except for limited tests, SET has yet to go live. [First quarter 1998]) In fact, Netscape designed SSL, using RSA's public key cryptography technology. (By the way, Netscape has a good summary of how public key cryptography works.)
While Netscape designed SSL, Microsoft and others quickly jumped on the bandwagon, and now nearly every retail site on the web supports online transactions using SSL.
At the most basic level, SSL functions as a protocol layer between a network protocol (TCP/IP in the case of the Internet) and the application protocol (HTTP in the case of almost all web sites). Using SSL allows you to do the following things:
• Mutually identify the client and server
• Use digital certificates to verify data integrity
• Encrypt all data to ensure privacy
To do this on the web requires a secure server, which is why your browser usually displays a warning when you enter or leave a secure session. (By the way, secure servers usually have HTTPS in their URL instead of the more usual HTTP.)

Just as in the olden days when you would dial into a BBS with your old 2400 baud modem, the first thing that happens when you initiate an SSL session with a web server is a handshake sequence. During this sequence, several things happen:
1. The browser and server determine what level and method of encryption they will use for the session
2. The browser and server create and share a session encryption key
3. Optional: the browser requests and receives a digital certificate from the server
4. Optional: the server requests and receives a digital certificate from the browser
After the handshake is over, you're ready to complete your transaction. For as long as the session lasts, all data passed between the browser and server is encrypted. When you finish the session, the secure server almost always transfers you back to the non-secure server.
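To make the handshake concrete, here is a minimal sketch of a client-side secure session using Python's standard ssl module (example.com stands in for any secure server; a modern Python will negotiate TLS, the successor to SSL):

    import socket
    import ssl

    context = ssl.create_default_context()      # loads the trusted CA certificates

    with socket.create_connection(("example.com", 443)) as raw_sock:
        # wrap_socket performs the handshake: cipher negotiation,
        # certificate exchange, and session-key agreement.
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
            print("Protocol:", tls.version())       # e.g. 'TLSv1.3'
            print("Cipher suite:", tls.cipher())    # (name, protocol, secret bits)
            print("Server certificate subject:", tls.getpeercert().get("subject"))
            # From here on, everything sent over `tls` is encrypted.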
How safe is SSL?
Most of us don't care about how SSL works; we just want to know whether it works -- whether it keeps your credit card number secure. For the most part, you'd have to say "yes". As of first quarter 1998, there had not been any real-life compromise of a transaction made during an SSL session. Of course, the encryption used in an SSL transaction can be, and has been, broken. In such cases, however, the hacker used a barrage of computers to break down the encryption protection in a controlled environment. SSL is still considered by most experts to offer plenty of protection for almost all credit card transactions online. Of course, SSL could be improved, especially if companies like Netscape and Microsoft were allowed by the US government to use stronger encryption methods, but that gets into government policy, a topic for another day.
What about SET?
So where does SET fit in, you might be asking. To start at the beginning, SET refers to the Secure Electronic Transaction protocol. SET was originally proposed and evangelised by VISA and MasterCard. Late last year, they created a separate company, SET Secure Electronic Transaction LLC, also known as SETCO, to act as a central source for SET standards, testing and vendor compliance. Other major players like American Express and JCB (a major Japanese issuer of bank cards) are shortly expected to invest in SETCO.
How does SET differ from SSL? Well, to begin with, it's an entirely different kind of transaction. Once you initiate an SSL session with a commerce site, the transaction isn't any different than when you give your credit card to a clerk in a department store. They tell you the price, you say OK, and you wait while the clerk attempts to validate your card and the purchase amount with the bank card company. If it's approved, the clerk prints out a receipt, which you sign, keeping a copy for your records.
With SET, the bank card company gets involved in the middle of the transaction, essentially acting as middleman. Theoretically, when you enter your credit card in an online SET transaction, the commerce site might never see your actual credit card number. Instead, that info would go to the financial institution, which would verify the card and the amount, then make payment to the commerce site, all in exchange for a fee, of course.
What are the advantages of SET? Well, we're speaking theoretically here, since SSL is actually up and running and SET isn't. With that proviso out of the way, SET could offer more security for online transactions, by adding the mandatory authentication between buyer and seller, using digital certificates. It could also speed up the payment settlement process, since the financial institutions are involved up front, rather than after the fact. It might also increase privacy, since you wouldn't necessarily be sharing your credit card number with online retailers, but with your financial institution (which presumably already knows your card.)
So is SET the future?
It's hard to say what kind of acceptance SET will find. It's backed by heavyweights from both the financial and the computer industries (among the latter are Microsoft, Netscape and IBM). However, delays in SET standards-setting and testing have pushed back implementation timetables. Other firms and commentators are increasingly pointing out that SSL seems to be working fine, and are asking whether we really need SET. The concept is also under attack from digital currency companies like CyberCash and smart card firms like Mondex.
Whether SSL hangs on, we migrate to SET, some hybrid of the two, or some other scheme, two things seem certain. The first is that more and more people will buy ever-increasing amounts of goods and services over the web. The second is that for the foreseeable future, there will be much more credit card fraud in physical retailing than in the virtual kind.
Overview of SSL and SET
Is there a difference in safety between SSL encryption and SET on the Internet?
1. The cipher being used.
On this point SSL and SET are the same: both currently rely on RSA public-key cryptography, and both can be moved to stronger ciphers as better ones become available in the future.

2. Who does the decryption?
This is where the two differ, and it is the reason SET is generally considered prudent while there are worries about the safety of SSL.
In the case of SSL
The information that a client (a member of the payment system) enters on the Web is encrypted with an RSA public key, and the SSL traffic is decrypted at the server (the merchant or the payment-system company). In this case there is a risk that the decrypted information can be viewed on the server from outside, or that the data can be stolen there. Decrypted data must therefore be kept inside a properly configured firewall, under conditions in which the data is carefully managed.
When a credit card number is sent over the Internet with SSL, decryption is generally performed at the merchant. In many cases the merchant's server is not technically well protected and the data is not properly controlled, which is why this arrangement is said to be risky. Even if each Internet merchant deploys its own firewall, the problem remains that control of data such as card numbers is left with the merchant. This, however, is not a problem of the Internet itself; it is the same problem as a card number being copied and retained at a physical store.
The electronic payment systems offered by the various companies (electronic credit, third-party schemes, prepaid systems) instead decrypt the data at the payment company's server, where both the technical safeguards and the human controls can be trusted, and cut the merchant out of that step. These electronic payment systems can therefore be called safe from the outset. Indeed, they are arguably safer than a real-world credit card purchase, since the member's number never stays with the merchant the way it can at a physical shop.
In the case of SET
The safety of SET is fundamentally equal to the security that the electronic-payment companies already achieve with SSL: the personal information encrypted on the member's personal computer with a public-key algorithm such as RSA (in this case not the card number but the member's SET registration number) is finally decrypted at the server of the card-issuing company, where both technical and human controls are in place.
In SET, however, the information flows from the member to the issuing company through several intermediate parties (the merchant, the certification authority, and the company that manages the merchants). Decryption along the route and theft of the data must therefore be prevented even more carefully than in the other electronic-payment schemes.

For this purpose an ordinary cryptographic signature is not enough, because each party (the merchant, the merchant-management company, the certification authority) must be able to decrypt only the information it needs; a two-fold ("dual") signature is necessary. The items and their processing are complex, and securing the data wherever it is stored and decrypted, as well as the contents registered in the user's WALLET, have also become necessary considerations.
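To make the dual signature concrete, here is a rough Python sketch of the digest construction (hash functions simplified to SHA-1 and the RSA signing step left as a hypothetical rsa_sign helper; SET's actual algorithms and formats differ):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha1(data).digest()

    order_info = b"merchant=BooksRUs; item=novel; price=19.95"      # goes to the merchant
    payment_info = b"registration=SET-member-number; amount=19.95"  # goes to the bank

    # Hash each half, concatenate the two digests, and hash again; the
    # cardholder signs this combined digest with their private key.
    payment_order_digest = h(h(order_info) + h(payment_info))
    # dual_signature = rsa_sign(cardholder_private_key, payment_order_digest)

    # The merchant receives order_info plus h(payment_info); the bank
    # receives payment_info plus h(order_info). Each can recompute
    # payment_order_digest and verify the signature without ever seeing
    # the other half in the clear.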
How Safe Is Your Mail System?
Many of us make false assumptions about the security of our "personal" or "private" communications. For example, when we phone a friend, we don't think about someone monitoring the call. When we send a letter, we don't consider that someone can intercept it. And when we send email, we don't expect anyone to alter its contents.
The truth is that all these communications methods are subject to interception and intrusion. Analog phone lines can be tapped. Cellular phone calls can be picked up on wireless scanners. Letters can be intercepted, compromised, and re-mailed. And email can be monitored, altered, or forged.
Some people don't worry because they don't send particularly valuable information. However, since worldwide email has emerged as an important component in today's business world, people have been transmitting increasingly important--and extremely valuable--information through the Internet and other public networks. Fortunately, new email standards and extensions are emerging to address the need for secure email delivery.
Concepts and Techniques
The most important technique in computer security is encryption. Secure or scrambled voice telephone lines, satellite television signals, and the secret codes that the military uses are all examples of encryption. Encryption algorithms, such as the Data Encryption Standard (DES), the International Data Encryption Algorithm (IDEA), and RC2, transform data until no trace of the original remains. IDEA is a symmetric-key block-cipher algorithm newer than, but similar to, DES. RC2 is a variable key-size, symmetric-key block-cipher algorithm also similar to DES and popular for exportable cryptographic systems because of its variable-length key.
An encryption key is required to return encrypted data to its original form. An encryption key is a binary value 40 or more bits long. With a good algorithm, you must have every bit of the key correct to retrieve any encrypted information. Even if you have 55 of 56 bits correct, decryption cannot occur. The longer the key is, the stronger the security is.
No encryption scheme can claim to be secure forever. Today, a home computer can crack World War II's best encryption algorithms in just a few minutes. In 100 years, some new device may crack today's algorithms just as easily. Fortunately, most secrets don't need to be kept forever. A monetary consideration is that the value of encrypted information determines whether accessing it is worth the price: If you can ensure secrecy either until no one cares about the information or so that cracking the code costs more than the information is worth, it's "secure enough."
For example a 40-bit key takes about $10,000 worth of supercomputer time and two weeks to crack. Although this key may be adequate to protect my checking account, it's probably not large enough for the accounts of a major corporation.
A slightly longer key of 56 bits requires millions of dollars to crack and should protect the information for years to come. A 56-bit encryption key has 2^56--or 72 quadrillion--possible keys. With 1000 computers, each trying 1,000,000 keys per second, trying them all would take 833 days. On average, you find the key halfway through your search.
An even longer key of 168 bits requires more money and potentially extends the data's secrecy for hundreds of years. With 168-bit keys, there are 2^168 possible keys. This number is so staggeringly large that nothing can give you a feel for it. Suffice it to say that 1000 computers wouldn't be even close to trying all the keys by the time the sun finally burns out. That's probably secure enough for anything you want to protect.
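The arithmetic behind these estimates is easy to reproduce; a quick Python check of the 56-bit figures quoted above:

    keys = 2 ** 56               # 72,057,594,037,927,936, i.e. about 72 quadrillion
    rate = 1000 * 1_000_000      # 1000 computers, each trying 1,000,000 keys/second
    seconds = keys / rate
    print(seconds / 86_400)      # ~834 days to try every key (the text rounds to 833)
    print(seconds / 86_400 / 2)  # ~417 days on average, halfway through the search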
The two basic kinds of encryption algorithms are symmetric key and public key.
• Symmetric-key algorithms require that both the person encrypting and the person decrypting have the same key. The real problem is how to securely share keys between two or more parties; most security systems based solely on symmetric-key algorithms break down in the area of key management. Symmetric-key algorithms, however, are a lot faster than public-key algorithms. DES, IDEA, and RC2 are symmetric-key algorithms.
• Public-key algorithms use two keys: a public key and a private key. The public key is available to everyone you want to have access to your system. Your private key is your secret. For example, you might use your private key to digitally sign your email; then anyone with the public key will know that you--and only you--signed it. Unfortunately, public-key algorithms are extremely slow. To get around the speed issue, you can use a faster technique, such as a message-digest algorithm, to reduce the amount of data and then use a public-key algorithm to encrypt that smaller amount.
One public-key algorithm, the Diffie-Hellman key exchange, lets the sender and the receiver share a symmetric key securely. Thus, you can use a symmetric-key algorithm to encrypt the message. This combination achieves both speed and security.
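As a rough sketch of the idea (with toy numbers; real exchanges use primes hundreds of digits long), a Diffie-Hellman exchange in Python:

    # Public parameters, known to everyone, including eavesdroppers.
    p, g = 23, 5

    a = 6                        # Alice's secret exponent
    b = 15                       # Bob's secret exponent

    A = pow(g, a, p)             # Alice sends A = 8 to Bob
    B = pow(g, b, p)             # Bob sends B = 19 to Alice

    # Each side combines its own secret with the other's public value and
    # arrives at the same shared key, which never crosses the wire. That
    # key can then drive a fast symmetric cipher for the message itself.
    assert pow(B, a, p) == pow(A, b, p) == 2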
Security Issues in Email
The most common threat to email is invasion of privacy. You don't want anyone but the intended recipient to read the message. Another threat is to message integrity. You don't want anyone to tamper with, or change, the contents of the message while it's in the mail system. The third major threat is to the authenticity of the sender. You want to ensure that a message really came from the person it claims to be from, not someone else. Let's look at these three issues.
Privacy: The privacy threat is simple to understand and deal with. In most cases, you don't want people privy to the contents of your email. This threat is analogous to tapping a phone conversation. Unlike with phone taps, however, no laws currently restrict the interception of email. In fact, email messages are often considered the property of the organization that owns the email system, rather than of the message sender or receiver. Your best bet is to assume that sending non-secure email is equivalent to posting a notice on a public bulletin board.
You can address your privacy requirements in two ways. You can manually encrypt all messages and give your recipients the decryption keys. Unfortunately, this approach puts the responsibility for security on your shoulders. Or, you can automate encryption by using secure email servers. With the eventual implementation of address book servers that also support public key management, and of client software that can obtain public keys along with email addresses (e.g., via X.500), similar automation will be possible.
Secure email servers encrypt messages as they are transmitted over an unsecured channel, such as the Internet. Secure clients encrypt messages in transit. They can also store email in encrypted form, decrypting it only when the recipient reads it. Because both approaches are automatic, the chance for human error is removed. They are not, however, free from drawbacks.
Many people are surprised that deleting an email message doesn't remove all occurrences of it from the email system. Maintaining privacy through deletion is not a valid option.
Integrity: The integrity threat is similar to the privacy threat. Unfortunately, email is an impersonal medium. Because it lacks many traditional cues such as handwriting, stationery or letterhead, signature, and sealed envelope, detecting email tampering without authentication technology is almost impossible.
Encryption can also handle the tampering threat. To change encrypted mail, you must decrypt the message, make your changes, and then re-encrypt it with the original key. This process is even more difficult than simply decrypting the message.
Authentication: The authentication threat is a little different. Anyone can put mail on the Internet that claims to come from someone else; making it indistinguishable in every way from mail really sent by that person is more difficult. Encryption--with certain key-management techniques--can also handle this threat.
For example, if Bob receives a message encrypted with a key that only he and Arnold have, Bob can be highly confident that the message came from Arnold. However, if the mail system automatically generates and exchanges keys, Bob knows only that it came from someone with a compatible mail system.
Digitally signing email without encrypting the contents is also possible so that the recipient can detect any tampering and determine whether the message really came from the person who claims to have sent it. Both Privacy Enhanced Mail (PEM) and Secure Multipurpose Internet Mail Extensions (S/MIME) routinely authenticate all messages, although encryption is optional. Digitally signing a message requires only that you can access your own secret key; it doesn't interfere with anyone's ability to read the message with or without public keys or secure mail clients. The digital signature is just a special text string added to an otherwise normal-text email. Of course, you cannot verify that digital signature without the signer's public key and a secure email client that supports the digital signature algorithm.
Client-Based Solutions
A client-based solution adds extensions to the email system but requires no changes to existing Internet mail servers. Messages that the extended clients send must comply with the syntax defined in the Request for Comments (RFC) 822 standard. (You can retrieve RFCs from the Internet at http://www.internic.net/ds/dspg0intdoc.html.) The three primary standards are Pretty Good Privacy (PGP), PEM, and S/MIME.
• PGP is an informal standard that relies on an unconventional decentralized scheme for establishing trust. PGP doesn't fit into hierarchical--corporate or government--organizations very well.
• PEM is an Internet Engineering Task Force (IETF) standard (RFCs 1421 through 1424). It supports digital signatures using Rivest-Shamir-Adleman (RSA) algorithms and various encryption algorithms. One implementation of PEM is Mark Riordan's Internet Privacy Enhanced Mail (RIPEM). Unfortunately, PEM was never widely adopted, mostly because of the difficulty vendors had creating interoperable systems.
• S/MIME has emerged recently as the primary standard for interoperable secure email. Essentially, RSA refined the PEM specification using the Public Key Cryptography Standards (PKCS) so that any mail package meeting that standard is more or less guaranteed to interoperate with other PKCS-compliant products. In the next few months, you'll see the release of several S/MIME-compliant Internet mail clients.
Client-based solutions have several advantages.
• Existing Internet mail servers need no changes.
• The solutions provide end-to-end security; the entire path from the sender to the receiver is secure.
• You can implement client-based solutions either to automatically decrypt incoming mail and store it in plain text form or to store it in the received encrypted form, requiring decryption each time it's read. This approach secures even your local message store against unauthorized access.
• Digital signatures can provide good authentication.
Client-based solutions also have some real disadvantages.
• The client software that the sender and the receiver use must be totally interoperable.
• If the sender selects encryption and the receiver doesn't support the encryption scheme, the system doesn't automatically drop back to plain text because that would breach security. Instead, the receiver ends up with an unreadable message.
• The sender, receiver, subject, and other fields in the mail header are not encrypted. They are in plain text, which could be a serious security breach.
• The user must understand a fair amount about security to use the client and must manage public keys. For example, to send me an encrypted message, you must obtain my public key. In the future, Internet Directory Systems will be able to look up any email address and public key and supply them to your client; X.500 is such a scheme. At present, though, obtaining and managing public keys can be a hassle.
Server-Based Solutions
Instead of putting the security burden on the client software, you can place it on the mail-server software. Specifically, by using the Secure Simple Mail Transfer Protocol (S/SMTP) instead of the usual unsecured SMTP protocol, you can automatically secure mail-server-to-mail-server traffic. To use S/SMTP, the sending and receiving mail servers first determine whether they both support S/SMTP. If so, they negotiate the strongest encryption algorithm they have in common. Then, they do a Diffie-Hellman key exchange to securely share common key information. After the two servers conclude negotiations, all traffic over the connection is encrypted--not just the message body, but all header information, including names, email addresses, and the message subject.
The primary application for this security approach is protecting email over the Internet. This approach doesn't secure the client-to-server traffic, however; that traffic must currently be delivered over a private network link, sent over a non-TCP/IP network (e.g., an Internet Packet eXchange--IPX--network), or protected by a firewall.
The advantages of server-based security are
• It works with all existing SMTP/Post Office Protocol version 3 (POP3) clients (User Agents).
• You don't need to worry about key management; key generation and exchange occur automatically when the server-to-server connection is established.
• All message headers and the message body are encrypted.
• The negotiation phase lets secure servers interoperate with non-secure servers, and domestic-version secure servers can communicate with export-version secure servers.
• No users need to know that a secure server is in use; no disruption of existing Internet mail occurs when you introduce secure servers.
The two major disadvantages to server-based security are that the client-to-server link is not secure and the basic S/SMTP protocol does not address the authentication threat. One minor disadvantage is that you need from five to 25 seconds of CPU time--depending on key size and CPU type--to perform the Diffie-Hellman calculations on each connection.
Safe and Secure
A secure email system must protect mail-message privacy and integrity and sender authenticity. Encryption is the key to the first two. Symmetric-key and public-key algorithms each have a place in security systems. Neither is inherently superior or inferior; a combination of the two is more useful than either alone. If authentication is a concern, use S/MIME-compliant mail clients to sign all messages digitally.
In addition, you can choose either a client-based or a server-based solution. Each has pluses and minuses. A client-based solution provides end-to-end (i.e., sender-to-receiver) security but requires interoperable client software at both ends. A server-based solution allows interoperability with non-secure servers, but the client-to-server link is not secure.
Privacy-Enhanced Mail (PEM)
Introduction
On the Internet, the notions of privacy and security are practically non-existent. In order to provide some level of security and privacy in electronic mail messages, the Privacy and Security Research Group (PSRG) of the Internet Research Task Force (IRTF) and the Privacy-Enhanced Electronic Mail Working Group (PEM WG) of the Internet Engineering Task Force (IETF), through a series of meetings, came up with a set of message authentication and encryption procedures known as Privacy-Enhanced Mail (PEM), standardized in Internet RFC 1421 [1], RFC 1422 [2], RFC 1423 [3], and RFC 1424 [4].
What is PEM?
Privacy-Enhanced Mail (PEM) is an Internet standard that provides for secure exchange of electronic mail. PEM employs a range of cryptographic techniques to allow for confidentiality, sender authentication, and message integrity. The message integrity aspects allow the user to ensure that a message hasn't been modified during transport from the sender. The sender authentication allows a user to verify that the PEM message that they have received is truly from the person who claims to have sent it. The confidentiality feature allows a message to be kept secret from people to whom the message was not addressed.
What does PEM do (re: security)?
PEM provides a range of security features. They include originator authentication, (optional) message confidentiality, and data integrity. Each of these will be discussed in turn.
Originator Authentication
In RFC 1422 [2] an authentication scheme for PEM is defined. It uses a hierarchical authentication framework compatible with X.509, ``The Directory --- Authentication Framework.'' Central to the PEM authentication framework are certificates, which contain items such as the digital signature algorithm used to sign the certificate, the subject's Distinguished Name, the certificate issuer's Distinguished Name, a validity period (indicating the starting and ending dates between which the certificate should be considered valid), and the subject's public key along with the accompanying algorithm. This hierarchical authentication framework has four entities.
The first entity is a central authority called the Internet Policy Registration Authority (IPRA), acting as the root of the hierarchy and forming the foundation of all certificate validation in the hierarchy. It is responsible for certifying and reviewing the policies of the entities in the next lower level. These entities are called Policy Certification Authorities (PCAs), which are responsible for certifying the next lower level of authorities. The next lower level consists of Certification Authorities (CAs), responsible for certifying both subordinate CAs and also individual users. Individual users are on the lowest level of the hierarchy.
This hierarchical approach to certification allows one to be reasonably sure that certificates coming from users--assuming one trusts the policies of the intervening CAs and PCAs and the policy of the IPRA itself--actually came from the person whose name is associated with them. This hierarchy also makes it more difficult to spoof a certificate, because it is likely that few people will trust or use certificates that have untraceable certification trails, and in order to generate a false certificate one would need to subvert at least a CA, and possibly the certifying PCA and the IPRA itself.
Message Confidentiality
Message confidentiality in PEM is implemented by using standardized cryptographic algorithms. RFC 1423 [3] defines both symmetric and asymmetric encryption algorithms to be used in PEM key management and message encryption. Currently, the only standardized algorithm for message encryption is the Data Encryption Standard (DES) in Cipher Block Chaining (CBC) mode. Currently, DES in both Electronic Code Book (ECB) mode and Encrypt-Decrypt-Encrypt (EDE) mode, using a pair of 64-bit keys, are standardized for symmetric key management. For asymmetric key management, the RSA algorithm is used.
Data Integrity
In order to provide data integrity, PEM implements a concept known as a message digest. The message digests that PEM uses are known as RSA-MD2 and RSA-MD5, for both symmetric and asymmetric key management modes. Essentially, both algorithms take arbitrary-length ``messages,'' which could be any message or file, and produce a 16-octet value. This value is then encrypted with whichever key management technique is currently in use. When the message is received, the recipient can also run the message digest over the message; if the recomputed value matches the one that was sent, the recipient can be reasonably assured that the message hasn't been tampered with in transit. The reason message digests are used is that they're relatively fast to compute, and finding two different meaningful messages that produce the same value is nearly impossible.
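For illustration, MD5 (available in Python's standard hashlib module; MD2 is not) shows the fixed 16-octet output and its sensitivity to any change in the message:

    import hashlib

    # An arbitrary-length message is reduced to a fixed 16-octet digest.
    msg = b"Meet at noon."
    print(hashlib.md5(msg).hexdigest())
    print(len(hashlib.md5(msg).digest()))        # 16 octets

    # Changing even one character produces a completely different digest.
    print(hashlib.md5(b"Meet at noon!").hexdigest())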
What can I use PEM with?
PEM (depending on which implementation you choose to use) can be used with just about any program capable of generating Internet mail, provided your correspondent is also using PEM. There are even Emacs elisp files available which simplify the usage of PEM.
What else must I use with PEM?
In order to use PEM, you'll need either RIPEM or TIS/PEM (TIS/MOSS). Then you'll need to generate a key-pair, and make it available. Depending on your preference, and availability, you might want to get your public-key certified by a Certification Authority.
Does anybody really use PEM?
In its current state, I haven't seen much evidence of PEM being used widely. There are hooks for using both PEM, specifically RIPEM although TIS/PEM should work as well, and PGP in the NCSA httpd [8] program for providing secure web communications with NCSA Mosaic. These hooks must be activated with a recompilation. There are also extensions to the Emacs editor which allow one to use either PGP or a PEM implementation in conjunction with mail or any other Emacs buffer. There is also a product put out by SecureWare called SecureMail [9] that implements PEM.
PGP for secure e-mail
What is PGP?
I have set up this page in response to questions I get about PGP - what it is, where to get it, and how to use it.
PGP (short for Pretty Good Privacy), created by Philip Zimmermann, is the de facto standard program for secure e-mail and file encryption on the Internet. Its public-key cryptography system enables people who have never met to secure transmitted messages against unauthorized reading and to add digital signatures to messages to guarantee their authenticity.
Why do we need PGP? E-mail sent over the Internet is more like paper mail on a postcard than mail in a sealed envelope. It can easily be read, or even altered, by anyone with privileged access to any of the computers along the route followed by the mail. Hackers can read and/or forge e-mail. Government agencies eavesdrop on private communications.
Have you ever thought about the privacy of your e-mail? A message sent over the Internet usually passes through several relaying hosts before reaching its destination. Anyone with privileged access to any of those computers can easily read it, just as a post office worker handling a postcard in transit can read its contents. There is also a risk of hackers.
In August 1999, hackers discovered a way to breach the security of Hotmail e-mail accounts, and the details were made public on the Internet, thus putting the privacy of 50 million subscribers at risk. The entire Hotmail system was closed down for a short time, while steps were taken to fix the problem. Can you be confident that your e-mail is secure?
Encryption is often used to ensure privacy. In the more traditional type of cryptography, the same secret key is used to encrypt and decrypt a message. Such a key must be exchanged before the message is sent. However, this is of little use if you want to send a one-off confidential message to someone in a different part of the world. If you have a secure means of transmitting a secret key, you might as well send the message itself!
Even if you can somehow exchange secret keys with all your correspondents (say, by slow postal mail), you would still need to exchange different keys with all of them individually. This would be very cumbersome.
Public-key cryptography
Public-key cryptography overcomes these problems. As explained below, it enables you to communicate securely with people you have never met, over insecure channels, without first exchanging secret keys.
Perhaps you think you have nothing to hide and don't need secure e-mail. Would you ever find it embarrassing if your e-mail is read by your sysadmin, your employer, your ISP, an unknown hacker, or government intelligence agencies? Do you ever use e-mail to transmit confidential information like business plans, character references, credit card numbers, political strategies or love letters? Would you like to use digital signatures to ensure that your e-mail is tamper-proof?
If you can answer "yes" to any of these questions, you will find public-key cryptography useful.
Key pairs
Each user of a public-key cryptosystem has a pair of keys: a public key and a secret key. The public key can be made available to anyone who wants it. It is even advantageous to publish it in an open directory, like telephone numbers in a telephone directory.
The secret key, as its name suggests, is kept secret (in practice, it's strongly encrypted in the user's computer with a passphrase).
The two keys are mathematically related in such a way that any message encrypted with the public key can be decrypted only with the corresponding secret key, and vice versa. Anyone can send you a secure message by encrypting it with your public key. Since you are the only person who has access to your own secret key, no one else will be able to decrypt the message.
For such a system to be secure, it must be designed so that it is computationally infeasible to discover a secret key from a knowledge of the corresponding public key.
Digital signatures
A byproduct of such a cryptosystem is the possibility of creating digital signatures. To see how this is possible, suppose that you encrypt a message with your own secret key. Since the public key reverses the action of the secret key, anyone with access to your public key can decrypt the message. If the message decrypts correctly, this proves that it was created by you, since nobody else has access to your secret key.
Thus, digital signatures can be used to authenticate messages and prevent forgeries or tampering. If a single byte of a message is changed in transmission, the digital signature would not be valid. Digital signatures based on modern cryptosystems are virtually impossible to forge in practice - much more so than ordinary handwritten signatures.
How it works
Public-key cryptosystems are based on what mathematicians call "one-way functions". A one-way function is a relation between two objects A and B such that B can be readily calculated from A, while there is no computationally feasible way of determining A from a knowledge of B.
As an example, consider the relation N = pq, where p and q are prime numbers. (A prime number p is a whole number which has no divisors except 1 and p itself.) Even if p and q have several hundred digits each, a simple program can be written for any modern computer to calculate their product N in a negligible amount of time. However, if only N is given, the problem of finding its prime factors p and q would require many millions of years of computation, using any known technology.
The one-way function described above is essentially the basis of one of the most popular public-key cryptosystems, the so-called RSA system, named after Rivest, Shamir and Adleman, who proposed it in 1978. The extreme difficulty of finding the prime factors of huge numbers explains why it is not feasible to determine a secret key if the corresponding public key is known.
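A toy version of the RSA scheme, with primes small enough to factor by hand (real keys use primes of several hundred digits, which is exactly what makes the one-way function hard to reverse):

    # Tiny illustrative primes; the numbers here offer no real security.
    p, q = 61, 53
    n = p * q                       # 3233, the public modulus N = pq
    phi = (p - 1) * (q - 1)         # 3120
    e = 17                          # public exponent
    d = pow(e, -1, phi)             # 2753, the private exponent (Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)            # encrypt with the public key
    assert pow(ciphertext, d, n) == message    # decrypt with the private key

    # A digital signature reverses the roles: sign with the private key,
    # and anyone holding the public key can verify.
    signature = pow(message, d, n)
    assert pow(signature, e, n) == message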
PGP
The RSA system has been implemented in PGP, the standard program for secure e-mail. PGP, which stands for Pretty Good Privacy, was originally created by Philip Zimmermann, the first person to make military-grade cryptography available to the masses. Since then, PGP has undergone numerous revisions. Freeware versions of PGP exist for all major operating systems. With certain limitations, the different versions are interoperable.
PGP provides facilities for generating new key pairs, encrypting or decrypting messages, checking digital signatures, etc. The user need not be concerned with the mechanics of these processes. PGP automatically takes care of all the "bookkeeping".
Readers can download the program itself for many different computer systems, and also further information about PGP for beginners, from my page Introduction to PGP.
Applications of hybrid encryption for security for network applications
The combination of private key (symmetric) and public key (asymmetric) encryption appears in two widely used security enhancements for electronic mail - PGP - and Web browsing - SSL.
PGP - Pretty Good Privacy
PGP is a hybrid system for sending enciphered, digitally signed messages usually by email.
Features:
• combination of algorithms in a set of utility software for
• encryption of messages, digests, and keys
• message digest used for digital signature
• key generation for private session key
• key generation for users' public/private key pairs
• key management and certification
Sending a message
1. Attach message signature : 128 bit message digest plus timestamp
is enciphered with sender's private key.
2. Compress message + digest : Removes redundancy - makes more secure (harder to attack) and makes message smaller.
3. Create session key
4. Symmetric encryption for message contents and signature : Method is not DES - uses IDEA (see Tanenbaum p 596). Uses session key.
5. Encrypt session key : The session key is encrypted using RSA on receiver's public key.
6. Transmit (4) & (5)
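A minimal sketch of this hybrid pattern in Python, assuming the third-party cryptography package is installed; PGP itself uses IDEA and RSA with its own formats, so the primitives below (Fernet, OAEP) are stand-ins for illustration:

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Receiver's long-term key pair (PGP keeps these on the user's keyring).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # 1. Fresh symmetric session key, used for this message only.
    session_key = Fernet.generate_key()

    # 2. Encrypt the (possibly compressed) message with the fast symmetric key.
    ciphertext = Fernet(session_key).encrypt(b"The deal is off.")

    # 3. Encrypt only the small session key with the receiver's public key.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Receiver unwraps the session key with the private key, then the message.
    recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
    assert recovered == b"The deal is off."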

Session key - digest and encryption
• a session key is generated for each message sent
• changes every time
• made different by timestamp
• made different by timing how user types input on the spot
• key is sent to recipient using receiver's public key + RSA (asymmetric)
• message is sent using session key + IDEA (symmetric encryption)
• message includes digital signature using MD5 and RSA
• authentication
• non-repudiation
• integrity
PGP key management summary
• a public key/private key pair
is generated by PGP utility software for user on request
• user's private key is stored securely on user's disk,
encrypted using user's pass phrase
(from user's wetware memory)
• user's public key is made available in a record for giving away to others
(user id, public key, timestamp)
Certification is a key concept
A client/user will trust a public key if it can get a Certificate for that key
• a (user id, public key, timestamp) record digitally signed - i.e. encrypted - with a trusted person's private key
Certificates can be checked by decryption with this trusted person's public key.
This person may be a CA (Certificate Authority).
Any accepted certificates are kept as trusted public keys in a public keyring file
and can be used automatically to check any later incoming certificates.
Leads to a network of trust building up.
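Continuing the toy RSA numbers from the earlier sketch, checking a certificate amounts to recomputing a digest of the (user id, public key, timestamp) record and comparing it against the signature decrypted with the trusted person's public key:

    import hashlib

    # Toy RSA key pair of the trusted person / CA (numbers from earlier).
    n, e, d = 3233, 17, 2753

    record = b"user=alice; key=42; time=1997-01-01"   # illustrative certificate body
    digest = int.from_bytes(hashlib.sha1(record).digest(), "big") % n

    signature = pow(digest, d, n)    # created once, with the trusted private key

    # Anyone holding the trusted public key (n, e) can verify the certificate.
    assert pow(signature, e, n) == digest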
PGP Standardisation and legal issues
Legal Issues
Not allowed to be used within some countries by law (e.g. France)
Early versions violate some USA patents (in USA only)
Zimmermann may be on trial for "exporting munitions"
(but see New Scientist this week)
Standards
PGP is not a standard, but is freely available.
Algorithms are open.
Techno-politics
A political agenda is evident in the documentation. Privacy - independence from government
SSL - Secure Socket Layer
Netscape Communications Corporation proprietary protocol.
Protocol is built into Netscape browsers (and servers).
Provides
• privacy by encryption
• authentication by certificates and public/private keys
• data integrity by digital signature
SSL is a replacement for the socket layer - i.e. transport layer - not specific to HTTP alone.
SSL operation
Initial handshake between client and server when making connection
• server sends certificate containing ID and public key to client
- certificate is RSA public key encrypted with a CA's private key
• client checks certificate against own certificate database
or else checks signature of CA on the certificate - Certificate Authority
User can accept certificates and build up own list in client database.
Netscape builds in initial list of trusted Certificate Authorities
see Netscape browser->options->Security preferences->Site certificates
• length/type of secret session keys and algorithm to use are negotiated between client and server
• client creates four secret session keys (K1-K4) for the session with a server
• client sends the four keys to the server encrypted with the server's public key (known from the certificate)
• client sends requests
- encrypted - RC2, RC4 or DES (using key K1)
- digitally signed - MD5 or SHA-1, RSA (using key K2)
• server sends responses similarly encrypted (K3), digitally signed (K4)
For efficiency a session can contain several HTTP requests with a server.

FIREWALLS

Basically, a firewall is a barrier to keep destructive forces away from your property. In fact, that's why it's called a firewall. Its job is similar to a physical firewall that keeps a fire from spreading from one area to the next.
With a firewall in place, the landscape is much different. A company will place a firewall at every connection to the Internet (for example, at every T1 line coming into the company). The firewall can implement security rules. For example, one of the security rules inside the company might be:
Out of the 500 computers inside this company, only one of them is permitted to receive public FTP traffic. Allow FTP connections only to that one computer and prevent them on all others.
A company can set up rules like this for FTP servers, Web servers, Telnet servers and so on. In addition, the company can control how employees connect to Web sites, whether files are allowed to leave the company over the network and so on. A firewall gives a company tremendous control over how people use the network.

Definition – a computer network firewall is an electronic blocking mechanism that will not allow unauthorized intruders into a computer system. A computer firewall is a software program that blocks potential hackers from your individual computer or your computer network. Many different computer firewall software packages are available with a broad variety of costs and update options. Any computer that is always connected to the internet needs a firewall package.
Firewalls use one or more of three methods to control traffic flowing in and out of the network:
• Packet filtering - Packets (small chunks of data) are analyzed against a set of filters. Packets that make it through the filters are sent to the requesting system and all others are discarded.
• Proxy service - Information from the Internet is retrieved by the firewall and then sent to the requesting system and vice versa.
• Stateful inspection - A newer method that doesn't examine the contents of each packet but instead compares certain key parts of the packet to a database of trusted information. Information traveling from inside the firewall to the outside is monitored for specific defining characteristics, then incoming information is compared to these characteristics. If the comparison yields a reasonable match, the information is allowed through. Otherwise it is discarded.
Firewalls are customizable. This means that you can add or remove filters based on several conditions. Some of these are:
• IP address - Each machine on the Internet is assigned a unique address called an IP address. IP addresses are 32-bit numbers, normally expressed as four "octets" in a "dotted decimal number." A typical IP address looks like this: 216.27.61.137. For example, if a certain IP address outside the company is reading too many files from a server, the firewall can block all traffic to or from that IP address.
• Domain Names - Because it is hard to remember the string of numbers that make up an IP address, and because IP addresses sometimes need to change, all servers on the Internet also have human-readable names, called domain names. For example, it is easier for most of us to remember www.howstuffworks.com than it is to remember 216.27.61.137. A company might block all access to certain domain names, or allow access only to specific domain names.
• Specific words and phrases - This can be anything. The firewall will sniff (search through) each packet of information for an exact match of the text listed in the filter. For example, you could instruct the firewall to block any packet with the word "X-rated" in it. The key here is that it has to be an exact match. The "X-rated" filter would not catch "X rated" (no hyphen). But you can include as many words, phrases and variations of them as you need.
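As a toy illustration of these filter conditions, consider the following Python sketch (the blocked address, domain, and word list are hypothetical):

    BLOCKED_IPS = {"216.27.61.137"}
    BLOCKED_DOMAINS = {"banned.example.com"}
    BLOCKED_WORDS = [b"X-rated"]            # exact match only, as noted above

    def allow(packet):
        # packet is a dict with 'src_ip', 'domain', and 'payload' keys
        if packet["src_ip"] in BLOCKED_IPS:
            return False
        if packet["domain"] in BLOCKED_DOMAINS:
            return False
        # Exact substring match: b"X rated" (no hyphen) would slip through
        if any(word in packet["payload"] for word in BLOCKED_WORDS):
            return False
        return True

    print(allow({"src_ip": "10.0.0.5",
                 "domain": "www.howstuffworks.com",
                 "payload": b"harmless content"}))   # True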

What It Protects You From?
There are many creative ways that unscrupulous people use to access or abuse unprotected computers:
• Remote login - When someone is able to connect to your computer and control it in some form. This can range from being able to view or access your files to actually running programs on your computer.
• Application backdoors - Some programs have special features that allow for remote access. Others contain bugs that provide a backdoor, or hidden access, that provides some level of control of the program.
• SMTP session hijacking - SMTP is the most common method of sending e-mail over the Internet. By gaining access to a list of e-mail addresses, a person can send unsolicited junk e-mail (spam) to thousands of users. This is done quite often by redirecting the e-mail through the SMTP server of an unsuspecting host, making the actual sender of the spam difficult to trace.
• Operating system bugs - Like applications, some operating systems have backdoors. Others provide remote access with insufficient security controls or have bugs that an experienced hacker can take advantage of.
• Denial of service - You have probably heard this phrase used in news reports on the attacks on major Web sites. This type of attack is hard to counter. The hacker sends a request to the server to connect to it. When the server responds with an acknowledgement and tries to establish a session, it cannot find the system that made the request (typically because the source address is forged). By inundating a server with these unanswerable session requests, a hacker causes the server to slow to a crawl or eventually crash.
• E-mail bombs - An e-mail bomb is usually a personal attack. Someone sends you the same e-mail hundreds or thousands of times until your e-mail system cannot accept any more messages.
• Macros - To simplify complicated procedures, many applications allow you to create a script of commands that the application can run. This script is known as a macro. Hackers have taken advantage of this to create their own macros that, depending on the application, can destroy your data or crash your computer.
• Viruses - Probably the most well-known threat is computer viruses. A virus is a small program that can copy itself to other computers. This way it can spread quickly from one system to the next. Viruses range from harmless messages to erasing all of your data.
• Spam - Typically harmless but always annoying, spam is the electronic equivalent of junk mail. Spam can be dangerous though. Quite often it contains links to Web sites. Be careful of clicking on these, because you may be taken to a malicious site or accidentally download something that provides a backdoor to your computer.
• Redirect bombs - Hackers can use ICMP to change (redirect) the path information takes by sending it to a different router. This is one of the ways that a denial of service attack is set up.
• Source routing - In most cases, the path a packet travels over the Internet (or any other network) is determined by the routers along that path. But the source providing the packet can arbitrarily specify the route that the packet should travel. Hackers sometimes take advantage of this to make information appear to come from a trusted source or even from inside the network! Most firewall products disable source routing by default.
One of the best things about a firewall from a security standpoint is that it stops anyone on the outside from logging onto a computer in your private network. While this is a big deal for businesses, most home networks will probably not be threatened in this manner. Still, putting a firewall in place provides some peace of mind.
Is a firewall the ultimate solution?
The term “firewall” is already a buzzword in computer literature. Firewall marketing companies have generated a straightforward association in the mentality of budget administrators: “We have a firewall in place and therefore our network must be secure”.
However, total reliance on the firewall may provide a false sense of security. The firewall will not work alone (no matter how it is designed or implemented); it is not a panacea. The firewall is simply one of many tools in the toolkit of an IT security policy.

Let us briefly examine what tasks are attributed to contemporary firewalls. Firewalls control both incoming and outgoing network traffic. They can allow certain packets to pass through or else deny them access. For example, a firewall can be configured to pass traffic solely to port 80 of the Web server and to port 25 of the email server. This simple example shows that a firewall cannot evaluate the contents of "legitimate" packets and can unwittingly pass some attacks on the Web server straight through.


There are different types of firewalls. The firewall market offers many solutions, from firewall imitations to very advanced application filters.

Basically, a firewall removed from its packaging and installed between the network and the Internet adds little improvement to the security of the system. Human intervention is required to decide how to screen traffic and "instruct" the firewall to accept or deny incoming packets, which is in fact a complex and sensitive task. A single security policy rule established for the wrong reasons can leave a system vulnerable to outside attackers. One must also remember that a poorly configured firewall may worsen the system's effective immunity to attacks: if a firewall is in place, system administrators may believe that their systems are safe inside the "Maginot Line" and become lax about internal day-to-day security standards.
Similarly to "firewall", another buzzword has recently become very popular: "IDS". IDS solutions are designed to monitor events in an IT system, thus complementing the first line of defense (behind firewalls) against attacks. This article will explain the IDS-related terminology and take a closer look at the basics of the protection technology.


Monitoring IT systems: why and how?
It is common knowledge that a system administrator is similar to a policeman (or a security guard, if you like), since he/she is responsible for preventing outside attacks on an IT system. The difference is that policemen work on a shift basis, providing round-the-clock coverage, so a non-stop watch is guaranteed; such coverage is beyond what the administration of an IT system can normally provide. Another difference is that the Internet is not inherently secure, so contemporary IT systems are exposed to attacks that come from hackers worldwide with criminal intent. What, then, should a contemporary "policeman" look like? How can one proactively protect one's assets against unknown threats from unknown sources that are likely to appear at any time of the day or night? The answer is simple: use automatically operated systems to assist the "policemen" in their work. IDS tools perform the function of such a "policeman", taking care of the security of IT systems and detecting potential intrusions. An important caveat to remember: these are tools to assist people, not to replace them.

Although no single Internet security control measure is perfect, one measure, the firewall, has in many respects proven more useful overall than most others. In the most elementary sense, a firewall is a security barrier between two networks that screens traffic coming in and out of the gate of one network, accepting or rejecting connections and service requests according to a set of rules. If configured properly, it addresses a large number of threats that originate from outside a network without introducing any significant security liabilities. Most organizations are unable to install every patch that CERT advisories describe, for example, but these organizations can nevertheless protect hosts within their networks against external attacks that exploit those vulnerabilities by installing a firewall that prevents users external to the network from reaching the vulnerable programs in the first place. A more sophisticated firewall also controls how any connection between a host external to a network and an internal host occurs. In addition, an effective firewall hides information such as the names and addresses of hosts within the network, as well as the topology of the network it is employed to protect. Firewalls can defend against attacks on hosts (including spoofing attacks), application protocols, and applications. Finally, firewalls provide a central way not only of administering security for a network, but also of logging incoming and outgoing traffic, allowing accountability of user actions and triggering incident response activity if unauthorized activity occurs.
Firewalls are typically placed at gateways to networks (see Exhibit 1), mainly to protect an internal network from threats originating from an external one (especially from the Internet). In this type of deployment the goal is to create a security perimeter (see Exhibit 1) protecting hosts within from attacks originating from external sources. This scheme is successful to the degree that the security perimeter is not accessible through unprotected avenues of access (Chapman and Zwicky, 1995; Cheswick and Bellovin, 1994). The firewall acts as a “choke” component for security purposes. Note that in Exhibit 1 routers are in front and in back of the firewall. The first (shown above the firewall) is an external router used to initially route incoming traffic, direct outgoing traffic to external networks, and broadcast information that enables other network routers as well as the router to the other side of the firewall to know how to reach it. The other router is an internal router that sends incoming packets to their destination within the internal network, directs outgoing packets to the external router, and broadcasts information concerning how to reach it to the internal network and the external router. This “belt and suspenders” configuration further boosts security by preventing broadcasting of information about the internal network outside of the network that the firewall protects. This information can help an attacker learn of IP addresses, subnets, servers, and other information useful in perpetrating attacks against the network. Hiding information about the internal network is much more difficult if the gate has only one router because this router is the external and internal one, and must thus broadcast information about the internal network to the outside.

Exhibit 1. A Typical Gate-Based Firewall Architecture
Another way that firewalls are deployed (although, unfortunately, not as frequently) is within an internal network — at the entrance to a subnet within a network — rather than at the gateway to the entire network (see Exhibit 2). The purpose is to segregate a subnetwork (a “screened subnet”) from the internal network at large — a very wise strategy when the subnet has higher security needs than those within the rest of the security perimeter. This type of deployment allows more careful control over access to data and services within a subnet than is otherwise allowed within the network. The gate-based firewall, for example, may allow FTP access to an internal network from external sources. If a subnet contains hosts that store information such as lease bid data or salary data, however, allowing FTP access to this subnet is less advisable. Setting up the subnet as a screened subnet could solve this problem and provide suitable security control — the internal firewall that provides security screening for the subnet could be configured to deny all FTP access, regardless of whether the access requests originated from outside or inside the network.

Exhibit 2. A Screened Subnet
Simply having a firewall, no matter how it is designed and implemented, however, does not necessarily do much good with respect to protecting against externally originated security threats. The benefits of firewalling depend to a large degree on the type of firewall used in addition to how it is deployed and maintained, as explained shortly. The next section of this chapter explains each of the basic types of firewalls and their advantages and disadvantages.

TYPES OF FIREWALLS
Packet Filters: The most basic type of firewall is a packet filter. It receives packets, then evaluates them according to a set of rules that are usually in the form of access control lists. The result is that packets can meet with a variety of fates — be forwarded to their destination, dropped altogether, or dropped with a return message informing the originator what happened. The types of filtering rules vary from one vendor's product to another, but ones such as the following are most frequently applied:
• Source and destination IP address (e.g., all packets from source address 128.44.9.0 through 128.44.9.255 might be accepted but all other packets might be rejected)
• Source and destination port (e.g., all TCP packets originating from or destined to port 25 [the SMTP port] might be accepted, but all TCP packets destined for port 79 [the finger port] might be dropped)
• Direction of traffic (inbound or outbound)
• Type of protocol (e.g., IP, TCP, UDP, IPX, and so forth)
• The packet’s state (SYN or ACK8)

8An ACK (acknowledge) state means that a connection between hosts has already been established.
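To make first-match rule evaluation over fields like these concrete, here is a small Python sketch (the rule table is hypothetical; real access control lists are far richer):

    # Each rule: (protocol, direction, destination port, state, action);
    # None means "any". The first matching rule wins.
    RULES = [
        ("tcp", "inbound", 25, None,  "accept"),   # allow SMTP
        ("tcp", "inbound", 79, None,  "drop"),     # block finger
        ("tcp", "inbound", None, "SYN", "drop"),   # no new inbound connections
        (None,  None,      None, None, "accept"),  # default: accept
    ]

    def evaluate(proto, direction, dst_port, state):
        for r_proto, r_dir, r_port, r_state, action in RULES:
            if ((r_proto is None or r_proto == proto) and
                    (r_dir is None or r_dir == direction) and
                    (r_port is None or r_port == dst_port) and
                    (r_state is None or r_state == state)):
                return action
        return "drop"   # nothing matched: drop by default

    print(evaluate("tcp", "inbound", 25, "SYN"))   # accept (SMTP rule matches first)
    print(evaluate("tcp", "inbound", 80, "SYN"))   # drop (new inbound connection)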

Packet-filtering firewalls provide a good way to provide a reasonable amount of protection for a network with minimum complications. Packet-filtering rules can be extremely intuitive and can thus be easy to set up. One simple but surprisingly effective rule is to “allow” all packets that are sent from a specific, known set of IP addresses, such as hosts within another network owned by the same organization or corporation. Packet-filtering firewalls also tend to have the least negative impact upon throughput rate at the gateway compared to other types of firewalls. Additionally, they tend to be the most transparent to legitimate users; if the filtering rules are set up appropriately, users will be able to obtain the access they need with little interference from the firewall.
Unfortunately, simplicity has its disadvantages. The rules that this type of firewall implements are based on port conventions. When an organization wants to stop certain service requests (e.g., telnet) from reaching internal (or external) hosts, the most logical rule implementation is to block the port (in this case, port 23) that by convention is used for telnet traffic. Blocking this port, however, does not prevent someone inside the network from allowing telnet requests on a different port that the firewall’s rules leave open. In addition, blocking some kinds of traffic causes a number of practical problems. Blocking X-Windows traffic (which is typically sent to ports 6000 to 6013) superficially would seem to provide a good security solution, because of the many known vulnerabilities in this protocol. Many types of remote log-in requests and graphical applications depend on X-Windows, however. Blocking X-Windows traffic altogether may thus restrict functionality too much, leading to the decision to allow all X-Windows traffic (which makes the firewall a less than effective security barrier). In short, firewalling schemes based on ports do not provide the precision of control that many organizations need. Furthermore, packet-filtering firewalls are often deficient in logging capabilities, particularly in providing logging that can be configured to an organization’s needs (e.g., in some cases to capture only certain events, while in other cases to capture all events), and often also lack remote administration facilities that can save considerable time and effort. Finally, creating and updating filtering rules is prone to logic errors that result in easy conduits of unauthorized access to a network and can be a much larger, more complex task than anticipated.
Like many other security-related tools, many packet filtering firewalls have become more sophisticated over time. Some vendors of packet-filtering firewalls in fact now offer programs that check the logic of filtering rules to discover logical contradictions and other errors. Some packet-filtering firewalls, additionally, offer strong authentication mechanisms such as token-based authentication. Many vendors’ products now also defend against previously successful methods to defeat packet-filtering firewalls. Network attackers can send packets to or from a disallowed address or disallowed port by fragmenting the contents. Fragmented packets cannot be analyzed by a conventional packet-filtering firewall, so the firewall passes them through, but then they are assembled at the destination host. In this manner the network attackers can bypass firewall defenses altogether. However, some vendors have developed a special kind of packet-filtering firewall that prevents these types of attacks by remembering the state of connections that pass through the firewall9. Some state-conscious firewalls can even associate each outbound connection with a specific inbound connection (and vice versa), making enforcement of filtering rules much simpler.

9Because the UDP protocol is connectionless and does not thus contain information about states, these firewalls are still vulnerable to UDP-based attacks unless they track each UDP packet that has already gone through, then determine what subsequent UDP packet sent in the opposite direction (i.e., inbound or outbound) is associated with that packet.
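The state-remembering behavior described above can be sketched as a simple connection table (keying a connection by its address/port 4-tuple is a simplification of what real stateful firewalls track):

    # Connections initiated from inside the protected network
    established = set()

    def outbound(src_ip, src_port, dst_ip, dst_port):
        # Remember the connection so the reply direction can be matched later
        established.add((dst_ip, dst_port, src_ip, src_port))
        return "accept"

    def inbound(src_ip, src_port, dst_ip, dst_port):
        # Accept only packets that answer a known outbound connection
        if (src_ip, src_port, dst_ip, dst_port) in established:
            return "accept"
        return "drop"

    outbound("10.0.0.5", 40000, "93.184.216.34", 80)
    print(inbound("93.184.216.34", 80, "10.0.0.5", 40000))  # accept
    print(inbound("203.0.113.9", 80, "10.0.0.5", 40000))    # drop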

Many routers have packet-filtering capabilities and can thus in a sense be considered as a type of firewall. Using a packet-filtering router as the sole choke component within a gate, however, is not likely to provide sufficient security because routers are more vulnerable to attack than are firewall hosts and also because routers generally do not log traffic very well at all. A screening router is also usually difficult to administer, often requiring that a network administrator download its configuration files, edit them, and then send them back to the router. The main advantage of screening routers is that they provide a certain amount of filtering functionality with (usually) little performance overhead and minimal interference to users (who, because of these routers’ simple functionality, may hardly even realize that the screening router is in place). One option for using packet-filtering routers is to employ this type of router as the external router in a belt and suspenders topology (refer once again to Exhibit 1). The security filtering by the external router provides additional protection for the “real” firewall by making unauthorized access to it even more difficult. Additionally, the gate now has more than one choke component, providing multiple barriers against the person intent on attacking an internal network and helping compensate for configuration errors and vulnerabilities in any one of the choke components.
Application-Gateway Firewalls
A second type of firewall handles the choke function in a different manner — by determining not only whether but also how each connection through it is made. This type of firewall stops each incoming (or outgoing) connection at the firewall, then (if the connection is permitted) initiates its own connection to the destination host on behalf of whoever created the initial connection. This type of connection is thus called a proxy connection. Using its database defining the types of allowed connections, the firewall either establishes another connection (permitting the originating and destination hosts to communicate) or drops the original connection altogether. If the firewall is programmed appropriately, the whole process can be largely transparent to users.
An application-gateway firewall is simply a type of proxy server that provides proxies for specific applications. The most common implementations of application-gateway firewalls address proxy services (such as mail, FTP, and telnet) so that they do not run on the firewall itself — something that is very good for the sake of security, given the inherent dangers associated with each. Mail services, for example, can be proxied to a mail server. Each connection is subject to a set of specific rules and conditions similar to those in packet-filtering firewalls except that the selectivity rules used by application-gateway firewalls are not based on ports, but rather on the to-be-accessed programs/services themselves (regardless of what port is used to access these programs). Criteria such as the source or destination IP address can, however, still be used to accept or reject incoming connections. Application-level firewalls can go even further by determining permissible conditions and events once a proxy connection is established. An FTP proxy could restrict FTP access to one or more hosts by allowing use of the get command, for example, while preventing the use of the put command. A telnet proxy could terminate a connection if the user attempts to perform a shell escape or to gain root access. Application-gateway firewalls are not limited only to applications that support TCP/IP services; these tools can similarly govern conditions of usage of a wide variety of applications, such as financial or process control applications.
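The get-versus-put restriction above could be sketched as a per-command check in the proxy (in the FTP protocol, get corresponds to RETR and put to STOR; the allowed-command set and the relay helper are hypothetical):

    ALLOWED_FTP_COMMANDS = {"USER", "PASS", "LIST", "RETR"}   # get allowed, put not

    def forward_to_server(line):
        # Placeholder: a real proxy relays the command over its own connection
        return "150 Opening data connection."

    def proxy_ftp_command(line):
        command = line.split()[0].upper()
        if command not in ALLOWED_FTP_COMMANDS:
            return "550 Command disabled by proxy."           # STOR refused here
        return forward_to_server(line)

    print(proxy_ftp_command("RETR report.txt"))   # relayed
    print(proxy_ftp_command("STOR upload.bin"))   # refused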
Two basic types of application-gateway firewalls are currently available: (1) application-generic firewalls, and (2) application-specific firewalls. The former provide a uniform method of connection to every application, regardless of which particular one it is. The latter determine the nature of connections to applications on an application-by-application basis. Regardless of the specific type of application-gateway firewall, the security control resulting from using a properly configured one can be quite precise. When used in connection with appropriate host-level controls (e.g., proper file permissions and ownerships), application-gateway firewalls can render externally originated attacks on applications extremely difficult. Application-gateway firewalls also serve another extremely important function — hiding information about hosts within the internal network from the rest of the world, so to speak10. Finally, a number of commercial application-gateway firewalls available today support strong authentication methods such as token-based methods (e.g., use of hand-held authentication devices).

10Some packet-filtering firewalls are also able to accomplish this function.

Application-gateway firewalls are currently the best selling of all types of firewalls. Nevertheless, they have some notable limitations, the most significant of which is that every TCP/IP client for which the firewall provides proxies must be aware of the proxy that the firewall runs on its behalf. This means that each client must be modified accordingly, which is often no small task in today's typical computing environment. A second limitation is that unless one uses a generic proxy mechanism, every application needs its own custom proxy. This limitation is not formidable in the case of proxies for services such as telnet, FTP, and HTTP, because a variety of proxy implementations are available for these widely used services. Proxies for many other services, however, are not available at present and must be custom written. Third, although some application-gateway firewall implementations are more transparent to users than others, any vendor's claim that an implementation is completely transparent warrants healthy skepticism. Some application-gateway firewalls even require users who have initiated connections to make selections from menus before they reach the desired destination. Finally, most application-gateway firewalls are not easy to configure initially and update correctly. To use an application-gateway firewall to maximum advantage, network administrators should set up a new proxy for every new application accessible from outside the network. Furthermore, network administrators should work with application owners to ensure that specific, useful restrictions on usage are placed on every remote connection to each critical application from outside the network. Seldom, however, are such practices observed, because of the time, effort, and complexity involved.
Bastion Hosts
The term "bastion host" is used to describe a host operating system that has had all unnecessary programs and kernel components removed. It is commonly said that a bastion host has been hardened, that it is has been specially configured with security as the primary focus. Bastion hosts often serve as the base operating system for firewalls, VPNs, or IDS devices.
The theory behind a bastion host is that any program, application, or process can contain vulnerabilities that can be potentially exploited by a knowledgeable threat — a theory that has until now seen little reason to be disputed. By removing every element of the operating system not used in actual application of the host computer, the risk is reduced accordingly.
I have heard various arguments as to where the bastion host concept should be applied. Commonly, bastion hosts are specifically configured on systems that will almost certainly face threats — firewalls, DMZ hosts, VPN servers, etc. Some network administrators use the same argument to apply the bastion host concept to all hosts in the organization: if there is no need for the users of a host system to use a media player, then the media player applications are removed. Ultimately, where to apply the bastion host concept is a function of your final security policy.
The Network Demilitarized Zone (DMZ)
Various sources use different definitions for the DMZ. The strictest definition is the same as that applied to the border between North and South Korea: a "no man's land" that serves as a buffer and is not clearly under the control of either party. A network DMZ is a section of network that acts as a buffer between the external Internet and the internal trusted network (see Exhibit 16). Other definitions of the DMZ include a separate network that contains company Internet servers such as Web servers, external DNS servers, file servers, mail relays, etc. In this case the network clearly is part of the parent company and under its control, but the DMZ terminology is often applied in this instance as well.
It would also not be uncommon for a company to have a number of DMZs that employ both of the above definitions.
Regardless of its application, DMZ has evolved into a term meaning a section of the network whose access restrictions differ from those of the company's internal network.

USING FIREWALLS EFFECTIVELY
Choosing the Right Firewall
Choosing the right firewall is not an easy task. Each type of firewall offers its own set of advantages and disadvantages. Combined with the vast array of vendor firewall products (in addition to the possibility of creating one's own custom-built firewall), this task can be potentially overwhelming. Schultz (1996a) has presented a set of criteria for selecting an appropriate firewall. One of the most important considerations is the amount and type of security needed. For some organizations with low to moderate security needs, installing a packet-filtering firewall that blocks out only the most dangerous incoming service requests often provides the most satisfactory solution because the cost and effort entailed are not likely to be great. For organizations such as banks and insurance corporations, however, packet-filtering firewalls do not generally provide sufficient security capabilities (especially the granularity and control against unauthorized actions usually needed for connecting customers to services that reside within a financial or insurance corporation's network). Other factors such as the reputation of the vendor, how satisfactory vendor support arrangements are, verifiability of the firewall's code (to confirm that the firewall does what the vendor claims it does), support for strong authentication, ease of administration, the ability of the firewall to withstand direct attacks, and the quality and extent of logging and alarming capabilities should also be strong considerations in choosing a firewall.
The Importance of a Firewall Policy
The discussion so far has centered on high-level technical considerations. Although these considerations are extremely important, too often people overlook other considerations that, if neglected, can render firewalls ineffective. The most important single consideration in effectively using firewalls is, in fact, developing a firewall policy. A firewall policy is a statement of how a firewall should work — the rules by which incoming and outgoing traffic should be allowed or rejected (Power, 1995). A firewall policy, therefore, is a type of security requirements document for a firewall. As security needs change, firewall policies need to change accordingly. Failing to create and update a firewall policy for each firewall almost inevitably results in gaps between expectations and what each firewall actually does, resulting in uncontrolled security exposures in firewall functionality. Security administrators may, for example, think that all incoming HTTP requests are blocked, but the firewall may actually allow HTTP requests from certain IP addresses, leaving an unrecognized avenue of attack. An effective firewall policy should provide the basis for firewall implementation and configuration; needed changes in the way the firewall works should always be preceded by changes in the firewall policy. An accurate, updated firewall policy also should serve as the basis for evaluating and testing a firewall.
Security Maintenance
Many people who employ firewalls feel a false sense of security once the firewalls are in place. Properly designing and implementing firewalls, after all, can be difficult, costly, and time consuming. The truth, however, is that firewall design and implementation are simply the beginning point of having a firewall, and that firewalls that are not properly maintained soon lose their value as security control tools (Schultz, 1995). One of the most important facets of firewall maintenance is updating both the security policy and rules by which each firewall operates. Firewall functionality nearly invariably needs to change as new services and applications are introduced in (or sometimes removed from) a network. Undertaking the task of inspecting firewall logs on a daily basis to discover attempted and possibly successful attacks on both the firewall and the internal network it protects should be an extremely high priority. Evaluating and testing the adequacy of firewalls for unexpected access avenues to the security perimeter and vulnerabilities that lead to unauthorized access to the firewall itself should also be a frequent, high-priority activity (Schultz, 1996b).
Firewall logs:
ManageEngine Firewall Analyzer is a web-based firewall log analysis tool that collects, correlates, and reports on logs from enterprise-wide firewalls, proxy servers, and Radius servers. This firewall log analysis software forms an important part of the network monitoring and security arsenal: it helps you to track intrusion detection, manage user access, audit traffic, and manage your network bandwidth.
Firewall Analyzer analyzes your firewall logs and answers questions like the following:
• Who are the top Web surfers in the company?
• How many users are trying to access inappropriate content?
• Where are hack attempts originating?
• Which servers receive the most hits?
Firewall Analyzer uses a built-in syslog server to store these logs, and provides comprehensive reports on firewall traffic, security breaches, and more. This helps network administrators to arrive at decisions on bandwidth management, monitor web site visits, audit traffic, and ensure appropriate usage of networks by employees.
Firewall Analyzer Highlights
Compatibility
Firewall Analyzer supports most enterprise firewalls, including Check Point, Cisco PIX, SonicWALL, and more. The universally supported WELF format is accepted, and the native log formats of some of these firewalls are supported as well.
Automatic Detection & Configuration
Simply configure firewalls and other devices to send logs to Firewall Analyzer. They are automatically detected and reports are generated.
Flexible Archiving
Firewall Analyzer periodically archives the logs collected from each device. You can later load this archive into the database and view reports for specific firewall activity. Logs are archived periodically, with options to define log intervals and disable archiving to save disk space.
Rule-based Alerting
Firewall Analyzer lets you set up threshold-based alerts and notify operators by email whenever an alert is triggered. This means that operators are immediately alerted when the network is down or traffic levels go too high.
Pre-defined Reports
Firewall Analyzer includes pre-defined reports on bandwidth usage, top talkers, Web usage, VPN statistics, virus activity, and more.
Report Scheduling
The report scheduling feature lets you schedule reports to run automatically over user-defined time intervals. You can also choose to receive these reports automatically by email.
Customizable Reports
Apart from the instant reports, Firewall Analyzer lets you create custom reports and report profiles, based on specific criteria. Custom reports can be generated in PDF, scheduled to run automatically, and be sent by email.
Historical Trend Reports
Trend reports in Firewall Analyzer show you trends in bandwidth usage and user activity based on protocols, events, and more. Trend reports are useful in identifying user patterns and also help in long-term capacity planning.
Portability
Firewall Analyzer uses an embedded syslog server to collect logs from firewalls, proxy servers, and Radius servers. A built-in MySQL database is used to store them. This lets you deploy Firewall Analyzer anywhere on your network and generate reports with no additional setup.
Intrusion Detection Systems
ID Basics
Definition: An intrusion is an active sequence of related events that deliberately tries to cause harm. The definition refers to both successful and unsuccessful attempts.
• Intrusion – a series of concatenated activities that pose a threat to the safety of IT resources, e.g., unauthorized access to a specific computer or address domain.
• Incident – a violation of the system security policy rules that may be identified as a successful intrusion.
• Attack – a failed attempt to enter the system (no violation committed).
Intrusion Detection Systems – IDS
• IDSs can be defined as the tools, methods and resources that help identify, assess, and report unauthorized or unapproved network activity.
• Loosely, an IDS can be compared to an alarm system.
• IDSs are not a stand-alone protection measure.
• Many IDSs work at the network layer: they analyze packets to find specific patterns, and if one is found an alert is logged.
• They are similar to antivirus software, i.e., they use known signatures to recognize traffic patterns.
Although IDSs may be used in conjunction with firewalls, which aim to regulate and control the flow of information into and out of a network, the two security tools should not be considered the same thing. Using the previous example, firewalls can be thought of as a fence or a security guard placed in front of a house. They protect a network and attempt to prevent intrusions, while IDS tools detect whether or not the network is under attack or has, in fact, been breached. IDS tools thus form an integral part of a thorough and complete security system. They don’t fully guarantee security, but when used with security policy, vulnerability assessments, data encryption, user authentication, access control, and firewalls, they can greatly enhance network safety.
Intrusion detection systems serve three essential security functions: they monitor, detect, and respond to unauthorized activity by company insiders and outsider intrusion. Intrusion detection systems use policies to define certain events that, if detected, will trigger an alert. In other words, if a particular event is considered to constitute a security incident, an alert is issued when that event is detected. Certain intrusion detection systems can send out alerts, so that the administrator of the IDS receives notification of a possible security incident in the form of a page, email, or SNMP trap. Many intrusion detection systems not only recognize a particular incident and issue an appropriate alert; they also respond automatically to the event. Such a response might include logging off a user, disabling a user account, or launching scripts.
Why We Need IDS
Of the security incidents that occur on a network, the vast majority (up to 85 percent by many estimates) come from inside the network. These attacks often come from otherwise authorized users, such as disgruntled employees. The remainder come from the outside, in the form of denial of service attacks or attempts to penetrate a network infrastructure. Intrusion detection systems remain the only proactive means of detecting and responding to threats that stem from both inside and outside a corporate network.
Intrusion detection systems are an integral and necessary element of a complete information security infrastructure performing as "the logical complement to network firewalls." [BAC99] Simply put, IDS tools allow for complete supervision of networks, regardless of the action being taken, such that information will always exist to determine the nature of the security incident and its source.
Clearly, corporate America understands this message. Studies show that nearly all large corporations and most medium-sized organizations have installed some form of intrusion detection tool [SANS01]. The February 2000 denial of service attacks against Amazon.com and E-Bay (amongst others) illustrated the need for effective intrusion detection, especially within on-line retail and e-commerce. However, it is clear that given the increasing frequency of security incidents, any entity with a presence on the Internet should have some form of IDS running as a line of defense. Network attacks and intrusions can be motivated by financial, political, military, or personal reasons, so no company should feel immune. Realistically, if you have a network, you are a potential target, and should have some form of IDS installed.
What is Intrusion Detection?
As stated previously, intrusion detection is the process of monitoring computers or networks for unauthorized entrance, activity, or file modification. IDS can also be used to monitor network traffic, thereby detecting if a system is being targeted by a network attack such as a denial of service attack. There are two basic types of intrusion detection: host-based and network-based. Each has a distinct approach to monitoring and securing data, and each has distinct advantages and disadvantages. In short, host-based IDSs examine data held on individual computers that serve as hosts, while network-based IDSs examine data exchanged between computers.
Attacks are categorized as:
• Internal: coming from the enterprise's own employees, business partners, or customers.
• External: coming from outside, frequently via the Internet.
The following types of attacks can be identified:
• Those related to unauthorized access to resources:
– Password cracking and access violation
– Trojan horses
– Interception, most frequently associated with TCP/IP session stealing; such attacks often employ additional mechanisms to compromise operation of the attacked systems (for example, flooding), and include man-in-the-middle attacks
• Spoofing (deliberately misleading by impersonating or masquerading the host identity, e.g., by placing forged data in the cache of a name server, i.e., DNS spoofing)
• Scanning of ports and services, including ICMP scanning (ping), UDP scanning, and TCP stealth scanning, which takes advantage of partial TCP connection establishment, etc.
• Remote OS Fingerprinting, for example by testing typical responses on specific packets, addresses of open ports, standard application responses (banner checks), IP stack parameters etc.
• Network packet listening (a passive attack that is difficult to detect but sometimes possible)
• Stealing information, for example disclosure of proprietary information
• Authority abuse; a kind of internal attack, for example, suspicious access of authorized users having odd attributes (at unexpected times, coming from unexpected addresses),
• Unauthorized network connections,
• Usage of IT resources for private purposes, for example to access pornography sites,
• Taking advantage of system weaknesses to gain access to resources or privileges
• Unauthorized alteration of resources
o Falsification of identity, for example to get system administrator rights,
o Information altering and deletion,
o Unauthorized transmission and creation of data (sets), for example arranging a database of stolen credit card numbers on a government computer (e.g., the spectacular theft of several thousand credit card numbers in 1999),
o Unauthorized configuration changes to systems and network services (servers).
• Denial of Service (DoS):
o Flooding – compromising a system by sending huge amounts of useless information to lock out legitimate traffic and deny services:
- Ping flood (Smurf) – a large number of ICMP packets sent to a broadcast address,
- Sendmail flood – flooding with hundreds of thousands of messages in a short period of time; also POP and SMTP relaying,
- SYN flood – initiating huge numbers of TCP requests and not completing the handshakes as required by the protocol,
- Distributed Denial of Service (DDoS) – attacks coming from multiple sources.
The basic process for an IDS is:
• it passively collects data, then preprocesses and classifies them;
• analysis determines whether the information falls outside normal activity; if so, it is matched against a knowledge base;
• if a match is found, an alert is sent.
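That collect-classify-match-alert loop might look like the following sketch (the event format and the two knowledge-base signatures are invented for illustration):

    # Knowledge base: signature name -> predicate over an event
    KNOWLEDGE_BASE = {
        "port_scan":   lambda e: e["distinct_ports"] > 100,
        "login_storm": lambda e: e["failed_logins"] >= 3,
    }

    def outside_normal_activity(event):
        # Crude classification step: anything unusual is worth matching
        return event["distinct_ports"] > 10 or event["failed_logins"] > 0

    def analyze(event):
        if not outside_normal_activity(event):
            return None
        for name, matches in KNOWLEDGE_BASE.items():
            if matches(event):
                return "ALERT: %s from %s" % (name, event["src"])
        return None

    event = {"src": "203.0.113.9", "distinct_ports": 250, "failed_logins": 0}
    print(analyze(event))   # ALERT: port_scan from 203.0.113.9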
Components of an IDS:
1. Sensors
• used to obtain certain data and pass them on
Two basic types: network-based and host-based sensors.
Network-based sensors
• capture data in packets traversing the network
• advantages: the number of hosts for which they can provide data; they do not burden the network with much additional traffic
Host-based sensors
• programs that produce log data such as Event logger
• the output is sent to an analysis program that either runs on the same host or on a central host
Agents
• Agents are a group of processes that run independently and are programmed to analyze system behaviour or network events, or both, to detect anomalous events and violations of an organization's security policy.
• Agents incorporate three functions
– A communication interface
– A listener
– A sender
IDS Agents: These are software programs that reside on servers (HIDS), within critical network segments (NIDS), or on each network node (NNIDS).
Agents are key to IDS functioning. They may generate alerts for malicious activities.
2. Database. Here, all data collected by the agents is stored. By auditing the data gathered by all agents, attacks that threaten the entire network can be detected, and attack trends and patterns on the network can be tracked.
3. Manager Component: This is a console that manages all modules. The manager is the administrator's interface. Its fundamental purpose is to provide a control capability for IDSs.
Manager functions are:
• Data management
• Alerting
• Event Correlation
• Monitoring other components
• Policy generation and distribution
• Management Console
• Security management and enforcement
4. Alert Generator. This module is responsible for notifying the administrator about a potential threat. There are a variety of currently available IDS approaches.
• Certain IDSs are limited to generating alerts, which may be logged or displayed on the management console only (Snort, Cisco RealSecure), and rely on outside software for information processing purposes (Cisco recommends, for example, netForensics).
• Other solutions offer a wide range of sophisticated notifications (e-mail, SNMP trap, displaying a console box, fax, SMS, sending messages to the managing software, launching an arbitrary program).
5. Report Generator. This is a software module designed to generate reports based on the collected data. The database, the manager and the reporting software are integrated within a single console.
We can broadly identify three components:
1. Sensors (Agents): Generate security events.
2. Console (Database, Report Generators, and Manager): Monitors events and alerts, and controls the sensors.
3. Central Engine: Records events logged by the sensors in the database and uses a system of rules to generate alerts from security events.
Intrusion detection system activities

Intrusion detection system infrastructure

The type of IDS depends on the following:
1. Location of the sensors.
2. Type of sensors and the method used by the engine to generate alerts.
How does an IDS operate?

An IDS performs continuous monitoring of events. The intrusion detection software monitors the server and logs any unauthorized access attempts and aberrant behavior patterns. Of course, an IDS must be "instructed" to recognize such events. An IDS can process various types of data; the most frequent are eavesdropped traffic packets, system logs, and information on users' activities. In operational terms, three primary types of intrusion detection systems are available:
- Host-based systems – HIDS,
- Network-based systems – NIDS,
- Network node-based systems - NNIDS.
Host-Based IDS (HIDS)
Host-based systems were the first type of IDS to be developed and implemented. These systems collect and analyze data that originate on a computer that hosts a service, such as a Web server. Once this data is aggregated for a given computer, it can either be analyzed locally or sent to a separate/central analysis machine. One example of a host-based system is a program that operates on a system and receives application or operating system audit logs. These programs are highly effective for detecting insider abuses. Residing on the trusted network systems themselves, they are close to the network's authenticated users. If one of these users attempts unauthorized activity, host-based systems usually detect and collect the most pertinent information in the quickest possible manner. In addition to detecting unauthorized insider activity, host-based systems are also effective at detecting unauthorized file modification. A HIDS requires software that resides on the system and can scan all host resources for activity.
Possible host-based IDS implementations include Windows NT/2000 Security Event Logs.
Intrusion prevention system

The biggest disadvantage of the majority of host-based systems (HIDS) is that they are passive: they have to wait for an event that indicates an attack and cannot proactively prevent it. Recently, HIDS have been provided with intrusion prevention technology aimed at detecting certain symptoms of an attack and resisting it before any damage can occur. One such advanced hybrid IDS is based on monitoring system Application Programming Interface (API) software calls (made in the OS or kernel) and capturing calls that are prohibited under the established security policy rules. Thus, a system will not only detect an aberrant action but also prevent it. For example, if a system user (or a process, or a trojan) attains elevated permissions in an operating system as a result of intrusive actions and tries to destroy an important file, a passive system will merely detect the absence or modification of this file. A proactive system, in addition to notifying the system administrator, will also be able to prevent the data from being damaged. This technique was used, among others, by the Linux Intrusion Detection System (LIDS).

File integrity checking

Increasingly, HIDS use technologies that allow them to detect alterations to important system files and assets. As a rule, the files to be checked are periodically checksummed and the results compared against a checksum database. If a newly computed checksum does not match the value stored in the database, the file's integrity has been altered. Obviously, this approach can be used to monitor only critical, non-alterable system files.
Certain HIDS are able to verify features of certain assets. It is well known, for example, that system log files are incremental files that should only grow. Therefore, the system should be configured so that an alarm is triggered as soon as it detects an abnormal change to such a log.
A number of products that deal with monitoring of files and assets are available on the market. They are denoted with a FIA (File Integrity Assessment) abbreviation. The first program likely to employ file integrity assessment by checksum verification was Tripwire.
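A minimal FIA-style checker can be written with standard library hashing (SHA-256 here stands in for whatever digest a given product uses; the watched paths and baseline file name are hypothetical):

    import hashlib, json

    CRITICAL_FILES = ["/etc/passwd", "/etc/hosts"]   # hypothetical watch list

    def checksum(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def build_baseline(store="baseline.json"):
        # Run once on a known-good system; keep the store on read-only media
        with open(store, "w") as f:
            json.dump({p: checksum(p) for p in CRITICAL_FILES}, f)

    def verify(store="baseline.json"):
        with open(store) as f:
            baseline = json.load(f)
        for path, expected in baseline.items():
            if checksum(path) != expected:
                print("INTEGRITY ALERT: %s has been altered" % path)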
When deploying HIDS software, attention must be paid to securing the databases used by the system (event detection rule files, checksum files). Imagine that your operating system is brought under attack and the attacker knows that your OS uses HIDS coverage. While making changes to the system, the attacker may also modify the database containing the signatures of the changed files. Therefore, it is a good idea to store signatures and other databases, as well as configuration files and HIDS binaries, using a non-erasable method – for example, a write-protected diskette or a CD-ROM.
Pros: Host-based IDS can analyze activities on the host it monitors at a high level of detail; it can often determine which processes and/or users are involved in malicious activities. Though they may each focus on a single host, many host-based IDS systems use an agent-console model where agents run on (and monitor) individual hosts but report to a single centralized console (so that a single console can configure, manage, and consolidate data from numerous hosts). Host-based IDSs can detect attacks undetectable to the network-based IDS and can gauge attack effects quite accurately. Host-based IDSs can use host-based encryption services to examine encrypted traffic, data, storage, and activity. Host-based IDSs have no difficulties operating on switch-based networks, either.
Cons: Data collection occurs on a per-host basis; writing to logs or reporting activity requires network traffic and can decrease network performance. Clever attackers who compromise a host can also attack and disable host-based IDSs. Host-based IDSs can be foiled by DoS attacks (since they may prevent any traffic from reaching the host where they’re running or prevent reporting on such attacks to a console elsewhere on a network). Most significantly, a host-based IDS does consume processing time, storage, memory, and other resources on the hosts where such systems operate.
Network-Based IDS (NIDS)
As opposed to monitoring the activities that take place on a particular host, network-based intrusion detection analyzes data packets that travel over the actual network. These packets are examined and sometimes compared with empirical data to verify their nature: malicious or benign. Because they are responsible for monitoring a network rather than a single host, network-based intrusion detection systems (NIDS) tend to be more distributed than host-based IDS. Software, or appliance hardware in some cases, resides in one or more systems connected to a network and is used to analyze data such as network packets. Instead of analyzing information that originates and resides on a computer, network-based IDS uses techniques like "packet sniffing" to pull data from TCP/IP or other protocol packets traveling along the network. This surveillance of the connections between computers makes network-based IDS great at detecting access attempts from outside the trusted network. In general, network-based systems are best at detecting the following activities:
• Unauthorized outsider access: When an unauthorized user logs in successfully, or attempts to log in, they are best tracked with host-based IDS. However, detecting the unauthorized user before their log on attempt is best accomplished with network-based IDS.
• Bandwidth theft/denial of service: These attacks from outside the network single out network resources for abuse or overload. The packets that initiate/carry these attacks can best be noticed with use of network-based IDS.
Network Node IDS (NNIDS)
This IDS sub-approach is a specific modification of NIDS. Traditional NIDS agents, with the network interface set properly, collect data intended for the whole network. NNIDS, on the other hand, are composed of micro-agents distributed over each workstation within a network segment. Each micro-agent monitors only the network traffic directed to its own workstation, greatly reducing the capacity requirements of the NIDS. NNIDS weaknesses are associated with the requirement to deploy and manage a huge number of micro-agents, as well as with the fact that NNIDS may have difficulty detecting attacks that require coverage of the entire network, for example TCP stealth scanning. This traffic analysis approach is a must in VPNs when end-to-end encryption is employed: no traditional NIDS will be able to audit such traffic, as it is encrypted, whereas an NNIDS analyzes it immediately after decryption.

Pros: Network-based IDSs can monitor an entire, large network with only a few well-situated nodes or devices and impose little overhead on a network. Network-based IDSs are mostly passive devices that monitor ongoing network activity without adding significant overhead or interfering with network operation. They are easy to secure against attack and may even be undetectable to attackers; they also require little effort to install and use on existing networks.
Cons: Network-based IDSs may not be able to monitor and analyze all traffic on large, busy networks and may therefore overlook attacks launched during peak traffic periods. Network-based IDSs may not be able to monitor switch-based (high-speed) networks effectively, either. Typically, network-based IDSs cannot analyze encrypted data, nor do they report whether or not attempted attacks succeed or fail. Thus, network-based IDSs require a certain amount of active, manual involvement from network administrators to gauge the effects of reported attacks.
HIDS and NIDS Used in Combination
The two types of intrusion detection systems differ significantly from each other, but they complement one another well. The architecture of a host-based IDS is agent-based: a software agent resides on each of the hosts governed by the system. In addition, the more efficient host-based intrusion detection systems are capable of monitoring and collecting system audit trails in real time as well as on a scheduled basis, thus distributing both CPU utilization and network overhead and providing a flexible means of security administration.
In a proper IDS implementation, it is advantageous to fully integrate the network intrusion detection system so that it filters alerts and notifications in the same manner as the host-based portion of the system, controlled from the same central location. This provides a convenient means of managing and reacting to misuse using both types of intrusion detection.
Consequently, intrusion detection systems should rely predominantly on host-based components, but should always make use of NIDS to complete the defense. In short, a truly secure environment requires both network- and host-based intrusion detection to provide a robust system that forms the basis for all monitoring, response, and detection of computer misuse.
IDS Techniques
Anomaly Detection : Designed to uncover abnormal patterns of behavior, the IDS establishes a baseline of normal usage patterns, and anything that deviates widely from it is flagged as a possible intrusion. What counts as an anomaly can vary, but typically any event whose frequency falls more than two standard deviations above or below the statistical norm raises suspicion. For example, if a user logs on and off a machine 20 times a day instead of the normal 1 or 2, or if a computer is used at 2:00 AM when normally no one should have access outside business hours, suspicion is warranted. At another level, anomaly detection can investigate user patterns, such as profiling the programs executed daily: if a user in the graphics department suddenly starts accessing accounting programs or compiling code, the system can alert its administrators.
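A minimal sketch of this statistical idea in Python (the daily login counts and the two-standard-deviation cutoff are illustrative; a real IDS would maintain baselines per user and per metric):

import statistics

def is_anomalous(history, today, k=2.0):
    # Flag any observation more than k standard deviations from the baseline mean.
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(today - mean) > k * stdev

logins_per_day = [1, 2, 1, 1, 2, 1, 2, 1, 1, 2]   # baseline of normal usage
print(is_anomalous(logins_per_day, 20))   # True: 20 log-ons is far outside the baseline
print(is_anomalous(logins_per_day, 2))    # False: within the normal range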
Misuse Detection or Signature Detection : Commonly called signature detection, this method uses specific, known patterns of unauthorized behavior to predict and detect subsequent similar attempts. These patterns are called signatures. For host-based intrusion detection, one example of a signature is "three failed logins." For network intrusion detection, a signature can be as simple as a specific pattern that matches a portion of a network packet; for instance, packet content and/or header content signatures can indicate unauthorized actions such as improper FTP initiation. The occurrence of a signature might not signify an actual attempted unauthorized access (it may be an honest mistake, for example), but it is a good idea to take each alert seriously. Depending on the robustness and seriousness of the triggered signature, an alarm, response, or notification should be sent to the proper authorities.
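A minimal Python sketch of the matching step (the signature patterns below are invented placeholders, not rules from any real IDS):

SIGNATURES = {
    b"USER anonymous": "improper FTP initiation attempt",
    b"/etc/passwd":    "possible password-file grab",
}

def match_signatures(payload: bytes):
    # Return a description for every known pattern found in this packet payload.
    return [desc for pattern, desc in SIGNATURES.items() if pattern in payload]

packet = b"GET /cgi-bin/../../etc/passwd HTTP/1.0"
for alert in match_signatures(packet):
    print("ALERT:", alert)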
Target Monitoring : These systems do not actively search for anomalies or misuse; instead, they look for the modification of specified files. This is more of a corrective control, designed to uncover an unauthorized action after it occurs in order to reverse it. One way to check for the covert editing of files is to compute a cryptographic hash beforehand and compare it with fresh hashes of the file at regular intervals. This type of system is the easiest to implement, because it does not require constant monitoring by the administrator. Integrity checksum hashes can be computed at whatever interval you wish, on either all files or just the mission- or system-critical ones.
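A minimal Python sketch of checksum-based target monitoring (the file paths are illustrative):

import hashlib

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot(paths):
    # Baseline: hash every watched file beforehand.
    return {p: file_hash(p) for p in paths}

def check(baseline):
    # Return the files whose contents no longer match the baseline hash.
    return [p for p, h in baseline.items() if file_hash(p) != h]

critical = ["/etc/passwd", "/etc/hosts"]   # mission- or system-critical files
baseline = snapshot(critical)
# ... later, at whatever interval you wish ...
for path in check(baseline):
    print("ALERT: covert modification of", path)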
Stealth Probes : This technique attempts to detect attackers who carry out their mission over prolonged periods of time. An attacker, for example, may check for system vulnerabilities and open ports over a two-month period, then wait another two months to actually launch the attack. Stealth probes collect a wide variety of data throughout the system, checking for methodical attacks over a long period. They take a wide-area sampling and attempt to discover correlated attacks. In effect, this method combines anomaly detection and misuse detection in an attempt to uncover suspicious activity.
NIDS
• Broad in scope.
• Better for detecting attacks from outside.
• Examines packet headers.
• OS-independent.
• Detects network attacks, as the packet payload is analyzed.
• Analyzes all traffic within a specific network segment.
• A NIDS agent is a software module that resides on one of the servers within a LAN segment.
• Near real-time response: the volume of packets sent over contemporary LANs is enormous, and all of it must be analyzed. A NIDS must not fail to collect packets, so operating close to real-time mode is a must.
• Pattern matching: each packet on the network is received by the listening system (the NIDS), then filtered and compared, byte by byte, against a database of known attack signatures (patterns).

HIDS
• Narrow in scope.
• Better for detecting attacks from inside.
• Does not see packet headers.
• Detects local attacks before they hit the network.
• Collects data flowing into the system logs and searches for any information that triggers an alert.
• OS-dependent: the primary task of a HIDS is to audit system logs, so it relies on the operating system's log mechanisms (e.g., the Event Log on Windows, syslog on Unix).
• File integrity checking: detects alterations to important system files and assets. Products that monitor files and assets this way are called FIA (File Integrity Assessment) tools; Tripwire was likely the first program to employ file integrity assessment by checksum verification.
• Attention must be paid, when deploying HIDS software, to securing its database: an attacker could modify the database containing the signatures of changed files. It is therefore a good idea to store signatures, other databases, and configuration files on non-erasable media, for example a write-protected diskette or a CD-ROM.
• Responds after a suspicious log entry appears.
• HIDS are passive: they must wait for an event that indicates an attack and cannot proactively prevent it. Nowadays, however, there are proactive systems that not only notify the system administrator but also prevent data from being damaged, e.g., the Linux Intrusion Detection System (LIDS).

Intrusion Detection Pros and Cons
Pros
• Can detect external hackers as well as internal, network-based attacks.
• Scales easily to provide protection for the entire network.
• Offers centralized management for correlation of distributed attacks.
• Provides defense in depth.
• Provides an additional layer of protection.

Cons
• Generates false positive and false negative alerts.
• Reacts to attacks rather than preventing them.
• Generates an enormous amount of data to be analyzed.
• Susceptible to "low and slow" attacks.
• Cannot deal with encrypted network traffic.


By implementing the following techniques, IDSs can fend off expert and novice hackers alike. Although experts are more difficult to block entirely, these techniques can slow them down considerably:
• Breaking TCP connections by injecting reset packets into attacker connections causes attacks to fall apart.
• Deploying automated packet filters to block routers or firewalls from forwarding attack packets to servers or hosts under attack stops most attacks cold—even DoS or DDoS attacks. This works for attacker addresses and for protocols or services under attack (by blocking traffic at different layers of the ARPA networking model, so to speak).
• Deploying automated disconnects for routers, firewalls, or servers can halt all activity when other measures fail to stop attackers (as in extreme DDoS attack situations, where filtering would only work effectively on the ISP side of an Internet link, if not higher up the ISP chain, as close to Internet backbones as possible).
• Actively pursuing reverse DNS lookups or other means of establishing a hacker's identity is a technique used by some IDSs, which generate reports of malicious activity to all the ISPs on the routes between the attacker and the attackee (a minimal lookup sketch follows this list). Because such responses may themselves raise legal issues, experts recommend obtaining legal advice before repaying hackers in kind.
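A minimal sketch of the reverse-lookup step using Python's standard library (the address is a placeholder):

import socket

def identify(ip):
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)   # reverse (PTR) lookup
        return hostname
    except OSError:
        return None   # no PTR record, or the lookup failed

print(identify("8.8.8.8"))   # e.g. "dns.google"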
Criteria of merit for IDS Evaluation
• Speed
• Accuracy (hit, miss, false alarm)
• Response latency
• Overhead
• Noise
• Stimulus load
• Variability
• Usability
Whichever of these criteria are used, the figures must be dependable.

Steganography
Steganography conceals the very fact that a message is being sent. It is akin to covert channels, spread-spectrum communication, and invisible inks, and it adds another layer of security: a message in ciphertext may arouse suspicion, while an invisible message will not.
Steganography is the art and science of hiding information by embedding messages within other, seemingly harmless messages. It works by replacing bits of useless or unused data in regular computer files (such as graphics, sound, text, HTML, or even floppy disks) with bits of different, invisible information. This hidden information can be plain text, ciphertext, or even images.

Steganography is sometimes used when encryption is not permitted or, more commonly, to supplement encryption. An encrypted file may still hide information using steganography, so even if the encrypted file is deciphered (decoded), the hidden message is not seen.
Special software is needed for steganography, and there are freeware versions available at any good download site.
Steganography includes a vast array of methods of secret communication that conceal the very existence of the message. Among these methods are invisible inks, microdots, character arrangement (other than the cryptographic methods of permutation and substitution), digital signatures, covert channels, and spread-spectrum communications.
Cryptographic techniques "scramble" messages so that, if intercepted, they cannot be understood. Steganography, in essence, "camouflages" a message to hide its existence and make it seem "invisible," concealing the fact that a message is being sent at all.

PC Software that Provide Steganographic Services
In the computer, an image is an array of numbers that represent light intensities at various points (pixels) in the image. A common image size is 640 by 480 pixels with 256 colors (or 8 bits per pixel). Such an image contains about 300 kilobytes of data (640 x 480 pixels at one byte per pixel = 307,200 bytes).
There are usually two types of files used when embedding data into an image. The innocent-looking image that will hold the hidden information is the "container." The "message" is the information to be hidden: plain text, ciphertext, other images, or anything else that can be embedded in the least significant bits (LSBs) of an image.
For example: suppose we have a 24-bit, 1024 x 768 image (a common resolution for satellite images, electronic astral photographs, and other high-resolution graphics). This may produce a file over 2 megabytes in size (1024 x 768 x 24/8 = 2,359,296 bytes). All color variations are derived from the three primary colors: red, green, and blue. Each primary color is represented by 1 byte (8 bits), so 24-bit images use 3 bytes per pixel. If information is stored in the least significant bit (LSB) of each byte, 3 bits can be stored in each pixel. The "container" image will look identical to the human eye, even when viewed side by side with the original. Unfortunately, 24-bit images are uncommon (with the exception of the formats mentioned earlier) and quite large; they would draw attention to themselves when transmitted across a network. Compression would be beneficial, if not necessary, to transmit such a file, but file compression may interfere with the storage of the hidden information.
Kurak and McHugh identify two kinds of compression, lossless and lossy. Both methods save storage space but may present different results when the information is uncompressed.
• Lossless compression is preferred when there is a requirement that the original information remain intact (as with steganographic images). The original message can be reconstructed exactly. This type of compression is typical of GIF and BMP images.
• Lossy compression, while also saving space, may not maintain the integrity of the original image. This method is typical of JPG images and yields very good compression.
To illustrate the advantage of lossy compression, Renoir's Le Moulin de la Galette was retrieved as a 175,808 byte JPG image 1073 x 790 pixels with 16 million possible colors. The colors were maintained when converting it to a 24-bit BMP file but the file size became 2,649,019 bytes! Converting again to a GIF file, the colors were reduced to 256 colors (8-bit) and the new file is 775,252 bytes. The 256 color image is a very good approximation of Renoir's painting.
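The practical effect on hidden data can be seen with a short experiment, sketched here assuming the Pillow imaging library and an existing photo.bmp (file names are illustrative):

from PIL import Image
import os

img = Image.open("photo.bmp").convert("RGB")
img.save("photo.png")               # lossless: pixel values preserved exactly
img.save("photo.jpg", quality=75)   # lossy: much smaller, but pixel values change

for name in ("photo.bmp", "photo.png", "photo.jpg"):
    print(name, os.path.getsize(name), "bytes")

# Reloading shows why lossy formats break LSB steganography:
print(list(Image.open("photo.png").getdata()) == list(img.getdata()))   # True
print(list(Image.open("photo.jpg").getdata()) == list(img.getdata()))   # almost always False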
Most available steganographic software does not support, and does not recommend, using JPG files (with some exceptions). The next best alternative to 24-bit images is 256-color (or gray-scale) images, which are the most common images found on the Internet in the form of GIF files. Each pixel is represented as one byte (8 bits). Many authors of steganography software and articles stress the use of gray-scale images (those with 256 shades of gray or better). What matters, however, is not whether the image is gray-scale but the degree to which the colors change between bit values.
Techniques of Steganography
1. The key innovation in recent years was to choose an innocent-looking cover that contains plenty of random information, called white noise. You can hear white noise as the nearly silent hiss of a blank tape playing. The secret message replaces the white noise, and if this is done properly it will appear as random as the noise it replaced. The most popular methods use digitized photographs, so let's explore these techniques in some depth. Digitized photographs and video harbor plenty of white noise. A digitized photograph is stored as an array of colored dots, called pixels. Each pixel typically has three numbers associated with it, one each for the red, green, and blue intensities, and these values often range from 0 to 255. Each number is stored as eight bits (zeros and ones), with the most significant bit (on the left) worth 128, then 64, 32, 16, 8, 4, 2, and the least significant bit (on the right) worth just 1.

A difference of one or two in the intensities is imperceptible, and, in fact, a digitized picture can still look good if the least significant four bits of intensity are altered -- a change of up to 16 in the color's value. This gives plenty of space to hide a secret message. Text is usually stored with 8 bits per letter, so we could hide 1.5 letters in each pixel of the cover photo. A 640x480 pixel image, the size of a small computer monitor, can hold over 400,000 characters. That's a whole novel hidden in one modest photo!
Hiding a secret photo inside a cover picture is even easier. Line the two images up, pixel by pixel. Take the important four bits of each color value for each pixel in the secret photo (the left ones) and replace the unimportant four bits in the cover photo (the right ones) with them. The cover photo won't change much, and you won't lose much of the secret photo, yet to an untrained eye you are sending a completely innocuous picture.
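A minimal LSB-embedding sketch in Python, assuming the Pillow imaging library; the helper names and file names are invented for illustration. It hides a text message one bit per color value and reads it back:

from PIL import Image

def embed(cover_path, message, out_path):
    img = Image.open(cover_path).convert("RGB")
    bits = "".join(f"{b:08b}" for b in message.encode()) + "0" * 8   # NUL terminator
    flat = [c for px in img.getdata() for c in px]   # flatten R,G,B values
    if len(bits) > len(flat):
        raise ValueError("message too long for this cover image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | int(bit)          # replace the least significant bit
    out = Image.new("RGB", img.size)
    out.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
    out.save(out_path)                               # must be a lossless format (PNG/BMP)

def extract(stego_path):
    flat = [c for px in Image.open(stego_path).convert("RGB").getdata() for c in px]
    data = bytearray()
    for i in range(0, len(flat) - 7, 8):
        byte = int("".join(str(c & 1) for c in flat[i:i + 8]), 2)
        if byte == 0:                                # hit the NUL terminator
            break
        data.append(byte)
    return data.decode()

embed("cover.png", "attack at dawn", "stego.png")
print(extract("stego.png"))   # -> attack at dawn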
Unfortunately, anyone who cares to find your hidden image probably has a trained eye. The intensity values in the original cover image were white noise, i.e., random. The new values are strongly patterned, because they represent significant information from the secret image. This sort of change is easily detectable by statistics. So the final trick to good steganography is to make the message look random before hiding it.
One solution is simply to encode the message before hiding it. Using a good code, the coded message will appear just as random as the picture data it is replacing. Another approach is to spread the hidden information randomly over the photo. "Pseudo-random number" generators take a starting value, called a seed, and produce a string of numbers which appear random. For example, pick a number between 0 and 16 for a seed. Multiply your seed by 3, add 1, and take the remainder after division by 17. Repeat, repeat, repeat. Unless you picked 8, you'll find yourself somewhere in the sequence 1, 4, 13, 6, 2, 7, 5, 16, 15, 12, 3, 10, 14, 9, 11, 0, 1, 4, . . . which appears somewhat random. To spread a hidden message randomly over a cover picture, use the pseudo-random sequence of numbers as the pixel order. Descrambling the photo requires knowing the seed that started the pseudo-random number generator.
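The toy generator above takes only a few lines of Python to verify:

def lcg(seed, count):
    # x -> (3x + 1) mod 17, the toy pseudo-random generator described above
    x, out = seed, []
    for _ in range(count):
        x = (3 * x + 1) % 17
        out.append(x)
    return out

print(lcg(1, 17))   # [4, 13, 6, 2, 7, 5, 16, 15, 12, 3, 10, 14, 9, 11, 0, 1, 4]
print(lcg(8, 5))    # [8, 8, 8, 8, 8] -- the one seed that gets stuck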
2. Most of the newer applications use steganography like a watermark, to protect a copyright on information. Photo collections, sold on CD, often have hidden messages in the photos which allow detection of unauthorized use. The same technique applied to DVDs is even more effective, since the industry builds DVD recorders to detect and disallow copying of protected DVDs.
3. Even biological data, stored on DNA, may be a candidate for hidden messages, as biotech companies seek to prevent unauthorized use of their genetically engineered material. The technology is already in place for this: three New York researchers successfully hid a secret message in a DNA sequence and sent it across the country. Sound like science fiction? A secret message in DNA provided Star Trek's explanation for the dubious fact that all aliens seem to be humans in prosthetic makeup!

Network Management
What Is Network Management?
Network management means different things to different people. In some cases, it involves a solitary network consultant monitoring network activity with an outdated protocol analyzer. In other cases, network management involves a distributed database, autopolling of network devices, and high-end workstations generating real-time graphical views of network topology changes and traffic. In general, network management is a service that employs a variety of tools, applications, and devices to assist human network managers in monitoring and maintaining networks.
Network Management Architecture
Most network management architectures use the same basic structure and set of relationships. End stations (managed devices), such as computer systems and other network devices, run software that enables them to send alerts when they recognize problems (for example, when one or more user-determined thresholds are exceeded). Upon receiving these alerts, management entities are programmed to react by executing one, several, or a group of actions, including operator notification, event logging, system shutdown, and automatic attempts at system repair.
Management entities also can poll end stations to check the values of certain variables. Polling can be automatic or user-initiated, but agents in the managed devices respond to all polls. Agents are software modules that first compile information about the managed devices in which they reside, then store this information in a management database, and finally provide it (proactively or reactively) to management entities within network management systems (NMSs) via a network management protocol. Well-known network management protocols include the Simple Network Management Protocol (SNMP) and Common Management Information Protocol (CMIP). Management proxies are entities that provide management information on behalf of other entities. Figure 6-1 depicts a typical network management architecture.

Figure 6-1: A Typical Network Management Architecture Maintains Many Relationships
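To make the polling pattern concrete, here is a minimal Python sketch of a management entity that polls agents for variables and raises an alert when a user-determined threshold is exceeded. The function poll_agent is a hypothetical stand-in for a real management-protocol operation such as an SNMP GET; the hosts, variables, and thresholds are invented for illustration.

import random
import time

THRESHOLDS = {"cpu_utilization": 90.0, "link_utilization": 80.0}

def poll_agent(host, variable):
    # Hypothetical: in practice this would be, e.g., an SNMP GET of a MIB
    # variable on the agent; here it just returns a random reading.
    return random.uniform(0, 100)

def notify_operator(host, variable, value, limit):
    print(f"ALERT: {variable} on {host} is {value:.1f} (threshold {limit})")

def poll_loop(hosts, interval=1.0, cycles=3):
    # Automatic polling: check every variable on every managed device.
    for _ in range(cycles):
        for host in hosts:
            for variable, limit in THRESHOLDS.items():
                value = poll_agent(host, variable)
                if value > limit:
                    notify_operator(host, variable, value, limit)
        time.sleep(interval)

poll_loop(["router-1", "server-7"])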

ISO Network Management Model
The ISO has contributed a great deal to network standardization. Its network management model is the primary means for understanding the major functions of network management systems. This model consists of five conceptual areas, as discussed in the next sections.
Performance Management
The goal of performance management is to measure and make available various aspects of network performance so that internetwork performance can be maintained at an acceptable level. Examples of performance variables that might be provided include network throughput, user response times, and line utilization.
Performance management involves three main steps. First, performance data is gathered on variables of interest to network administrators. Second, the data is analyzed to determine normal (baseline) levels. Finally, appropriate performance thresholds are determined for each important variable so that exceeding these thresholds indicates a network problem worthy of attention.
Management entities continually monitor performance variables. When a performance threshold is exceeded, an alert is generated and sent to the network management system.
Each of the steps just described is part of the process to set up a reactive system. When performance becomes unacceptable because of an exceeded user-defined threshold, the system reacts by sending a message. Performance management also permits proactive methods: For example, network simulation can be used to project how network growth will affect performance metrics. Such simulation can alert administrators to impending problems so that counteractive measures can be taken.
Configuration Management
The goal of configuration management is to monitor network and system configuration information so that the effects on network operation of various versions of hardware and software elements can be tracked and managed.
Each network device has a variety of version information associated with it. An engineering workstation, for example, may be configured as follows:
• Operating system, Version 3.2
• Ethernet interface, Version 5.4
• TCP/IP software, Version 2.0
• NetWare software, Version 4.1
• NFS software, Version 5.1
• Serial communications controller, Version 1.1
• X.25 software, Version 1.0
• SNMP software, Version 3.1
Configuration management subsystems store this information in a database for easy access. When a problem occurs, this database can be searched for clues that may help solve the problem.
Accounting Management
The goal of accounting management is to measure network utilization parameters so that individual or group uses on the network can be regulated appropriately. Such regulation minimizes network problems (because network resources can be apportioned based on resource capacities) and maximizes the fairness of network access across all users.
As with performance management, the first step toward appropriate accounting management is to measure utilization of all important network resources. Analysis of the results provides insight into current usage patterns, and usage quotas can be set at this point. Some correction, of course, will be required to reach optimal access practices. From this point, ongoing measurement of resource use can yield billing information as well as information used to assess continued fair and optimal resource utilization.
Fault Management
The goal of fault management is to detect, log, notify users of, and (to the extent possible) automatically fix network problems to keep the network running effectively. Because faults can cause downtime or unacceptable network degradation, fault management is perhaps the most widely implemented of the ISO network management elements.
Fault management involves first determining symptoms and isolating the problem. Then the problem is fixed and the solution is tested on all-important subsystems. Finally, the detection and resolution of the problem is recorded.
Security Management
The goal of security management is to control access to network resources according to local guidelines so that the network cannot be sabotaged (intentionally or unintentionally) and sensitive information cannot be accessed by those without appropriate authorization. A security management subsystem, for example, can monitor users logging on to a network resource and can refuse access to those who enter inappropriate access codes.
Security management subsystems work by partitioning network resources into authorized and unauthorized areas. For some users, access to any network resource is inappropriate, mostly because such users are usually company outsiders. For other (internal) network users, access to information originating from a particular department is inappropriate. Access to Human Resource files, for example, is inappropriate for most users outside the Human Resources department.
Security management subsystems perform several functions. They identify sensitive network resources (including systems, files, and other entities) and determine mappings between sensitive network resources and user sets. They also monitor access points to sensitive network resources and log inappropriate access to sensitive network resources.
Review Questions
Q—Name the different areas of network management.
A—Configuration, accounting, fault, security, and performance.
Q—What are the goals of performance management?
A—Measure and make available various aspects of network performance so that internetwork performance can be maintained at an acceptable level.
Q—What are the goals of configuration management?
A—Monitor network and system configuration information so that the effects on network operation of various versions of hardware and software elements can be tracked and managed.
Q—What are the goals of accounting management?
A—Measure network utilization parameters so that individual or group uses on the network can be regulated appropriately.
Q—What are the goals of fault management?
A—Detect, log, notify users of, and automatically fix network problems to keep the network running effectively.
Q—What are the goals of security management?
A—Control access to network resources according to local guidelines so that the network cannot be sabotaged and so that sensitive information cannot be accessed by those without appropriate authorization.
Configuration Management Overview
Goal and Objectives
Configuration management is a critical process responsible for identifying, controlling, and tracking all versions of hardware, software, documentation, processes, procedures, and all other inanimate components of the information technology (IT) organization. The goal of configuration management is to ensure that only authorized components, referred to as configuration items (CIs), are used in the IT environment and that all changes to CIs are recorded and tracked throughout the component’s life cycle. To achieve this goal, the configuration management process includes the following objectives:
• To identify configuration items and their relationships and add them to the configuration management database (CMDB).
• To enable access to the CMDB and CIs by other service management functions (SMFs).
• To update and change CIs following changes to IT components during the release management process.
• To establish a review process that ensures that the CMDB accurately reflects the production IT environment.
Key Definitions
Configuration baseline. A configuration of a product or system established at a specific point in time, which captures both the structure and details of that product or system and enables that product or system to be rebuilt at a later date.
Configuration control. Activities that control changes to configuration items. They include evaluation, coordination, approval, or rejection of changes.
Configuration item (CI). An IT component that is under configuration management control. Each CI can be composed of other CIs. CIs may vary widely in complexity, size, and type, from an entire system (including all hardware, software, and documentation) to a single software module or a minor hardware component.
Configuration item attributes. The information recorded in the CMDB for every configuration item identified, ranging from the item's name, description, and location to technically detailed configuration settings and options.
Configuration management database (CMDB). A database that contains all relevant details of each configuration item (CI) and details of the important relationships between CIs. The database can include an item's ID code, copy and serial numbers, category, status, version, model, location, responsibility, and historical information. The level of detail in the database depends on the organization's aims and on the degree to which information needs to be available.
Configuration manager. The role that is responsible for managing the activities of the configuration management process for the IT organization. The role also selects, assigns responsibilities to, and trains the configuration management staff.
Processes and Activities
Process Flow Summary : Configuration management is graphically represented in the form of a process flow diagram (Figure 1) that identifies the activities needed to successfully manage and control key components of an IT infrastructure.

Figure 1. Configuration management process flow
This high-level overview can be further broken down into a number of detailed activities and process flows, which are summarized below. Together these detailed activities and process flows provide a comprehensive blueprint for the configuration management process.
Establish Configuration Items (CIs) : Assuming the need to track and control changes to an IT component, the process of adding an item to the CMDB involves first deciding upon the appropriate level of detail necessary to track and control change. Next, configuration items (CIs) are created in the database to permit management of components at this level.
One of the key benefits configuration management provides, in addition to asset management, is the modeling of relationships between IT components. These relationships need to be identified and connections built between configuration items in order to model the real-world situation. For example, a workstation is made up of a desktop computer, operating system, and applications, and the workstation is connected to and uses the network. The proper understanding and documentation of relationships between IT components makes it possible to perform detailed impact analysis on a proposed change.
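A minimal Python sketch of these ideas: a CMDB that stores CI attributes, records dependency relationships, and walks them to answer "what is affected if this CI changes?" (the names and schema are invented for illustration, not taken from any product):

from collections import defaultdict

class CMDB:
    def __init__(self):
        self.items = {}                   # CI id -> attribute dict
        self.used_by = defaultdict(set)   # CI id -> CIs that depend on it

    def add_ci(self, ci_id, **attributes):
        self.items[ci_id] = attributes

    def relate(self, component, depends_on):
        # Record that `component` depends on `depends_on`.
        self.used_by[depends_on].add(component)

    def impact_of_change(self, ci_id):
        # All CIs directly or indirectly affected by a change to ci_id.
        affected, stack = set(), [ci_id]
        while stack:
            for dependent in self.used_by[stack.pop()]:
                if dependent not in affected:
                    affected.add(dependent)
                    stack.append(dependent)
        return affected

cmdb = CMDB()
cmdb.add_ci("network", category="infrastructure")
cmdb.add_ci("desktop-42", category="hardware", location="HQ")
cmdb.add_ci("os-image", category="software", version="3.2")
cmdb.relate("desktop-42", "network")     # the workstation uses the network
cmdb.relate("os-image", "desktop-42")    # the OS runs on the workstation
print(cmdb.impact_of_change("network"))  # {'desktop-42', 'os-image'}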
Access Configuration Items : After information about IT components and relationships has been added to the CMDB, it can then be used by other SMFs. Change management, for example, uses the relationships defined within the CMDB to determine the impact of a change on other components within the IT environment. Problem management uses the CMDB as a resource to identify which CIs are the root cause of incidents, and so on.
Change Configuration Items : As release management begins to make changes to IT components, corresponding changes must be made to the CMDB. Without accurate and up-to-date information, the value of configuration management is lost. This process should be done automatically, wherever possible. The amount of information and the frequency of change make manual data entry impractical for all but the smallest organizations.
Review Configuration Items : The accuracy of the information stored in the CMDB is crucial to the success of the Change Management and Incident Management SMFs, as well as other service management functions. A review process that ensures that the database accurately reflects the production IT environment needs to be established.
Note A more fundamental review should also be carried out at periodic intervals to establish whether the information in the CMDB is relevant to the business and is being managed at the correct level of detail.
Setup Activities
Prior to initiating the configuration management process flow activities described above, a number of detailed setup and planning activities must be completed in order to use configuration management effectively. The following process flowchart (Figure 2) identifies these activities and the sequence in which they should be performed.

Agree on Purpose : The first and most important step in any project leading to the deployment of configuration management is to reach an agreement on its purpose, articulated in terms of key aims and objectives. When discussing aims and objectives, representatives from all parts of the organization that have a responsibility for IT components and services should be included.
Agree on Boundaries of Management : Ideally, information about all components and services of interest to change management would be recorded in a single CMDB. In practice, however, organizational issues, such as a widespread geographic structure, delegated administration, and ownership of specific IT components, will dictate the contents of a particular CMDB. In most cases, the impact of a change to the local environment is restricted to the country in which the change is applied, so there is little need to maintain configuration data about IT components in other countries. A best practice would be to create processes and procedures that ensure other groups are notified whenever certain types of changes are proposed.
Establish Standards for Configuration Management : Configuration management is only as good as the policies and procedures governing its activities. This includes the procedures that are used in the performance of such configuration management tasks as updating the CMDB, performing audits, reconciling the CMDB, and preparing management reports. All configuration process activities should be clearly defined and documented. Definition and documentation of configuration management standards is a best practice, as is maintaining the standards as a single document in a secure location.
Discover IT Components : All of the IT components that exist within the agreed management boundary must be identified before it can be determined which ones are important enough to be tracked and controlled using configuration management. In this context, IT components include process documentation, reference guides, technical manuals, and build documents—in addition to software applications, operating systems, network routers, workstations, and servers.
Decide What Needs to Be Managed : Managing and tracking change for every single component within even a small IT environment would be impractical. Best practice calls for managing only those components that:
• Are necessary for the effective operation of the business.
• Support the provision of IT services.
• Can be seriously impacted by changes to other components within the environment.
The decision to include a component in the CMDB should be reviewed at periodic intervals.
Build CMDB : The CMDB provides a single source of information about the components of the IT environment. This information can be used within the organization to improve system reliability, availability, and control. By establishing a database to track IT components, known as configuration items (CIs), configuration management can verify that only authorized components are used in the IT environment.

IT ACT

OFFENCES

65. Tampering with computer source documents.
Whoever knowingly or intentionally conceals, destroys or alters or intentionally or knowingly causes another to conceal, destroy or alter any computer source code used for a computer, computer programme, computer system or computer network, when the computer source code is required to be kept or maintained by law for the time being in force, shall be punishable with imprisonment up to three years, or with fine which may extend up to two lakh rupees, or with both.
Explanation.—For the purposes of this section, "computer source code" means the listing of programmes, computer commands, design and layout and programme analysis of computer resource in any form.
66. Hacking with computer system.
(1) Whoever with the intent to cause or knowing that he is likely to cause wrongful loss or damage to the public or any person destroys or deletes or alters any information residing in a computer resource or diminishes its value or utility or affects it injuriously by any means, commits hack.
(2) Whoever commits hacking shall be punished with imprisonment up to three years, or with fine which may extend up to two lakh rupees, or with both.
67. Publishing of information which is obscene in electronic form.
Whoever publishes or transmits or causes to be published in the electronic form, any material which is lascivious or appeals to the prurient interest or if its effect is such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it, shall be punished on first conviction with imprisonment of either description for a term which may extend to five years and with fine which may extend to one lakh rupees and in the event of a second or subsequent conviction with imprisonment of either description for a term which may extend to ten years and also with fine which may extend to two lakh rupees.
68. Power of Controller to give directions.
(1) The Controller may, by order, direct a Certifying Authority or any employee of such Authority to take such measures or cease carrying on such activities as specified in the order if those are necessary to ensure compliance with the provisions of this Act, rules or any regulations made thereunder.
(2) Any person who fails to comply with any order under sub-section (1) shall be guilty of an offence and shall be liable on conviction to imprisonment for a term not exceeding three years or to a fine not exceeding two lakh rupees or to both.
70. Protected system.
(1) The appropriate Government may, by notification in the Official Gazette, declare that any computer, computer system or computer network to be a protected system.
(2) The appropriate Government may, by order in writing, authorise the persons who are authorised to access protected systems notified under sub-section (1).
(3) Any person who secures access or attempts to secure access to a protected system in contravention of the provisions of this section shall be punished with imprisonment of either description for a term which may extend to ten years and shall also be liable to fine.
71. Penalty for misrepresentation.
Whoever makes any misrepresentation to, or suppresses any material fact from, the Controller or the Certifying Authority for obtaining any licence or Digital Signature Certificate, as the case may be, shall be punished with imprisonment for a term which may extend to two years, or with fine which may extend to one lakh rupees, or with both.
72. Penalty for breach of confidentiality and privacy.
Save as otherwise provided in this Act or any other law for the time being in force, any person who, in pursuance of any of the powers conferred under this Act, rules or regulations made thereunder, has secured access to any electronic record, book, register, correspondence, information, document or other material without the consent of the person concerned, discloses such electronic record, book, register, correspondence, information, document or other material to any other person, shall be punished with imprisonment for a term which may extend to two years, or with fine which may extend to one lakh rupees, or with both.
73. Penalty for publishing Digital Signature Certificate false in certain particulars.
(1) No person shall publish a Digital Signature Certificate or otherwise make it available to any other person with the knowledge that—(a) the Certifying Authority listed in the certificate has not issued it; or (b) the subscriber listed in the certificate has not accepted it; or (c) the certificate has been revoked or suspended,
unless such publication is for the purpose of verifying a digital signature created prior to such suspension or revocation.
(2) Any person who contravenes the provisions of sub-section (1) shall be punished with imprisonment for a term which may extend to two years, or with fine which may extend to one lakh rupees, or with both.
74. Publication for fraudulent purpose.
Whoever knowingly creates, publishes or otherwise makes available a Digital Signature Certificate for any fraudulent or unlawful purpose shall be punished with imprisonment for a term which may extend to two years, or with fine which may extend to one lakh rupees, or with both.
Cyber Forensics
Criminal investigators rely on recognized scientific forensic disciplines, such as medical pathology, to provide vital information used in apprehending criminals and determining their motives. Today, an increased opportunity for cyber crime exists, making advances in the law enforcement, legal, and forensic computing arenas imperative. Cyber forensics is the discovery, analysis, and reconstruction of evidence extracted from any element of computer systems, computer networks, computer media, and computer peripherals that allows investigators to solve the crime. Cyber forensics focuses on real-time, online evidence gathering rather than the traditional off-line computer disk forensic technology.
Two distinct components exist in the emerging field of cyber forensics. The first, computer forensics, deals with gathering evidence from computer media seized at the crime scene. Principal concerns in computer forensics are imaging storage media, recovering deleted files, searching slack and free space, and preserving the collected information for litigation purposes. Several computer forensic tools are available to investigators. The second component, network forensics, is the more technically challenging aspect of cyber forensics. It gathers digital evidence that is distributed across large-scale, complex networks. Often this evidence is transient in nature and is not preserved within permanent storage media. Network forensics deals primarily with in-depth analysis of computer network intrusion evidence, and current commercial intrusion-analysis tools are inadequate for today's networked, distributed environments.
Similar to traditional medical forensics, such as pathology, today's computer forensics is generally performed postmortem (i.e., after the crime or event occurred). In a networked, distributed environment, it is imperative to perform forensic-like examinations of victim information systems on an almost continuous basis in addition to traditional postmortem forensic analysis. This is essential to continued functioning of critical information systems and infrastructures. Few, if any, forensic tools are available to assist in preempting the attacks or locating the perpetrators. In the battle against malicious hackers, investigators must perform cyber forensic functions in support of various objectives, including timely cyber attack containment, perpetrator location and identification, damage mitigation, and recovery initiation in the case of a crippled, yet still functioning, network. Standard intrusion analysis includes examination of many sources of data evidence (e.g., intrusion detection system logs, firewall logs, audit trails, and network management information). Cyber forensics adds inspection of transient and other frequently overlooked elements such as contents or state of the following: memory, registers, basic input/output system, input/output buffers, serial receive buffers, L2 cache, front side and back side system caches, and various system buffers (e.g., drive and video buffers).
