Wednesday, February 18, 2009

ODBMS - Security Authorization

5.c) Discuss briefly the implementation aspects of security. [6]

Database security comprises the systems, processes, and procedures that protect a database from unintended activity. Unintended activity can be categorized as authenticated misuse, malicious attacks, or inadvertent mistakes made by authorized individuals or processes. Database security is also a specialty within the broader discipline of computer security.
Databases provide many layers and types of information security, typically specified in the data dictionary, including:
• Access control
• Auditing
• Authentication
• Encryption
• Integrity controls
Database security can begin with the creation and publication of appropriate security standards for the database environment. The standards may include specific controls for the various relevant database platforms; a set of best practices that cross over the platforms; and linkages of the standards to higher-level policies and governmental regulations.
An important procedure when evaluating database security is performing vulnerability assessments against the database.
The results of the scans should be used to harden the database in order to mitigate the threat of compromise by intruders. A vulnerability assessment is a preliminary procedure to determine risk, whereas a compliance program is the process of ongoing risk assessment.
Two crucial aspects of database security compliance are patch management and the review and management of permissions (especially PUBLIC) granted to objects within the database. Database objects may include tables and other schema objects. The permissions granted for SQL language commands on objects are considered in this process.
Application-level authentication and authorization mechanisms should be considered as an effective means of providing abstraction from the database layer. The primary benefit of this abstraction is single sign-on capability across multiple databases and database platforms. A single sign-on system stores the database user's credentials (login ID and password) and authenticates to the database on behalf of the user.
Another, more sophisticated security layer is real-time monitoring of database protocol traffic (SQL) over the network, and/or local monitoring of database activity using software agents. The traffic can be analyzed for known exploits, or network traffic baselines can be captured over time to build a normal pattern used for detecting anomalous activity that could indicate intrusion. These systems can provide a comprehensive database audit trail in addition to the intrusion detection (and potentially prevention) mechanisms.
After an incident occurs, the usage of Database Forensics can be employed to determine the scope.
A database security program should include the regular review of permissions granted to individually owned accounts and accounts used by automated processes. The accounts used by automated processes should have appropriate controls around password storage such as sufficient encryption and access controls to reduce the risk of compromise. For individual accounts, a two-factor authentication system should be considered in a database environment where the risk is commensurate with the expenditure for such an authentication system


1.g) List out the main types of threat that could affect a database system. [4]
Database Vulnerabilities (The many fronts of the security war!)
Basically database security can be broken down into the following key points of interest.

• Server Security
• Database Connections
• Table Access Control
• Restricting Database Access
Server Security
Server security is the process of limiting actual access to the database server itself, and in my humble opinion it is the most important angle of security and should be carefully planned.
The basic idea is this: "You can't access what you can't see." Why in the name of the Almighty (or whoever else you believe in, or if you are an Atheist, substitute your own name here) would you let your database server be visible to the world? This is not a web server; there should be no such thing as an anonymous connection. Now some people would say, "Well, what if your database server is supplying information to dynamic web pages?" Well, I'll turn that around and say, "Your database back end should never be on the same machine as your web server, not just for security, but for performance!" If your database server is supplying information to a web server, then it should be configured to allow connections only from that web server. Now that brings me to the next point of discussion:
Here Trusted IP Access has limited the database server to only answering information requests from the known IP of the web server.


Trusted IP addresses
Every server should be configured to allow only trusted IP addresses. You don't allow just anyone to come into your house and talk to your children. In the same respect, you should know exactly who should be allowed to "talk" to your database server.
If it's a back end for a web server, then only that web server's address should be allowed to access that database server. If the database server is supplying information to a homegrown application running on the internal network, then it should only answer to addresses from within the internal network.
Also, please, none of this cheap mentality of hosting your web databases on the same server that houses internal database information. Why would you have internal information out in the DMZ? It's not called the DMZ for nothing.
Database Connections
These days, with the number of dynamic applications, it becomes tempting to allow immediate unauthenticated updates to a database. I say "Nay!" to such laziness. If you are going to allow users to make updates to a database via a web page, validate all updates to ensure that they are warranted and safe. For example, ensure that you are removing any possible SQL code from user-supplied input. If a normal user should never be inputting it, don't allow the data to ever be submitted.
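As a minimal sketch of that advice (the table, column, and procedure names here are invented for illustration), using bind variables in dynamic SQL keeps user-supplied text from ever being executed as SQL:

-- Vulnerable pattern: concatenating user input into the statement lets
-- crafted input such as x' OR '1'='1 rewrite the query itself.
-- EXECUTE IMMEDIATE 'SELECT salary FROM emp WHERE ename = ''' || p_name || '''';

CREATE OR REPLACE PROCEDURE get_salary(p_name IN VARCHAR2) AS
  v_salary NUMBER;
BEGIN
  -- Safer pattern: the bind variable :name is treated purely as data
  -- and is never parsed as part of the SQL text.
  EXECUTE IMMEDIATE 'SELECT salary FROM emp WHERE ename = :name'
    INTO v_salary USING p_name;
  dbms_output.put_line('Salary: ' || v_salary);
END;
/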
If you are one of those administrators who feels the need to use ODBC connections, ensure that every connection uses its own unique user to access the shared data. It personally makes my skin crawl when I see the user account "sa" used for every connection and data source on the server. Does every employee in your company have keys to every room in the building? I'll let you address that problem quietly if they do.
Table Access Control
Table access control is probably one of the most overlooked forms of database security because of the inherent difficulty in applying it. Properly using table access control requires the collaboration of both the system administrator and the database developer, and we all know that "collaboration" is a foreign word in the IT industry.
An example would be allowing public read access to information that users have input themselves. If a user just input the information, why would they need to look at it within the same session? Or, if a table is used only for system reference, why should it have any permissions besides read available?
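As a rough sketch of that last case (table and account names invented), the permissions simply stop at what the role actually needs:

-- A lookup table used only for system reference: read access and nothing else.
GRANT SELECT ON country_codes TO app_user;

-- If broader access was granted to PUBLIC earlier, take it back explicitly.
REVOKE INSERT, UPDATE, DELETE ON country_codes FROM PUBLIC;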

Unfortunately table structure and proper relational database structure and development is a little out of the scope of this article. But, keep a look out for it in my upcoming articles.
Restricting Database Access
Now that we have completed a basic overview of database security, I want to dive a little further into the specifics of server security, mainly the network access of the system, specifically targeting Internet-based databases, since they have been the most recent targets of attacks. All web-enabled applications have ports that they listen on (I know this is pretty basic to most of you, but it needs to be said for the beginners!).
Most cyber criminals (I always refrain from the media-sensationalized terms "hackers" or "crackers") will do a simple port scan to look for open ports that popular database systems use by default. I say "by default" because you can change the ports a service listens on, which I personally feel is a great way to throw off a criminal.
First they will attempt to determine whether a machine is even at a specific address. They will do this by pinging the system. (If you don't know what ping is, quietly close this article; you need to do some studying first!) This is done by simply opening up a command line and typing "ping":
C:\> ping 127.0.0.1
or
root@localhost:~$ ping 127.0.0.1
The response should look like this:
Pinging 127.0.0.1 with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time<10ms TTL=128
Reply from 127.0.0.1: bytes=32 time<10ms TTL=128
Reply from 127.0.0.1: bytes=32 time<10ms TTL=128
Reply from 127.0.0.1: bytes=32 time<10ms TTL=128
Ping statistics for 127.0.0.1:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
An example of the ping command being used on a Windows box.


The criminal now knows there is a system answering at this address. The first thing to do to prevent this is to drop inbound ICMP packets. This will prevent ping requests from being replied to.
There are many ways to prevent open access from the Internet, and each database system, as well as each OS, has its own set of unique features. So I am merely going to touch on a few methods.
• Trusted IP addresses - UNIX servers are configured to answer only pings from a list of trusted hosts. In UNIX, this is accomplished by configuring the rhosts file, which restricts server access to a list of specific users.
• Server account disabling - If you suspend the server ID after three failed password attempts, attackers are thwarted. Without user ID suspension, an attacker can run a program that generates millions of passwords until it guesses the user ID and password combination. (A sketch follows this list.)
• Special tools - Products such as RealSecure by ISS send an alert when an external server is attempting to breach your system's security.
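In Oracle, for example, this kind of lockout can be enforced with a profile. A minimal sketch (the profile and user names are invented):

-- Lock the account after three failed login attempts.
CREATE PROFILE locked_after_3 LIMIT
  FAILED_LOGIN_ATTEMPTS 3
  PASSWORD_LOCK_TIME 1;  -- keep the account locked for one day

-- Attach the profile to an account.
ALTER USER app_user PROFILE locked_after_3;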
Oracle has a wealth of authentication methods:
• Kerberos security - This popular "ticket"-based authentication system sidesteps several security risks.
• Virtual private databases - VPD technology can restrict access to selected rows of tables.
• Role-based security - Object privileges can be grouped into roles, which can then be assigned to specific users.
• Grant-execute security - Execution privileges on procedures can be tightly coupled to users. When a user executes a procedure, they gain database access, but only within the scope of the procedure.
• Authentication servers - Secure authentication servers provide positive identification for external users.
• Port access security - All Oracle applications are directed to listen at a specific port number on the server. Like any standard HTTP server, the Oracle Web Listener can be configured to restrict access.
Databases need to have a level of security in order to protect them against both malicious and accidental threats. A threat is any type of situation that will adversely affect the database system. Some factors that drive the need for security are as follows:

- Theft and fraud
- Confidentiality
- Integrity
- Privacy
- Database availability

Threats to database security can come from many sources. People are a substantial source of database threats. Different types of people can pose different threats. Users can gain unauthorised access through the use of another person's account. Some users may act as hackers and/or create viruses to adversely affect the performance of the system. Programmers can also pose similar threats. The Database Administrator can also cause problems by not imposing an adequate security policy.

Some threats related to the hardware of the system are as follows:

- Equipment failure
- Deliberate equipment damage (e.g. arson, bombs)
- Accidental / unforeseen equipment damage (e.g. fire, flood)
- Power failure
- Equipment theft

Threats can exist over the communication networks that an organisation uses. Techniques such as wire tapping, cable disruption (cutting / disconnecting), and electronic interference can all be used to disrupt services or reveal private information.
Countermeasures

Some countermeasures that can be employed are outlined below:

- Access Controls (can be Discretionary or Mandatory)
- Authorisation (granting legitimate access rights)
- Authentication (determining whether a user is who they claim to be)
- Backup
- Journaling (maintaining a log file - enables easy recovery of changes)
- Encryption (encoding data using an encryption algorithm)
- RAID (Redundant Array of Independent Disks - protects against data loss due to disk failure)
- Polyinstantiation (data objects that appear to have different values to users with different access rights / clearance)
- Views (virtual relations which can limit the data viewable by certain users; a sketch follows this list)
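A minimal sketch of the Views countermeasure (the table, view, and role names are invented): the view exposes only the columns and rows a class of users is entitled to see, and users are granted access to the view rather than the base table.

-- Each user sees only their own row, and only non-sensitive columns.
CREATE VIEW my_employee_record AS
  SELECT name, department, extension
  FROM employee
  WHERE name = USER;

-- SELECT is granted on the view; the base table stays hidden.
CREATE ROLE staff_role;
GRANT SELECT ON my_employee_record TO staff_role;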
Threats to Databases
• Loss of confidentiality
– Protection of data from improper disclosure
• Loss of integrity
– Protection of data from improper modification
• Loss of availability
– Making data available to a user with a legitimate right

Introduction to Database Security
Database security begins with physical security for the computer systems that host the DBMS. No DBMS is safe from intrusion, corruption, or destruction by people who have physical access to the computers. After physical security has been established, database administrators must protect the data from unauthorized users and from unauthorized access by authorized users. There are three main objectives when designing a secure database application, and anything that prevents a DBMS from achieving these goals would be considered a threat to database security.
(1)Integrity

Database integrity refers to the requirement that information be protected from improper modification. Modification of data includes creation, insertion, updating, changing the status of data, and deletion. Integrity is lost if unauthorized changes are made to the data by either intentional or accidental acts.
To prevent the loss of integrity from happening-->Only authorized users should be allowed to modify data.
e.g. Students may be allowed to see their grades, yet not allowed to modify it.
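A minimal SQL illustration of that rule (the table and role names are invented):

-- Roles for the two user populations.
CREATE ROLE student_role;
CREATE ROLE instructor_role;

-- Students may read grades but never change them.
GRANT SELECT ON grades TO student_role;

-- Instructors may both read and modify them.
GRANT SELECT, UPDATE ON grades TO instructor_role;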
(2)Availability
Authorized user or program should not be denied access. For example, an instructor who wishes to change a grade should be allowed to do so.
(3)Secrecy

Information should not be disclosed to unauthorized users. For example, a student should not be allowed to examine other students’ grades.
To achieve these objectives, a clear and consistent security policy should be developed to describe what security measures must be enforced. In particular, we must determine what part of the data is to be protected and which users get access to which portions of the data. Next, the security mechanisms of the underlying DBMS and operating system, as well as external mechanisms, such as securing access to buildings, must be utilized to enforce the policy. We emphasize that security measures must be taken at several levels.
Why is database security important?

If the loss of system or data integrity is not corrected, continued use of the contaminated system or corrupted data could result in inaccuracy, fraud, or erroneous decisions. In addition, unauthorized, unanticipated, or unintentional disclosure could result in loss of public confidence, embarrassment, or legal action against the organization.
Some database crime news
Credit card database hacked
Cops tap database to harass, intimidate


Countermeasures to database security threats
(1) Inference control --> the countermeasure for statistical database security.

A statistical database contains specific information on individuals or events but is intended to permit only statistical queries (e.g. averages, sums, counts, maximums, minimums, and standard deviations). However, it is possible to obtain confidential data on individuals by using only statistical queries. Inference control techniques are used to prevent this from happening (e.g. we can prohibit sequences of queries that refer repeatedly to the same population of tuples).
(2) Flow Control

"Flow control regulates the distribution or flow of information among accessible objects. A flow between object X and object Y occurs when a program reads values from X and writes values into Y. Flow controls check that information contained in some objects does not flow explicitly or implicitly into less protected objects. Thus, S user cannot get indirectly in Y what he or she cannot get directly from X." Elmasri,Navathe(P747)
(3) Encryption

"The idea behind encryption is to apply an encryption algorithm to the data, using a user-specified or DBA-specified encryption key. The output of the algorithm is the encrypted version of the data. There is also a decryption algorithm, which takes the encrypted data and a decryption key as input and then returns the original data." Elmasri,Navathe(P709)
(4) Access Control
A database for an enterprise contains a great deal of information and usually has several groups of users. Most users need to access only a small part of the database to carry out their tasks. Allowing users unrestricted access to all the data can be undesirable, and a DBMS should provide mechanisms to control access to data. The main idea behind access control is to prevent unauthorized persons from accessing the system.
Traditional Ways (70s-80s)
How does it work?
1: Discretionary Access Control
Discretionary access control is based on the idea of access rights, or privileges, and mechanisms for giving users such privileges. A privilege allows a user to access some data object in a certain manner (e.g. to read or modify). A user who creates data object such as a table or a view automatically gets all applicable privileges on that object and the user can also propagate privileges using "Grant Option". The DBMS subsequently keeps track of how these privileges are granted to other users, and possibly revoked, and ensures that at all times only users with the necessary privileges can access an object.
SQL Syntax
SQL supports discretionary access control through the GRANT and REVOKE commands.
The GRANT command gives users privileges to base tables and views.
The REVOKE command cancels users' privileges.
For example:

GRANT privilege1, privilege2, ...
ON object_name
TO user1, user2, ... ;

REVOKE privilege1, privilege2, ...
ON object_name
FROM user1, user2, ... ;

GRANT SELECT, ALTER
ON student
TO db2_14;

REVOKE SELECT, ALTER
ON student
FROM db2_14;
Example from the textbook (R. Elmasri, S. B. Navathe, Fundamentals of Database Systems, 4th ed., Addison-Wesley, 2003, Chapter 23)
Suppose that A1 creates the two base relations EMPLOYEE and DEPARTMENT
EMPLOYEE
NAME SSN BDATE ADDRESS SEX SALARY DNO
DEPARTMENT
DNUMBER DNAME MGRSSN
A1 is then the owner of these two relations and hence has all the relation privileges on each of them. Suppose A1 wants to grant account A2 the privilege to insert and delete tuples in both of these relations:
GRANT INSERT, DELETE ON EMPLOYEE, DEPARTMENT TO A2;
A2 cannot grant INSERT and DELETE privileges on the EMPLOYEE and DEPARTMENT tables, because A2 was not given the GRANT OPTION in the preceding command.
Suppose A1 now grants A3 the SELECT privilege, this time passing on the right to propagate it:
GRANT SELECT ON EMPLOYEE, DEPARTMENT TO A3 WITH GRANT OPTION;
The clause WITH GRANT OPTION means that A3 can now propagate the privilege to other accounts by using GRANT. For example, A3 can grant the SELECT privilege on the EMPLOYEE relation to A4 by issuing the following command:
GRANT SELECT ON EMPLOYEE TO A4;
Now suppose that A1 decides to revoke the SELECT privilege on the EMPLOYEE relation from A3; A1 then can issue this command:
REVOKE SELECT ON EMPLOYEE FROM A3;
The DBMS must now automatically revoke the SELECT privilege on EMPLOYEE from A4, too, because A3 granted that privilege to A4 and A3 no longer has the privilege.

Limits on propagation of privileges

Techniques to limit the propagation of privileges have been developed, but they have not been implemented in most DBMSs and are not part of SQL.
Horizontal propagation limits:
An account B given the GRANT OPTION can grant the privilege to at most i other accounts.
Vertical Propagation limits:
It limits the depth to which an account can pass on the privilege in terms of levels.
Pros and Cons of discretionary access control
Advantages:
Flexible and suitable for various types of systems and applications, such as commercial and industrial environments.
Disadvantages:
Does not provide real assurance that the protection requirements are satisfied.
Does not impose any restriction on the usage of information once it has been obtained by a user, which makes the system vulnerable to attacks.
2: Mandatory Access Control
Mandatory access control is aimed at addressing such loopholes in discretionary access control. The popular model for mandatory access control, called the Bell-LaPadula model, is described in terms of objects, subjects, security classes, and clearances. Each database object is assigned a security class, and each subject is assigned clearance for a security class.
The Bell-LaPadula model imposes two restrictions on all reads and writes of database objects:
1. Simple Security Property: Subject S is allowed to read object O only if class(S) ≥ class(O). For example, a user with TS (top secret) clearance can read a table with C (confidential) clearance, but a user with C (confidential) clearance is not allowed to read a table with TS (top secret) classification.
2. *-Property: Subject S is allowed to write object O only if class(S) ≤ class(O). For example, a user with S (secret) clearance can write only objects with S (secret) or TS (top secret) classification.
If discretionary access controls are also specified, these rules represent additional restrictions. Therefore, to read or write a database object, a user must have the necessary privileges and the security classes of the user and the object must satisfy the preceding restrictions.
Advantages: Mandatory policies ensure a high degree of protection --> suitable for military types of applications, which require a high degree of protection.
Disadvantages: Applicable to very few environments, being too rigid for most.
Current State and Future-->Role-Based Access Control
Role-Based Access Control emerged rapidly in the 1990s and has been adopted by most DBMSs since then. Its basic concept is that privileges are associated with roles, and users are assigned to appropriate roles. Roles can then be granted to users and to other roles. (Roles can be created and destroyed using the CREATE ROLE and DROP ROLE commands.) RBAC appears to be a viable alternative to traditional discretionary and mandatory access controls; it ensures that only authorized users are given access to certain data or resources.
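A brief sketch in standard SQL (the role, table, and user names are invented): privileges are bundled into a role once, then granted or revoked as a unit.

-- Create a role that bundles what a payroll clerk needs.
CREATE ROLE payroll_clerk;

-- Grant object privileges to the role, not to individual users.
GRANT SELECT, UPDATE ON employee TO payroll_clerk;
GRANT SELECT ON department TO payroll_clerk;

-- Assign the role to users; revoking it detaches every bundled
-- privilege in one step.
GRANT payroll_clerk TO alice, bob;
REVOKE payroll_clerk FROM bob;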
Advantages of RBAC
A properly-administered RBAC system enables users to carry out a broad range of authorized operations, and provides great flexibility and breadth of application. System administrators can control access at a level of abstraction that is natural to the way that enterprises typically conduct business. This is achieved by statically and dynamically regulating users' actions through the establishment and definition of roles, role hierarchies, relationships, and constraints. Thus, once an RBAC framework is established for an organization, the principal administrative actions are the granting and revoking of users into and out of roles. Role associations can be established when new operations are instituted, and old operations can be deleted as organizational functions change and evolve. This simplifies the administration and management of privileges; roles can be updated without updating the privileges for every user on an individual basis. With these outstanding features and the easier deployment over the Internet, Role-Based Access Control undoubtedly will continue to be dominant in the future.


1.f) Why is encryption still needed when a DBMS already supports discretionary and mandatory access control? [4]

Database data can be encrypted with a key that is generated by the DBMS. Access to this key is protected by encryption and decryption keys that are provided by the client. This allows a client to change (for example) the user password without requiring the entire database to be re-encrypted.
It is not possible to turn a secure database into a plain-text database or vice versa; this can only be done by copying the data from one database to another.
It is important to note that encrypting a database has an impact on performance.
The RDbStoreDatabase and RDbNamedDatabase classes provide variants of the Create() and Open() functions that can create and open a secure database. The access key can be changed using the ChangeSecurity() member function of the RDbDatabase base class.
ENCRYPTION ISSUES
While there are many good reasons to encrypt data, there are many bad reasons to encrypt data.
Encryption does not solve all security problems, and may even make some problems worse. The
following section describes some of the misconceptions about encryption of stored data.
Issue 1: Encryption is not access control
Most organizations need to limit access to data to those who have a “need to know.” For example, a
human resources system may limit employees to reviewing only their own employment records, while
managers of employees may see the employment records of those employees working for them. Human
resources specialists may also need to see employee records for multiple employees.
This type of security policy, limiting data access to those with a need to see it, is typically
addressed by access control mechanisms. The Oracle database has provided strong, independently-
evaluated access control mechanisms for many years. Recently, Oracle8i has added the ability to
enforce access control to an extremely fine level of granularity, through its Virtual Private Database
capability.
Because human resources records are considered sensitive information, it’s tempting to think that this
information should all be encrypted “for better security.” However, encryption cannot enforce the type
of granular access control described above, and may actually hinder data access. In the human resources
example, an employee, his manager, and the HR clerk all need to access the employee’s record. If
employee data is encrypted, then each person also has to be able to access the data in unencrypted form.
Therefore, the employee, the manager and the HR clerk would have to share the same encryption key to
decrypt the data. Encryption would therefore not provide any additional security in the sense of better
access control, and the encryption might actually hinder the proper functioning of the application. There
is the additional issue that it is very difficult to securely transmit and share encryption keys among
multiple users of a system.
A basic principle behind encrypting stored data is that it must not interfere with access control. For
example, a user who has SELECT privilege on EMP should not be limited by the encryption mechanism
from seeing all the data he is otherwise allowed to see. Similarly, there is little benefit to encrypting,
(for example) part of a table with one key and part of a table with another key if users need to see all
encrypted data in the table; it merely adds to the overhead of decrypting data before users can read it.
Provided that access controls are implemented well, there is little additional security provided within the
database itself from encryption; any user who has privilege to access data within the database has no
more nor less privilege as a result of encryption. Therefore, encryption should never be used to solve
access control problems.

Issue 2: DBAs can access all data
Some organizations are concerned that database administrators (DBAs), because they typically have all
privileges, are able to see all data in the database. These organizations feel that the DBAs should
merely administer the database, but should not be able to see the data that the database contains. Some
organizations are also concerned about the concentration of privilege in one person, and would prefer to
partition the DBA function, or enforce two-person rules.
It’s tempting to think that encrypting all data (or significant amounts of data) will solve the above
problems, but there are better ways to accomplish these objectives. First of all, Oracle does support
limited partitioning of DBA privilege. Oracle9i provides native support for SYSDBA and SYSOPER
users. SYSDBA has all privileges, but SYSOPER has a limited privilege set (e.g. startup and shutdown
of the database). Furthermore, an organization can create smaller roles encompassing a number of
system privileges. A JR_DBA role might not include all system privileges, but only those appropriate to
a more junior database administrator (such as CREATE TABLE, CREATE USER, etc.) Oracle does
not audit the actions taken by SYS (or SYS-privileged users) but does audit startup and shutdown of the
database in the operating system records.
Furthermore, the DBA function by its nature is a trusted position. Even organizations with the most
sensitive data, such as intelligence agencies, do not typically partition the DBA function. Instead,
they vet their DBAs strongly, because it is a position of trust.
Encryption of stored data must not interfere with the administration of the database, or larger security
issues can result than you were attempting to address with encryption. For example, if by encrypting
data, you corrupt the data, you’ve created a security problem: data is not meaningful and may not be
recoverable.
Encryption can be used to mitigate the ability of a DBA, or other privileged user, to see data in the
database, but it is not a substitute for vetting a DBA properly, or for limiting the use of powerful system
privileges. If an untrustworthy user has significant privilege, there are multiple threats he can pose to an
organization, which may be far more significant than viewing unencrypted credit card numbers.
Issue 3: Encrypting everything does not make data secure
It’s a pervasive tendency to think that if storing some data encrypted strengthens security, then
encrypting everything makes all data secure.
We’ve already seen why encryption does not address access control issues well. Consider the
implications of encrypting an entire production database. All data must be decrypted to be read,
updated, or deleted, and, as discussed earlier, the encryption must not interfere with normal access
controls. Encryption is innately a performance-intensive operation; encrypting all data will significantly
affect performance. Availability is a key aspect of security and if, by encrypting data, you make data
unavailable, or the performance adversely affects availability, you have created a new security problem.
Encryption keys must be changed regularly as part of good security practice, which necessitates that the
database be inaccessible while the data is being decrypted and reencrypted with a new key or keys. This
also adversely affects availability.
While encrypting all or most data in a production database is clearly a problem, there may be
advantages to encrypting data stored off-line. For example, an organization may store backups for a
period of six months to a year off-line, in a remote location. Of course, the first line of protection is to
secure the data in a facility to which access is controlled, a physical measure. However, there may be a
benefit to encrypting this data before it is stored, and since it is not being accessed on-line, performance
need not be a consideration. While Oracle9i does not provide this facility, there are vendors who can
provide such encryption services. Organizations considering this should thoroughly test that data (that is
encrypted before off-line storage) can be decrypted and re-imported successfully before embarking on
large-scale encryption of backup data.
SOLUTIONS FOR STORED DATA ENCRYPTION IN ORACLE
Oracle9i Data Encryption Capabilities
While there are many security threats that encryption cannot address well, it is clear that one can obtain
an additional measure of security by selectively encrypting sensitive data before storage in the database.
Examples of such data could include:
• credit card numbers
• national identity numbers
• passwords for applications whose users are not database users
To address the above needs, Oracle8i (release 8.1.6) introduced a PL/SQL package to encrypt and
decrypt stored data. The package, DBMS_OBFUSCATION_TOOLKIT, is provided in both Standard
Edition and Enterprise Edition Oracle9i. The package is documented in the Oracle9i Supplied PL/SQL
Packages Reference Guide.
The package currently supports bulk data encryption using the Data Encryption Standard (DES)
algorithm, and includes procedures to encrypt (DESEncrypt) and decrypt (DESDecrypt) using DES.
The package does not currently support the Advanced Encryption Standard, the successor algorithm to
DES, though this is planned for a future release of Oracle9i.
Key management is programmatic, that is, the application (or caller of the function) has to supply the
encryption key, which means that the application developer has to find a way of storing and retrieving
keys securely. The relative strengths and weaknesses of various key management techniques are
discussed later in this paper. The DBMS_OBFUSCATION_TOOLKIT package, which can handle both
string and raw data, requires the submission of a 64-bit key. The DES algorithm itself has an effective
key length of 56-bits. The DBMS_OBFUSCATION_TOOLKIT package is granted to PUBLIC by
default.
Oracle has added support for triple DES (3DES) encryption in Oracle8i release 8.1.7. The
DBMS_OBFUSCATION_TOOLKIT package includes additional functions to encrypt and decrypt
using 2-key and 3-key 3DES, in outer cipher-block-chaining mode. They will require key lengths of 128
and 192 bits, respectively.
Oracle8i release 8.1.7 also added support for cryptographic checksumming using the MD5 algorithm
(using the MD5 procedure of the DBMS_OBFUSCATION_TOOLKIT package). Cryptographic
checksums can ensure data integrity; that is, that data has not been tampered with. For example, an
organization concerned that users not change salary values randomly could store a checksum of salary
values in a separate table. Only users changing the salary through an application (e.g. through executing
a procedure) would also have the privileges to insert a checksum for the new salary into a salary audit
table.
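A rough sketch of that salary-checksum idea (the audit table and values are invented; the MD5 call follows the DBMS_OBFUSCATION_TOOLKIT interface added in release 8.1.7):

DECLARE
  v_salary   NUMBER := 45000;
  v_checksum VARCHAR2(16);
BEGIN
  -- Compute a checksum over the value being stored.
  v_checksum := dbms_obfuscation_toolkit.md5(
                  input_string => TO_CHAR(v_salary));
  -- Store it in a separate, more tightly protected audit table;
  -- a later mismatch reveals a change made outside the application.
  INSERT INTO salary_audit (emp_id, salary_checksum)
  VALUES (1001, v_checksum);
END;
/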
Partner Applications
Organizations seeking a more robust implementation of encrypting stored data in the Oracle database
can consider a product offering from Oracle partners such as Protegrity. Protegrity has provided
database encryption capabilities since Oracle8. In the Protegrity solution, key management is automatic.
There are multiple customers using it in production who are happy with the functionality.
Performance may be an issue in the Protegrity product, depending on how much data the customer wants
to encrypt. Since Protegrity has built their solution using the extensibility features of the server, their
“data cartridge” does not run in the server address space, and thus does not perform as well as a native
implementation of encryption in Oracle would. The tradeoff of more automatic key management may
well be worth it for Oracle customers, however.
CHALLENGES OF ENCRYPTION
This paper has already discussed why encryption should not be used to address threats better addressed
by access control mechanisms, and some of the reasons why encryption is not a security cure-all. Even
in cases where encryption can provide additional security, it is not without technical challenges, as
described in the following sections.
Encrypting Indexed Data
Special difficulties arise in handling encrypted data which is indexed. For example, suppose a company
uses national identity number (e.g. U.S. Social Security Number (SSN)) as the employee number for its
employees. The company considers employee numbers to be very sensitive data, and the company
therefore wants to encrypt data in the EMPLOYEE_NUMBER column of the EMPLOYEES table.
Since EMPLOYEE_NUMBER contains unique values, the database designers want to have an index
on it for better performance.
If the DBMS_OBFUSCATION_TOOLKIT (or another mechanism) is used to encrypt data in a
column, then an index on that column will also contain encrypted values. While the index can still be
used for equality checking (i.e. SELECT * FROM emp WHERE employee_number = '123245'), the
index is essentially unusable for other purposes. Oracle therefore recommends that developers not
encrypt indexed data (and in fact, we do not support encrypting indexed data).
In the above example, a company that wants to encrypt social security number (or national identity
number) could create an alternate unique identifier for its employees, create an index on this employee
number, but retain the employee number in clear text. The national identity number could be a separate
column, and the values encrypted therein by an application, which would also handle decryption
appropriately. The national identity number could be obtained when necessary, but would not be used as
a unique number to identify employees.
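A sketch of that layout (the names and column types are invented): the surrogate key stays in clear text and carries the index, while the national identity number sits in its own encrypted column.

CREATE TABLE employees (
  employee_number NUMBER PRIMARY KEY,  -- surrogate key: clear text, indexed
  name            VARCHAR2(80),
  ssn_encrypted   RAW(2000)            -- encrypted/decrypted by the application
);

-- A sequence generates the unique employee numbers cheaply.
CREATE SEQUENCE employee_number_seq;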
Given the privacy issues associated with overuse of national identity numbers (for example, identity
theft), the fact that some allegedly unique national identity numbers have duplicates (US Social Security
Numbers), and the ease with which a sequence can generate a unique number, there are many good
reasons to avoid using national identity numbers as unique IDs.
Binary Large Objects (BLOBS)
Certain datatypes require more work to encrypt. For example, Oracle supports storage of binary large
objects (BLOBs), which lets users store very large objects (e.g. gigabytes) in the database. A BLOB
can be either stored internally as a column, or stored in an external file.
To use the DBMS_OBFUSCATION_TOOLKIT, the user would have to split the data into 32767
character chunks (the maximum that PL/SQL allows) and then would have to encrypt the chunk and
append it to the BLOB. To decrypt, the same procedure would have to be followed in reverse.
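A rough outline of that chunking loop (the table, column, and key are invented; a real implementation must also pad the final chunk to a multiple of 8 bytes for DES):

DECLARE
  src_blob  BLOB;
  dst_blob  BLOB;
  chunk     RAW(32760);          -- under the 32767 limit, and a multiple of 8
  enc_chunk RAW(32760);
  amount    PLS_INTEGER := 32760;
  pos       PLS_INTEGER := 1;
  len       PLS_INTEGER;
BEGIN
  SELECT doc INTO src_blob FROM documents WHERE id = 1 FOR UPDATE;
  len := dbms_lob.getlength(src_blob);
  dbms_lob.createtemporary(dst_blob, TRUE);
  WHILE pos <= len LOOP
    -- Read one chunk of the BLOB ...
    chunk := dbms_lob.substr(src_blob, amount, pos);
    -- ... encrypt it ...
    dbms_obfuscation_toolkit.DESEncrypt(
      input          => chunk,
      key            => utl_raw.cast_to_raw('scottsco'),
      encrypted_data => enc_chunk);
    -- ... and append the result to the output BLOB.
    dbms_lob.writeappend(dst_blob, utl_raw.length(enc_chunk), enc_chunk);
    pos := pos + amount;
  END LOOP;
END;
/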
Key Management
Key management, including both generation of and secure storage of cryptographic keys, is arguably
one of the most important aspects of encryption. If keys are poorly-chosen or stored improperly, then it
makes it far easier for an attacker to break the encryption. Rather than using a “brute force” attack (that
is, cycling through all the possible keys in hopes of finding the correct decryption key), cryptanalysts
often seek weaknesses in the choice of keys, or the way in which keys are stored.
Key generation is an important aspect of encryption. Typically, keys are generated automatically
through a random-number generator, from a cryptographic seed. Provided that the random number
generation is sufficiently strong, this can be a secure form of key generation. However, if random
numbers are not well-formed, but have elements of predictability, the security of the encryption may be
easily compromised. Netscape had a well-publicized vulnerability in their SSL implementation several
years ago when it was discovered that two of the three elements of their random number generation
were not random (e.g. machine serial number and time of day). The encryption key for SSL sessions had
an effective key length of 9 bits, rather than the advertised 40 bits, because of the weakness of the key
generation. An SSL session key could be easily broken, not because the encryption algorithm was
weak, but because the key was easily derived.
To address the issue of secure cryptographic key generation, Oracle9i adds support for a secure random
number generation, the GetKey procedure of the DBMS_OBFUSCATION_TOOLKIT. The GetKey
procedure calls the secure random number generator (RNG) that has previously been certified against
the Federal Information Processing Standard (FIPS)-140 as part of the Oracle Advanced Security FIPS-
140 evaluation.
Developers should not use the DBMS_RANDOM package. The DBMS_RANDOM package generates
pseudo-random numbers; as RFC-1750 states, “The use of pseudo-random processes to generate secret
quantities can result in pseudo-security.”
Key Transmission
If the key is to be passed by the application to the database, then it must be encrypted. Otherwise, a
snooper could grab the key as it is being transmitted. Use of network encryption, such as that provided
by Oracle Advanced Security, will protect all data in transit from modification or interception, including
cryptographic keys.
Key Storage
Key storage is one of the most important, yet difficult, aspects of encryption. To recover data encrypted
with a symmetric key, the key must be accessible to the application or user seeking to decrypt data. The
key needs to be easy enough to retrieve that users can access encrypted data, without significant
performance degradation. The key needs to be secure enough that it’s not easily recoverable by someone
trying to maliciously access encrypted data he is not supposed to see.
The three basic options available to a developer are:
• store the key in the database
• store the key in the operating system
• have the user manage the key
Storing the keys in the database cannot always provide “bullet-proof” security if you are trying to
protect data against the DBA accessing encrypted data (since an all-privileged DBA could access tables
containing encryption keys), but it can often provide quite good security against the casual snooper, or
against someone compromising the database file on the operating system.
As a trivial example, suppose you create a table (EMP) that contains employee data. You want to
encrypt each employee’s Social Security Number (one of the columns). You could encrypt each
employee’s SSN using a key which is stored in a separate column. However, anyone with SELECT
access on the entire table could retrieve the encryption key and decrypt the matching SSN.
While this encryption scheme seems easily defeatable, with a little more effort you can create a solution
that is much harder to break. For example, you could encrypt the SSN using a technique that performs
some additional data transformation on the employee_number before using it to encrypt the SSN,
something as simple as XORing the employee_number with the employee’s birthdate, for example.
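A toy sketch of such a transformation (the function and parameter names are invented; UTL_RAW.BIT_XOR does the XOR):

CREATE OR REPLACE FUNCTION transform_key(
  p_stored_key IN RAW,
  p_emp_number IN VARCHAR2,
  p_birthdate  IN DATE) RETURN RAW AS
BEGIN
  -- XOR the stored value with per-employee data, so the column
  -- by itself never holds the real encryption key.
  RETURN utl_raw.bit_xor(
           p_stored_key,
           utl_raw.cast_to_raw(p_emp_number ||
                               TO_CHAR(p_birthdate, 'YYYYMMDD')));
END;
/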
As additional protection, a PL/SQL package body performing encryption can be “wrapped,” (using the
wrap utility) which obfuscates the code so that the package body cannot be read. For example, putting
the key into a PL/SQL package body and then wrapping it makes the package body — including the
embedded key — unreadable to the DBA and others. A developer could wrap a package body called
KEYMANAGE as follows:
wrap iname=/mydir/keymanage.sql
A developer can then have a function in the package call the DBMS_OBFUSCATION_TOOLKIT with
the key contained in the wrapped package.
While wrapping is not unbreakable, it makes it harder for a snooper to get the key. To make it even
harder, the key could be split up in the package and then have the procedure re-assemble it prior to use.
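A sketch of what such a KEYMANAGE package might look like before wrapping (the contents are invented; only the shape matters):

CREATE OR REPLACE PACKAGE keymanage AS
  FUNCTION get_key RETURN VARCHAR2;
END keymanage;
/
CREATE OR REPLACE PACKAGE BODY keymanage AS
  FUNCTION get_key RETURN VARCHAR2 IS
    -- The key is split into pieces and re-assembled at call time,
    -- so it never appears as one contiguous literal in the source.
    part1 VARCHAR2(4) := 'scot';
    part2 VARCHAR2(4) := 'tsco';
  BEGIN
    RETURN part1 || part2;
  END get_key;
END keymanage;
/

Wrapping keymanage.sql then obfuscates the package body, including these literals.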
Even in cases where a different key is supplied for each encrypted data value, so that the value of the
key is not embedded within a package, wrapping the package that performs key management (i.e. data
transformation or padding) is recommended. Additional information about the Wrap Utility is available
in the PL/SQL User's Guide and Reference.
An alternative would be to have a separate table in which to store the encryption key, and envelope the
call to the keys table with a procedure. The key table can be joined to the data table using a primary
key-foreign key relationship; for example, EMPLOYEE_NUMBER is the primary key in the
EMPLOYEES table, that stores employee information and the encrypted SSN.
EMPLOYEE_NUMBER is a foreign key to the SSN_KEYS table, that stores the encryption keys for
each employee’s SSN. The key stored in the SSN_KEYS table can also be transformed before use (i.e.
through XORing), so the key itself is not stored unencrypted. The procedure itself should be wrapped, to
hide the way in which keys are transformed before use.
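A sketch of that two-table layout (the column types are invented; the table names follow the example in the text, and EMPLOYEES is assumed to exist as sketched earlier):

-- One key row per employee, linked by EMPLOYEE_NUMBER.
CREATE TABLE ssn_keys (
  employee_number NUMBER PRIMARY KEY
                  REFERENCES employees (employee_number),
  ssn_key         RAW(8)  -- stored in transformed form, not the raw key
);

-- Access to the keys goes only through a wrapped procedure that fetches
-- the row and un-transforms the key before use.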
The strengths of this approach are:
• users who have direct table access cannot see the sensitive data unencrypted, nor can they retrieve
the keys to decrypt the data
• access to decrypted data can be controlled through a procedure that selects the (encrypted) data,
retrieves the decryption key from the key table, and transforms it before it can be used to decrypt
the data
• the data transformation algorithm is hidden from casual snooping by wrapping the procedure, which
obfuscates the procedure code
• SELECT access to both the data table and the keys table does not guarantee that the user with this
access can decrypt the data, because the key is transformed before use
The weakness in this approach is that a user who has SELECT access to both the key table and the data
table, who can derive the key transformation algorithm, can break the encryption scheme.
The above approach is not bullet-proof, but it is good enough to protect against easy retrieval of
sensitive information stored in clear (e.g. credit card numbers).
Storing keys in the operating system (e.g. in a flat file) is another option. Oracle9i allows you to make
callouts from PL/SQL, which you could use to retrieve encryption keys. However, if you store keys in
the operating system (O/S) and make callouts to it, then your data is only as secure as the protection on
the O/S. If your primary security concern driving you to encrypt data stored in the database is that the
database can be broken into from the operating system, then storing the keys in the operating system
arguably makes it easier for a hacker to retrieve encrypted data than storing the keys in the database
itself.
Having the user supply the key assumes the user will be responsible with the key. Consider that 40%
of help desk calls are from users who have forgotten passwords, and you can see the risks of having
users manage encryption keys. In all likelihood, users will either forget an encryption key, or write the
key down, which then creates a security weakness. If a user forgets an encryption key or leaves the
company, then your data is unrecoverable.
If you do elect to have user-supplied or user-managed keys, then you need to make sure you are using
network encryption so the key is not passed from client to server in the clear. You also must develop key
archive mechanisms, which is also a difficult security problem. Arguably, key archives or ‘backdoors’
create the security weaknesses that encryption is attempting to address in the first place.
Changing Encryption Keys
Prudent security practice dictates that you periodically change encryption keys. For stored data, this
requires periodically unencrypting the data, and reencrypting it with another well-chosen key. This
would likely have to be done while the data is not being accessed, which creates another challenge,
especially so for a web-based application encrypting credit card numbers, since you do not want to
bring the entire application down while you switch encryption keys.
SUMMARY
The growth of e-business necessitates that some especially sensitive information be stored in encrypted
form. While encryption cannot address all security threats, the selective encryption of stored data can
add to the security of a well-implemented application that uses a well-configured Oracle9i database.
Oracle9i provides native encryption capabilities that enable application developers to provide additional
measures of data security through selective encryption of stored data.
Appendix A
ENCRYPTION EXAMPLE
Following is a sample PL/SQL program to encrypt data. Segments of the code are numbered and contain
narrative text explaining portions of the code.
DECLARE
  input_string     VARCHAR2(16) := 'tigertigertigert';
  key_string       VARCHAR2(8)  := 'scottsco';
  encrypted_string VARCHAR2(2048);
  decrypted_string VARCHAR2(2048);
  error_in_input_buffer_length EXCEPTION;
  PRAGMA EXCEPTION_INIT(error_in_input_buffer_length, -28232);
  INPUT_BUFFER_LENGTH_ERR_MSG VARCHAR2(100) :=
    '*** DES INPUT BUFFER NOT A MULTIPLE OF 8 BYTES ***';

1. Test string data encryption and decryption -- the interface for encrypting raw data is similar.

BEGIN
  dbms_output.put_line('> ========= BEGIN TEST =========');
  dbms_output.put_line('> Input string     : ' || input_string);
  BEGIN
    -- Encrypt the input string with the supplied key string.
    dbms_obfuscation_toolkit.DESEncrypt(
      input_string     => input_string,
      key_string       => key_string,
      encrypted_string => encrypted_string);
    dbms_output.put_line('> Encrypted string : ' || encrypted_string);
    -- Decrypt it again with the same key string.
    dbms_obfuscation_toolkit.DESDecrypt(
      input_string     => encrypted_string,
      key_string       => key_string,
      decrypted_string => decrypted_string);
    dbms_output.put_line('> Decrypted output : ' || decrypted_string);
    dbms_output.put_line('> ');
    IF input_string = decrypted_string THEN
      dbms_output.put_line('> DES encryption and decryption successful');
    END IF;
  EXCEPTION
    WHEN error_in_input_buffer_length THEN
      dbms_output.put_line('> ' || INPUT_BUFFER_LENGTH_ERR_MSG);
  END;
END;
/

OODBMS - Object Oriented Analysis and Design (OOAD)


January-2004 [29]
1.c) State the difference between persistent and transient objects. How is persistence handled in Object Oriented (OO) database systems? [4]


A) Transient objects are temporary in nature: they exist only while the program that created them is running, and they disappear when it terminates. A persistent object, in contrast, outlives the program that created it and remains available until it is explicitly deleted.
2. Transient object: cannot be serialized; its value is not persistent and is stored on the heap.
Persistent object: can be serialized; its value is persistent, as the name implies, and is stored in secondary storage (the database).

3. A superclass PersistentObject encapsulates the mechanisms for an object of any class to store itself in, or retrieve itself from a database. This superclass implements operations to get an object by object identifier, store, delete and update objects and to iterate through a set of objects (write and read operations).
Each persistent class could be responsible for its own storage
For each business class that needs to be persistent, there will be an associated database broker class.
The broker class provides the mechanisms to materialize objects from the database and dematerialize them back to the database
The database broker object is responsible for: “materialising” objects, “dematerialising” objects,
caching objects.
Application classes are insulated from storage. Allows migration of storage sub-systems, e.g., implement on existing relational system.
Application programs unaffected by change
Objects can be cached for efficiency. The cache is a collection maintained by the database broker.
When an object is requested, the cache is searched first. If the object sought is not in the cache it is materialised by the database broker from the database
To manage transactions, we need to keep track of all changes made by a transaction, in case the transaction is aborted before it completes execution (and commits all its changes.)
Multiple caches can be used for transaction management:
new clean cache: newly created objects
new dirty cache: newly created objects that have been amended
new delete objects: newly created objects that have been deleted
old clean cache: objects retrieved from the database
old dirty cache: retrieved objects that have been amended
old delete objects: retrieved objects that have been deleted

d) How are relationships represented in the Object Oriented (OO) data model? Discuss also the importance of inverse references in the Object Oriented (OO) data model. [4]

``For data to be useful in providing information, they need to be organized so that they can be processed effectively. In data modelling we try to organize data so that:

they represent as closely as possible the real world
they are amenable to representation by computers.''
Components of a data model
Data models (Figure 2.1) have been introduced to help software developers to achieve these two potentially conflicting requirements. As stated by E.F. Codd in his 1981 Turing Award Lecture,
``A data model is a combination of at least three components:


a set of data structure types,
a collection of operators which can be applied to any instance of the data types, to retrieve, derive or modify data from any part of those structures in any combination desired,
a set of integrity rules, which define the consistent database states - these rules are general in the sense that they apply to any database using the model.''
Design advantage
A data model provides the designer and the programmer with substantial advantages: the detailed data design makes use of a few building blocks, and the manipulation of data is performed by a few operators that are the same for all data structures.
The Entity-Relationship (ER) model views the world as consisting of entities and relationships between them, where

an entity is a ``thing'' that can be distinctly identified, e.g. a chamber, a wire, a point, a track, a vertex, etc.,
a relationship is an association between entities, e.g. point belongs to track is an association between a point and a track.
Entity sets
The information about an entity is expressed by several (Attribute, Value) pairs. For example a single point could be:

( X, 0.56 )
( Y, 10.75 )
( Z, 29.14 )

where X is the attribute and 0.56 is its value, etc. An Entity Set (ESet) is a set of entities defined by the same attributes. Figure 2.2 shows an example of an ESet called Point with four entities and with attributes X, Y, Z.
The complete data model for a high energy physics experiment is large, containing many entity sets. In designing such a model it is useful to display it pictorially, particularly the entity sets and the relationships between them. Such a picture is called an Entity Relationship Diagram.

Small ER diagram
Figure 2.3 presents a small section of such an ER diagram showing the entity sets Point and Track. The ESet Point is the same as in Figure 2.2. Track has 8 attributes: the 3 components of the momentum and their errors, and the rank and file coordinates (Rhit and Fhit) of the point at which the track strikes the calorimeter.
The relationship between these two ESets is expressed by the arrow joining the two entity sets: the double headed arrow shows a many-to-one relationship (many points may lie on one track), while the vertical bar shows that not all points are related to a track (some points may not lie on a track).


Representing one-to-one relationships and partiality
A one-to-one relationship is represented by a single-headed arrow. Partiality, i.e. having at least one entity in an ESet not related to another entity via the relationship, is represented by a vertical bar on the end of the arrow corresponding to the ESet in question.

Types of relationships
The cardinality of an ESet with respect to a relationship is the range of the number of entities that a single entity in the ESet can be related to. The lower bound of the range is called the minimum cardinality and the upper bound is called the maximum cardinality. These will be referred to as CardMin and CardMax respectively.

Examples of cardinalities
In ER diagrams, only 3 cases are generally of interest: 0, 1 or * (many). Example: Consider ESets Track and Cluster.

The cardinality of Track with respect to the relationship is [0,1]: CardMin is 0 because some tracks do not contribute to any cluster. CardMax is 1 because a given track may only contribute to one cluster.
The cardinality of Cluster with respect to the relationship is [0,*]: CardMin is 0 because some clusters are not associated to known tracks. CardMax is * because a cluster generally results from several tracks.
Possible cardinality values
The possible cardinalities of an ESet with respect to a relationship are [0,1], [1,1], [0,*] and [1,*]. Since ADAMO allows one-to-one and many-to-one but neither one-to-many nor many-to-many relationships only one of any two ESets may have CardMax = * with respect to the same relationship. This implies 8 possible types of relationships, which are shown in Figure 2.4.

Many-to-many relationships
By symmetry, a one-to-many relationship may always be modelled as a many-to-one relationship. Sometimes relationships can be many-to-many. An example would be if a single point could lie on more than one track, e.g. in a region where tracks cross. Such many-to-many relationships are not explicitly supported in ADAMO but must be broken down into two many-to-one relationships and an intermediate entity set.

Intermediate ESet
The intermediate ESet need not have any attributes; its only purpose is to support the two relationships.

Example
Figure 2.8 shows the ER diagram and corresponding tables of a simple many-to-many relationship. The case modelled is that the point with ID 5 is also an intersection point of the tracks with IDs 1 and 2. Note that since the point with ID 4 does not lie on any track, there is no reference to it in the table TrackPt.
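The same decomposition can be sketched in code. The following is a minimal Java illustration (not ADAMO DDL; class and field names are illustrative) of an intermediate link class carrying only the two many-to-one references:

class Point { double x, y, z; }

class Track { /* momentum components, their errors, Rhit, Fhit */ }

// Intermediate entity set: no attributes of its own, it exists only
// to support the two many-to-one relationships.
class TrackPt {
    Track track;   // many TrackPt entries may refer to the same Track
    Point point;   // many TrackPt entries may refer to the same Point
    TrackPt(Track t, Point p) { track = t; point = p; }
}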
A complete ER diagram
Shown in Figure 2.5 is the complete ER diagram of the example used in this guide. Included in the model is a calorimeter consisting of cells that are capable of making energy measurements. An algorithm may group adjacent cells with energy deposits above given thresholds into clusters. Tracks may originate from secondary vertices. More than one track may hit a given cluster but a track need not produce a cluster (it could miss the calorimeter). A track may not produce more than one cluster and a cluster need not have a track pointing to it (a neutral cluster).

Attribute of a relationship
A further algorithm associates tracks and clusters according to the calculated quantity Chisq which is a measure of how well the two match. Thus Chisq is an attribute of the relationship between Track and Cluster and not of either one individually: it is shown hanging from the relationship arrow.

Convenience considerations
The attributes Rhit and Fhit of Track are really attributes of a relationship between Track and some other ESet, called perhaps Calorimeter. While this is strictly true, it illustrates the point that we frequently produce a model that is convenient to use rather than one that is more logically correct.

Representing a key of an entity set
Finally, the * characters on the attributes Rank and File in the ESet Cell indicate that a key is to be defined on them. ESet keys are used to identify individual entities in an ESet. The Data Definition Language (DDL) description of a data structure is a number of plain-text DDL files written by the programmer. Experience suggests that it is wise to divide the model into sections known as subschemas (Section 2.6) of approximately 10 ESets each, and to have a separate DDL file for each subschema. This way, the ER diagram corresponding to a subschema will fit nicely on an A4 sheet of paper. The whole is known as the schema.

Calorimeter subschema
The subschema corresponding to the ER diagram of Section 2.5 is stored in the file calori.ddl and is shown in its entirety in Figure 3.2 to Figure 3.4. It contains the definitions of the entity sets, attributes, relationship sets, and dataflows that constitute the model. The order of definition is not important.

Definition of ESets
Each box in the ER diagram corresponds to an entry in DEFINE ESET, e.g. Cluster. The definition of Cluster's four attributes is local to Cluster. Note that, in addition to the information contained in the ER diagram, the DDL also specifies data types and ranges of validity and allows the inclusion of comments.

Definition of attributes types
Attribute types may also be defined globally in a DEFINE ATTRIBUTE section. In this example, Coord is such an attribute type. The attributes X, Y and Z in both ESets Point and Vertex are then each defined as Coord.

Definition of identifier keys
The identifier Number in the ESet Cell is defined in the KEY ... END KEY block and it corresponds to the *s on the attributes in the ER diagram. This enables any entity in Cell to be uniquely identified by the values of the two attributes Rank and File. The key Number is also used to create an index (Figure 7.9) on Rank and File which will allow the table Cell to be accessed according to the values of these two attributes.

f) What Is versioning? Why Is It Important? [4]
Information, in whatever form (documents, spreadsheets, web pages, or source code), changes over time. Tracking the changes of a document over time is what a versioning system does.
In particular, a versioning system records who made a specific change, allows backing out undesirable changes, gives the ability to record why a specific change was made, and makes it possible to know what the document contents were at a specific point in time.
Versioning content is essentially archiving important document versions, in order to have control over changes over time.
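As a concrete illustration, here is a minimal Java sketch of the record a versioning system keeps per change (who, when, why, and the content at that point in time); all names are illustrative, not any particular system's API:

import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

class Version {
    final int number;             // sequential version number
    final String author;          // who made the change
    final String message;         // why the change was made
    final String content;         // document contents at this point in time
    final Instant when = Instant.now();
    Version(int number, String author, String message, String content) {
        this.number = number; this.author = author;
        this.message = message; this.content = content;
    }
}

class VersionedDocument {
    private final List<Version> history = new ArrayList<>();

    void commit(String author, String message, String content) {
        history.add(new Version(history.size() + 1, author, message, content));
    }

    // Backing out an undesirable change: re-commit an earlier version's content.
    void revertTo(int versionNumber, String author) {
        commit(author, "revert to v" + versionNumber,
               history.get(versionNumber - 1).content);
    }
}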

4.
c) A car rental company maintains a vehicle database for all vehicles in its current fleet. For all vehicles, it includes the vehicle identification number, license number, manufacturer, model, date of purchase and color. Special data are included for certain types of vehicles:
Trucks: cargo capacity
Sports cars: horsepower, rental age requirement
Vans: number of passengers
Off-road vehicles: ground clearance, drivetrain (four- or two-wheel drive)
Construct an object-oriented database schema definition for this database. Use inheritance wherever appropriate. [8]
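No worked answer accompanies this question in the compilation. As an illustration, here is a minimal Java sketch of one possible schema, using inheritance for the vehicle subtypes (all names are illustrative):

import java.time.LocalDate;

class Vehicle {                          // common data for all vehicles
    String vin;                          // vehicle identification number
    String licenseNumber;
    String manufacturer;
    String model;
    LocalDate dateOfPurchase;
    String color;
}

class Truck extends Vehicle {
    double cargoCapacity;
}

class SportsCar extends Vehicle {
    int horsepower;
    int rentalAgeRequirement;
}

class Van extends Vehicle {
    int numberOfPassengers;
}

class OffRoadVehicle extends Vehicle {
    double groundClearance;
    boolean fourWheelDrive;              // drivetrain: four- or two-wheel drive
}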
5.
b) Discuss with an example Chen-Chen methodology for object-oriented design. [9]

Chen's original method is the basis for many writings on ERDs. While the traditional aspects of entities and relationships are represented as boxes and lines (respectively), there are a number of features unique to his method:
1. The cloud symbol represents aggregations of smaller ER diagrams, in the context of a logical design (as opposed to a physical design).
2. The double-box represents a weak entity, which is dependent on its parent entity. Note: Visio does not support this feature.
3. A diamond symbol represents a type of relationship.
4. Relationship adornments are added adjacent to the lines to indicate connectivity (1, M, N) and cardinality, e.g. "(0,3)".
5. The data-structure diagram (an alternate form of the ERD) uses arrows to show 1:M relationships.
6. Circles represent the attributes of an entity, although Visio labels this icon as value.
7. A human figure represents a user icon.


July-2004 [32]
1.
a) Under what circumstances is a relationship its own inverse? Explain. [4]
An inverse or negative relationship is a mathematical relationship in which one variable decreases as another increases. For example, there is an inverse relationship between education and unemployment: as education increases, the rate of unemployment decreases.

f) Class diagrams developed using Booch's methodology can serve as the functional specification of a system. Justify whether this statement is true or false. [4]

The Booch software engineering methodology provides an object-oriented development in the analysis and design phases. The analysis phase is split into steps. The first step is to establish the requirements from the customer perspective. This analysis step generates a high-level description of the system's function and structure. The second step is a domain analysis, accomplished by defining object classes: their attributes, inheritance, and methods. State diagrams for the objects are then established. The analysis phase is completed with a validation step. The analysis phase iterates between the customer's requirements step, the domain analysis step, and the validation step until consistency is reached.

Once the analysis phase is completed, the Booch software engineering methodology develops the architecture in the design phase. The design phase is iterative. A logic design is mapped to a physical design where details of execution threads, processes, performance, location, data types, data structures, visibility, and distribution are established. A prototype is created and tested. The process iterates between the logical design, physical design, prototypes, and testing.

The Booch software engineering methodology is sequential in the sense that the analysis phase is completed and then the design phase is completed. The methodology is cyclical in the sense that each phase is composed of smaller cyclical steps. There is no explicit priority setting nor a non-monotonic control mechanism. The Booch methodology concentrates on the analysis and design phase and does not consider the implementation or the testing phase in much detail.

5.b) Explain the differences between triggers and integrity constraints. [6]
A declarative integrity constraint is a statement about the database that is always true. A constraint applies to existing data in the table and any statement that manipulates the table.

A trigger does not apply to data loaded before the definition of the trigger; therefore, it does not guarantee that all data in a table conforms to the rules established by an associated trigger.

A trigger can be used to enforce transitional constraints, whereas a declarative integrity constraint cannot.
A trigger is a piece of code that is executed automatically upon the occurrence of an event, and it is not necessarily meant for enforcing integrity. For example, you may want to save information about a user as soon as he makes an entry into a particular table; this can be achieved by means of a trigger.

An integrity constraint defines basic rules for a table's columns. It is always enforced, both for the data that was in the table before the rule was added and for the new data added to the table afterwards.
An integrity constraint defines a business rule for a table column which Oracle enforces automatically. Integrity constraints include NOT NULL, UNIQUE, CHECK, PRIMARY KEY and FOREIGN KEY.


A database trigger is a procedure, written in PL/SQL, that runs implicitly when data is modified or when certain user or system actions occur. Triggers are database objects stored in the database and are fired by events such as INSERT or UPDATE statements; if the triggering statement violates an integrity rule, Oracle must roll back the statement and return an error. Triggers can be defined to fire BEFORE or AFTER the triggering statement, at row or statement level.
Integrity constraints are proactive: they check a column's existing data before they can be applied. For example, a primary key cannot be created on a column that already contains duplicate or null values.
Triggers are reactive: they fire only when a DML statement occurs. A trigger starts working only once it is placed on a table, and it never considers what happened before its arrival.

Both are used to enforce business rules.

January-2005 [50]

1. Briefly explain the following:
b) How do IS-A and HAS-A relationships help in developing object-oriented design? [4]
In database design and object-oriented program architecture, has-a is a relationship where one object (often called the composited object) "belongs" to (is a part or member of) another object (called the composite type), and behaves according to the rules of ownership. Multiple has-a relationships combine to form a possessive hierarchy. This is contrasted with an is-a relationship, which constitutes a different kind of hierarchy (subtyping). Whether the most logical relationship between an object and its subordinate is has-a or is-a is not always clear; confusion over such decisions has necessitated the creation of these metalinguistic terms. A good example of the has-a relationship is containers in the C++ STL.
Inheritance defines a is-a relationship
Laptop is-a Computer
Circle is-a Shape
Shape is-a Object
One misuse of inheritance is confusing the has-a relationship with the is-a relationship.
The has-a relationship means that one class has the second class as an attribute e.g. Circle class has-a Point instance variable, center. Point is another class.

In knowledge representation and object-oriented programming and design, is-a is a relationship where one class D is a subclass of another class B (and so B is a superclass of D).
In other words "D is a B" usually means that concept D is a specialization of concept B, and concept B is a generalization of concept D.

For instance, a "fruit" is a generalization of "apple", "orange", "mango" and many others. One can say that an apple is a fruit.

In object-oriented programming the is-a relationship arises in the context of inheritance concept. One can say that "apple" may inherit all the properties common to all fruits, such as being a fleshy container for the seed of a plant.

The is-a relationship is contrasted with the has-a relationship which constitutes a different kind of hierarchy.

When designing a model (e.g., a computer program) of the real-world relationship between an object and its subordinate, a common error is confusing the has-a and is-a relations.
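The Shape/Circle/Point examples above can be made concrete with a short Java sketch (illustrative only):

class Point { double x, y; }

class Shape { }                          // every Shape is-a Object implicitly

class Circle extends Shape {             // is-a: Circle is a Shape (inheritance)
    private Point center = new Point();  // has-a: Circle has a Point (composition)
    private double radius;
}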

2.b) Declare a class for a Rational Number ADT. (A rational number is represented by P/Q where P and Q are integers). [6]

ADT: Rational Number
Concrete implementation II


typedef struct {
    int ar[2];                /* ar[0] = numerator P, ar[1] = denominator Q */
} RATIONAL;

/* Greatest common divisor; needed by reduce(). */
static int gcd (int x, int y) {
    while (y != 0) {
        int t = x % y;
        x = y;
        y = t;
    }
    return x;
}

RATIONAL makerational (int x, int y) {
    RATIONAL r;
    r.ar[0] = x;
    r.ar[1] = y;
    return r;
}

/* Reduce P/Q to lowest terms. */
RATIONAL reduce (RATIONAL r) {
    int g = gcd (r.ar[0], r.ar[1]);
    r.ar[0] /= g;
    r.ar[1] /= g;
    return r;
}

int numerator (RATIONAL r) {
    return r.ar[0];
}

int denominator (RATIONAL r) {
    return r.ar[1];
}

The List ADT

A list: of size N.
Special list of size 0: an empty list.
Operations:
makenull () : returns an empty list
makelist (elem) : makes a list containing a single element
printlist (list)
search(elem, list) : searches whether a key is in the list
insert (elem, list)
delete (elem, list)
findKth (list)

Array Implementation of List

typedef int ETYPE;

typedef struct {
    ETYPE elements[MAXS];     /* MAXS is the maximum list size */
    int size;
} LIST;

LIST makenull ();             /* returns an empty list */
LIST makeList (ETYPE);        /* makes a list containing a single element */
void printList (LIST);
int IsEmpty (LIST);
int search (ETYPE, LIST);     /* is the key in the list? */
void delete (ETYPE, LIST *);
void insert (ETYPE, LIST *);

Complex Number ADT

typedef struct {
    float real;
    float imag;
} COMPLEX;

COMPLEX makecomplex (float, float);
COMPLEX addc (COMPLEX, COMPLEX);
COMPLEX subc (COMPLEX, COMPLEX);
COMPLEX multc (COMPLEX, COMPLEX);
COMPLEX divc (COMPLEX, COMPLEX);

SET ADT

Interface functions (1): functional style, operations return a new SET

SET makenullset ();
int member (ETYPE, SET);
SET adjoin (ETYPE, SET);
SET set_union (SET, SET);        /* "union" is a C keyword, hence set_union */
SET intersection (SET, SET);
void printset (SET);

Interface functions (2): imperative style, results via SET * output parameters

SET makenullset ();
int member (ETYPE, SET);
void adjoin (ETYPE, SET *);
void set_union (SET, SET, SET *);
void intersection (SET, SET, SET *);
void printset (SET);

Concrete implementation of SET ADT


typedef struct {
    ETYPE elem[MAX];
    int size;
} SET;

Implementation 1: sorted array

adjoin: sorted insert
member: binary search
delete: ?
union: merge two sorted arrays
intersection: ?

Concrete implementation of SET ADT


typedef struct {
    ETYPE elem[MAX];
    int size;
} SET;

Implementation 2: unsorted array

Keep the elements in the array unsorted.

adjoin: insert at the end
member: search till found or till the end
delete: go through the array sequentially until the element is found or the end is reached, then left-shift the array
union, intersection: ?

3. Consider the following details of a Library Management System (LMS), which is required by an academic institute to automate book/periodical issuing activities. Carry out the jobs (a)-(b) listed below:
- Library books and periodicals are issued to three types of members: faculty, student and staff members.
- All members have a name (string), an address (string) and an Id (integer). In addition, faculty-members have a few research interests (an array of strings) and an office telephone number (integer); student-members have an academic program number (integer) and staff-members have an employee number (integer).
- You may further assume that a faculty member can issue a book for 4 months, a staff member for 2 months and a student for 15 days. The issuing period for a periodical for a faculty member is 7 days; periodicals are not issued to staff and students.

a) Identify classes and their relationships and represent them using notations of Booch methods. [9]
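The answer below reviews Booch notation in general. As a starting point for job (a), here is a minimal Java sketch of the classes and inheritance relationships the problem statement implies (the names and the encoding of issue periods as days are illustrative):

abstract class Member {                        // common to all three member types
    String name;
    String address;
    int id;
    abstract int bookIssueDays();              // issue period differs per member type
}

class FacultyMember extends Member {
    String[] researchInterests;
    int officeTelephone;
    int bookIssueDays() { return 120; }        // about 4 months
    int periodicalIssueDays() { return 7; }    // only faculty can issue periodicals
}

class StudentMember extends Member {
    int academicProgramNumber;
    int bookIssueDays() { return 15; }
}

class StaffMember extends Member {
    int employeeNumber;
    int bookIssueDays() { return 60; }         // about 2 months
}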
HOW TO DRAW BOOCH DIAGRAMS


What is Booch?
Grady Booch's Object-Oriented Design (OOD), also known as Object-Oriented Analysis and Design (OOAD), is a precursor to the Unified Modeling Language (UML). The Booch method includes six types of diagrams: class, object, state transition, interaction, module, and process.
Booch's Static Diagrams
Booch's class and object diagrams differentiate this methodology (at least in notation) from similar object oriented systems.

A Booch class diagram
Booch's Class and Object Diagram Notations


Classes
Illustrate classes using a cloud shape with a dashed border.


Objects
Draw objects using a cloud shape with a solid border.
More Booch's Class and Object Diagram Notations


Class Adornments
Use adornments to provide additional information about a class. An adornment is drawn as a triangle attached to the class symbol; place a letter inside the triangle to represent the following:
A - Abstract. An abstract class cannot be instantiated because it represents a wide variety of object classes and does not represent any one of them fully. For example, mammal could be thought of as an abstract class.
F - Friend. A friend class allows access to the nonpublic functions of other classes.
S - Static. A static class provides data.
V - Virtual. A virtual class is a shared base class, the most generalized class in a system.


Metaclass
A metaclass is a class whose instances are also classes.

Class Categories
A class category represents a cluster of similar classes. Illustrate a class category by drawing a rectangle with two compartments.


Class Templates
Draw a template using the standard class symbol attached to a box with a dashed outline. List template parameters or formal arguments in this box. When you draw a class created from a template, replace the dashed border with a solid one.


Class Utilities
Class utilities describe a group of non-member functions or subprograms. Illustrate a class utility with a shadowed cloud.
More Booch's Class and Object Diagram Notations


Class Visibility
Visibility markers signify who can access the information contained within a class. Public visibility allows an attribute or an operation to be viewed by any other class. Private visibility means that the attribute or the operation is only accessible by the class itself and its friends. Protected visibility makes an attribute or operation visible only to friend classes and classes that inherit it. Implementation restricts the accessibility of an attribute to the class only (not even friends have access to these attributes).
Place visibility markers next to the attribute they refer to.


Object Visibility
Draw a visibility marker on a link to signify the relationship between the connected objects. These markers can be:
G - Global
P - Parameter
F - Field
L - Local

Relationships
Indicate relationships between objects using lines and arrows.

The notation defines a distinct line symbol (shown as images in the original chart) for each kind of relationship: aggregation (has), aggregation by value, aggregation by reference, uses, instantiates (compatible type), instantiates (new type), inherits (compatible type), inherits (new type), metaclass, and undefined.
Booch's Dynamic Diagrams
Use state transition and interaction diagrams to illustrate the dynamic nature of an application. Below is a table that lists what each of the dynamic Booch diagrams corresponds to in UML.
Booch (OOD) -> Unified Modeling Language (UML)
State transition diagram -> Statechart diagram
Interaction diagram -> Sequence diagram



Booch's Dynamic Diagram Notations

State
States represent situations during the life of an object. Draw a Booch state symbol using a rectangle with rounded corners and two compartments. Use the oval-shaped H symbol to indicate the most recently visited state.
Booch's Physical Diagrams
Draw module and process diagrams to illustrate the physical design of your application. Module diagrams correspond to UML's component diagrams and process diagrams correspond to UML's deployment diagrams.

Processor
Processors are hardware pieces that execute programs. Use devices and processors to draw process diagrams.

Device
Use a device notation to illustrate hardware that is not capable of executing programs.

Modules
A module is a unit of code that serves as a building block for the physical structure of a system. The specification module contains the declaration of entities and the body module contains the definition of entities.

Main Module
This module is the main program from which the system is activated.

Subsystem
A subsystem refers to a collection of related modules.



5.c) Describe the main strategies that can be used to create persistent objects. [6]

Working with persistent objects (Create, Update, Delete)

Working with persistent object state
To work with persistent object state, a Genome Context has to be instantiated by the client. A persistent object is represented by a Genome proxy, which is independent of the Context. Switching the Context means switching state for the Genome proxy. The proxy itself remains valid after a Context switch or even without any associated Context. However, as long as there is no Context instantiated, the proxy cannot retrieve any state of the referred object.
A Genome proxy holds the object identity values (primary key values) of the referred persistent object. The proxy implicitly looks up the current Context to set or retrieve object state. The Context maintains an identity map to ensure that each object instance state is maintained only once within the Context. The Context also performs automatic lazy loading when the object state does not exist yet. Finally, the Context tracks all updates to persistent objects and commits them to the database when the Context is committed.
Current Context, Context Stack and Context Stack Binding
As previously mentioned, the Genome proxy locates the current Context to work with persistent state in the AppDomain. Genome maintains a Context Stack for storing the current Context and all previously activated Contexts. The client can push and pop Contexts to the Context stack as needed, but only the top element of the stack is used as the current Context by Genome proxies.
Genome implements a strategy pattern to locate the Context Stack providing the current Context to the Genome proxies. By default, Genome implements a context stack binding strategy that associates the Context stack with the current thread, providing an independent Context Stack (and so a separate current Context) for each thread. For other architectural scenarios, different context stack binding strategies might be useful. For example, the Genome Extensions for Web Applications provide a strategy to bind the Context Stack to the HttpContext of the current request that provides a Context Stack for every request. The NorthwindDataBinding sample implements a simple strategy which binds the Context Stack to the AppDomain, providing a single AppDomain-wide Context Stack.
Creating and disposing a Context
The Context might hold references to scarce resources, such as database transactions, and thus should be explicitly disposed when no longer used; hence it implements the IDisposable interface.
Genome provides different factory methods for instantiating Contexts, which can configure Context behaviour for caching, locking and transactional consistency. A Context has to be pushed to the Context stack to become the current Context.
After working with a Context, it has to be popped from the Context stack to make the previous Context current again. Although not mandatory, in most cases the Context is also disposed when popped from the Context stack. If it is to be reused later on (e.g. for caching state from the database retrieved by the Context), it can be kept and pushed to the Context Stack again later.
Since the Context is generally pushed to the Context stack when instantiated and disposed when popped from the Context stack, the Context.Push method returns a context stack cleaner object that implements IDisposable to pop the Context from the Context stack and optionally also disposes the Context. In C# the using statement can be utilised to ensure a Context is popped from the stack and disposed after working with it:
using(Context.Push(LocalContext.Create()))
{
// work with Context here
}
Context creation and activation with the C# using statement and the context stack cleaner
LocalContext.Create() is one of the Genome factory methods for creating a Context. Context.Push() is a static method of Context to push a given Genome Context to the Context stack.
The code above can also be expressed explicitly:
Context ctx;
ctx = LocalContext.Create();
Context.Push(ctx);
try
{
// work with Context here
}

finally
{
Context.Pop();
ctx.Dispose();
}
Explicit context creation and disposal protected with a try/finally to ensure the Context is popped off the stack and disposed
Creating persistent objects
To create persistent objects, a factory method of the DataDomain has to be used. The factory method takes an arbitrary list of parameters to call the appropriate constructor of the class.
The previously implemented Product class specifies a constructor that takes a string for the name of the product and a decimal for the price of the product. A new product can thus be instantiated like this:
Product p;

//Create
using (Context.Push(LocalContext.Create()))
{
p = dd.New<Product>("x", 12.00m);
Context.CommitCurrent();
}
Code snippet showing how to create new persistent objects
The product instance is not written to the database until the Context is committed. Disposing a Context that has not been committed discards all recorded changes to persistent objects.
When the product is created, the constructor of the Product class is executed. Please note that the constructor of Product is only executed when the object is created in the DataDomain. When the product is loaded from the DataDomain with a query, the OnLoad() event of the object is called instead, if it implements the ILoadCallback interface.
Updating persistent objects
Persistent objects can be updated by simply setting the desired property values of the object proxy. When the Context is committed, the changes are written to the database. Otherwise, the changes will be discarded.
We will reuse the previously initialised Genome proxy pointing to the product instance we have created. As we have disposed the Context already, we now create a new Context to perform the update:
//Update
using (Context.Push(LocalContext.Create()))
{
p.Name = "y";
Context.CommitCurrent();
}
Code snippet showing how to update persistent objects
Note that when you set p.Name, the Genome proxy tries to look up the object state of p in the current Context. As the Context was just instantiated, the state of p has not been loaded yet and a lazy load of p occurs.
When the Context is committed, an update statement that sets the Name property of p is sent to the database.
Deleting persistent objects
Persistent objects can be deleted using the Context.DeleteCurrent() method (same as Context.Current.Delete()). We will again reuse the Genome proxy pointing to p together with a new Context.
//Delete
using (Context.Push(LocalContext.Create()))
{
Context.DeleteCurrent(p);
Context.CommitCurrent();
}
Code snippet showing how to delete persistent objects
Running Sample_CUD()
After putting everything together, the sample looks like this:
static void Sample_CUD()
{
Product p;

//Create
using (Context.Push(LocalContext.Create()))
{
p = Helper.dd.New<Product>("x", 12.00m);
Context.CommitCurrent();
}

//Update
using (Context.Push(LocalContext.Create()))
{
p.Name = "y";
Context.CommitCurrent();
}

//Delete
using (Context.Push(LocalContext.Create()))
{
Context.DeleteCurrent(p);
Context.CommitCurrent();
}
}
Sample_CUD() of Client/Program.cs
After replacing the call to InitialiseDatabase() with Sample_CUD() in Main(), the Console shows the following output:

Console output of Client running Sample_CUD()
Note that the product has to be loaded up again for each new Context.
Let us modify the sample to use a single Context for all operations that is only committed in the end:
static void Sample_CUD_SingleContext()
{
Product p;

using(Context.Push(LocalContext.Create()))
{

//Create
p = Helper.dd.New<Product>("x", 12.00m);

//Update
p.Name = "y";

//Delete
Context.DeleteCurrent(p);

Context.CommitCurrent();
}
}
Sample_CUD_SingleContext() of Client/Program.cs
Because Genome tries to optimise updates sent by the Context to the database, running Sample_CUD_SingleContext() from Main() yields the following output on the Console:

Console output of Client running Sample_CUD_SingleContext()
The “Genome basics” section, covered in Step1, ends here.


d) What is check pointing? Why is it needed? [4]
Checkpointing modified buffers is a critical aspect of buffer management because it reduces crash recovery times.
Scenario: a batch program reads an input file and inserts/updates/deletes rows in DB2 tables. If it abends before the end of the job, is it possible to tell how many records were processed? Does the job need to be restarted from the beginning, or were any transactions already committed?
Assume that commit logic was not coded for a large batch job that processes millions of records. If an abend occurs, all database updates are rolled back and the job can be resubmitted from the beginning.
If the abend occurs near the end of the process, the rollback of all the updates is still performed. Also, DB2 will maintain a large number of locks for a long period of time, reducing concurrency in the system; in fact, the program may abend if it tries to acquire more than the installation-defined maximum number of locks.
A program without commit logic also causes excessive memory consumption and prevents workload balancing. Such applications cause the coupling facility to be overcommitted with a large number of locks and huge storage requirements. To avoid these difficulties, commit-restart logic is recommended for all batch programs performing database transactions. This involves setting up a batch-restart control table (CHECKPOINT_RESTART) to store the last input record processed and other control information.
Checkpoint/restart logic: to allow an interrupted program to be restarted from the last unit of recovery (commit), or at a point other than the beginning of the program, we basically need:
• a place to store the details (the checkpoint-commit record) pertaining to the current execution of the program, such as various counts (numbers of inserts/deletes/updates/selects), the number of records processed, processing dates, and other details needed by the program after a restart;
• reliable file re-positioning logic with minimal changes to the existing proc/JCL;
• the flexibility to modify the commit frequency without changing the program code.
Where can we store this checkpoint-commit record? The checkpoint-commit record, the commit frequency and other relevant information can be stored in a DB2 table.
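To make the checkpoint/restart idea concrete, here is a minimal Java/JDBC sketch. It assumes a control table CHECKPOINT_RESTART(JOB_NAME, LAST_RECORD) as described above; the class name, table layout and commit frequency are all illustrative:

import java.sql.*;

public class BatchWithRestart {
    static final int COMMIT_FREQUENCY = 1000;   // records per unit of recovery

    public static void run(Connection con, String jobName,
                           Iterable<String> input) throws SQLException {
        con.setAutoCommit(false);
        long last = readCheckpoint(con, jobName);   // 0 on a first run
        long n = 0;
        for (String record : input) {
            n++;
            if (n <= last) continue;                // re-position input after a restart
            process(con, record);                   // the job's inserts/updates/deletes
            if (n % COMMIT_FREQUENCY == 0) {
                writeCheckpoint(con, jobName, n);   // store last record processed
                con.commit();                       // end the unit of recovery, free locks
            }
        }
        writeCheckpoint(con, jobName, n);
        con.commit();
    }

    static long readCheckpoint(Connection con, String job) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT last_record FROM checkpoint_restart WHERE job_name = ?")) {
            ps.setString(1, job);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : 0L;
            }
        }
    }

    static void writeCheckpoint(Connection con, String job, long n) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE checkpoint_restart SET last_record = ? WHERE job_name = ?")) {
            ps.setLong(1, n);
            ps.setString(2, job);
            ps.executeUpdate();
        }
    }

    static void process(Connection con, String record) throws SQLException {
        // job-specific DML would go here
    }
}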
Most conventional database systems (including Oracle) follow the no-force-at-commit policy for data blocks [Haerder83] because of its significant performance benefits. The use of this policy implies that a page modified in memory may need recovery if there is a system crash. A database checkpoint is critical for ensuring quick crash recovery when the no-force-at-commit policy is employed, since it limits the amount of redo log that needs to be scanned and applied during recovery.
As the amount of memory available to a database increases, it is possible to have database buffer caches as large as several million buffers. A large buffer cache imposes two requirements on checkpointing. First, the algorithms must be scalable with the size of the buffer cache. Second, the database checkpoint must advance frequently to limit recovery time, since infrequent checkpoints and large buffer caches can exacerbate crash-recovery times significantly.

Oracle supports a shared-disk architecture; the shared memory and group of Oracle processes that run on each node in a multi-node shared-disk cluster are collectively known as an instance of Oracle.

For the purpose of this discussion, the log may be thought of as an ever-growing file containing redo records generated by an instance. An RBA (redo byte address) indicates a position in the redo log. Oracle uses a set of dedicated processes (called the database writers, or DBWRs) for writing data blocks to disk. A dedicated process (called the checkpoint process, or CKPT) records checkpoint information to the control file, which represents stable storage for maintaining bookkeeping information (such as checkpoint progress) for an Oracle database.

January-2006 [36]
1. a) Compare object oriented design with procedure-oriented design. [4]
When you are procedure oriented you are concerned about the steps to take to achieve some result or a given goal, and how to organize them into logical operations.

When you are object oriented you are concerned about what kind of objects you need and how they should interact in order to reach your goal. In object-oriented design each object comes with a set of methods that can be applied to it to perform some operations (with or without involving or referring to other objects). These are analogous to procedures in procedure-oriented (or structural) design, except they now have something to do with the object they apply on.

Programmers often consider object-oriented design better than procedure-oriented design, although when it comes to the hard work you need both anyway.
Procedure Oriented Programming
1. Prime focus is on functions and procedures that operate on data.
2. Large programs are divided into smaller program units called functions.
3. Data and the functions that act upon it are treated as separate entities.
4. Data move freely around the system, from one function to another.
5. Program design follows a "Top Down" approach.
Object Oriented Programming
1. More emphasis is laid on the data being operated upon than on the functions or procedures that operate on it.
2. Programs are divided into what are called objects.
3. Both data and functions are treated together as an integral entity.
4. Data is hidden and cannot be accessed by external functions.
5. Program design follows a "Bottom Up" approach.
c) What is meant by separation of interface and implementation in object-oriented design? [4]
Any implementation change should not affect the user-visible interface. Data encapsulation eases the design process by separating design definition, implementation and usage.
Separating implementation from interface allows the use of generic algorithms to process different implementations of an object through a common interface, thereby reducing code duplication and development effort. On a larger scale, it permits evolution of the OAPI with minimal impact on the clients of the library: in most cases, only a relink will be necessary to incorporate new features and enhancements to the OAPI in client software.
For example, current OPUS software uses the file system (specifically, file names) to store and communicate run-time state information for processes and data in the pipeline. Using the file system greatly simplifies interprocess communication and automatically leverages the robust, fault-tolerant nature and near-universal compatibility of NFS.
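A minimal Java sketch of the idea: client code is written against an interface, so the concrete implementation can change, or be swapped, without touching the client. All names are illustrative:

interface IntStack {                        // the interface: what, not how
    void push(int x);
    int pop();
    boolean isEmpty();
}

class ArrayIntStack implements IntStack {   // one possible implementation
    private int[] a = new int[16];
    private int size;
    public void push(int x) {
        if (size == a.length) a = java.util.Arrays.copyOf(a, 2 * size);
        a[size++] = x;
    }
    public int pop() { return a[--size]; }
    public boolean isEmpty() { return size == 0; }
}

class Client {
    // Generic algorithm: depends only on the interface, so it keeps
    // working unchanged if a different IntStack implementation is supplied.
    static int drainSum(IntStack s) {
        int sum = 0;
        while (!s.isEmpty()) sum += s.pop();
        return sum;
    }
}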
d) Distinguish between coupling and cohesion. Which is encouraged by object-oriented technology? [4]
Cohesion of a single module/component is the degree to which its responsibilities form a meaningful unit; higher cohesion is better. Cohesion is inversely proportional to the number of responsibilities a module/component has. Coupling between modules/components is their degree of mutual interdependence; lower coupling is better. Coupling can be characterized along several dimensions:
• size: the number of connections between routines
• intimacy: the directness of the connection between routines
• visibility: the prominence of the connection between routines
• flexibility: the ease of changing the connections between routines
A first-order principle of software architecture is to increase cohesion and reduce coupling, and this is exactly what object-oriented technology encourages: high cohesion within classes and low coupling between them.
2. a) Define inheritance relationship, composition relationship and association relationship in object-oriented technology. Also define and discuss their role in system development. [6]
An association relationship is a structural relationship between two model elements that shows that objects of one classifier (actor, use case, class, interface, node, or component) connect and can navigate to objects of another classifier. Even in bidirectional relationships, an association connects two classifiers: the primary (supplier) and the secondary (client).
Inheritance enables one entity type to be derived from another entity type in the Entity Data Model (EDM). For example, Employee and Customer types can both inherit from the Contact type. In this case, Contact is called the base type. Employee and Customer are called derived types.
An inheritance relationship is represented on the design surface as a line that connects the base type and the derived type. The connector has a hollow arrow at the end that points to the base type.
class Fruit {

//...
}

class Apple extends Fruit {

//...
}

In this simple example, class Apple is related to class Fruit by inheritance, because Apple extends Fruit. In this example, Fruit is the superclass and Apple is the subclass.
I won't be talking about multiple inheritance of interfaces through interface extension. That topic I'll save for next month's Design Techniques article, which will be focused on designing with interfaces.
Here's a UML diagram showing the inheritance relationship between Apple and Fruit:

Figure 1. The inheritance relationship
About composition By composition, I simply mean using instance variables that are references to other objects. For example:
class Fruit {

//...
}

class Apple {

private Fruit fruit = new Fruit();
//...
}

In the example above, class Apple is related to class Fruit by composition, because Apple has an instance variable that holds a reference to a Fruit object. In this example, Apple is what I will call the front-end class and Fruit is what I will call the back-end class. In a composition relationship, the front-end class holds a reference in one of its instance variables to a back-end class.
The UML diagram showing the composition relationship has a darkened diamond, as in:

Figure 2. The composition relationship
Dynamic binding, polymorphism, and change
When you establish an inheritance relationship between two classes, you get to take advantage of dynamic binding and polymorphism. Dynamic binding means the JVM will decide at runtime which method implementation to invoke based on the class of the object. Polymorphism means you can use a variable of a superclass type to hold a reference to an object whose class is the superclass or any of its subclasses.
One of the prime benefits of dynamic binding and polymorphism is that they can help make code easier to change. If you have a fragment of code that uses a variable of a superclass type, such as Fruit, you could later create a brand new subclass, such as Banana, and the old code fragment will work without change with instances of the new subclass. If Banana overrides any of Fruit's methods that are invoked by the code fragment, dynamic binding will ensure that Banana's implementation of those methods gets executed. This will be true even though class Banana didn't exist when the code fragment was written and compiled.
Thus, inheritance helps make code easier to change if the needed change involves adding a new subclass. This, however, is not the only kind of change you may need to make.
Changing the superclass interface
In an inheritance relationship, superclasses are often said to be "fragile," because one little change to a superclass can ripple out and require changes in many other places in the application's code. To be more specific, what is actually fragile about a superclass is its interface. If the superclass is well-designed, with a clean separation of interface and implementation in the object-oriented style, any changes to the superclass's implementation shouldn't ripple at all. Changes to the superclass's interface, however, can ripple out and break any code that uses the superclass or any of its subclasses. What's more, a change in the superclass interface can break the code that defines any of its subclasses.
For example, if you change the return type of a public method in class Fruit (a part of Fruit's interface), you can break the code that invokes that method on any reference of type Fruit or any subclass of Fruit. In addition, you break the code that defines any subclass of Fruit that overrides the method. Such subclasses won't compile until you go and change the return value of the overridden method to match the changed method in superclass Fruit.
Inheritance is also sometimes said to provide "weak encapsulation," because if you have code that directly uses a subclass, such as Apple, that code can be broken by changes to a superclass, such as Fruit. One of the ways to look at inheritance is that it allows subclass code to reuse superclass code. For example, if Apple doesn't override a method defined in its superclass Fruit, Apple is in a sense reusing Fruit's implementation of the method. But Apple only "weakly encapsulates" the Fruit code it is reusing, because changes to Fruit's interface can break code that directly uses Apple.
The composition alternative
Given that the inheritance relationship makes it hard to change the interface of a superclass, it is worth looking at an alternative approach provided by composition. It turns out that when your goal is code reuse, composition provides an approach that yields easier-to-change code.
Code reuse via inheritance For an illustration of how inheritance compares to composition in the code reuse department, consider this very simple example:
class Fruit {

// Return int number of pieces of peel that
// resulted from the peeling activity.
public int peel() {

System.out.println("Peeling is appealing.");
return 1;
}
}

class Apple extends Fruit {
}

class Example1 {

public static void main(String[] args) {

Apple apple = new Apple();
int pieces = apple.peel();
}
}

When you run the Example1 application, it will print out "Peeling is appealing.", because Apple inherits (reuses) Fruit's implementation of peel(). If at some point in the future, however, you wish to change the return value of peel() to type Peel, you will break the code for Example1. Your change to Fruit breaks Example1's code even though Example1 uses Apple directly and never explicitly mentions Fruit.
Here's what that would look like:
class Peel {

private int peelCount;

public Peel(int peelCount) {
this.peelCount = peelCount;
}

public int getPeelCount() {

return peelCount;
}
//...
}

class Fruit {

// Return a Peel object that
// results from the peeling activity.
public Peel peel() {

System.out.println("Peeling is appealing.");
return new Peel(1);
}
}

// Apple still compiles and works fine
class Apple extends Fruit {
}

// This old implementation of Example1
// is broken and won't compile.
class Example1 {

public static void main(String[] args) {

Apple apple = new Apple();
int pieces = apple.peel();
}
}

Code reuse via composition Composition provides an alternative way for Apple to reuse Fruit's implementation of peel(). Instead of extending Fruit, Apple can hold a reference to a Fruit instance and define its own peel() method that simply invokes peel() on the Fruit. Here's the code:
class Fruit {

// Return int number of pieces of peel that
// resulted from the peeling activity.
public int peel() {

System.out.println("Peeling is appealing.");
return 1;
}
}

class Apple {

private Fruit fruit = new Fruit();

public int peel() {
return fruit.peel();
}
}

class Example2 {

public static void main(String[] args) {

Apple apple = new Apple();
int pieces = apple.peel();
}
}

In the composition approach, the subclass becomes the "front-end class," and the superclass becomes the "back-end class." With inheritance, a subclass automatically inherits an implementation of any non-private superclass method that it doesn't override. With composition, by contrast, the front-end class must explicitly invoke a corresponding method in the back-end class from its own implementation of the method. This explicit call is sometimes called "forwarding" or "delegating" the method invocation to the back-end object.
The composition approach to code reuse provides stronger encapsulation than inheritance, because a change to a back-end class needn't break any code that relies only on the front-end class. For example, changing the return type of Fruit's peel() method from the previous example doesn't force a change in Apple's interface and therefore needn't break Example2's code.
Here's how the changed code would look:
class Peel {

private int peelCount;

public Peel(int peelCount) {
this.peelCount = peelCount;
}

public int getPeelCount() {

return peelCount;
}
//...
}

class Fruit {

// Return int number of pieces of peel that
// resulted from the peeling activity.
public Peel peel() {

System.out.println("Peeling is appealing.");
return new Peel(1);
}
}

// Apple must be changed to accommodate
// the change to Fruit
class Apple {

private Fruit fruit = new Fruit();

public int peel() {

Peel peel = fruit.peel();
return peel.getPeelCount();
}
}

// This old implementation of Example2
// still works fine.
class Example2 {

public static void main(String[] args) {

Apple apple = new Apple();
int pieces = apple.peel();
}
}

This example illustrates that the ripple effect caused by changing a back-end class stops (or at least can stop) at the front-end class. Although Apple's peel() method had to be updated to accommodate the change to Fruit, Example2 required no changes.
Comparing composition and inheritance
So how exactly do composition and inheritance compare? Here are several points of comparison:
• It is easier to change the interface of a back-end class (composition) than a superclass (inheritance). As the previous example illustrated, a change to the interface of a back-end class necessitates a change to the front-end class implementation, but not necessarily the front-end interface. Code that depends only on the front-end interface still works, so long as the front-end interface remains the same. By contrast, a change to a superclass's interface can not only ripple down the inheritance hierarchy to subclasses, but can also ripple out to code that uses just the subclass's interface.
• It is easier to change the interface of a front-end class (composition) than a subclass (inheritance). Just as superclasses can be fragile, subclasses can be rigid. You can't just change a subclass's interface without making sure the subclass's new interface is compatible with that of its supertypes. For example, you can't add to a subclass a method with the same signature but a different return type as a method inherited from a superclass. Composition, on the other hand, allows you to change the interface of a front-end class without affecting back-end classes.
• Composition allows you to delay the creation of back-end objects until (and unless) they are needed, as well as changing the back-end objects dynamically throughout the lifetime of the front-end object. With inheritance, you get the image of the superclass in your subclass object image as soon as the subclass is created, and it remains part of the subclass object throughout the lifetime of the subclass.
• It is easier to add new subclasses (inheritance) than it is to add new front-end classes (composition), because inheritance comes with polymorphism. If you have a bit of code that relies only on a superclass interface, that code can work with a new subclass without change. This is not true of composition, unless you use composition with interfaces. Used together, composition and interfaces make a very powerful design tool. I'll talk about this approach in next month's Design Techniques article.
• The explicit method-invocation forwarding (or delegation) approach of composition will often have a performance cost as compared to inheritance's single invocation of an inherited superclass method implementation. I say "often" here because the performance really depends on many factors, including how the JVM optimizes the program as it executes it.
• With both composition and inheritance, changing the implementation (not the interface) of any class is easy. The ripple effect of implementation changes remain inside the same class.
Choosing between composition and inheritance
So how do all these comparisons between composition and inheritance help you in your designs? Here are a few guidelines that reflect how I tend to select between composition and inheritance.
Make sure inheritance models the is-a relationship My main guiding philosophy is that inheritance should be used only when a subclass is-a superclass. In the example above, an Apple likely is-a Fruit, so I would be inclined to use inheritance.
An important question to ask yourself when you think you have an is-a relationship is whether that is-a relationship will be constant throughout the lifetime of the application and, with luck, the lifecycle of the code. For example, you might think that an Employee is-a Person, when really Employee represents a role that a Person plays part of the time. What if the person becomes unemployed? What if the person is both an Employee and a Supervisor? Such impermanent is-a relationships should usually be modelled with composition.
Don't use inheritance just to get code reuse If all you really want is to reuse code and there is no is-a relationship in sight, use composition.
Don't use inheritance just to get at polymorphism If all you really want is polymorphism, but there is no natural is-a relationship, use composition with interfaces.

6.a) What is object serialization? How is the concept linked to object-persistence? How does a persistent programming language help in object-oriented databases? [6]
Serialization is a way of flattening, pickling, swizzling, or freeze-drying objects so that they can be stored on disk, and later read back and reconstituted, with all the links between objects intact. Object serialization is the process of saving an object's state to a sequence of bytes, as well as the process of rebuilding those bytes into a live object at some future time.
When you serialize an object, you create a byte representation of it. Anything that can be serialized can be unserialized. So, by serializing an object and storing the result somewhere (a file, database, memory, etc.), you can later access the stored form and unserialize it to recreate the object.
Persistent programming languages extend an object-oriented programming language to deal with databases by adding concepts such as persistence and collections. Persistent programming languages allow objects to be created and stored in a database, and used directly from a programming language.
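A minimal example using Java's built-in object serialization (java.io.Serializable with ObjectOutputStream and ObjectInputStream); the Point class and file name are illustrative:

import java.io.*;

class Point implements Serializable {        // must be Serializable to be flattened
    private static final long serialVersionUID = 1L;
    double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
}

class SerializeDemo {
    public static void main(String[] args) throws Exception {
        Point p = new Point(0.56, 10.75);
        // Serialize: save the object's state to a sequence of bytes on disk.
        try (ObjectOutputStream out =
                 new ObjectOutputStream(new FileOutputStream("point.ser"))) {
            out.writeObject(p);
        }
        // Deserialize: rebuild a live object from the stored bytes.
        try (ObjectInputStream in =
                 new ObjectInputStream(new FileInputStream("point.ser"))) {
            Point q = (Point) in.readObject();
            System.out.println(q.x + ", " + q.y);
        }
    }
}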
July-2006 [18]

1.c) While using object oriented analysis multiple inheritance in type hierarchy occurs when a certain subtype T is a subtype of one or more than one types and hence inherits the function of one or more than one super type. State whether the sentence is true or false. Justify your answer. [4]
Multiple inheritance in a type hierarchy occurs when a certain subtype T is a subtype of two (or more) types and hence inherits the functions (attributes and methods) of both supertypes.
multiple inheritance, which allows a subtype to have multiple supertypes.
For example, we may create a subtype ENGINEERING_MANAGER that is a subtype of both MANAGER and ENGINEER. This leads to the creation of a type lattice rather than a type hierarchy. Consider the following type definitions:
type Person
    abstract
    supertypes: Object
    operations:
        String social_security (P: Person)
        String name (P: Person)
end type

type Employee
    supertypes: Person
    operations:
        Department dept (E: Employee)
        Money salary (E: Employee)
end type
In this example, type Employee is declared to be a subtype of type Person. In the Core Object Model this implies that:

1. all instances of type Employee are also instances of type Person, and so an Employee object can be used wherever a Person object is expected;
2. Employee inherits the name and social_security operations from Person.

Furthermore, Person is declared to be an abstract type and so it cannot have any direct instances.
Substitutability means being able to substitute an object of some type S when an object of type T is expected, where T is a supertype of S, while guaranteeing that the substituted object will support the same operations as specified by the supertype T.
Whether Ops(S) is a superset of Ops(T), or the two are disjoint sets, is an implementation issue and does not affect the Core semantics.
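Sketched in Java for illustration: Java classes cannot extend two classes, so the two supertypes are modelled as interfaces here; that substitution, and all the names, are assumptions of this sketch rather than of the source text:

interface Manager {                    // supertype 1
    String department();
}

interface Engineer {                   // supertype 2
    String discipline();
}

// The subtype inherits the operations of both supertypes,
// forming a type lattice rather than a simple hierarchy.
class EngineeringManager implements Manager, Engineer {
    public String department() { return "R&D"; }
    public String discipline() { return "software"; }
}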
g) What is persistent programming language? How do they make object persistent? [4]
Persistent data: data that continue to exist even after the program that created it has terminated.
A persistent programming language is a programming language extended with constructs to handle persistent data. It differs from embedded SQL in at least two ways:
In a persistent programming language, the query language is fully integrated with the host language and both share the same type system. Any format changes required in databases are carried out transparently.
By contrast, in embedded SQL (1) the host language and the DML have different type systems, so data conversion operates outside the OO type system and has a higher chance of undetected errors; and (2) format conversion takes a substantial amount of code.
Using Embedded SQL, a programmer is responsible for writing explicit code to fetch data into memory or store data back to the database.
In a persistent programming language, a programmer can manipulate persistent data without having to write such code explicitly.
Drawbacks: (1) it is powerful, but easy to make programming errors that damage the database; (2) automatic high-level optimization is harder; and (3) declarative querying is not well supported.
Several approaches have been proposed to make objects persistent:
Persistence by class: declare a class to be persistent; all objects of the class are then persistent objects. Simple, but not flexible, since it is often useful to have both transient and persistent objects in a single class. In many OODB systems, declaring a class to be persistent is interpreted as "persistable": objects in the class can potentially be made persistent.
Persistence by creation: introduce new syntax to create persistent objects.
Persistence by marking: mark an object persistent after it is created (and before the program terminates).
Persistence by reference: one or more objects are explicitly declared as (root) persistent objects; all other objects are persistent if and only if they are referenced, directly or indirectly, from a root persistent object. It is easy to make an entire data structure persistent by merely declaring its root as persistent, but following the reference chains to detect what is persistent is expensive for a database system (see the sketch below).
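A minimal Java sketch of the reachability check behind persistence by reference: starting from a declared root, everything transitively referenced is marked persistent. The object graph is represented here as an explicit map; all names are illustrative:

import java.util.*;

class PersistenceByReachability {
    // refs maps each object to the objects it directly references.
    static Set<Object> persistentSet(Object root, Map<Object, List<Object>> refs) {
        Set<Object> persistent = Collections.newSetFromMap(new IdentityHashMap<>());
        Deque<Object> work = new ArrayDeque<>();
        work.push(root);                              // the declared root object
        while (!work.isEmpty()) {
            Object o = work.pop();
            if (persistent.add(o))                    // first visit: mark persistent
                for (Object child : refs.getOrDefault(o, List.of()))
                    work.push(child);                 // follow the reference chain
        }
        return persistent;
    }
}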
2.b) What is the importance of checkpoints in the database management system? How checkpoints are used in the system log file of database management system? [4]