Ans 1.
Data Independence
A database system normally contains a lot of data in addition to users’ data. For example, it stores data about data, known as metadata, to locate and retrieve data easily. It is rather difficult to modify or update a set of metadata once it is stored in the database. But as a DBMS expands, it needs to change over time to satisfy the requirements of its users. If the entire data were dependent, this would become a tedious and highly complex job. Metadata itself follows a layered architecture, so that when we change data at one layer, it does not affect the data at another level. This data is independent but mapped to each other.
Logical Data Independence
Logical data is data about the database; that is, it stores information about how data is managed inside. For example, a table (relation) stored in the database and all the constraints applied on that relation are logical data. Logical data independence is a mechanism that liberates the logical level from the actual data stored on disk. If we make changes to the table format, it should not change the data residing on the disk.
Physical Data Independence
All the schemas are logical, and the actual data is stored in bit format on the disk. Physical data independence is the power to change the physical data without impacting the schema or logical data. For example, if we want to change or upgrade the storage system itself − suppose we want to replace hard disks with SSDs − it should not have any impact on the logical data or schemas.
ANS 2.
Generalization
Generalization is a bottom-up approach in which two lower-level entities combine to form a higher-level entity. In generalization, the higher-level entity can also combine with another lower-level entity to form a still higher-level entity.
Specialization
Specialization is the opposite of generalization. It is a top-down approach in which one higher-level entity can be broken down into two or more lower-level entities. In specialization, some higher-level entities may not have lower-level entity sets at all.
ANS 3.
A database administrator (DBA) directs or performs all activities related to maintaining a successful database environment. Responsibilities include designing, implementing, and maintaining the database system; establishing policies and procedures pertaining to the management, security, maintenance, and use of the database management system; and training employees in database management and use. A DBA is expected to stay abreast of emerging technologies and new design approaches. Typically, a DBA has either a degree in Computer Science and some on-the-job training with a particular database product, or more extensive experience with a range of database products. A DBA is usually expected to have experience with one or more of the major database management products, such as Microsoft SQL Server, SAP, and Oracle-based database management software.
DBA Roles
- Maintaining all databases required for development, testing, training and production usage.
- Maximizing uptime of databases.
- Managing shared resources used amongst applications.
- Administering all database objects, including tables, views, indexes, stored procedures, functions, packages, sequences and clusters.
- Enforcing and maintaining database constraints to ensure integrity of the database.
- Installation and configuration of DBMS server software and related products.
- Upgrading and patching/hot-fixing of DBMS server software and related products.
- Assisting with impact analysis of any changes made to the database objects.
- Migrating databases to another server.
- Evaluating DBMS features and DBMS-related products.
- Ensuring that the site is running the products that are most appropriate.
- Ensuring that any new product usage or release upgrade takes place with minimal impact.
- Establishing and maintaining sound backup and recovery policies and procedures.
- Taking care of database design and implementation.
- Implementing HA solutions (replication, clustering, mirroring and log shipping).
- Implementing and maintaining database security (create and maintain logins, users and roles, assign privileges).
- Performance tuning and health monitoring on the DBMS, OS and application.
- Setting up server-level, database-level and operating-system alerts.
- Implementing a robust maintenance plan (index defrag, stats update, DBCC, reorg etc.).
- SQL Server T-SQL / Oracle PL/SQL tuning.
- Setting up and maintaining documentation and standards.
- Performing frequent reviews of design and code to ensure the standards are being adhered to.
- Planning growth and changes (capacity planning).
- Doing general technical troubleshooting and giving consultation to development teams.
- Troubleshooting DBMS and operating-system performance issues.
- Documenting any implementation and changes (database changes, reference data changes and application UI changes etc.).
- Being able to provide a strategic database direction for the organization.
- Expert-level knowledge of DBMS architecture, all features in all versions, and troubleshooting skills.
- Excellent knowledge of DBMS backup and recovery scenarios.
- Good skills in all DBMS tools.
- A good knowledge of DBMS security management.
- A good knowledge of how the DBMS acquires and manages resources.
- Sound knowledge of the applications at your site.
ANS 4.
Authorization is the process of giving someone permission to do or have something. In multi-user computer systems, a system administrator defines which users are allowed access to the system and what privileges of use they have (such as access to which file directories, hours of access, amount of allocated storage space, and so forth). Assuming that someone has logged in to a computer operating system or application, the system or application may want to identify what resources the user can be given during this session. Thus, authorization is sometimes seen as both the preliminary setting up of permissions by a system administrator and the actual checking of the permission values that have been set up when a user is getting access.
Permission Statements
The three Transact-SQL permission statements are described in the following table.

Permission Statement | Description |
---|---|
GRANT | Grants a permission. |
REVOKE | Revokes a permission. This is the default state of a new object. A permission revoked from a user or role can still be inherited from other groups or roles to which the principal is assigned. |
DENY | Revokes a permission so that it cannot be inherited. DENY takes precedence over all permissions, except that DENY does not apply to object owners or members of sysadmin. If you DENY permissions on an object to the public role, the permission is denied to all users and roles except for object owners and sysadmin members. |
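The precedence rules above can be sketched with a small model. This is illustrative only, not a real SQL Server API: it shows how GRANT, REVOKE and DENY entries across a principal's roles combine into one effective permission.

```python
# Illustrative model (not a real SQL Server API) of how GRANT, REVOKE and
# DENY combine across the roles a principal belongs to.
def effective_permission(role_states):
    """role_states: list of 'GRANT', 'REVOKE' or 'DENY' entries, one per role."""
    if "DENY" in role_states:      # DENY takes precedence over everything
        return "DENIED"
    if "GRANT" in role_states:     # any surviving GRANT allows access
        return "ALLOWED"
    return "NO ACCESS"             # REVOKE is the neutral default state

print(effective_permission(["GRANT", "REVOKE"]))  # ALLOWED: revoke is neutral
print(effective_permission(["GRANT", "DENY"]))    # DENIED: deny wins
print(effective_permission(["REVOKE"]))           # NO ACCESS
```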
ANS 5.
A).
A primary key, also called a primary keyword, is a key in a relational database that is unique for each record. It is a unique identifier, such as a driver licence number, telephone number (including area code), or vehicle identification number (VIN). A relation must have one, and only one, primary key. Primary keys typically appear as columns in relational database tables.
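A minimal sketch in SQLite (the Driver table and its columns are hypothetical) shows the database itself enforcing uniqueness of the primary key:

```python
import sqlite3

# Hypothetical drivers table: the licence number acts as the primary key,
# so the database rejects a second record with the same value.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Driver (Licence_No TEXT PRIMARY KEY, Name TEXT)")
con.execute("INSERT INTO Driver VALUES ('DL-100', 'Rooney')")
try:
    con.execute("INSERT INTO Driver VALUES ('DL-100', 'Kane')")  # duplicate key
except sqlite3.IntegrityError as e:
    print("rejected:", e)   # UNIQUE constraint failed
```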
B). Total Participation
Total participation means that each entity in the entity set occurs in at least one relationship in that relationship set. For instance, consider the relationship borrower between customers and loans. In the E-R diagram, a double line from loan to borrower indicates that each loan must have at least one associated customer.
C). The SQL GROUP BY Statement
The GROUP BY statement is often used with aggregate functions (COUNT, MAX, MIN, SUM, AVG) to group the result set by one or more columns.
GROUP BY Syntax
SELECT column_name(s)
FROM table_name
WHERE condition
GROUP BY column_name(s)
ORDER BY column_name(s);
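The skeleton above can be exercised in SQLite with a hypothetical Orders table, counting orders per country:

```python
import sqlite3

# Hypothetical Orders table; counts orders per country, following the
# SELECT ... WHERE ... GROUP BY ... ORDER BY skeleton above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Orders (Id INTEGER, Country TEXT)")
con.executemany("INSERT INTO Orders VALUES (?, ?)",
                [(1, "UK"), (2, "UK"), (3, "India"), (4, "India"), (5, "India")])
rows = con.execute(
    "SELECT Country, COUNT(*) FROM Orders "
    "WHERE Id > 0 GROUP BY Country ORDER BY Country").fetchall()
print(rows)   # [('India', 3), ('UK', 2)]
```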
D). String
Data that contains a combination of letters, numbers, and special characters. String data types are listed below:
- CHARACTER: Fixed-length character strings. The common short name for this data type is CHAR.
- VARCHAR: Varying-length character strings.
- CLOB: Varying-length character large object strings, typically used when a character string might exceed the limits of the VARCHAR data type.
- GRAPHIC: Fixed-length graphic strings that contain double-byte characters.
- VARGRAPHIC: Varying-length graphic strings that contain double-byte characters.
- DBCLOB: Varying-length strings of double-byte characters in a large object.
- BINARY: A sequence of bytes that is not associated with a code page.
- VARBINARY: Varying-length binary strings.
- BLOB: Varying-length binary strings in a large object.
- XML: Varying-length string that is an internal representation of XML.
Numeric
Data that contains digits. Numeric data types are listed below:
- SMALLINT: for small integers.
- INTEGER: for large integers.
- BIGINT: for bigger integers.
- DECIMAL(p,s) or NUMERIC(p,s), where p is precision and s is scale: for packed decimal numbers with precision p and scale s. Precision is the total number of digits, and scale is the number of digits to the right of the decimal point.
- DECFLOAT: for decimal floating-point numbers.
- REAL: for single-precision floating-point numbers.
- DOUBLE: for double-precision floating-point numbers.
Datetime
Data values that represent dates, times, or timestamps. Datetime data types are listed below:
- DATE: Dates with a three-part value that represents a year, month, and day.
- TIME: Times with a three-part value that represents a time of day in hours, minutes, and seconds.
- TIMESTAMP: Timestamps with a seven-part value that represents a date and time by year, month, day, hour, minute, second, and microsecond.
ANS 6).
The relational algebra is a procedural query language. It consists of a set of operations that take one or two relations as input and produce a new relation as output.
Fundamental Operations
Project Operation (∏)
Notation − ∏ A1, A2, …, An (r)
Where A1, A2, …, An are attribute names of relation r. Duplicate rows are automatically eliminated, as a relation is a set.
For example −
∏ subject, author (Books)
selects and projects the columns named subject and author from the relation Books.
Union Operation (∪)
It performs a binary union between two given relations and is defined as −
r ∪ s = { t | t ∈ r or t ∈ s }
Notation − r ∪ s
Where r and s are either database relations or relation result sets (temporary relations).
For a union operation to be valid, the following conditions must hold −
- r and s must have the same number of attributes.
- Attribute domains must be compatible.
- Duplicate tuples are automatically eliminated.
For example −
∏ author (Books) ∪ ∏ author (Articles)
Output − Projects the names of the authors who have either written a book or an article or both.
Rename Operation (ρ)
Notation − ρ x (E)
Where the result of expression E is saved under the name x.
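Project and union can be mimicked with Python sets of tuples (the Books and Articles relations here are hypothetical); sets eliminate duplicates automatically, just as relations do:

```python
# Relations as Python sets of tuples; project (∏) and union (∪) both
# eliminate duplicates automatically, because sets cannot hold repeats.
Books    = {("DB Basics", "Silberschatz"), ("Networks", "Tanenbaum")}
Articles = {("NoSQL Today", "Silberschatz"), ("Graphs", "Diestel")}

def project(relation, *cols):
    """Keep only the given column positions from every tuple."""
    return {tuple(row[c] for c in cols) for row in relation}

book_authors    = project(Books, 1)      # ∏ author (Books)
article_authors = project(Articles, 1)   # ∏ author (Articles)
either = book_authors | article_authors  # union of the two projections
print(sorted(a for (a,) in either))      # ['Diestel', 'Silberschatz', 'Tanenbaum']
```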
- LEFT JOIN performs a join starting with the first (left-most) table and then any matching records from the second (right-most) table.
- LEFT JOIN and LEFT OUTER JOIN are the same.
The general LEFT OUTER JOIN syntax is:
SELECT OrderNumber, TotalAmount, FirstName, LastName, City, Country
FROM Customer C
LEFT JOIN [Order] O ON O.CustomerId = C.Id
ORDER BY TotalAmount
A NATURAL JOIN is a JOIN operation that creates an implicit join clause for you based on the common columns in the two tables being joined. Common columns are columns that have the same name in both tables.
Examples
If the tables COUNTRIES and CITIES have two common columns named COUNTRY and COUNTRY_ISO_CODE, the following two SELECT statements are equivalent:
SELECT * FROM COUNTRIES NATURAL JOIN CITIES
SELECT * FROM COUNTRIES JOIN CITIES USING (COUNTRY, COUNTRY_ISO_CODE)
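A trimmed-down version of the Customer/Order LEFT JOIN can be run in SQLite (the sample rows are illustrative); note how the customer with no orders is kept, with NULL (None) for the missing order columns:

```python
import sqlite3

# Hypothetical Customer/Order rows; the LEFT JOIN keeps the customer with
# no orders, padding the order columns with NULL (None in Python).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customer (Id INTEGER, FirstName TEXT)")
con.execute('CREATE TABLE "Order" (OrderNumber INTEGER, CustomerId INTEGER)')
con.executemany("INSERT INTO Customer VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])
con.execute('INSERT INTO "Order" VALUES (501, 1)')
rows = con.execute(
    'SELECT C.FirstName, O.OrderNumber FROM Customer C '
    'LEFT JOIN "Order" O ON O.CustomerId = C.Id ORDER BY C.Id').fetchall()
print(rows)   # [('Ann', 501), ('Bob', None)]
```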
ANS 7).
Three-Level Architecture of DBMS
Following are the three levels of database architecture:
1. Physical Level
2. Conceptual Level
3. External Level
In the above diagram,
- It shows the architecture of the DBMS.
- Mapping is the process of transforming requests and responses between the various levels of the database architecture.
- Mapping is not good for small databases, because it takes more time.
- In external/conceptual mapping, the DBMS transforms a request on an external schema against the conceptual schema.
- In conceptual/internal mapping, it is necessary to transform the request from the conceptual to the internal level.
1. Physical Level
- The physical level describes the physical storage structure of data in the database.
- It is also known as the internal level.
- This level is very close to the physical storage of data.
- At the lowest level, data is stored in the form of bits with physical addresses on the secondary storage device.
- At the highest level, it can be viewed in the form of files.
- The internal schema defines the various stored data types. It uses a physical data model.
2. Conceptual Level
- The conceptual level describes the structure of the whole database for a group of users.
- It is also called the data model.
- The conceptual schema is a representation of the entire content of the database.
- This schema contains all the information needed to build relevant external records.
- It hides the internal details of physical storage.
3. External Level
- The external level is related to the data which is viewed by individual end users.
- This level includes a number of user views or external schemas.
- This level is closest to the user.
- An external view describes the segment of the database that is required for a particular user group and hides the rest of the database from that user group.
ANS 9)
ANS 10).
Database normalization is a technique of organizing the data in a database.
- Normalization of data can be considered a process of analysing the given relation schemas based on their functional dependencies and primary keys to achieve the following properties:
i. Minimizing redundancy
ii. Minimizing the insertion, deletion, and update anomalies
iii. Ensuring data is stored in the correct table
- It can be considered as a filtering process to make the design have successively better quality.
- It is a multi-step process that puts data into tabular form by removing duplicated data from the relation tables.
- Without normalization it becomes difficult to handle and update the database without facing data loss.
The various normal forms are described below:
First Normal Form (1NF)
- First normal form (1NF) states that the domain of an attribute must include only atomic values and that the value of any attribute in a tuple must be a single value from the domain of that attribute.
- A relation is said to be in 1NF if it contains no non-atomic values and each row provides a unique combination of values.
- 1NF disallows having a set of values, a tuple of values, or a combination of both as an attribute value for a single tuple.
- The only attribute values permitted by 1NF are single atomic values.
Example: Un-normalized table −
Student | Age | Subject |
---|---|---|
Rooney | 15 | Java, C++ |
Kane | 16 | HTML, PHP |

The same table in 1NF:

Student | Age | Subject |
---|---|---|
Rooney | 15 | Java |
Rooney | 15 | C++ |
Kane | 16 | HTML |
Kane | 16 | PHP |
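The decomposition above can be sketched in a few lines: splitting the multi-valued Subject column row by row yields exactly the 1NF table, one atomic subject per tuple.

```python
# Splitting the multi-valued Subject column reproduces the 1NF table:
# one atomic subject value per tuple.
unnormalized = [("Rooney", 15, "Java, C++"), ("Kane", 16, "HTML, PHP")]

first_nf = [(student, age, subject.strip())
            for (student, age, subjects) in unnormalized
            for subject in subjects.split(",")]
print(first_nf)
# [('Rooney', 15, 'Java'), ('Rooney', 15, 'C++'),
#  ('Kane', 16, 'HTML'), ('Kane', 16, 'PHP')]
```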
Second Normal Form (2NF)
- A relation is said to be in 2NF if it is already in 1NF and every non-key attribute fully depends on the primary key of the relation.
- There must not be any partial dependency of any column on the primary key.
- Second normal form (2NF) is based on the concept of full functional dependency. A functional dependency X -> Y is a full functional dependency if removal of any attribute A from X means that the dependency no longer holds.
- A functional dependency X -> Y is a partial dependency if some attribute A belonging to X can be removed from X and the dependency still holds.
Example: The table below is not in 2NF −
Stud_ID | Proj_ID | Stud_Name | Proj_Name |
---|---|---|---|
100 | 001 | Rooney | Cloud |
200 | 002 | Kane | Servers |

The above table can be normalized to 2NF as shown below.

Student Table in 2NF

Stud_ID | Proj_ID | Stud_Name |
---|---|---|
100 | 001 | Rooney |
200 | 002 | Kane |

Project Table in 2NF

Proj_ID | Proj_Name |
---|---|
001 | Cloud |
002 | Servers |
Third Normal Form (3NF)
- A relation is said to be in 3NF if it is already in 2NF and there exists no transitive dependency in that relation.
- If a table contains a transitive dependency, then it is not in 3NF, and the table must be split to bring it into 3NF.
- What is a transitive dependency? If A -> B [B depends on A] and B -> C [C depends on B], then A -> C [C depends on A] can be derived.
Example: The table below is not in 3NF −

Stud_ID | Stud_Name | City | Zip |
---|---|---|---|
100 | Rooney | Manchester | 4001 |
200 | Kane | Stoke | 4002 |

Stud_ID -> Zip -> City, so there exists a transitive dependency. The 3NF tables are below.

Student_Detail

Stud_ID | Stud_Name | Zip |
---|---|---|
100 | Rooney | 4001 |
200 | Kane | 4002 |

ZipCodes

Zip | City |
---|---|
4001 | Manchester |
4002 | Stoke |
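The split above is lossless, which can be checked in SQLite: joining Student_Detail with ZipCodes on Zip reconstructs the original rows, while each City is now stored only once per Zip.

```python
import sqlite3

# Lossless 3NF decomposition check: the join on Zip rebuilds the
# original (Stud_ID, Stud_Name, City, Zip) rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Student_Detail (Stud_ID INTEGER, Stud_Name TEXT, Zip INTEGER)")
con.execute("CREATE TABLE ZipCodes (Zip INTEGER PRIMARY KEY, City TEXT)")
con.executemany("INSERT INTO Student_Detail VALUES (?, ?, ?)",
                [(100, "Rooney", 4001), (200, "Kane", 4002)])
con.executemany("INSERT INTO ZipCodes VALUES (?, ?)",
                [(4001, "Manchester"), (4002, "Stoke")])
rows = con.execute(
    "SELECT s.Stud_ID, s.Stud_Name, z.City, s.Zip "
    "FROM Student_Detail s JOIN ZipCodes z ON s.Zip = z.Zip "
    "ORDER BY s.Stud_ID").fetchall()
print(rows)  # [(100, 'Rooney', 'Manchester', 4001), (200, 'Kane', 'Stoke', 4002)]
```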
Boyce-Codd Normal Form (BCNF)
- BCNF is an extension of third normal form in a stricter way.
- A relation is said to be in BCNF if it is already in 3NF and, for every non-trivial functional dependency X -> A, X is a super-key.
- A 3NF table which does not have multiple overlapping candidate keys is said to be in BCNF.
Example: In the 3NF example, Stud_ID is a super-key in the Student_Detail relation and Zip is a super-key in the ZipCodes relation. So Stud_ID -> Stud_Name, Zip and Zip -> City confirm that both relations are in BCNF.
ANS 11).
1. Definition of Views:
i. Basically, a view is a single table derived from one or more other tables, logically representing a subset of the data.
ii. These other tables can be base tables or previously defined views.
iii. A view is considered to be a virtual table, in contrast to base tables, whose tuples are physically stored in the database.
2. Implementation of Views:
i. In SQL, the command to specify a view is CREATE VIEW. The view is given a (virtual) table name (or view name), a list of attribute names, and a query to specify the contents of the view. Following is the syntax for a view.
CREATE VIEW viewname AS
SELECT column1, column2
FROM tablename
WHERE columnname=expressionList;
ii. Consider a table Tbl_Employee with the fields Emp_ID, Emp_Name, Emp_DOB, Emp_Address, Emp_DateOfJoining, Emp_Gender, Emp_EmailID. Following is the view vw_EmployeeContactDetails, which contains the name and email ID of each employee.
CREATE VIEW vw_EmployeeContactDetails AS
SELECT Emp_Name, Emp_EmailID
FROM Tbl_Employee;
This results in the creation of the view. To fetch the rows of the view, a SELECT statement can be written as follows:
SELECT * FROM vw_EmployeeContactDetails;
iii. If we do not need a view any more, we can use the DROP VIEW command to dispose of it. The syntax is:
DROP VIEW viewname;
Example: To drop the view vw_EmployeeContactDetails, the following SQL statement must be executed:
DROP VIEW vw_EmployeeContactDetails;
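The same create/query/drop cycle can be run in SQLite, here with a trimmed-down Tbl_Employee (only the columns the view needs). Because the view is virtual, rows inserted after it is created still appear in it:

```python
import sqlite3

# Trimmed-down Tbl_Employee; the view is virtual, so the row inserted
# after CREATE VIEW still shows up when the view is queried.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Tbl_Employee (Emp_ID INTEGER, Emp_Name TEXT, Emp_EmailID TEXT)")
con.execute("CREATE VIEW vw_EmployeeContactDetails AS "
            "SELECT Emp_Name, Emp_EmailID FROM Tbl_Employee")
con.execute("INSERT INTO Tbl_Employee VALUES (1, 'ABC', 'abc@example.com')")
rows = con.execute("SELECT * FROM vw_EmployeeContactDetails").fetchall()
print(rows)   # [('ABC', 'abc@example.com')]
con.execute("DROP VIEW vw_EmployeeContactDetails")
```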
3. Problems in updating Views:
i. Updating views is a complicated and potentially ambiguous task.
ii. In general, an update on a view defined on a single table without any aggregate functions can be mapped to an update on the underlying base table under certain conditions.
iii. For a view involving joins, an update operation may be mapped to update operations on the underlying base relations in multiple ways. Hence, it is often not possible for the DBMS to determine which of the updates is intended.
iv. Generally, a view update is feasible when only one possible update on the base relations can accomplish the desired update effect on the view.
v. Whenever an update on the view can be mapped to more than one update on the underlying base relations, it is necessary to have a procedure for choosing one of the possible updates as the most likely one.
vi. Thus the problems with updating a view can be summarized as follows:
- A view with a single defining table is updatable if the view attributes contain the primary key of the base relation, as well as all attributes with the NOT NULL constraint that do not have default values specified.
- It is generally not possible to update views defined on multiple tables.
- It is not possible to update views defined using grouping and aggregate functions.
ANS 16).
Indexing is a data structure technique to efficiently retrieve records from database files based on the attributes on which the indexing has been done. Indexing in database systems is similar to the index we see in books. Indexing is defined based on its indexing attributes.
Single-level ordered indexes
Single-level ordered indexing can be of the following types:
Primary index:
- A primary index is defined on an ordered data file; if the index is built on the ordering key field of the file, it is called a primary index.
- The ordering key field is generally the primary key of the relation.
- The second field is a block pointer which gives the address of the disk block.
- Primary indexes are also called sparse indexes.
- The index file contains the primary key along with a pointer to the address of the data block where the records are stored.
- The keys are matched with the values stored in the primary key field of the data file, and the matching record is accessed without having to read every record in the entire file.
Secondary indexes:
- A secondary index provides an alternative means to access a file that contains database records.
- A secondary index may be generated from a field which is a candidate key and has a unique value in every record, or from a non-key field with duplicate values.
- Secondary indexes can be considered as ordered files having two fields.
- The data type of one field is the same as that of a non-ordering indexing field of the file containing the records.
- The data type of the other field is a pointer that points to either a block or a record.
- A number of secondary indexes can be created for the same file.
Clustering indexes:
- Clustering indexes are defined on a file containing data in an ordered manner.
- The data file is ordered on a non-key field, and this non-key ordering field is used to build the index.
- Clustering indexes require every distinct value of the ordering field to have an entry in the index.
- Clustering indexes are also called sparse indexes.
Ordered Indexing is of two types: Dense Index and Sparse Index
3.1. Dense Index
In dense index, there is an index record for every search key value in the database. This makes searching faster but requires more space to store index records itself. Index records contain search key value and a pointer to the actual record on the disk.
3.2. Sparse Index
In a sparse index, index records are not created for every search key. An index record here contains a search key and a pointer to the data on the disk. To search for a record, we first locate the closest index record and follow its pointer to the actual location of the data. If the data we are looking for is not where we land by following the index, the system starts a sequential search until the desired data is found.
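The search path just described can be sketched in Python (the block size and record layout are illustrative): binary-search the sparse index for the last entry not exceeding the key, jump to that block, then scan sequentially.

```python
import bisect

# Sketch of a sparse index: one entry per 4-record block, holding the
# first key of the block. Lookup = binary search in the index, then a
# short sequential scan inside the block it points at.
records = [(k, f"row-{k}") for k in range(0, 40, 2)]   # sorted data file
BLOCK = 4
index = [(records[i][0], i) for i in range(0, len(records), BLOCK)]
index_keys = [k for k, _ in index]

def lookup(key):
    pos = bisect.bisect_right(index_keys, key) - 1     # last entry <= key
    if pos < 0:
        return None
    start = index[pos][1]
    for k, row in records[start:start + BLOCK]:        # sequential scan
        if k == key:
            return row
    return None

print(lookup(14))   # row-14
print(lookup(15))   # None (no such key)
```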
3.3. Multilevel Index
Index records comprise search-key values and data pointers. Multilevel index is stored on the disk along with the actual database files. As the size of the database grows, so does the size of the indices. There is an immense need to keep the index records in the main memory so as to speed up the search operations. If single-level index is used, then a large size index cannot be kept in memory which leads to multiple disk accesses.
Multi-level Index helps in breaking down the index into several smaller indices in order to make the outermost level so small that it can be saved in a single disk block, which can easily be accommodated anywhere in the main memory.
B+ Tree
A B+ tree is a balanced search tree that follows a multi-level index format. The leaf nodes of a B+ tree hold the actual data pointers. A B+ tree ensures that all leaf nodes remain at the same height, and is thus balanced. Additionally, the leaf nodes are linked in a linked list; therefore, a B+ tree can support random access as well as sequential access.
Structure of B+ Tree
Every leaf node is at equal distance from the root node. A B+ tree is of order n, where n is fixed for every B+ tree.
ANS 18).
B-Tree structure:
- Indexing can be made more efficient by including more index levels.
- With efficient dynamic insertion and deletion of items allowed in B-trees, you can scale the levels as and when needed.
(a) A B-tree node with q tree pointers holds q − 1 search values.
(b) A B-tree of order p = 3; the values were inserted in the order 8, 5, 1, 7, 3, 12, 9, 6.
ANS 20).
I. Introduction:
- A trigger is a special kind of stored procedure that executes in response to certain actions on a table, such as insertion, deletion or updating of data.
- In many cases it is convenient to specify the type of action to be taken when certain events occur and certain conditions are satisfied.
- Consider a situation where the sales manager needs to receive a mail whenever an order of high priority is received. In this case, a trigger can be written to determine the priority of the order received and send a mail to the sales head.
II. The ECA Model:
A trigger is based on the model named ECA, i.e. Event-Condition-Action.
1. Event:
a) Rules are triggered by events.
b) These events can be database INSERT, UPDATE or DELETE statements.
c) Changing an employee’s manager can be an event on which a trigger is fired.
2. Condition:
a) The condition that determines whether the rule action should be executed.
b) Once the triggering event has occurred, an optional condition may be evaluated.
c) If no condition is specified, the action will be executed once the event occurs.
d) If a condition is specified, it is first evaluated, and only if it evaluates to true will the rule action be executed.
3. Action:
a) The action is usually a sequence of SQL statements.
b) It could also be a database transaction or an external program that will be automatically executed.
c) For example, executing a stored procedure when a particular event occurs.
III. Example:
Here we are using the syntax of the SQL Server database system.
Consider the following table Tbl_Employee with columns Emp_ID, Emp_Name and Emp_Sal.
Emp_ID | Emp_Name | Emp_Sal |
---|---|---|
1 | ABC | 1000.00 |
2 | DEF | 1200.00 |
3 | GHI | 1100.00 |
4 | JKL | 1300.00 |
5 | MNO | 1400.00 |
Also consider the audit table Tbl_EmpAudit with the columns below, initially empty:

Emp_ID | Emp_Name | Emp_Sal | Audit_Action | Audit_Timestamp |
---|---|---|---|---|
A trigger needs to be fired when a new row is inserted into Tbl_Employee. The trigger makes an entry into Tbl_EmpAudit. Following is the trigger which will be fired when a new row is inserted into Tbl_Employee.
CREATE TRIGGER Trg_AfterInsert ON [Tbl_Employee]
FOR INSERT
AS
DECLARE @empid INT;
DECLARE @empname VARCHAR(100);
DECLARE @empsal DECIMAL(10,2);
DECLARE @audit_action VARCHAR(100);
SELECT @empid=i.Emp_ID FROM inserted i;
SELECT @empname=i.Emp_Name FROM inserted i;
SELECT @empsal=i.Emp_Sal FROM inserted i;
SET @audit_action='Inserted Record -- After Insert Trigger.';
INSERT INTO Tbl_EmpAudit
(
Emp_ID,
Emp_Name,
Emp_Sal,
Audit_Action,
Audit_Timestamp
)
VALUES
(
@empid,
@empname,
@empsal,
@audit_action,
getdate()
);
PRINT 'AFTER INSERT trigger fired.'
To invoke the trigger, a row needs to be inserted into Tbl_Employee. Consider the following insert statement.
INSERT INTO Tbl_Employee VALUES(6,'PQR',1500);
The trigger is fired, and an entry for the new employee is made in the table Tbl_EmpAudit.
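The same audit trigger can be sketched in SQLite, whose trigger syntax differs from the T-SQL above (it uses NEW.column instead of the `inserted` pseudo-table):

```python
import sqlite3

# SQLite version of the audit trigger: copies each newly inserted
# employee row into Tbl_EmpAudit, stamped with an action and a timestamp.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Tbl_Employee (Emp_ID INTEGER, Emp_Name TEXT, Emp_Sal REAL);
CREATE TABLE Tbl_EmpAudit (Emp_ID INTEGER, Emp_Name TEXT, Emp_Sal REAL,
                           Audit_Action TEXT, Audit_Timestamp TEXT);
CREATE TRIGGER Trg_AfterInsert AFTER INSERT ON Tbl_Employee
BEGIN
    INSERT INTO Tbl_EmpAudit
    VALUES (NEW.Emp_ID, NEW.Emp_Name, NEW.Emp_Sal,
            'Inserted Record -- After Insert Trigger.', datetime('now'));
END;
""")
con.execute("INSERT INTO Tbl_Employee VALUES (6, 'PQR', 1500)")
audit = con.execute(
    "SELECT Emp_ID, Emp_Name, Audit_Action FROM Tbl_EmpAudit").fetchall()
print(audit)   # [(6, 'PQR', 'Inserted Record -- After Insert Trigger.')]
```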
ANS 21).
Stored Procedure for Select, Insert, Update and Delete
The following stored procedure will be used to perform Select, Insert, Update and Delete operations on the Customers table of a SQL Server database.
This stored procedure accepts a first parameter named @Action, which tells the stored procedure which operation, i.e. Select, Insert, Update or Delete, needs to be performed.
The stored procedure performs multiple operations on the same table, and hence the other parameters are specified with default NULL values in order to make the stored procedure work without specifying all parameter values.
UPDATE Operation
When performing the UPDATE operation, the value of the @Action parameter is passed as UPDATE and the values of the @CustomerId, @Name and @Country parameters are supplied.
CREATE PROCEDURE [dbo].[Customers_CRUD]
      @Action VARCHAR(10)
     ,@CustomerId INT = NULL
     ,@Name VARCHAR(100) = NULL
     ,@Country VARCHAR(100) = NULL
AS
--UPDATE
IF @Action = 'UPDATE'
BEGIN
    UPDATE Customers
    SET Name = @Name, Country = @Country
    WHERE CustomerId = @CustomerId
END
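SQLite has no stored procedures, so the dispatch pattern above can only be sketched as a Python function (the table and sample rows are illustrative): one action parameter selects the operation, and the remaining parameters default to None, playing the role of the SQL NULL defaults.

```python
import sqlite3

# Python sketch of the Customers_CRUD dispatch: an action argument picks
# the operation; unused parameters default to None (the SQL NULLs).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customers (CustomerId INTEGER, Name TEXT, Country TEXT)")
con.execute("INSERT INTO Customers VALUES (1, 'Ann', 'UK')")

def customers_crud(action, customer_id=None, name=None, country=None):
    if action == "SELECT":
        return con.execute("SELECT * FROM Customers").fetchall()
    if action == "UPDATE":
        con.execute("UPDATE Customers SET Name = ?, Country = ? "
                    "WHERE CustomerId = ?", (name, country, customer_id))
    # INSERT and DELETE branches would follow the same pattern

customers_crud("UPDATE", customer_id=1, name="Ann", country="India")
print(customers_crud("SELECT"))   # [(1, 'Ann', 'India')]
```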
ANS 22).
Cursors
In SQL procedures, a cursor makes it possible to define a result set (a set of data rows) and perform complex logic on a row-by-row basis. Using the same mechanics, an SQL procedure can also define a result set and return it directly to the caller of the SQL procedure or to a client application.
A cursor can be viewed as a pointer to one row in a set of rows. The cursor can only reference one row at a time, but can move to other rows of the result set as needed.
There are mainly two types of cursors:
1) Implicit cursor
2) Explicit cursor
Implicit cursor: Oracle implicitly creates a cursor area for DML operations. The programmer has no control over implicit cursors. The only useful attribute on the implicit cursor is SQL%ROWCOUNT, which gives the number of rows affected by the most recent DML operation. The only implicit cursor is SQL.
Explicit cursor: Explicit cursors are created by the programmer, and the programmer has control over them. The programmer can
1) Open
2) Close
3) Fetch
and perform manipulations on the values.
Explicit cursors are classified into
1) Normal cursors
2) Parameterized cursors
3) Cursor FOR loops
4) REF cursors
REF cursors: Normally, when we create a normal cursor, we cannot change the SELECT query associated with that cursor (the query which is given at the time of definition). But using REF cursors, we can change the cursor statement as well. REF cursors are useful when we are sending data from one environment to another.
Cursor attributes:
a) %ISOPEN: evaluates to true if the cursor is open.
b) %NOTFOUND: evaluates to true if the most recent fetch does not return a row.
c) %FOUND: evaluates to true if the most recent fetch returns a row.
d) %ROWCOUNT: evaluates to the total number of rows returned so far.
To use cursors in SQL procedures, you need to do the following:
- Declare a cursor that defines a result set.
- Open the cursor to establish the result set.
- Fetch the data from the cursor into local variables as needed, one row at a time.
- Close the cursor when done.
To work with cursors you must use the following SQL statements:
- DECLARE CURSOR
- OPEN
- FETCH
- CLOSE
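As a sketch of this declare/open/fetch/close lifecycle, here is a minimal example using Python's sqlite3 module (an analogue, not Oracle PL/SQL; the table and data are invented for illustration):

```python
import sqlite3

# In-memory database with a small illustrative table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_no INTEGER, first_name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Sally"), (2, "Harry"), (3, "Sandra")])

# DECLARE + OPEN: creating the cursor and executing the query
# establishes the result set.
cur = conn.cursor()
cur.execute("SELECT first_name FROM customers ORDER BY customer_no")

# FETCH: pull rows one at a time until no row is returned
# (the analogue of testing %NOTFOUND after each fetch).
names = []
while True:
    row = cur.fetchone()
    if row is None:      # no more rows in the result set
        break
    names.append(row[0])

# CLOSE: release the cursor when done.
cur.close()
conn.close()
print(names)  # ['Sally', 'Harry', 'Sandra']
```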
ANS 23).
1. Introduction
i. In a typical file processing system, every subsystem of the information system has its own set of files.
ii. The typical file-oriented system is supported by a conventional operating system. Permanent records are stored in various files, and a number of different application programs are written to extract records from, and add records to, the appropriate files.
iii. Disadvantages of file systems include:
- Data redundancy and inconsistency
- Difficulty in accessing data
- Concurrent access anomalies
- Security problems
- Integrity problems
- Data isolation
i. Redundancy can be reduced:
A major difficulty was that many applications used their own special files of data, so some data items were common to several applications. For example, the address and phone number of a particular customer may appear both in a file of personal information and in a savings-account file. This redundancy leads to higher storage and access costs. Database systems can eliminate data redundancy, since all applications share a common pool of data; essential information such as a customer's name appears just once in the database.
ii. Inconsistency can be avoided:
Since the same information can be present in multiple files in a file system, the data must be updated in all of those files whenever it changes. For example, a changed customer address may be reflected in the personal information file but not in the savings account file. With a centralized database, most of this can be avoided.
iii. Reduced programming effort:
A new application program often required an entirely new set of file definitions. Even though an existing file may contain some of the data needed, the application often required a number of other data items. As a result, the programmer had to recode the definitions of needed data items from the existing file as well as definitions of all new data items. Thus, in file-oriented systems, there was a heavy interdependence between programs and data.
Databases provide a separation between programs and data, so that programs can be somewhat independent of the details of data definition. By providing access to a pool of shared data and by supporting powerful data manipulation languages, database systems eliminate a large amount of initial and maintenance programming.
iv. Security can be enforced:
Not every user should be able to access all the data. For example, in a banking system, payroll personnel need only the information about various bank employees; they do not need access to customer accounts. Since application programs are added to file systems in an ad-hoc manner, it is difficult to enforce security constraints there.
With central control over the database, the DBA (Database Administrator) can define the access paths for the data stored in the database and can define authorization checks whenever access to sensitive data is attempted.
v. Integrity can be maintained:
The data values stored in the database must satisfy certain consistency constraints. For example, the balance of a bank account may never fall below a particular amount. In a file system, such constraints are enforced by adding appropriate code to the various application programs; when new constraints are added, it is difficult to change all the programs to enforce them.
Centralized control of the data permits the administrator to define integrity constraints on the data in the database.
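The bank-balance example can be expressed declaratively as a CHECK constraint. A minimal sketch with Python's sqlite3 module (table and limit invented for illustration) shows the DBMS itself rejecting a row that violates the constraint:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The DBMS enforces the rule centrally, instead of every
# application program re-implementing the check.
conn.execute("""
    CREATE TABLE accounts (
        acc_no  INTEGER PRIMARY KEY,
        balance INTEGER CHECK (balance >= 0)  -- may never fall below zero
    )
""")
conn.execute("INSERT INTO accounts VALUES (1, 500)")      # accepted

try:
    conn.execute("INSERT INTO accounts VALUES (2, -50)")  # violates the constraint
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

print(rejected)  # True: the database itself rejected the bad row
```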
ANS 24).
There are four different types of
database system users, differentiated by the way that they expect to
interact with the system. A primary goal of a database system is
to provide an environment for retrieving information from and storing
new information into the database.
- Application programmers: They are computer professionals who interact with the system through DML calls, which are embedded in a program written in a host language (for example, COBOL or C). Since the DML syntax differs from the host-language syntax, DML calls are usually prefaced by a special character so that the appropriate code can be generated. A special preprocessor, called the DML precompiler, converts the DML statements into normal procedure calls in the host language. There are also special programming languages that combine the control structures of Pascal-like languages with constructs for manipulating database objects.
- Sophisticated users: Sophisticated users interact with the system without writing programs. Instead, they form their requests in a database query language. Each such query is submitted to a query processor, whose function is to break the DML statement down into instructions that the storage manager understands.
- Specialised users: Specialised users are sophisticated users who write specialised database applications that do not fit into the traditional data-processing framework, such as computer-aided design systems, knowledge bases and expert systems.
- Naive users: Naive users are unsophisticated users who interact with the system by invoking one of the permanent application programs that have been written previously.
ANS 25).
A primary key is a column (or columns) in a table that uniquely identifies the rows in that table.
CUSTOMERS
CustomerNo | FirstName | LastName |
1 | Sally | Thompson |
2 | Sally | Henderson |
3 | Harry | Henderson |
4 | Sandra | Wellington |
The values placed in primary key columns must be unique for each row: no duplicates can be tolerated. In addition, nulls are not allowed in primary key columns.
So, having told you that it is possible to use one or more columns as a primary key, how do you decide which columns (and how many) to choose?
Well there are times when it is advisable or essential to use multiple columns. However, if you cannot see an immediate reason to use multiple columns, then use one. This isn't an absolute rule, it is simply advice. However, primary keys made up of single columns are generally easier to maintain and faster in operation. This means that if you query the database, you will usually get the answer back faster if the tables have single column primary keys.
Next question — which column should you pick? The easiest way to choose a column as a primary key (and a method that is reasonably commonly employed) is to get the database itself to automatically allocate a unique number to each row.
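Both ideas — the uniqueness rule and letting the database allocate the number — can be sketched with Python's sqlite3 module (schema invented for illustration; in SQLite, an INTEGER PRIMARY KEY column is auto-numbered when no value is supplied):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# CustomerNo is declared as the primary key.
conn.execute("""
    CREATE TABLE customers (
        customer_no INTEGER PRIMARY KEY,
        first_name  TEXT,
        last_name   TEXT
    )
""")
conn.execute("INSERT INTO customers VALUES (1, 'Sally', 'Thompson')")

# A second row with the same primary key value is rejected.
try:
    conn.execute("INSERT INTO customers VALUES (1, 'Sally', 'Henderson')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

# Omitting the key lets the database allocate the next unique number.
cur = conn.execute(
    "INSERT INTO customers (first_name, last_name) VALUES ('Harry', 'Henderson')")
new_id = cur.lastrowid
print(duplicate_rejected, new_id)  # True 2
```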
In a table of employees, clearly any column like FirstName is a poor choice since you cannot control employee's first names. Often there is only one choice for the primary key, as in the case above. However, if there is more than one, these can be described as 'candidate keys' — the name reflects that they are candidates for the responsible job of primary key.
Now, given the definition above, that a primary key can be made up of more than one column and must uniquely identify the rows, we could choose, for example, EmployeeNo AND FirstName as the primary key. That fulfils the requirement but it is clearly foolish because we are adding complexity for no gain. It is also a great example of a super key. Super, in this case, is not a synonym for 'great' but a contraction of supernumerary. Super keys are to be avoided.
Foreign keys are columns that point to primary key columns. So, for example, OrderNo is the primary key of the table ORDERS below and CustomerNo is a foreign key that points to the primary key in the CUSTOMERS table.
ORDERS
OrderNo | EmployeeNo | CustomerNo | Supplier | Price | Item |
1 | 1 | 42 | Harrison | $235 | Desk |
2 | 4 | 1 | Ford | $234 | Chair |
3 | 1 | 68 | Harrison | $415 | Table |
4 | 2 | 112 | Ford | $350 | Lamp |
5 | 3 | 42 | Ford | $234 | Chair |
6 | 2 | 112 | Ford | $350 | Lamp |
7 | 2 | 42 | Harrison | $235 | Desk |
EMPLOYEES
SSecurityNo | EmployeeNo | FirstName | LastName | DateOfBirth | DateEmployed |
AF-23432334 | 1 | Manny | Tomanny | 12 Apr 1966 | 01 May 1999 |
DQ-65444444 | 2 | Rosanne | Kolumns | 21 Mar 1977 | 01 Jan 2000 |
GF-54354543 | 3 | Cas | Kade | 01 May 1977 | 01 Apr 2002 |
JK-34333432 | 4 | Norma | Lyzation | 03 Apr 1966 | 01 Apr 2002 |
VB-48565444 | 5 | Juan | Tomani | 12 Apr 1966 | 01 Apr 2002 |
FG-23566553 | 6 | Del | Eats | 01 May 1967 | 01 May 2004 |
A candidate key is a column (or set of columns) that meets all of the requirements of a primary key. In other words, it has the potential to be a primary key, and in principle a foreign key can point to any candidate key, not just the primary key.
Having told you this, I can also tell you that I cannot remember the last time I saw this done in a real, live, production database. In practice, foreign keys are almost invariably pointed at primary keys.
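A foreign key pointing at a primary key can be sketched with Python's sqlite3 module (names invented for illustration; note that SQLite only enforces foreign keys after an explicit PRAGMA):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires this explicitly

conn.execute("CREATE TABLE customers (customer_no INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE orders (
        order_no    INTEGER PRIMARY KEY,
        customer_no INTEGER REFERENCES customers(customer_no),  -- the foreign key
        item        TEXT
    )
""")
conn.execute("INSERT INTO customers VALUES (42, 'Sally Thompson')")
conn.execute("INSERT INTO orders VALUES (1, 42, 'Desk')")  # points at an existing customer

# An order pointing at a customer that does not exist is rejected.
try:
    conn.execute("INSERT INTO orders VALUES (2, 999, 'Chair')")
    dangling_rejected = False
except sqlite3.IntegrityError:
    dangling_rejected = True
print(dangling_rejected)  # True
```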
Most of this answer was shamelessly plagiarized from Inside Relational Databases, a book by Mark Whitehorn and Bill Marklyn, published by Springer Verlag. However, since Mark is one of the authors of the book, the chances of him suing himself are low.
THE QUESTIONS NO. 8, 12, 13, 14, 15, 26 SHOULD BE SOLVED BY YOURSELF.
THANK YOU
- AUTHOR: SHUBHAM ANAND PANDEY
SOLUTION OF DBMS PRELIMS 2017 IT SEM-3 PIIT PANVEL
TECHNOPRINCEWEB.BLOGSPOT.COM