6. RAID Level 1
RAID 1, also called disk mirroring, maintains a complete copy of the data on a second, separate physical drive. RAID 1 provides fault tolerance and often improves read performance, but its need to write the data twice can degrade write performance in single-disk-controller configurations. RAID 1 also can be expensive because its data redundancy requires twice the physical disk storage. Because of RAID 1's fault tolerance, organizations often use it to store SQL Server transaction log files.
5. RAID Level 2
RAID 2 spreads both data and parity information across multiple physical disks. RAID 2's data-striping technique doesn't efficiently use disk space and provides little disk space savings compared with full mirroring. RAID 2 implementations are uncommon.
4. RAID Level 3
Like RAID 2, RAID 3 implements data-striping; unlike RAID 2, RAID 3 maintains all parity information on one disk for better performance. RAID 3's disk usage, however, is still inefficient, making implementations rare.
3. RAID Level 4
RAID 4 uses the same data-striping methodology as RAID 2 and RAID 3 but provides better performance by using larger disk blocks. RAID 4 also provides the same single-disk parity storage as RAID 3. Like its RAID 2 and RAID 3 cousins, RAID 4 is inefficient for transaction-based processing.
2. RAID Level 5
RAID 5 uses a data-striping method similar to RAID 0's but adds parity to the data-striping, which gives you better fault tolerance. RAID 5 stores the parity information on a physical disk separate from the one that stores data blocks. Because RAID 5 uses multiple disk actuators, it provides good read and write performance but a lower level of redundancy than RAID 1's full mirroring. Organizations often use RAID 5 to store SQL Server data files.
1. RAID Level 10 (best RAID level)
RAID 10 (a combination of RAID 1 and RAID 0) uses a striped array of disks, as does RAID 0, but fully mirrors the striped data set, similar to RAID 1. RAID 10's use of multiple disks gives it the best performance of all of the RAID levels, but this level's mirrored implementation of the striped data set makes it the most expensive storage option.
There are two types of instances of SQL Server: the default instance and named instances.
NOTE: Use sp_configure to display or change server-level settings. To change database-level settings, use ALTER DATABASE. To change settings that affect only the current user session, use the SET statement.
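For instance, each scope can be adjusted as follows (the database name and option values below are purely illustrative):

```sql
-- Server level: sp_configure followed by RECONFIGURE
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- Database level: ALTER DATABASE (SalesDB is a hypothetical database)
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;

-- Session level: SET affects only the current session
SET NOCOUNT ON;
```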
Microsoft SQL Server Compact supports two types of transactions: explicit and autocommit.
Explicit Transactions
An explicit transaction is one in which you explicitly define both the start and end of the transaction. This can be specified by using either SQL statements or database API functions.
The following SQL statements can be used to define explicit transactions:
-BEGIN TRANSACTION: Marks the starting point of an explicit transaction for a connection.
-COMMIT TRANSACTION: Ends a transaction successfully if no errors are encountered. All data modified by the transaction becomes a permanent part of the database. Resources held by the transaction are freed.
-ROLLBACK TRANSACTION: Erases a transaction in which errors are encountered. All data modified by the transaction is returned to the state it was in at the start of the transaction. Resources held by the transaction are freed.
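A minimal sketch of an explicit transaction using these statements (the Accounts table and its columns are hypothetical):

```sql
BEGIN TRANSACTION;
BEGIN TRY
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;   -- both updates become permanent together
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- on any error, both updates are undone
END CATCH;
```

Note that TRY/CATCH is a full SQL Server construct; SQL Server Compact applications typically make the commit/rollback decision through the database API instead.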
Autocommit Transactions
Autocommit mode is the default transaction management mode of SQL Server Compact. Every SQL statement is committed or rolled back when it finishes. A SQL Server Compact connection operates in autocommit mode whenever this default mode has not been overridden by explicit transactions. Autocommit mode is also the default mode for ADO.NET and OLE DB.
A SQL Server Compact connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction. When the explicit transaction is committed or rolled back, SQL Server Compact returns to autocommit mode.
26- What is the default port SQL Server runs on?
1433 for TCP/IP connections; the SQL Server Browser service listens on UDP port 1434.
27- Different editions of SQL Server.
7. SQL Server CE
As you'd expect, SQL Server 2000 Windows CE Edition uses a different code base than other SQL Server editions do. SQL Server CE has about a 1MB footprint and doesn't include Analysis Services or Data Transformation Services (DTS). SQL Server CE supports data exchange with other SQL Server editions through anonymous merge replication.
6. Enterprise Evaluation Edition
The Enterprise Evaluation Edition is essentially the same as the Enterprise Edition; however, the Evaluation Edition has a built-in 120-day time limit. You can upgrade from the Evaluation Edition to either the Standard or Enterprise Edition.
5. Developer Edition
Supplied as a part of Visual Studio Enterprise Edition, the Developer Edition supports the same feature set as SQL Server Enterprise Edition does. However, licensing for the Developer Edition allows only development work; you can't use this edition as a production database.
4. Personal Edition
The Personal Edition comes with the SQL Server Standard Edition or Enterprise Edition—you can't purchase it separately. The Personal Edition runs on Windows 2000, Windows NT, or Windows 9x and supports one to two processors on Win2K Professional. However, this edition supports only five concurrent connections. Each Personal Edition requires a Client Access License (CAL).
3. MSDE
Microsoft SQL Server Desktop Engine (MSDE) is a standalone runtime engine for database applications and can be freely distributed. MSDE's core database engine is essentially the same as that of the Personal Edition. However, MSDE provides no management tools and doesn't require a CAL.
2. Standard Edition
The Standard Edition is the most popular version of SQL Server 2000. This edition requires a minimum of Win2K Server or NT Server 4.0 and supports one to four processors. The Standard Edition includes Analysis Services, DTS, XML integration, and English Query, as well as the Personal Edition and MSDE. Each client that connects to the Standard Edition needs a CAL.
1. Enterprise Edition
The Enterprise Edition includes all the Standard Edition features plus built-in Storage Area Network (SAN) support, federated databases, log shipping, and partitioned OLAP cubes. The Enterprise Edition runs on Win2K Server or NT Server 4.0 or later; when running on Win2K Advanced Server or NT Server, Enterprise Edition (NTS/E), this edition supports more than four processors and four-node failover clustering. Each client must have a CAL.
28- Different versions of SQL Server.
SQL Server 2005
SQL Server 2005 (formerly codenamed "Yukon") was released in October 2005. It included native support for managing XML data, in addition to relational data. For this purpose, it defined an xml data type that could be used either as a data type in database columns or as literals in queries. XML columns can be associated with XSD schemas; XML data being stored is verified against the schema. XML is converted to an internal binary data type before being stored in the database.
SQL Server 2008
SQL Server 2008 (formerly codenamed "Katmai") was released on August 6, 2008, and aims to make data management self-tuning, self-organizing, and self-maintaining with the development of SQL Server Always On technologies, to provide near-zero downtime. SQL Server 2008 also includes support for structured and semi-structured data, including digital media formats for pictures, audio, video and other multimedia data. In current versions, such multimedia data can be stored as BLOBs (binary large objects), but they are generic bitstreams. Intrinsic awareness of multimedia data will allow specialized functions to be performed on them.
SQL Server 2012
At the 2011 Professional Association for SQL Server (PASS) summit on October 11, Microsoft announced that the next major version of SQL Server (codenamed "Denali") would be SQL Server 2012. It was released to manufacturing on March 6, 2012. SQL Server 2012 Service Pack 1 was released to manufacturing on November 9, 2012, and Service Pack 2 was released to manufacturing on June 10, 2014.
It was announced to be the last version to natively support OLE DB and instead to prefer ODBC for native connectivity.
SQL Server 2012's new features and enhancements include AlwaysOn SQL Server Failover Cluster Instances and Availability Groups, which provide a set of options to improve database availability; Contained Databases, which simplify moving databases between instances; new and modified Dynamic Management Views and Functions; programmability enhancements including new spatial features, metadata discovery, sequence objects, and the THROW statement; performance enhancements such as ColumnStore Indexes as well as improvements to online and partition-level operations; and security enhancements including provisioning during setup, new permissions, improved role management, and default schema assignment for groups.
SQL Server 2014
SQL Server 2014 was released to manufacturing on March 18, 2014, and released to the general public on April 1, 2014. Until November 2013 there were two CTP revisions, CTP1 and CTP2. SQL Server 2014 provides a new in-memory capability for tables that can fit entirely in memory (also known as Hekaton). Whilst small tables may be entirely resident in memory in all versions of SQL Server, they also may reside on disk, so work is involved in reserving RAM, writing evicted pages to disk, loading new pages from disk, locking the pages in RAM while they are being operated on, and many other tasks. By treating a table as guaranteed to be entirely resident in memory, much of the 'plumbing' of disk-based databases can be avoided.
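As a sketch, a memory-optimized (Hekaton) table is declared with the MEMORY_OPTIMIZED option (table and column names here are hypothetical):

```sql
CREATE TABLE dbo.HotSessions (
    SessionID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   NVARCHAR(200)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The database must first contain a MEMORY_OPTIMIZED_DATA filegroup before such tables can be created.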
For disk-based SQL Server applications, it also provides the SSD Buffer Pool Extension, which can improve performance by acting as a cache between DRAM and spinning media.
SQL Server 2014 also enhances the AlwaysOn (HADR) solution by increasing the readable secondaries count and sustaining read operations upon secondary-primary disconnections, and it provides new hybrid disaster recovery and backup solutions with Windows Azure, enabling customers to use existing skills with the on-premises version of SQL Server to take advantage of Microsoft's global datacenters.
29- Difference between LOGIN and USER ?
A login is created at the server level and grants access to the SQL Server instance, whereas a user is created at the database level, is mapped to a login, and grants access to a specific database.
what is sql server?
SQL Server is a relational database management system (RDBMS) from Microsoft that's designed for the enterprise environment...
what is RDBMS?
RDBMS stands for Relational Database Management System. RDBMS data is structured in database tables, fields, and records. Each RDBMS table consists of database table rows, and each database table row consists of one or more database table fields.
An RDBMS stores the data in a collection of tables, which may be related by common fields (database table columns). It also provides relational operators to manipulate the data stored in the tables. Most RDBMSs use SQL as the database query language.
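A minimal sketch of two related tables and a relational operator (all names are illustrative):

```sql
CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY,
    Name       VARCHAR(100)
);

CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT FOREIGN KEY REFERENCES Customers(CustomerID),
    OrderDate  DATE
);

-- A join relates the two tables through their common field
SELECT c.Name, o.OrderID
FROM Customers c
JOIN Orders o ON o.CustomerID = c.CustomerID;
```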
1-What is Normalization?
- It is the process of organizing data into related tables.
- To normalize a database, we divide it into tables and establish relationships between the tables.
- It reduces redundancy and update anomalies, improving data integrity.
Steps of normalization:
First Normal form
Entities of the table must have unique identifier or entity key.
Second Normal Form
All the attributes of the table must depend on the entity key for that entity.
Third Normal Form
All attributes that are not part of the key must not depend on any other
non-key attributes.
Problem without Normalization:
Without normalization, it becomes difficult to handle and update the database without facing data loss. Insertion, update, and deletion anomalies are very frequent if the database is not normalized.
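The steps above can be sketched with a small example. A flat table that repeats customer details on every order row violates 2NF/3NF; splitting it fixes that (all names here are hypothetical):

```sql
-- Before: OrdersFlat(OrderID, CustomerName, CustomerPhone, Product)
-- CustomerPhone depends on the customer, not on the order key.

-- After normalization:
CREATE TABLE Customers (
    CustomerID    INT PRIMARY KEY,   -- entity key (1NF)
    CustomerName  VARCHAR(100),
    CustomerPhone VARCHAR(20)        -- depends only on the entity key (2NF/3NF)
);

CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT FOREIGN KEY REFERENCES Customers(CustomerID),
    Product    VARCHAR(100)
);
```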
2- What is De-normalization?
The process of adding redundant data to get rid of complex joins, in order to optimize database performance. This is done to speed up database access by moving from a higher to a lower form of normalization.
3- Normalization vs. De-normalization.
Normalization is the process of reducing data redundancy while maintaining data integrity. This is performed by creating relationships among tables through primary and foreign keys. The normalization procedure includes 1NF, 2NF, 3NF, and BCNF, after which the data is normalized. Denormalization, on the contrary, is the process of adding redundant data to speed up complex queries involving multiple table JOINs. One might simply go to a lower form of normalization to achieve denormalization and better performance: data from one table is included in another in order to eliminate the second table, which reduces the number of JOINs in a query and thus improves performance.
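As a sketch, denormalization here would copy a column across the relationship to remove the join (table and column names are hypothetical):

```sql
-- Normalized: every read pays for a join
SELECT o.OrderID, c.CustomerName
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID;

-- Denormalized: CustomerName duplicated into Orders
ALTER TABLE Orders ADD CustomerName VARCHAR(100);

UPDATE o
SET o.CustomerName = c.CustomerName
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID;

-- The query now needs no join (at the cost of redundant data to keep in sync)
SELECT OrderID, CustomerName FROM Orders;
```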
5- What is ACID property?
ACID (an acronym for Atomicity, Consistency, Isolation, Durability) is a concept that database professionals generally look for when evaluating databases and application architectures. For a reliable database, all four attributes should be achieved.
Atomicity is an all-or-none proposition.
Consistency guarantees that a transaction never leaves your database in a half-finished state.
Isolation keeps transactions separated from each other until they're finished.
Durability guarantees that the database will keep track of pending changes in such a way that the server can recover from an abnormal termination.
6-Details of ACID property with example.
In computer
science, ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties that guarantee that database transactions are processed reliably.
For example, a transfer of funds
from one bank account to another, even involving multiple changes such as
debiting one account and crediting another, is a single transaction.
Atomicity
Atomicity requires that each transaction be
"all or nothing": if one part of the transaction fails, the entire
transaction fails, and the database state is left unchanged. An atomic system
must guarantee atomicity in each and every situation, including power failures,
errors, and crashes. To the outside world, a committed transaction appears (by
its effects on the database) to be indivisible ("atomic"), and an
aborted transaction does not happen.
Consistency
The consistency property ensures that any
transaction will bring the database from one valid state to another. Any data
written to the database must be valid according to all defined rules, including
constraints, cascades, triggers, and any combination thereof. This does not guarantee
correctness of the transaction in all ways the application programmer might
have wanted (that is the responsibility of application-level code) but merely
that any programming errors cannot result in the violation of any defined
rules.
Isolation
The isolation property ensures that the
concurrent execution of transactions results in a system state that would be
obtained if transactions were executed serially, i.e., one after the other.
Providing isolation is the main goal of concurrency control. Depending on concurrency control
method (i.e. if it uses strict - as opposed to relaxed - serializability), the effects of
an incomplete transaction might not even be visible to another transaction.
Durability
Durability means that once a transaction has
been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational
database, for instance, once a group of SQL statements execute, the results
need to be stored permanently (even if the database crashes immediately
thereafter). To defend against power loss, transactions (or their effects) must
be recorded in a non-volatile memory.
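The funds-transfer example can be sketched as a single transaction (Accounts is a hypothetical table; a CHECK (Balance >= 0) constraint is assumed as the consistency rule):

```sql
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE Accounts SET Balance = Balance - 500 WHERE AccountID = 1;  -- debit
    UPDATE Accounts SET Balance = Balance + 500 WHERE AccountID = 2;  -- credit
    COMMIT TRANSACTION;   -- durability: the committed transfer survives a crash
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- atomicity: a failed debit or credit undoes everything
END CATCH;
```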
7- What is Stored Procedure?
In a database management system
(DBMS), a stored procedure is a set of Structured Query Language (SQL)
statements with an assigned name that's stored in the database in compiled form
so that it can be shared by a number of programs. The use of stored procedures
can be helpful in controlling access to data (end-users may enter or change
data but do not write procedures), preserving data integrity (information is
entered in a consistent manner), and improving productivity (statements in a
stored procedure only need to be written one time).
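A minimal stored procedure sketch (all names are illustrative):

```sql
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate
    FROM Orders
    WHERE CustomerID = @CustomerID;
END;
GO

-- Any number of programs can share it through one call:
EXEC dbo.GetOrdersByCustomer @CustomerID = 42;
```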
8-What is Function?
A function is a subprogram that
returns a single value. You must declare and define a function before invoking
it. You can either declare and define it at the same time, or you can declare
it first and then define it later in the same block.
Microsoft® SQL Server™ 2000 supports
two types of functions:
Built-in functions
Operate as defined in the Transact-SQL
Reference and cannot be modified. The functions can be referenced only in
Transact-SQL statements using the syntax defined in the Transact-SQL Reference.
For more information about these built-in functions, see Using Functions.
User-defined functions
Allow you to define your own Transact-SQL functions using the CREATE FUNCTION statement. For more information, see User-defined Functions.
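For example, a built-in function and a hypothetical scalar user-defined function side by side:

```sql
-- User-defined: created once with CREATE FUNCTION
CREATE FUNCTION dbo.AddTax (@Amount MONEY, @TaxRate DECIMAL(4,2))
RETURNS MONEY
AS
BEGIN
    RETURN @Amount * (1 + @TaxRate);
END;
GO

SELECT GETDATE()             AS Today,   -- built-in function
       dbo.AddTax(100, 0.08) AS Total;   -- user-defined function
```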
9-What is Trigger?
A database trigger is procedural
code that is automatically executed in response to certain events on a
particular table or view in a database. The trigger is mostly used for
maintaining the integrity of the information on the database. For example, when
a new record (representing a new worker) is added to the employees table, new
records should also be created in the tables of the taxes, vacations and
salaries.
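The new-worker example above might be sketched like this (Employees/Salaries and their columns are hypothetical):

```sql
CREATE TRIGGER trg_Employees_Insert
ON Employees
AFTER INSERT
AS
BEGIN
    -- "inserted" is the pseudo-table holding the newly added employee rows
    INSERT INTO Salaries (EmployeeID, BaseSalary)
    SELECT EmployeeID, 0
    FROM inserted;
END;
```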
10- What is view?
In SQL, a view is a virtual table based on the result-set of an SQL statement. A view contains rows and columns, just like a real table. The fields in a view are fields from one or more real tables in the database.
You can add SQL functions, WHERE, and JOIN statements to a view and present the data as if the data were coming from one single table.
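A sketch of such a view (table and column names are hypothetical):

```sql
CREATE VIEW dbo.CustomerOrders AS
SELECT c.CustomerName, o.OrderID, o.OrderDate
FROM Customers c
JOIN Orders o ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '2014-01-01';
GO

-- Queried exactly like a single real table:
SELECT * FROM dbo.CustomerOrders;
```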
11- Difference between stored procedure and function.
Stored procedures are pre-compiled objects: they are compiled the first time they are executed and the compiled form is saved, so the compiled code runs whenever they are called. A function, by contrast, is compiled and executed every time it is called.
Basic Differences
A function must return a value, but in a stored procedure it is optional (a procedure can return zero or n values).
Functions can have only input parameters, whereas procedures can have input/output parameters.
Functions can be called from a procedure, whereas procedures cannot be called from a function.
Advanced Differences
A procedure allows SELECT as well as DML (INSERT/UPDATE/DELETE) statements in it, whereas a function allows only SELECT statements.
Procedures cannot be utilized in a SELECT statement, whereas a function can be embedded in a SELECT statement.
Stored procedures cannot be used anywhere in the WHERE/HAVING/SELECT sections of SQL statements, whereas functions can be.
The most important advantage of stored procedures over functions is the retention and reuse of the execution plan, whereas a function is compiled every time it is called.
Functions that return tables can be treated as another rowset and can be used in JOINs with other tables.
Inline functions can be thought of as views that take parameters, and can be used in JOINs and other rowset operations.
Exceptions can be handled by a TRY-CATCH block in a procedure, whereas a TRY-CATCH block cannot be used in a function.
We can use transaction management in a procedure, whereas we can't in a function.
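To illustrate the embedding difference, assume a scalar function dbo.AddTax and a procedure dbo.GetOrders exist (both hypothetical):

```sql
-- A function can be embedded in a SELECT (and in WHERE/HAVING):
SELECT OrderID, dbo.AddTax(Amount, 0.08) AS Total
FROM Orders
WHERE dbo.AddTax(Amount, 0.08) > 100;

-- A procedure must be executed on its own:
EXEC dbo.GetOrders @CustomerID = 42;
```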
12- Database isolation level
Transactions specify an isolation level that defines the degree to which one transaction must be isolated from resource or data modifications made by other transactions. Isolation levels are described in terms of which concurrency side-effects, such as dirty reads or phantom reads, are allowed.
Serializable
This is the highest isolation level.
With a lock-based concurrency control DBMS implementation, serializability requires read and write locks (acquired on selected data) to be released at the end of the transaction. Also range-locks must be acquired when a SELECT query uses a ranged WHERE clause, especially to avoid the phantom reads phenomenon (see below).
When using non-lock based concurrency control, no locks are acquired; however, if the system detects a write collision among several concurrent transactions, only one of them is allowed to commit. See snapshot isolation for more details on this topic.
Repeatable reads
In this isolation level, a lock-based concurrency control DBMS implementation keeps read and write locks (acquired on selected data) until the end of the transaction. However, range-locks are not managed, so phantom reads can occur.
Read committed
In this isolation level, a lock-based concurrency control DBMS implementation keeps write locks (acquired on selected data) until the end of the transaction, but read locks are released as soon as the SELECT operation is performed (so the non-repeatable reads phenomenon can occur in this isolation level, as discussed below). As in the previous level, range-locks are not managed.
Putting it in simpler words, read committed is an isolation level that guarantees that any data read is committed at the moment it is read. It simply restricts the reader from seeing any intermediate, uncommitted, 'dirty' read. It makes no promise whatsoever that if the transaction re-issues the read, it will find the same data; data is free to change after it is read.
Read uncommitted
This is the lowest isolation level. In this level, dirty reads are allowed, so one transaction may see not-yet-committed changes made by other transactions.
Since each isolation level is stronger than those below, in that no higher isolation level allows an action forbidden by a lower one, the standard permits a DBMS to run a transaction at an isolation level stronger than that requested (e.g., a "Read committed" transaction may actually be performed at a "Repeatable read" isolation level).
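Setting the level in T-SQL is per session; for example (Accounts is a hypothetical table):

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

SELECT Balance FROM Accounts WHERE AccountID = 1;  -- read lock acquired and held
-- Other transactions cannot change this row until we finish, so...
SELECT Balance FROM Accounts WHERE AccountID = 1;  -- ...the re-read matches

COMMIT TRANSACTION;
```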
13- What is Locking? How do you detect the locks held in the database?
When multiple users or applications access the same data at the same time, locking prevents them from making simultaneous changes to the data. Locks are managed internally by the Microsoft SQL Server Compact Database Engine. The locks are automatically acquired and released based on actions taken by the user. If locking is not used and multiple users update the same data at the same time, the data within the database can become logically incorrect. If that occurs, queries executed against that data could produce unexpected results.
Locking Hints and Examples
ROWLOCK-Use row-level locks when reading or modifying data.
PAGLOCK-Use page-level locks when reading or modifying data.
TABLOCK-Use a table lock when reading or modifying data.
DBLOCK-Use a database lock when reading or modifying data.
UPDLOCK-Reads data without blocking other readers, so that it can be updated later with the assurance that the data has not changed since the last read.
XLOCK-Use exclusive locks instead of shared locks while reading a table, and hold the locks until the end of the statement or transaction.
HOLDLOCK-Use a hold lock to hold a lock until completion of the transaction, instead of releasing the lock as soon as the required table, row, or data page is no longer required.
NOLOCK-This does not lock any object. This is the default for SELECT operations. It does not apply to INSERT, UPDATE, and DELETE statements.
Examples:
SELECT OrderID
FROM Orders WITH (ROWLOCK)
WHERE OrderID BETWEEN 100 AND 2000

UPDATE Products WITH (NOLOCK)
SET ProductCat = 'Machine'
WHERE ProductSubCat = 'Mac'
14-What is Blocking/Blockage? Common Blocking scenarios.
Blocking happens when one connection from an application holds a lock and a second connection requires a conflicting lock type. This forces the second connection to wait, blocked on the first. One connection can block another connection, regardless of whether they emanate from the same application or separate applications on different client computers.
15- What is Deadlock?
A deadlock occurs when two or more tasks permanently block each other by each task having a lock on a resource which the other tasks are trying to lock. For example:
Transaction A acquires a share lock on row 1.
Transaction B acquires a share lock on row 2.
Transaction A now requests an exclusive lock on row 2, and is blocked until transaction B finishes and releases the share lock it has on row 2.
Transaction B now requests an exclusive lock on row 1, and is blocked until transaction A finishes and releases the share lock it has on row 1.
16-Difference between locking, blocking/blockage and deadlock.
The Meaning of Lock:
A lock is taken by the database engine when a connection accesses a piece of data, so that no other connection can change that data at the same time.
The Meaning of Block :
Blocking occurs when two connections need access to the same piece of data concurrently; one connection gets the lock and the other is blocked, because at any moment only one connection can have conflicting access. SQL Server knows that once the blocking process finishes the resource will be available, so the blocked process simply waits (until it times out); it is not killed.
The Meaning of Deadlock :
A deadlock occurs when one connection is blocked, waiting for a second connection to complete its work, while the second connection is in turn waiting for the first to release its lock. Neither can proceed.
Example :
Suppose we have two processes, P1 and P2, trying to get to two resources, R1 and R2.
P1 gets a lock on R1
and
P2 gets a lock on R2.
THEN
P1 tries to get a lock on R2 but can't, because it is locked by P2,
and
P2 tries to get a lock on R1 but can't, because it is locked by P1.
At this point neither process can finish, because each is waiting on a resource the other has locked: they are deadlocked. One of them must be killed (chosen as the deadlock victim) to allow the other to finish.
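The P1/P2 scenario above can be reproduced in T-SQL by running two batches in two separate sessions (the table T and its rows are illustrative, not from the text above):

```sql
-- Session 1:
BEGIN TRANSACTION;
UPDATE T SET Col = 1 WHERE Id = 1;  -- exclusive lock on row 1 (R1)
-- pause here while session 2 runs its first UPDATE
UPDATE T SET Col = 1 WHERE Id = 2;  -- waits for session 2's lock

-- Session 2:
BEGIN TRANSACTION;
UPDATE T SET Col = 2 WHERE Id = 2;  -- exclusive lock on row 2 (R2)
UPDATE T SET Col = 2 WHERE Id = 1;  -- waits for session 1's lock: deadlock
```

SQL Server's lock monitor detects the cycle, chooses one session as the deadlock victim, and rolls it back with error 1205 so the other session can finish.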
17-DBCC Commands.
DBCC (Database Console Commands, originally Database Consistency Checker) are console commands for SQL Server that check and maintain database consistency. They are grouped as: Maintenance (maintenance tasks on a database, filegroup, index, etc.), Informational (gather and display information), Validation (validate a database, table, index, catalog, etc.), and Miscellaneous.
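A few commonly used DBCC commands, as a sketch (the database and table names are illustrative):

```sql
-- Validation: check the logical and physical integrity of a whole database
DBCC CHECKDB ('SalesDB') WITH NO_INFOMSGS;

-- Validation: check a single table
DBCC CHECKTABLE ('dbo.Orders');

-- Informational: show transaction log space usage for all databases
DBCC SQLPERF (LOGSPACE);
```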
18-RAID(redundant array of independent disks) levels.
RAID levels 0, 1, and 5 are typically implemented with SQL Server.
RAID Level 0
You implement RAID 0 by using disk striping, in which you divide data into blocks, or stripes, and spread them across multiple physical disks. Because RAID 0 makes good use of multiple disk actuators, it tends to improve read and write performance. However, this RAID level doesn't provide fault tolerance. Organizations often use RAID 0 to store SQL Server data files.
RAID Level 1
RAID 1, also called disk mirroring, maintains a complete copy of the data on a second, separate physical drive. RAID 1 provides fault tolerance and often improves read performance, but its need to write the data twice can degrade write performance in single-disk-controller configurations. RAID 1 also can be expensive because its data redundancy requires twice the physical disk storage. Because of RAID 1's fault tolerance, organizations often use it to store SQL Server transaction log files.
RAID Level 2
RAID 2 spreads both data and parity information across multiple physical disks. RAID 2's data-striping technique doesn't efficiently use disk space and provides little disk space savings compared with full mirroring. RAID 2 implementations are uncommon.
RAID Level 3
Like RAID 2, RAID 3 implements data-striping; unlike RAID 2, RAID 3 maintains all parity information on one disk for better performance. RAID 3's disk usage, however, is still inefficient, making implementations rare.
RAID Level 4
RAID 4 uses the same data-striping methodology as RAID 2 and RAID 3 but provides better performance by using larger disk blocks. RAID 4 also provides the same single-disk parity storage as RAID 3. Like its RAID 2 and RAID 3 cousins, RAID 4 is inefficient for transaction-based processing.
RAID Level 5
RAID 5 uses a data-striping method similar to RAID 0's but adds parity to the data-striping, which gives you better fault tolerance. RAID 5 stores the parity information on a physical disk separate from the one that stores data blocks. Because RAID 5 uses multiple disk actuators, it provides good read and write performance but a lower level of redundancy than RAID 1's full mirroring. Organizations often use RAID 5 to store SQL Server data files.
RAID Level 10 (best RAID level)
RAID 10 (a combination of RAID 1 and RAID 0) uses a striped array of disks, as does RAID 0, but fully mirrors the striped data set, similar to RAID 1. RAID 10's use of multiple disks gives it the best performance of all of the RAID levels, but this level's mirrored implementation of the striped data set makes it the most expensive storage option.
19-What is instance?
A SQL Server instance is a complete SQL Server installation. You can install many instances on one machine, but you can have only one default instance. A SQL Server instance has its own copy of the server files, databases, and security credentials. There are two types of instances of SQL Server:
Default Instances: The default instance of the SQL Server 2000 database engine operates the same way as the database engines in earlier versions of SQL Server. The default instance is identified solely by the name of the computer on which it is running; it does not have a separate instance name. When applications specify only the computer name in their requests to connect to SQL Server, the SQL Server client components attempt to connect to the default instance of the database engine on that computer. This preserves compatibility with existing SQL Server applications. There can be only one default instance on any computer, and the default instance can be any version of SQL Server.
Named Instances: All instances of the database engine other than the default instance are identified by an instance name specified during installation of the instance. Applications must provide both the computer name and the instance name of any named instance to which they are attempting to connect, in the format computer_name\instance_name. There can be multiple named instances running on a computer, but only the SQL Server 2000 database engine can operate as a named instance; the database engines from earlier versions of SQL Server cannot.
20-What is schema?
In simple terms, a schema can be thought of as a collection of database objects. From SQL Server 2005 onward, objects are referenced through their schema rather than through the user who created them. Every user is bound to a default schema, under which the objects he or she creates are placed.
CREATE SCHEMA DemoSchema AUTHORIZATION Ashirbad
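Building on the statement above, objects are then created in and referenced through the schema, as a sketch (the table and column names are illustrative):

```sql
-- Create a schema owned by the user Ashirbad
CREATE SCHEMA DemoSchema AUTHORIZATION Ashirbad;
GO

-- Create a table inside that schema and reference it by its two-part name
CREATE TABLE DemoSchema.Customers
(
    CustomerID int PRIMARY KEY,
    Name       nvarchar(100)
);

SELECT CustomerID, Name
FROM DemoSchema.Customers;
```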
21-how to see server level configuration?
sp_configure is a tool to display and change SQL Server settings. Want to take a look at your SQL Server settings? Open a query window and just run sp_configure; you'll likely get a partial list of settings. To see them all, you have to enable an option called 'show advanced options'.
sp_configure (Transact-SQL)
SYNTAX: sp_configure [ [ @configname = ] 'option_name' [ , [ @configvalue = ] 'value' ] ]
Arguments
[ @configname= ] 'option_name'
The name of a configuration option. option_name is varchar(35), with a default of NULL. The SQL Server Database Engine recognizes any unique string that is part of the configuration name. If not specified, the complete list of options is returned.
[ @configvalue= ] 'value'
The new configuration setting. value is int, with a default of NULL. The maximum value depends on the individual option. To see the maximum value for each option, see the maximum column of the sys.configurations catalog view.
Return Code Values
0 (success) or 1 (failure)
NOTE: Use sp_configure to display or change server-level settings. To change database-level settings, use ALTER DATABASE. To change settings that affect only the current user session, use the SET statement.
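As a sketch of typical usage (requires the ALTER SETTINGS permission; the memory value is illustrative):

```sql
-- Enable advanced options so the full list of settings is visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Display all server-level settings
EXEC sp_configure;

-- Change a single option, e.g. max server memory (value in MB)
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```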
22-what is transaction and types of transaction?
A transaction is a single unit of work. If a transaction is successful, all of the data modifications made during the transaction are committed and become a permanent part of the database. If a transaction encounters errors and must be canceled or rolled back, then all of the data modifications are erased.
Types of transaction: Microsoft SQL Server Compact supports two types of transactions: explicit and autocommit.
Explicit Transactions
An explicit transaction is one in which you explicitly define both the start and end of the transaction. This can be specified by using either SQL statements or database API functions.
The following SQL statements can be used to define explicit transactions:
-BEGIN TRANSACTION: Marks the starting point of an explicit transaction for a connection.
-COMMIT TRANSACTION: Ends a transaction successfully if no errors are encountered. All data modified by the transaction becomes a permanent part of the database. Resources held by the transaction are freed.
-ROLLBACK TRANSACTION: Erases a transaction in which errors are encountered. All data modified by the transaction is returned to the state it was in at the start of the transaction. Resources held by the transaction are freed.
Autocommit Transactions
Autocommit mode is the default transaction management mode of SQL Server Compact. Every SQL statement is committed or rolled back when it finishes. A SQL Server Compact connection operates in autocommit mode whenever this default mode has not been overridden by explicit transactions. Autocommit mode is also the default mode for ADO.NET and OLE DB.
A SQL Server Compact connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction. When the explicit transaction is committed or rolled back, SQL Server Compact returns to autocommit mode.
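In full SQL Server these statements are commonly combined with TRY/CATCH so that any error rolls the whole unit of work back, as a sketch (the Accounts table is illustrative):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- Both modifications succeed or fail together
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

    COMMIT TRANSACTION;   -- changes become permanent
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- any error erases both changes
END CATCH;
```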
26-What is the default Port SQL Server running on?
1433 (TCP) for the default instance, and 1434 (UDP) for the SQL Server Browser service, which resolves connections to named instances.
27- Different Edition of SQL Server.
SQL Server CE
As you'd expect, SQL Server 2000 Windows CE Edition uses a different code base than other SQL Server editions do. SQL Server CE has about a 1MB footprint and doesn't include Analysis Services or Data Transformation Services (DTS). SQL Server CE supports data exchange with other SQL Server editions through anonymous merge replication.
Enterprise Evaluation Edition
The Enterprise Evaluation Edition is essentially the same as the Enterprise Edition; however, the Evaluation Edition has a built-in 120-day time limit. You can upgrade from the Evaluation Edition to either the Standard or Enterprise Edition.
Developer Edition
Supplied as a part of Visual Studio Enterprise Edition, the Developer Edition supports the same feature set as SQL Server Enterprise Edition does. However, licensing for the Developer Edition allows only development work; you can't use this edition as a production database.
Personal Edition
The Personal Edition comes with the SQL Server Standard Edition or Enterprise Edition—you can't purchase it separately. The Personal Edition runs on Windows 2000, Windows NT, or Windows 9x and supports one to two processors on Win2K Professional. However, this edition supports only five concurrent connections. Each Personal Edition requires a Client Access License (CAL).
MSDE
Microsoft SQL Server Desktop Engine (MSDE) is a standalone runtime engine for database applications and can be freely distributed. MSDE's core database engine is essentially the same as that of the Personal Edition. However, MSDE provides no management tools and doesn't require a CAL.
Standard Edition
The Standard Edition is the most popular version of SQL Server 2000. This edition requires a minimum of Win2K Server or NT Server 4.0 and supports one to four processors. The Standard Edition includes Analysis Services, DTS, XML integration, and English Query, as well as the Personal Edition and MSDE. Each client that connects to the Standard Edition needs a CAL.
Enterprise Edition
The Enterprise Edition includes all the Standard Edition features plus built-in Storage Area Network (SAN) support, federated databases, log shipping, and partitioned OLAP cubes. The Enterprise Edition runs on Win2K Server or NT Server 4.0 or later; when running on Win2K Advanced Server or NT Server, Enterprise Edition (NTS/E), this edition supports more than four processors and four-node failover clustering. Each client must have a CAL.
28- Different Version of SQL server.
SQL Server 2005
SQL Server 2005 (formerly codenamed "Yukon") was released in October 2005. It included native support for managing XML data, in addition to relational data. For this purpose, it defined an xml data type that could be used either as a data type in database columns or as literals in queries. XML columns can be associated with XSD schemas; XML data being stored is verified against the schema. XML is converted to an internal binary data type before being stored in the database.
SQL Server 2008
SQL Server 2008 (formerly codenamed "Katmai") was released on August 6, 2008, and aims to make data management self-tuning, self-organizing, and self-maintaining with the development of SQL Server Always On technologies, to provide near-zero downtime. SQL Server 2008 also includes support for structured and semi-structured data, including digital media formats for pictures, audio, video, and other multimedia data. Such multimedia data can be stored as BLOBs (binary large objects), but these are generic bitstreams; intrinsic awareness of multimedia data allows specialized functions to be performed on it.
SQL Server 2012
At the 2011 Professional Association for SQL Server (PASS) summit on October 11, Microsoft announced that the next major version of SQL Server (codenamed "Denali") would be SQL Server 2012. It was released to manufacturing on March 6, 2012. Service Pack 1 was released to manufacturing on November 9, 2012, and Service Pack 2 on June 10, 2014.
It was announced to be the last version to natively support OLE DB and instead to prefer ODBC for native connectivity.
SQL Server 2012's new features and enhancements include: AlwaysOn SQL Server Failover Cluster Instances and Availability Groups, which provide a set of options to improve database availability; Contained Databases, which simplify moving databases between instances; new and modified Dynamic Management Views and Functions; programmability enhancements including new spatial features, metadata discovery, sequence objects, and the THROW statement; performance enhancements such as columnstore indexes and improvements to online and partition-level operations; and security enhancements including provisioning during setup, new permissions, improved role management, and default schema assignment for groups.
SQL Server 2014
SQL Server 2014 was released to manufacturing on March 18, 2014, and released to the general public on April 1, 2014. Until November 2013 there were two CTP revisions, CTP1 and CTP2. SQL Server 2014 provides a new in-memory capability for tables that can fit entirely in memory (also known as Hekaton). While small tables may be entirely resident in memory in all versions of SQL Server, they may also reside on disk, so work is involved in reserving RAM, writing evicted pages to disk, loading new pages from disk, locking pages in RAM while they are being operated on, and many other tasks. By treating a table as guaranteed to be entirely resident in memory, much of the 'plumbing' of disk-based databases can be avoided.
For disk-based SQL Server applications, it also provides the SSD Buffer Pool Extension, which can improve performance by acting as a cache between DRAM and spinning media.
SQL Server 2014 also enhances the AlwaysOn (HADR) solution by increasing the readable secondaries count and sustaining read operations upon secondary-primary disconnections, and it provides new hybrid disaster recovery and backup solutions with Windows Azure, enabling customers to use existing skills with the on-premises version of SQL Server to take advantage of Microsoft's global datacenters.
29- Difference between LOGIN and USER ?
--A SQL Server Login is for Authentication, and a SQL Server User is for Authorization.
--Authentication decides whether we have permission to access the server at all; Authorization decides which operations we can perform in a database.
--Logins are created at the SQL Server instance level; Users are created at the SQL Server database level.
--We can have multiple users, from different databases, mapped to a single login on a server.
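A minimal sketch of the relationship (the login, database, user, and object names are illustrative):

```sql
-- Authentication: a login is created at the instance level
CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ng!Passw0rd';

-- Authorization: a user in each database is mapped to that login
USE SalesDB;
CREATE USER AppUser FOR LOGIN AppLogin;

-- Permissions are granted to the database user, not the login
GRANT SELECT ON dbo.Orders TO AppUser;
```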
30- What is an orphan user?
A database user can become orphaned if the corresponding SQL Server login is dropped. A database user can also become orphaned after a database is restored or attached to a different instance of SQL Server. Orphaning happens when the database user is mapped to a SID that is not present in the new server instance.
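As a sketch, orphaned users can be listed and re-mapped as follows (the user and login names are illustrative):

```sql
-- List database users whose SID has no matching server login
EXEC sp_change_users_login 'Report';

-- Re-map an orphaned user to an existing login
ALTER USER AppUser WITH LOGIN = AppLogin;
```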
31- What are the different authentication modes?
SQL Server supports two security (authentication) modes: Windows Authentication and Mixed mode.
--Windows Authentication mode connects to an instance of SQL Server through a Windows NT 4.0 or Windows 2000 user account.
--Mixed mode (Windows Authentication and SQL Server Authentication) connects to an instance of SQL Server by using either Windows Authentication or SQL Server Authentication.
--Users who connect through a Windows NT or 2000 user account can make use of trusted connections in either Windows Authentication mode or Mixed mode.
32- What are the server roles?
Server roles are defined at the server level and exist outside of user databases. There are seven fixed server roles:
--The members of the sysadmin server role can perform any activity in SQL Server and have complete control over all database functions.
--The members of serveradmin server role can change server configuration parameters and shut down the server.
--The members of setupadmin server role can manage linked servers (add or remove linked servers), manage replication, manage extended stored procedures, and execute some system stored procedures, such as sp_serveroption.
--The members of securityadmin server role can create and manage server logins and auditing, and read error logs.
--The members of processadmin server role can manage the processes running in SQL Server.
--The members of dbcreator server role can create, alter, and resize databases.
--The members of diskadmin server role can manage disk files.
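Membership in a fixed server role can be granted as a sketch like this (the login name is illustrative):

```sql
-- SQL Server 2012+ syntax
ALTER SERVER ROLE dbcreator ADD MEMBER AppLogin;

-- Older syntax using the system stored procedure
EXEC sp_addsrvrolemember 'AppLogin', 'dbcreator';
```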
33- What are the database roles?
There are three kinds of database roles:
--Fixed Database Roles
--Public Role
--User-Defined Database Roles
Fixed Database Roles
Fixed database roles are defined at the database level and exist in each database. You cannot add, delete, or modify fixed database roles; you can only add users as members of a fixed database role.
There are nine fixed database roles:
--The members of db_owner database role can perform any activity in the database.
--The members of db_accessadmin database role can add or remove Windows NT groups, users or SQL Server users in the database.
--The members of db_datareader database role can see any data from all user tables in the database.
--The members of db_datawriter database role can add, change, or delete data from all user tables in the database.
--The members of db_ddladmin database role can run any data definition language command in the database.
--The members of db_securityadmin database role can manage statement and object permissions in the database.
--The members of db_backupoperator database role can back up the database.
--The members of db_denydatareader database role can deny permission to select data in the database.
--The members of db_denydatawriter database role can deny permission to change data in the database.
Note. To add a security account as a member of an existing SQL Server database role in the current database, you can use the sp_addrolemember system stored procedure.
Public Role
The public role is a special database role to which every database user belongs. The public role contains default access permissions for any user who can access the database. This database role cannot be dropped.
User-Defined Database Roles
Although the built-in database roles handle permissions for common database management tasks, it is likely that you will want to group users who need to perform specific database functions.
Note. To create a new SQL Server role in the current database, you can use the sp_addrole system stored procedure.
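As a sketch tying the two system procedures together (the role, user, and object names are illustrative):

```sql
-- Create a user-defined database role and grant it a permission
EXEC sp_addrole 'ReportingRole';
GRANT SELECT ON dbo.Orders TO ReportingRole;

-- Add a database user to the new role, and to a fixed role
EXEC sp_addrolemember 'ReportingRole', 'AppUser';
EXEC sp_addrolemember 'db_datareader', 'AppUser';
```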
34- Difference between a view and a stored procedure/function?
View-
Definition: In SQL, a view is a virtual table based on the result set of an SQL statement. A view contains rows and columns, just like a real table. The fields in a view are fields from one or more real tables in the database.
* does NOT accept parameters
* can be used as a building block in a larger query
* can contain only one single SELECT query
* can NOT perform modifications to any table
* but can (sometimes) be used as the target of an INSERT, UPDATE or DELETE statement
Stored procedure-
Definition: A stored procedure is a group of Transact-SQL statements compiled into a single execution plan.
* accepts parameters
* can NOT be used as a building block in a larger query
* can contain several statements, loops, IF ELSE, etc.
* can perform modifications to one or several tables
* can NOT be used as the target of an INSERT, UPDATE or DELETE statement
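The contrast can be sketched as follows (all object names are illustrative):

```sql
-- A view: a single named SELECT, no parameters,
-- usable as a building block in larger queries
CREATE VIEW dbo.ActiveOrders AS
SELECT OrderID, CustomerID
FROM dbo.Orders
WHERE Status = 'Active';
GO

SELECT COUNT(*) FROM dbo.ActiveOrders;
GO

-- A stored procedure: accepts parameters and can modify data
CREATE PROCEDURE dbo.CloseOrder @OrderID int
AS
BEGIN
    UPDATE dbo.Orders SET Status = 'Closed' WHERE OrderID = @OrderID;
END;
GO

EXEC dbo.CloseOrder @OrderID = 42;
```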
35- What is an INDEX?
A database index is a data structure that improves the speed of data retrieval operations on a database table, at the cost of additional writes and the storage space needed to maintain the index data structure. Indexes are used to quickly locate data without having to search every row of a table each time it is accessed. Indexes can be created on one or more columns of a table, providing the basis for both rapid random lookups and efficient access of ordered records.
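For example, as a sketch (the table and column names are illustrative):

```sql
-- A nonclustered index on one key column, with an included column
-- so the query below can be answered from the index alone
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate);

-- This query can now seek on CustomerID instead of scanning the table
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 7;
```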
36- Difference between upgradation and migration?
Upgradation: an upgrade is an automated process in which a setup program moves an old instance of the database to a new instance, keeping the metadata (with some exceptions) and data the same.
[E.g. HOTFIX, PATCH, SERVICE PACKS, VERSION UPGRADE]
--Downtime is mandatory for upgradation.
--Manpower is not required.
There are two types of UPGRADATION:
1. IN-PLACE UPGRADATION
2. SIDE-BY-SIDE UPGRADATION
Migration:
Migration is the process of taking a backup and restoring it onto the target server (one or more). In other words, migration is the process of copying or exporting the data from one server to another.
--Downtime is not mandatory.
--Manpower is required.
There are various situations for migration:
*LAUNCH, GO-LIVE, DEPLOYMENT.
The birth of a PROJECT is known as its LAUNCH, GO-LIVE, or DEPLOYMENT.
It starts from Development -> Test -> Production.
*DB-REFRESH.
Taking the server and its functionalities from the PRODUCTION SERVER to the TEST SERVER, and back again to the PRODUCTION SERVER, is known as a DB-REFRESH.
* MOVING FROM OLD HARDWARE TO NEW HARDWARE.
All hardware is subject to END OF LIFE. In such situations, we move the entire server from the OLD HARDWARE to the NEW HARDWARE.
*MOVING BETWEEN DIFFERENT DATA CENTERS.
*CONSOLIDATION PROJECT.
CONSOLIDATION is the process of gathering all the NON-CRITICAL databases from individual servers and managing them on a single, highly scalable server.
37- When we create a DB, what is its default size?
In SQL Server every database has an Initial Size (MB) property, which can be seen in the database's properties in SSMS. A new database inherits its initial size from the model database; by default it is 3 MB for the mdf (data) file and 1 MB for the ldf (log) file.
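The current size of every database file can be checked with a query like this (size is stored in 8 KB pages):

```sql
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       type_desc,                       -- ROWS (mdf/ndf) or LOG (ldf)
       size * 8 / 1024     AS size_mb   -- convert pages to megabytes
FROM sys.master_files;
```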