
Creating Databases

Updated: July 30, 2001

On This Page

About This Chapter
Lesson 1: Introduction to Database Design
Lesson 2: Introduction to SQL Server Databases
Lesson 3: Creating and Dropping Databases
Lesson 4: Managing Databases
Lesson 5: The library Database
Review

About This Chapter

A Microsoft SQL Server database is a collection of information, tables, and other objects organized and presented to serve a specific purpose, such as facilitating searching, sorting, and recombining data.

This chapter begins by introducing you to the logical database design process and normalization. Next, you will learn how data is stored in Microsoft SQL Server 7.0 and finally, how to implement databases and manage database files, transaction log files, and filegroups.

Before You Begin

To complete the lessons in this chapter you must:

  • Have installed SQL Server 7.0.

  • Be able to log on to SQL Server as an administrator.

  • Have installed the exercise files from the Supplemental Course Materials CD-ROM to your hard drive using the Setup.exe discussed in the "Getting Started" section of the book.

Lesson 1: Introduction to Database Design

Before creating tables and other objects in a database, you must decide what tables you will need to logically represent all your data. The logical database design process produces a product-independent design, which can be implemented in any database management system.

Having a well-documented logical database design makes the physical implementation of the database a clearly defined, standardized process and allows for reliable, efficient storage and retrieval of data. Although database design is not the topic of this book, this lesson presents a brief overview of logical database design and normalization.

After this lesson you will be able to:

  • Describe the main components of a relational database.

  • Describe the process of normalization.

  • Identify cases where denormalization is appropriate.

Estimated lesson time: 30 minutes

Logical Database Design

The logical database design should define:

  • Tables and their names (also called entities).

  • Column names (also called attributes) for each table.

  • Column characteristics, such as requiring unique values and allowing nulls, and the type of data that the column will store.

  • The primary key for each table. This is the column (or set of columns) that stores values that uniquely identify each row in a table. Although other unique columns may exist in the table, only one is identified as the unique access key for retrieving rows—the primary key. Each table can have only one primary key. Although a primary key is not strictly required, you should always define one.

  • Relationships between tables. Rows in some tables are dependent on one or more rows in another table. These intertable dependencies are called relationships. To define a relationship, a column (or set of columns) in one table, called a foreign key, references the primary key of another table.

For example, each row in an Orders table is dependent on a row in a Customers table because each order must be placed by a customer. This is a relationship between the Orders table and the Customers table. The Orders table must have a column that holds values that reference an individual row in the Customers table. Rows in the Orders table must be guaranteed to reference only one customer so the relationship should be based on the primary key of the Customers table. The Orders table column that references the primary key of the Customers table is called a foreign key.

Types of Relationships

Three types of relationships are possible between tables:

  • One-to-one. Each row in the primary table is related to only one row in the foreign table. A one-to-one relationship is implemented by defining the foreign key to be unique (that is, to not allow duplicates).

  • One-to-many. Each row in the primary table is related to one or more rows in the foreign table. For example, one customer can place many orders, but one order cannot be placed by many customers. In Chapter 5, "Implementing Data Integrity," you will learn how to enforce a one-to-many relationship between two tables.

  • Many-to-many. Many rows in one table are related to many rows in another table. For example, an author can write many books and a book can be written by more than one author. A many-to-many relationship between two tables is implemented by creating a third table and creating a one-to-many relationship to this junction table from each of the initial tables.

Normalization

When designing the tables and relationships for a database, certain logical inconsistencies commonly occur. A process called normalization helps you to ensure that these inconsistencies do not occur.

Normalization is the process of refining tables, keys, columns, and relationships to create a consistent database design. Normalization is achieved by applying a number of tests to tables. Three levels of normalization (first, second, and third normal form) are commonly applied, although others are defined.

Rather than giving formal definitions of the three levels, this lesson presents examples of the levels and the problems they address, using tables from a simple student course registration database.

Before Normalization

The example assumes that:

  • Students can register for any number of courses.

  • Lecturers can lecture more than one course.

  • Each lecturer always lectures in the same room.

  • Rooms are not shared.

Consider the following tables:

Students

Name             PhoneNo        CourseRegistrations
Marjorie Green   415 986-7020   Basic Computing, Database Administration
Burt Gringelsby  707 938-6445   Database Administration, Advanced Hardware Support
Akiko Yokamoto   415 935-4228   Advanced Hardware Support

Courses

Course                     Lecturer       Room
Basic Computing            Meander Smith  542 South
Database Administration    Dean Straight  221 East
Advanced Hardware Support  Dean Straight  221 East

The tables can store the student registration data, but several problems and data inconsistencies can easily arise. For example:

  • If the Basic Computing course is removed, the lecturer Meander Smith and room 542 South are also removed.

  • The number of courses a student can take is limited to how much text the CourseRegistrations column can hold.

  • The CourseRegistrations column cannot easily be searched or used in calculations.

  • The full name of each course is repeated for every registration. This repetition wastes space and introduces the probability of data inconsistencies based on the course name being incorrectly entered. It also makes it necessary to find and update every registration if the name of a course is changed.

  • It is not possible to index the Students table by last name because full names are stored in a single column.

  • If a lecturer moves to a new room, all courses that the lecturer teaches will have to be updated.

After Normalization

The following tables for the student course registration database are normalized:

Students

StudentID  FirstName  LastName    PhoneNo
1001       Marjorie   Green       415 986-7020
1002       Burt       Gringelsby  707 938-6445
1003       Akiko      Yokamoto    415 935-4228

Registrations

RegID  StudentID  CourseID
1      1001       1
2      1001       2
3      1002       2
4      1002       3
5      1003       3

Courses

CourseID  Course                     LecturerID
1         Basic Computing            1
2         Database Administration    2
3         Advanced Hardware Support  2

Lecturers

LecturerID  FirstName  LastName  Room
1           Meander    Smith     542 South
2           Dean       Straight  221 East

The relationships between the tables are as follows:

  • Students-to-courses: many-to-many through the Registrations junction table

  • Students-to-registrations: one-to-many

  • Courses-to-registrations: one-to-many

  • Lecturers-to-courses: one-to-many
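These tables and relationships could be implemented with Transact-SQL statements along the following lines. This is an illustrative sketch; the data types and column sizes are assumptions, not part of the design itself:

-- Data types and sizes are illustrative assumptions
CREATE TABLE Lecturers
 (LecturerID int         NOT NULL PRIMARY KEY,
  FirstName  varchar(20) NOT NULL,
  LastName   varchar(30) NOT NULL,
  Room       varchar(10) NOT NULL)

CREATE TABLE Students
 (StudentID  int         NOT NULL PRIMARY KEY,
  FirstName  varchar(20) NOT NULL,
  LastName   varchar(30) NOT NULL,
  PhoneNo    varchar(15) NULL)

CREATE TABLE Courses
 (CourseID   int         NOT NULL PRIMARY KEY,
  Course     varchar(40) NOT NULL,
  -- foreign key: each course has exactly one lecturer
  LecturerID int         NOT NULL REFERENCES Lecturers (LecturerID))

CREATE TABLE Registrations
 (RegID      int         NOT NULL PRIMARY KEY,
  -- foreign keys to both sides of the many-to-many relationship
  StudentID  int         NOT NULL REFERENCES Students (StudentID),
  CourseID   int         NOT NULL REFERENCES Courses (CourseID))

Each REFERENCES clause declares a foreign key, implementing the one-to-many relationships in the list above.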

The features of normalization that are identifiable in these tables are:

  • Each table holds only one set of related data. For example, the Students table no longer holds data about registrations.

  • Each table has a primary key: Students–StudentID, Registrations–RegID, Courses–CourseID, and Lecturers–LecturerID.

  • No decomposable columns. All columns describe a single attribute. For example, the student name column has been split into first and last name columns.

  • No repeated data. For example, lecturer names are now recorded only once.

  • No multivalue columns. For example, each course registration is now a separate row in the Registrations table, unlike the CourseRegistrations column from the previous Students table.

  • Every column is fully dependent on the primary key. For example, there is no Room column in the Courses table. This is because the room is not dependent on the CourseID; it is dependent on the LecturerID.

The benefits of normalization include:

  • Faster sorting and index creation, because tables are narrower.

  • More clustered indexes, because there are more tables.

  • Narrower, more compact indexes.

  • Fewer indexes per table, improving UPDATE performance.

  • Fewer NULLs and less redundant data, increasing database compactness.

  • Reduced concurrency impact of table locks, because locks affect less data.

Note: You will fully understand the impact of these benefits as you begin to learn about indexes and queries later in this book.

Denormalization

A highly normalized database is routinely associated with complex relational joins, which can hurt performance. For example, consider extracting a report from the student registration database, which lists the rooms for a student's courses. This extraction would require accessing the Students table to get the student's name, the Registrations table to find the CourseIDs of courses for which the student is registered, the Courses table to find the LecturerID of the lecturer for each course, and the Lecturers table to find the room in which the lecturer lectures.
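With the normalized tables shown earlier, such a report is a four-way join. The following query is illustrative and assumes the column names from those tables:

SELECT s.FirstName, s.LastName, c.Course, l.Room
FROM Students s
  INNER JOIN Registrations r ON r.StudentID = s.StudentID
  INNER JOIN Courses c ON c.CourseID = r.CourseID
  INNER JOIN Lecturers l ON l.LecturerID = c.LecturerID
WHERE s.LastName = 'Green'

If a Room column were added to the Courses table, the join to Lecturers could be dropped from this query.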

The rules of normalization require that the room is not a column in the Courses table. Adding the room as a column in the Courses table will introduce the possibility of inconsistencies. However, in this case you may decide to add a Room column to the Courses table so that you do not have to access the Lecturers table in order to find the room for a course.

This addition is an example of denormalization, which is the process of intentionally defining a table that is not normalized, to enhance performance and simplify queries. Denormalization is acceptable when it creates definite benefits and appropriate steps are taken to ensure data integrity. In Chapter 15, "Implementing Triggers," you will learn how to use triggers to enforce integrity when you have denormalized a table.

A normalized database is characterized by a larger number of narrow tables, and a denormalized database by a smaller number of wide tables. However, the SQL Server optimizer is efficient at selecting rapid, efficient joins, as long as effective indexes are available.

With SQL Server, reasonable normalization often helps rather than hinders performance. As normalization increases, so does the number and complexity of joins required to retrieve data. As a rough rule-of-thumb, extensive normalization is recommended unless it causes many queries to join in excess of four ways.

Lesson Summary

It is important to create a complete normalized database design before you begin implementing the database. After you start implementing the database, it is difficult to make changes when design inconsistencies are discovered. Although normalization ensures data consistency, it can hurt performance. Therefore, it is common to intentionally denormalize a database and take care of the potential data inconsistencies using constraints, rules, or triggers.

Lesson 2: Introduction to SQL Server Databases

When you create a database, you set up the data storage structure. This structure includes at least one data file and one transaction log file. Before you create a database, you should understand how Microsoft SQL Server version 7.0 stores data, as well as the function of the transaction log.

After this lesson you will be able to:

  • Describe how data is stored in databases and transaction logs.

  • Evaluate database storage considerations.

Estimated lesson time: 20 minutes

How Data Is Stored

When you are creating a database, you will have more insight into capacity planning, data integrity, and performance if you understand how Microsoft SQL Server stores data. Figure 3.1 illustrates how data is allocated for storage.

Figure 3.1: Database storage allocation.


SQL Server Database Files

SQL Server databases are stored in files. These files are created automatically when you create a database. There is more than one type of file and each database always uses at least two files.

Database File Types

Database files are categorized into three types:

  • Primary data file. This file is the starting point of the database. Every database must have one, but only one, primary data file. Primary data filenames usually have an .mdf extension.

  • Secondary data files. These files are optional and can hold all data and objects that are not on the primary data file. A database does not have to have secondary data files, but may have multiple secondary data files. Secondary data filenames usually have an .ndf extension.

  • Log files. These files hold all the transaction log information used to recover the database. Every database has at least one log file. Log filenames usually have an .ldf extension.

Note: The term database file generally means any of the three file types. The term data file refers to either the primary data file or a secondary data file. The term log file refers to a file that is part of the database transaction log.

Database File Considerations

Consider the following about database files:

  • When you create a database, a copy of the model database, which includes the system tables, is copied into the database. Therefore, the minimum size of a database is equal to or greater than the size of the model database.

  • Data is stored in 8-kilobyte (KB) blocks of contiguous disk space called pages. This means that a database can store 128 pages per megabyte (MB).

  • Rows cannot span pages. The maximum amount of data in a single row is 8060 bytes (8192, less some overhead). The maximum amount of space that can be used by all rows on a page is 8094 bytes.

Note: SQL Server 7.0 has changed significantly from SQL Server 6.5 with regard to data storage. SQL Server 6.5 uses 2-KB pages, thus limiting a single row to 1962 bytes. Many other capacities have been increased in SQL Server 7.0 (search for "Maximum Capacity Specifications" in Books Online for more information).

  • Tables, other database objects, and indexes are stored in extents. An extent is 8 contiguous pages, or 64 KB. Therefore, a database has 16 extents per megabyte. Up to 8 small objects can share an extent (Mixed Extent). When a table grows to 8 pages, it uses its own extents (Uniform Extents).

  • Transaction log files hold all the information necessary for recovery of the database in case of a system failure. By default, the size of the transaction log is 25 percent of the size of the data file. Use this figure as a starting point and adjust it according to the needs of your application.

How the Transaction Log Works

The transaction log records data modifications—INSERT, UPDATE, and DELETE statements—as they are executed. The logging process, as shown in Figure 3.2, is as follows:

  1. A data modification is sent by the application.

  2. When a modification is executed, the affected data pages are loaded from the data file into memory (called the data cache), if the pages are not already in the data cache from a previous query.


    Figure 3.2: How the transaction log works.
  3. Each data modification statement is recorded in the log as it is made. The change is always recorded in the log and written to the log file before that change is made in the database. This type of log is called a write-ahead log.

  4. Once the modified data pages are in the data cache and the log records have been written to the transaction log file on disk, the checkpoint process writes all committed transactions to the database on disk.

A single transaction can include many data modifications. Each transaction starts with a BEGIN TRANSACTION marker. If the application completes all data modifications successfully, the transaction ends with a COMMIT TRANSACTION marker (such a transaction is said to be a committed transaction).
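For example, a transaction that moves two students between courses consists of two UPDATE statements wrapped in transaction markers (the values are illustrative, following the Registrations table shown earlier):

BEGIN TRANSACTION
UPDATE Registrations SET CourseID = 1 WHERE RegID = 3
UPDATE Registrations SET CourseID = 2 WHERE RegID = 5
COMMIT TRANSACTION

If either UPDATE fails, the application can issue ROLLBACK TRANSACTION instead, and neither change is applied.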

During normal operation, the checkpoint process regularly checks for committed transactions for which the data modifications have not yet been written to the data file. The checkpoint process writes these modifications to the data file and checkpoints the transaction to indicate that it has been written to the data file.

If the system fails, the automatic recovery process runs when the SQL Server is restarted. This process uses the transaction log to roll forward (apply the modifications to the data) all committed transactions that have not been "checkpointed" and roll back (remove) any incomplete transactions.

Transaction logging in SQL Server is not optional (that is, you cannot turn it off). All normal data modifications must go through the transaction log. Two methods of bulk data loading can be performed without transaction logging: the bulk copy program and the SELECT INTO statement. Because logging is not optional, it is important that the transaction log never becomes full, because a full log prevents data modification in the database.
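If the transaction log does fill, one way to make room is to remove committed transactions from it by backing up the log or, during development, by truncating it. The following sketch assumes a database named sample_db:

BACKUP LOG sample_db WITH TRUNCATE_ONLY

Truncating the log without saving it breaks the sequence of log backups, so in production you would instead back up the log to a file or backup device.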

Tip In general, if the hard disk of the computer has a disk-caching controller, disable it. Unless a write-caching disk controller is designed specifically for a database, it can seriously harm SQL Server data integrity. Check with your vendor to determine whether the hardware write-caching mechanism of your hard disk was designed for use with a database server.

Lesson Summary

Understanding how SQL Server stores data makes it possible for you to manage space and tune the performance of a database. SQL Server automatically adjusts the size of database files, or you can control the size of your database files manually if you choose to. All database changes are recorded in the transaction log. This recording makes the database fully recoverable and enables user transactions. Managing the transaction log space is important because all database changes are dependent on the transaction log.

Lesson 3: Creating and Dropping Databases

The CREATE DATABASE and DROP DATABASE statements are used to create and remove databases. In this lesson, you will learn how to use these statements.

After this lesson you will be able to:

  • Create a database, specifying size and automatic growth options.

  • Drop databases that are no longer needed.

Estimated lesson time: 30 minutes

Creating Databases

You can create a database using the Database Creation wizard, SQL Server Enterprise Manager, or the CREATE DATABASE statement. The process of creating a database also creates a transaction log for that database.

Creating a database is a process of specifying the name of the database and designating the size and location of the database files. When the new database is created, it is a duplicate of the model database. Any options or settings in the model database are copied into the new database. By default, new databases can be created by members of the sysadmin and dbcreator fixed server roles; however, permission to create databases can be given to other users.

Tip Information about each database in SQL Server is stored in the sysdatabases system table in the master database. Therefore, you should back up the master database each time you create, modify, or drop a database.

Syntax

Note: The CREATE DATABASE syntax has changed significantly from SQL Server version 6.5. SQL Server version 7.0 does not use devices, so it is no longer necessary to create a device, using the DISK INIT statement, before creating a database. You now specify file information as part of the CREATE DATABASE statement.

CREATE DATABASE database_name
[ON
  { [PRIMARY] ( NAME = logical_file_name,
      FILENAME = 'os_file_name'
      [, SIZE = size]
      [, MAXSIZE = max_size]
      [, FILEGROWTH = growth_increment] )
  } [,...n]
]
[LOG ON
  { ( NAME = logical_file_name,
      FILENAME = 'os_file_name'
      [, SIZE = size] )
  } [,...n]
]
[FOR RESTORE]

When you create a database, you can set the following options:

  • PRIMARY. This option specifies the files in the primary filegroup. The primary filegroup contains all the database system tables. It also contains all objects not assigned to user filegroups (covered later in this chapter). A database can have only one primary data file. The primary data file is the starting point of the database and points to the rest of the files in the database. Every database has one primary data file. The recommended file extension for a primary data file is .mdf. If the PRIMARY keyword is not specified, the first file listed in the statement becomes the primary data file.

  • FILENAME. This option specifies the operating system file name and path for the file. The path in the os_file_name must specify a folder on a local hard drive of the server on which SQL Server is installed.

    Note: If your computer has more than one disk and you are not using RAID (Redundant Array of Inexpensive Disks), place the data file and transaction log file on separate physical disks. This separation increases performance and can be used, in conjunction with disk mirroring, to decrease the likelihood of data loss in case of media failure.

  • SIZE. This option specifies the size of the file. You can specify sizes in megabytes using the MB suffix (the default), or kilobytes using the KB suffix. The minimum value is 512 KB. If size is not specified for the primary data file, it defaults to the size of the model database primary data file. If no log file is specified, a default log file that is 25 percent of the total size of the data files is created. If size is not specified for secondary data files or log files, it defaults to a size of 1 MB.

    The SIZE option sets the minimum size of the file. The file can grow larger but cannot shrink smaller than this size. To reduce the minimum size of a file, use the DBCC SHRINKFILE statement.

    Note: SQL Server 7.0 has a maintenance wizard that can be set to control the size of databases.

  • MAXSIZE. This option specifies the maximum size to which the file can grow. You can specify sizes in megabytes using the MB suffix (the default), or kilobytes using the KB suffix. If no size is specified, the file grows until the disk is full.

  • FILEGROWTH. This option specifies the growth increment of the file. When SQL Server needs to increase the size of a file, it will increase the file by the amount specified by the FILEGROWTH option. A value of 0 indicates no growth. The value can be specified in megabytes (the default), in kilobytes, or as a percentage (%). The default value, if FILEGROWTH is not specified, is 10 percent, and the minimum value is 64 KB. The specified size is rounded to the nearest 64 KB.

Example

The following example creates a database called sample with a 10-MB primary data file and a 3-MB log file. The primary data file can grow to a maximum of 15 MB and it will grow in 20 percent increments; for example, it would grow by 2 MB the first time that it needed to grow. The log file can grow to a maximum of 5 MB and it will grow in 1-MB increments.

CREATE DATABASE sample
ON
  PRIMARY ( NAME=sample_data,  
  FILENAME='c:\mssql7\data\sample.mdf', 
  SIZE=10MB,
  MAXSIZE=15MB,
  FILEGROWTH=20%) 
LOG ON 
  ( NAME=sample_log, 
  FILENAME='c:\mssql7\data\sample.ldf', 
  SIZE=3MB,
  MAXSIZE=5MB,
  FILEGROWTH=1MB)

To create a database

In this exercise, use the CREATE DATABASE statement to create a database called sample_db based on the information in the following table:

File      Name         File name                  Initial size  File growth  Maximum file size
Database  sample_data  c:\mssql7\data\sample.mdf  2 MB          20%          15 MB
Log       sample_log   c:\mssql7\data\sample.ldf  1 MB          1 MB         5 MB

  1. Open SQL Server Query Analyzer and log in to the (local) server with Microsoft Windows NT authentication. Your account is a member of the Windows NT Administrators group, which is automatically mapped to the SQL Server sysadmin role.

  2. Execute the CREATE DATABASE statement to create the database. You will find the entire script for this exercise in c:\sqlimpl\exercise\ch03\creasmpl.sql.

    CREATE DATABASE sample_db
     ON
       PRIMARY (NAME=sample_data, 
       FILENAME='c:\mssql7\data\sample.mdf', 
       SIZE=2MB,
       MAXSIZE=15MB,
       FILEGROWTH=20%) 
     LOG ON 
       (NAME=sample_log, 
       FILENAME='c:\mssql7\data\sample.ldf', 
       SIZE=1MB,
       MAXSIZE=5MB,
       FILEGROWTH=1MB)
    
  3. Execute sp_helpdb to view the database properties and verify that the database has been created properly.

    EXEC sp_helpdb sample_db
    

Dropping a Database

You can drop a database when you no longer need it. When you drop a database, you permanently delete the database and the disk files used by the database. Permission to drop a database defaults to the database owner and members of the sysadmin fixed server role. Permission to drop a database cannot be transferred.

You can drop databases using SQL Server Enterprise Manager or by executing the DROP DATABASE statement.

Syntax

DROP DATABASE database_name [,…n]

Example

This example drops multiple databases by using one statement.

DROP DATABASE mydb1, mydb2

When you drop a database, consider the following facts and guidelines:

  • With SQL Server Enterprise Manager, you can drop only one database at a time.

  • With Transact-SQL, you can drop several databases at one time.

  • After you drop a database, login IDs for which that particular database was the default database will not have a default database.

  • Back up the master database after you drop a database.

  • SQL Server does not let you drop the master, model, and tempdb databases, but it does allow you to drop the msdb system database.

Restrictions on Dropping a Database

The following restrictions apply to dropping databases. You cannot drop a database that is:

  • In the process of being restored.

  • Open for reading or writing by any user.

  • Publishing any of its tables as part of SQL Server replication.

Although SQL Server allows you to drop the msdb system database, you should not drop it if you use or intend to use any of the following:

  • SQL Server Agent

  • Replication

  • SQL Server Web wizard

  • Data Transformation Services (DTS)

To delete a database

In this exercise, you will use Transact-SQL statements to drop the sample_db database that you created in a previous exercise. You will find the entire script for this exercise in c:\sqlimpl\exercise\ch03\dropdb.sql.

  1. Open or switch to SQL Server Query Analyzer.

  2. Execute the DROP DATABASE statement to delete the sample_db database.

    DROP DATABASE sample_db
    
  3. Execute a system stored procedure to generate a list of databases. Verify that you have deleted the sample_db database.

    EXEC sp_helpdb
    

Tip After deleting a database in SQL Server Query Analyzer, it may still appear in SQL Server Enterprise Manager. This is because Enterprise Manager does not automatically refresh information that has been changed by other connections. You will need to use the Refresh option, available from the right-click shortcut menu on the various folders, to see updated information.

Lesson Summary

The CREATE DATABASE statement is used to create databases. Although you can also use SQL Server Enterprise Manager to create databases graphically, you should always save scripts for recreating databases in case of failure. Use the DROP DATABASE statement or SQL Server Enterprise Manager to remove databases. When you remove a database, its database files are automatically removed.

Lesson 4: Managing Databases

Many options are available for managing database files. You can create groups of files, increase and decrease file sizes, and retrieve information about database files. Managing space is an important function of the database administrator.

After this lesson you will be able to:

  • Describe options that can be set for a database.

  • Grow or shrink a database.

  • Monitor the size of the transaction log.

  • Grow or shrink database files.

  • Describe filegroups.

  • Retrieve information about databases and database files.

Estimated lesson time: 120 minutes

Database Options

After you have created a database, you can view information about the database and change various database options.

Database options determine the characteristics of a database. For example, you can make a database read-only or specify that log entries be removed from the transaction log whenever a checkpoint occurs.

Viewing Database Information

You can use SQL Server Enterprise Manager and Transact-SQL to get information about databases.

To create a database using SQL Server Enterprise Manager

In this exercise, you will create a database using SQL Server Enterprise Manager. The following table lists the database properties you need to complete the exercise:

File      Initial size  File growth  Maximum file size
Database  2 MB          1 MB         15 MB
Log       1 MB          1 MB         5 MB

  1. Expand your server group, and then expand your server.

  2. Right-click Databases, and then click New Database.

  3. Type the name sample_ssem for the new database.

  4. Use the values from the table at the beginning of the exercise to change the properties of the database. You will need to use the General tab to change database properties and the Transaction Log tab to change properties of the transaction log. Use the default database file name of sample_ssem_Data and the default transaction log file name of sample_ssem_Log.

  5. Click OK to create the new database.

  6. Expand the Databases folder, right-click the sample_ssem database, and then click Properties and verify the properties of your new database.

To view database information using SQL Server Enterprise Manager

  1. Expand your server group, and then expand your server.

  2. Expand Databases, and then click the sample_ssem database.

  3. In the details pane, click the Space Allocated heading to view database and log space information.

  4. In the console tree, click the Northwind database.

  5. In the details pane, click the Tables & Indexes heading to view table and index space information.

The following table lists commonly used system-stored procedures that display information about databases and database options:

System stored procedure   Description
sp_dboption               Lists all available options.
sp_helpdb                 Reports on all databases on a server. Provides database name, size, owner, ID, creation date, and status information.
sp_helpdb database_name   Reports on a specified database only. Provides database name, size, owner, ID, creation date, and status information. Additionally, reports detail about each data and log file.
sp_spaceused [objname]    Summarizes the storage space used by the current database or by a table in the current database.

Tip The size reported by sp_helpdb and sp_spaceused is the total current size of the database including the size of the log file(s). To determine the size of the data in the database, subtract the size of the log file(s) from the database size.

To view information about databases using Transact-SQL statements

In this exercise, you will use system stored procedures to view information about previously created databases. You will find the script for this exercise in c:\sqlimpl\exercise\ch03\dbinfo.sql.

  1. Open or switch to SQL Server Query Analyzer.

  2. Execute a system stored procedure to generate a list of all databases.

    EXEC sp_helpdb
    
  3. Execute a system stored procedure to display information about the sample_ssem database.

    EXEC sp_helpdb sample_ssem
    
  4. Execute a system stored procedure to display information about use of space in the sample_ssem database.

    USE sample_ssem
     EXEC sp_spaceused
    
  5. Execute a system stored procedure to display information about space usage for the authors table in the pubs database.

    USE pubs
     EXEC sp_spaceused authors
    

Setting Database Options

Most database options can be set using SQL Server Enterprise Manager. All database options can be set using the sp_dboption system stored procedure. You can configure database options for only one database at a time. To affect options in all future databases, change the model database.

The following list describes some of the more frequently used options:

dbo use only
    Limits use of the database to the database owner only; typically used during development.

read only
    Defines a database as read-only; typically used to set security for decision-support databases.

select into/bulkcopy
    Allows a database to accept nonlogged operations; use during bulk copying of data or when using SELECT INTO to conserve transaction log space.

single user
    Restricts database access to one user at a time; use when performing maintenance.

trunc. log on chkpt.
    Causes the transaction log to be truncated (committed transactions are removed) every time that the checkpoint process occurs; use during development to conserve transaction log space. If you set this option, you will need to perform full database backups to ensure recovery in the event of a server or media failure. Because this option negates the usefulness of transaction log backups, it is seldom enabled in a production database.

autoshrink
    Determines whether the database size shrinks automatically.

To view and change database options using Transact-SQL statements

In this exercise, you will use the sp_dboption system stored procedure to view and change database options. You will find the entire script for this exercise in c:\sqlimpl\exercise\ch03\dboption.sql

  1. Switch to SQL Server Query Analyzer (if necessary).

  2. Execute the sp_dboption system stored procedure to view a list of database options.

    EXEC sp_dboption
    
  3. Execute the sp_dboption system stored procedure to view a list of database options that are enabled for the sample_ssem database. Two options should be displayed in the results window: auto create statistics and auto update statistics.

    EXEC sp_dboption sample_ssem
    
  4. Execute the sp_dboption system stored procedure to have the sample_ssem database transaction log truncated whenever a checkpoint occurs.

    EXEC sp_dboption sample_ssem, 'trunc. log on chkpt.', 'true' 
    
  5. Execute the sp_dboption system stored procedure to verify that the transaction log of the sample_ssem database will be truncated whenever a checkpoint occurs.

    EXEC sp_dboption sample_ssem
    

Managing Data and Log Files

As your database grows or changes, you can expand or shrink the database size automatically or manually.

Managing Data and Log File Growth

When your database grows, or when data-modification activity increases, you may need to expand the size of the data or log files.

You can control the size of a database by:

  • Configuring the data and log files to grow automatically.

  • Manually increasing the current or maximum size of existing data and log files.

  • Manually adding secondary data files or log files.

Using Automatic File Growth

You can set the automatic growth options using the ALTER DATABASE statement or SQL Server Enterprise Manager to specify that database files automatically expand by a specified amount whenever necessary. Using automatic file growth reduces the administrative tasks involved with manually increasing the database size; it also reduces the possibility of a database running out of space unexpectedly.

You can specify the initial size, maximum size, and growth increment of each file. If you do not specify a maximum size, a file can continue to grow until it uses all available space on the disk.

Syntax

ALTER DATABASE database
{  ADD FILE <filespec> [TO FILEGROUP filegroup] [FOR RESTORE]
 | ADD LOG FILE <filespec>
 | REMOVE FILE logical_file_name
 | CREATE FILEGROUP filegroup_name
 | DROP FILEGROUP filegroup_name
 | MODIFY FILE <filespec>
}
<filespec> ::=
(NAME = 'logical_file_name'
[, FILENAME = 'os_file_name' ]
[, SIZE = size]
[, MAXSIZE = { max_size | UNLIMITED } ]
[, FILEGROWTH = growth_increment] )

MODIFY FILE

The MODIFY FILE option allows you to change options for any existing file. In the filespec for MODIFY FILE, specify only the name of the file and the option you want to change. You can change only one option at a time; to change more than one option, execute multiple ALTER DATABASE statements. You cannot specify the FILENAME option with MODIFY FILE.
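For example, because MODIFY FILE accepts only one option per statement, changing both the size and the growth increment of a file requires two ALTER DATABASE statements. The following sketch assumes the sample database and sample_data file used in the examples in this lesson; the sizes shown are illustrative only:

ALTER DATABASE sample
  MODIFY FILE ( NAME = 'sample_data',
 SIZE = 25MB)
GO
ALTER DATABASE sample
  MODIFY FILE ( NAME = 'sample_data',
 FILEGROWTH = 5MB)
GO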

Expanding Database Files

If an existing file is not configured to grow automatically, you can still increase its SIZE. If you increase SIZE beyond the current MAXSIZE setting for a file without also increasing MAXSIZE, MAXSIZE is set equal to the new SIZE. A FILEGROWTH growth increment of zero indicates that the file does not grow automatically.

The SIZE option sets the minimum size of a file. The file can grow larger but cannot shrink smaller than this size. You cannot reduce SIZE using ALTER DATABASE. To reduce the minimum size of a file, use the DBCC SHRINKFILE statement.

Adding Secondary Data Files or Log Files

You can create secondary data files to expand the size of a database. The maximum size of a single data file is 32 terabytes (TB) and the maximum size of a single log file is 4 TB. It is unlikely that you will need to add files due to insufficient space. Rather, use secondary data files or log files to make use of separate physical disks when you do not use the disk-striping capabilities of RAID systems.

Example

The following example increases the current data file size and adds a secondary data file to the sample database.

ALTER DATABASE sample
  MODIFY FILE ( NAME = 'sample_data',
 SIZE = 20MB)
GO
ALTER DATABASE sample
ADD FILE
(NAME = 'sample_data2' ,
FILENAME='c:\mssql7\data\sample2.ndf',
SIZE=10MB ,
MAXSIZE=20MB)
GO

To modify a database using SQL Server Enterprise Manager

In this exercise, you will add a data file to the sample_ssem database and change the maximum size specification of a data file.

  1. Expand your server group, and then expand your server.

  2. Expand Databases, right-click the sample_ssem database, and then click Properties.

  3. In the File name column, click the next empty row and enter the file name sample_ssem_data2. This is the data file that will contain the additional space. Note that the file location is generated automatically and given the .ndf suffix.

  4. In the Space Allocated column, change the value to 2 MB, which sets the initial size of the new file.

  5. To specify that the file should grow by fixed increments, change File growth to In megabytes and change the value to 2.

  6. To allow the file to grow as more data space is needed, leave Maximum file size set to Unrestricted filegrowth.

  7. Click OK to accept your changes and have them applied to the database.

Expanding a Transaction Log

If your transaction log runs out of space, SQL Server cannot record transactions and does not allow changes to your database. When a database grows, or when data modification activity increases, you may need to expand the transaction log.

Monitor the Log

Monitoring the log helps you determine when it is necessary to expand it. You can monitor the transaction log manually with SQL Server Enterprise Manager, or you can use the Microsoft Windows NT Performance Monitor.

SQL Server adds many objects and counters to the Windows NT Performance Monitor. Use the Percent Log Used counter of the SQLServer:Databases object to monitor the amount of space currently in use in the transaction log of each database. The following list describes the SQLServer:Databases object counters that you can use to monitor advanced performance statistics of the transaction logs of individual databases:

Log Bytes Per Flush
    Number of bytes in the log buffer when the buffer is flushed.

Log Flushes/sec
    Number of log flushes per second.

Log Flush Wait Time
    Total wait time (in milliseconds) to flush the log.

Log Flush Waits/sec
    Number of commits per second that are waiting on a log flush.

Expand the Log When Necessary

If the log is not configured to grow automatically, you can still expand the transaction log manually with SQL Server Enterprise Manager or by using the ALTER DATABASE statement.

Situations Producing Increased Log Activity

Situations that produce increased transaction log activity include:

  • Performing a logged bulk load into a table that has indexes (all inserts, index changes, and extent allocations are logged).

    Note: Under certain conditions, it is possible to have nonlogged inserts when you perform bulk loads, by using the bcp utility and the "select into/bulk copy" database option. In this case, if you drop indexes before bulk loading, only extent allocations are logged.

  • Transactions that perform many modifications (INSERT, UPDATE, and DELETE statements) to a table within a single transaction. This typically occurs when the statement lacks a WHERE clause or when the WHERE clause is too general, causing many records to be affected.

  • Using the UPDATE statement to add or modify text or image data in a table. Text and image data is typically large and can cause the transaction log to fill quickly. Consider using the WRITETEXT or UPDATETEXT statements instead; when used correctly, these statements perform nonlogged text and image updates, conserving transaction log space.

    Note: Increased activity can dramatically increase the size of the transaction log. Quickly reducing the log can be difficult when this happens. Although space can quickly be freed by backing up or truncating the log, the size of the log cannot quickly be reduced. Log shrinking is a deferred operation that cannot be forced to occur.
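As a sketch of the nonlogged alternative, the following statements use WRITETEXT to replace the pr_info text column of the pub_info table in the pubs database. The pub_id value shown is illustrative only; WRITETEXT requires a text pointer obtained with the TEXTPTR function, and nonlogged operation requires that the select into/bulkcopy option be enabled:

EXEC sp_dboption pubs, 'select into/bulkcopy', 'true'
GO
DECLARE @ptr varbinary(16)
SELECT @ptr = TEXTPTR(pr_info)
FROM pub_info
WHERE pub_id = '0736'
WRITETEXT pub_info.pr_info @ptr 'New publisher information.'
GO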

Example

The following example increases the current log file size for the sample database.

ALTER DATABASE sample
  MODIFY FILE ( NAME = 'sample_log',
 SIZE = 10MB)
GO

To modify the transaction log using Transact-SQL statements

In this exercise, you will increase the maximum size of the log file to 20 MB for the sample_ssem database. You will find the entire script for this exercise in c:\sqlimpl\exercise\ch03\modismpl.sql.

  1. Switch to SQL Server Query Analyzer.

  2. Execute an ALTER DATABASE statement to change the maximum size of the sample_ssem database log file to 20 MB.

      ALTER DATABASE sample_ssem
       MODIFY FILE (NAME = 'sample_ssem_log',
       MAXSIZE=20MB)
    
  3. View the database properties using either SQL Server Enterprise Manager or the sp_helpdb stored procedure to verify that the database has been properly modified.

Tip SQL Server Enterprise Manager does not automatically refresh information about changes that were made outside of SQL Server Enterprise Manager. Therefore, you will often need to use the Refresh option on the various folders to see updated information.

Shrinking a Database or File

When too much space is allocated, or when space requirements decrease, you can shrink an entire database or specific data files in a database.

There are three different ways to shrink a database.

  • Remove free space from database files using the DBCC SHRINKDATABASE statement or SQL Server Enterprise Manager. When removing free space, you cannot shrink the size of a file below its minimum size (the SIZE specified in the CREATE DATABASE or ALTER DATABASE statements used to create or modify the file).

  • Reduce the minimum size of database files using the DBCC SHRINKFILE statement. You can also use DBCC SHRINKFILE to empty files so that they can be removed with the ALTER DATABASE statement.

  • Set the database to shrink automatically using the SQL Server Enterprise Manager or the sp_dboption system stored procedure.

Shrinking an Entire Database

You can shrink an entire database using SQL Server Enterprise Manager or by executing the Database Consistency Checker (DBCC) statement SHRINKDATABASE. This shrinks the size of all data files in the database. Log files are shrunk using a deferred shrink operation, which will occur at some time after the log has been backed up or truncated. You cannot force the log files to shrink even after backing up or truncating the log.

Syntax

DBCC SHRINKDATABASE (database_name [, target_percent]
  [, {NOTRUNCATE | TRUNCATEONLY}])

target_percent

The target_percent option specifies the desired percentage of free space left in the data file(s) after the database has been shrunk. Database files will not shrink below their original size even if this means that the target_percent is not achieved.

Using target_percent, with or without the NOTRUNCATE option, causes used pages to be relocated from the end to the front of the file(s). Freed space is then either released to the operating system (default) or left in the file (NOTRUNCATE specified). Using the TRUNCATEONLY option causes free space at the end of the file(s) to be released to the operating system without moving any pages. The target_percent option is ignored when TRUNCATEONLY is used.
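For example, the following statement releases the free space at the end of the sample database files to the operating system without moving any pages. The target_percent value of 25 is shown only to illustrate the option's position in the statement, since it is ignored when TRUNCATEONLY is used:

DBCC SHRINKDATABASE (sample, 25, TRUNCATEONLY)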

Example

This example shrinks the size of the sample database to have 25 percent free space. The following table gives example results of executing a DBCC SHRINKDATABASE statement, where the target of reducing free space to 25 percent is achieved.

DBCC SHRINKDATABASE (sample, 25)

Data file     Original size  Current size  Space used  Size after shrinking  Percent free
sample_data   20 MB          30 MB         15 MB       20 MB                 25%
sample_data2  10 MB          15 MB         9 MB        12 MB                 25%
Total         30 MB          45 MB         24 MB       32 MB                 25%

The next table gives example results of executing the DBCC SHRINKDATABASE statement, where the target of reducing free space to 25 percent is not achieved, as this would require shrinking the files to less than their original sizes.

Data file     Original size  Current size  Space used  Size after shrinking  Percent free
sample_data   20 MB          30 MB         12 MB       20 MB                 40%
sample_data2  10 MB          15 MB         3 MB        10 MB                 70%
Total         30 MB          45 MB         15 MB       30 MB                 50%

Log files are not shown in these tables because they would not be affected by the shrink operation.

To shrink a database

In this exercise, you will use the DBCC SHRINKDATABASE statement to reduce the size of the sample_ssem database to contain only 25 percent of the current available space. You will find the entire script for this exercise in c:\sqlimpl\exercise\ch03\shrinkdb.sql.

  1. Switch to SQL Server Query Analyzer.

  2. Execute the DBCC SHRINKDATABASE statement to reduce the size of the sample_ssem database to contain only 25 percent of the current available space.

    DBCC SHRINKDATABASE (sample_ssem, 25)
    

Note: You will not see any changes to the database because the files are still their original size, and shrinking a database will not decrease files below their original size.

Shrinking a Data File in the Database

You can shrink a data file in a database using SQL Server Enterprise Manager or by executing the DBCC SHRINKFILE statement. This is the only way to reduce a file to less than its original size.

Syntax

DBCC SHRINKFILE ({file_name | file_id} [, target_size]
  [, {EMPTYFILE | NOTRUNCATE | TRUNCATEONLY}])

target_size

The target_size option specifies the desired size for the data file in megabytes, expressed as an integer. If target_size is not specified or is less than the amount of space in use, DBCC SHRINKFILE reduces the size as much as possible.

EMPTYFILE

The EMPTYFILE option moves all data from the specified file to other files in the same filegroup. After emptying the file, SQL Server no longer allows data to be placed in the file. This option allows the resulting empty file to be dropped using the ALTER DATABASE statement with the REMOVE FILE option.
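As a sketch, the following statements empty a secondary data file and then remove it from the database. The file name sample_data2 matches the secondary file added to the sample database in the earlier ALTER DATABASE example:

USE sample
DBCC SHRINKFILE (sample_data2, EMPTYFILE)
GO
ALTER DATABASE sample
REMOVE FILE sample_data2
GO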

Example

This example shrinks the size of the sample_data data file of the sample database to 10 MB.

DBCC SHRINKFILE (sample_data, 10)

To shrink a database file

In this exercise, you will use the DBCC SHRINKFILE statement to reduce the size of the secondary database file in the sample_ssem database. Reduce the size to 1 MB. You will find the entire script for this exercise in c:\sqlimpl\exercise\ch03\shrinkfl.sql.

  1. Switch to SQL Server Query Analyzer.

  2. Execute the DBCC SHRINKFILE statement to reduce the size of the sample_ssem_data2 database file to 1 MB. Make sure you release the freed space to the operating system.

    USE sample_ssem
     DBCC SHRINKFILE (sample_ssem_data2, 1)
    
  3. View the database properties to verify that the database file size has been reduced to 1 MB.

Shrinking a Database Automatically

You also can set a database option to recover unused space automatically by setting the database option autoshrink to true. This option can also be changed with SQL Server Enterprise Manager or sp_dboption.
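For example, the following statement enables automatic shrinking for the sample_ssem database used in these exercises:

EXEC sp_dboption sample_ssem, 'autoshrink', 'true'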

Consider the following facts and guidelines when you shrink a database or a data file:

  • The resulting database must be larger than the size of the model database and the existing data in the database or data file.

  • Before you shrink a database or a data file, you should back up the database, as well as the master database.

  • DBCC SHRINKDATABASE and SHRINKFILE execute on a deferred basis, so you may not see the database or file size reduced immediately.

Creating Filegroups

If your hardware setup includes multiple disk drives and you are not using RAID, you can place database files on different disks. This allows for very large databases (VLDBs) and can improve performance when disks operate simultaneously. To simplify the management of multiple database files, SQL Server provides filegroups. Filegroups are named collections of files. Every database has one default filegroup and you can create additional filegroups as needed.

You can assign specific tables, indexes, or the text, ntext, and image data from a table to a specific filegroup. In Figure 3.3, the Ordhist1.ndf and Ordhist2.ndf files are placed on a separate disk to separate files that are heavily queried from those that are heavily modified and to reduce disk drive contention.

System administrators can back up and restore individual files or filegroups instead of backing up or restoring an entire database.

Note: Log files are not part of a filegroup. Log space is managed separately from data space and filegroups are used for managing data files only.

Considerations When Using Filegroups

Using filegroups is an advanced database design technique. You must understand your database structure, data, transactions, and queries in order to determine the best way to place tables and indexes on specific filegroups. In many cases, using the striping capabilities of RAID systems provides much of the same performance gain that you might achieve using filegroups without the added administrative burden of defining and managing filegroups.


Figure 3.3: Files and filegroups.

Types of Filegroups

SQL Server offers the following three types of filegroups:

  • The primary filegroup, which contains the primary data file and any secondary data files that are not part of another filegroup. All system tables are placed on the primary filegroup. For this reason, it is very important that the primary filegroup does not run out of space.

  • User-defined filegroups, which are specified using the FILEGROUP keyword in a CREATE DATABASE or an ALTER DATABASE statement.

  • The default filegroup, which can be any filegroup in the database. Initially, the primary filegroup is the default filegroup, but members of the db_owner fixed database role can change the default filegroup at any time. All tables and indexes for which a filegroup was not specified upon creation are placed on the default filegroup.

Sizing the Primary Filegroup

Sizing the primary filegroup correctly is important. The primary filegroup must be large enough to hold all system tables and, if it remains the default filegroup, any tables not allocated to a user-defined filegroup.

If the primary filegroup runs out of space, no new information can be added to the system tables. If a user-defined filegroup runs out of space, only the user files that are specifically allocated to that filegroup are affected. The primary filegroup only fills when the automatic growth option is turned off or when the disk holding the primary filegroup runs out of space. To allow the primary filegroup to grow, turn the automatic growth option back on or free more disk space.

Example

The following example creates a user-defined filegroup in the Northwind database and adds a secondary data file to the user-defined filegroup.

ALTER DATABASE Northwind
ADD FILEGROUP orderhistorygroup
GO
ALTER DATABASE northwind
ADD FILE
(	 NAME = 'ordhistyear1',
 FILENAME = 'c:\mssql7\data\ordhist1.ndf',
 SIZE = 5MB)
TO FILEGROUP orderhistorygroup
GO
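After the filegroup has a file, you can place objects on it by specifying the filegroup in the ON clause when creating a table or index. The following sketch assumes a hypothetical order_hist table:

CREATE TABLE order_hist
( order_no   int      NOT NULL,
  order_date datetime NOT NULL )
ON orderhistorygroup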

Viewing Filegroup Information

The following list describes system stored procedures that display information about database files and filegroups:

sp_helpfile ['logical_file_name']
    Returns the physical names and attributes of all files, or of a specified file, associated with the current database.

sp_helpfilegroup ['filegroup_name']
    Returns the names and attributes of filegroups associated with the current database. If a filegroup name is specified, sp_helpfilegroup returns a list of the files in the group.

System Tables

System tables store information, called metadata, about the system and objects in databases. Metadata is information about data.

Database Catalog

Each database (including master) contains a collection of system tables that store metadata about that specific database. This collection of system tables is called the database catalog.

System Catalog

The system catalog, found only in the master database, is a collection of system tables that stores metadata about the entire system and all other databases.

System tables all begin with the sys prefix. The following list identifies some frequently used system tables and the database in which each is found:

sysxlogins (master)
    Contains one row for each login account that can connect to SQL Server. If you need to access information in sysxlogins, you should do so through the syslogins view.

sysmessages (master)
    Contains one row for each system error or warning that SQL Server can return.

sysdatabases (master)
    Contains one row for each database on a SQL Server.

sysusers (all databases)
    Contains one row for each Windows NT user, Windows NT group, SQL Server user, or SQL Server role in a database.

sysobjects (all databases)
    Contains one row for each object in a database, such as tables, constraints, rules, and stored procedures.

Metadata Retrieval

You can query a system table like any other table to retrieve information about the system. However, you should not write scripts that directly query the system tables because if the system tables are changed in future product versions, your scripts may fail or may not provide accurate information.

Caution: Writing scripts that directly modify the system tables is strongly discouraged. Changing a system table may make it impossible for SQL Server to operate normally.

When you write applications that retrieve metadata from system tables, you should use system stored procedures, system functions, or system-supplied information schema views.

System Stored Procedures

To make it easier for you to gather information about the state of the server and database objects, SQL Server provides a collection of prewritten queries called system stored procedures.

The names of most system stored procedures begin with the sp_ prefix. The following list describes three commonly used system stored procedures:

sp_help [object_name]
    Provides information on the specified database object.

sp_helpdb [database_name]
    Provides information on the specified database.

sp_helpindex [table_name]
    Provides information on the indexes for the specified table.

Example

The following example executes a system stored procedure to get information on the employee table.

sp_help employee

Many other stored procedures are used to create or modify system information or database objects by modifying the system tables. For example, the system stored procedure, sp_addlogin, creates a new login account in the master..sysxlogins system table.
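As a sketch, the following statement (the login name and password shown are illustrative only) adds a login; behind the scenes, sp_addlogin inserts a row into master..sysxlogins on your behalf:

EXEC sp_addlogin 'testlogin', 'password'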

As you have seen, some system stored procedures modify and query the system tables for you so that you do not have to do so directly.

System Functions

System functions provide a method for querying system tables from within Transact-SQL statements. System functions return specific, single values. The following list describes commonly used system functions, the parameters they take, and the information they return:

DB_ID(database_name)
    Returns the database ID.

USER_NAME(user_id)
    Returns the user's database user name.

COL_LENGTH(table_name, column_name)
    Returns the column width.

STATS_DATE(table_id, index_id)
    Returns the date when statistics for the specified index were last updated.

DATALENGTH(expression)
    Returns the actual number of bytes in the value of an expression of any data type.

Example

The following example uses a system function in a query to get the user name for a user ID of 10.

SELECT USER_NAME(10)

Information Schema Views

Information schema views provide an internal, system table-independent view of the SQL Server metadata. These views conform to the ANSI SQL standard definition for the information schema. Information schema views allow applications to work properly, even if future product versions change the system tables significantly.

In SQL Server, all information schema views are owned by a predefined information_schema user.

Each information schema view contains metadata for the data objects stored in a particular database. The following list describes commonly used information schema views:

information_schema.tables
    List of tables in the database for which the current user has permissions.

information_schema.columns
    Information on columns in the database accessible to the current user.

information_schema.table_privileges
    Security information for table privileges granted to or by the current user in the current database.

Example

The following example queries an information schema view to get a list of tables in a database.

SELECT * FROM information_schema.tables

To retrieve metadata

In this exercise, you will execute two queries to return the metadata from specific database objects using information schema views. Remember that information_schema is a predefined database user that is the owner of the information schema views.

  1. Execute the following statement to return a list of all the user-defined tables in a database:

    USE Northwind
     SELECT * FROM information_schema.tables
      WHERE table_type = 'base table'
    
  2. Execute the following statement to return the primary key and foreign key columns for the orders table:

    USE Northwind
     SELECT * FROM information_schema.key_column_usage
      WHERE table_name = 'Orders'
    

Notice that a primary key is defined on the OrderID column.

Lesson Summary

After creating a database there are many options that you can use to automatically control aspects of the database, such as making it read-only or having it shrink and grow automatically. When necessary, you can shrink or grow databases; but normally, you will allow this to happen automatically. Advanced database file management is achieved through associating database files together in filegroups. Information about a database is called metadata and SQL Server provides many ways to retrieve metadata, including system stored procedures, SQL Server Enterprise Manager, and information schema views.

Lesson 5: The library Database

The library database, shown in the illustration below, is used in the examples and exercises throughout this book. The database design has three groups of tables: those containing member information, those containing item (book) information, and those containing loan information.

After this lesson you will be able to:

  • Describe the structure of the library database used in the course exercises and examples

Estimated lesson time: 20 minutes

The library database is generated at the beginning of each chapter, using the bldlib.cmd file, which you will find in c:\sqlimpl\exercise\bldlib. The scripts used to generate the library database place 10,000 members in the database and generate 52,000 loan history records. Because some of the data is generated randomly, any queries that you run against the library database may return result sets that do not match the sample result sets given in this book.

Member Information

Member information in the library database is stored in the member, adult, and juvenile tables, as shown in Figure 3.4.


Figure 3.4: Member information tables.

Note: In the figures of the library tables, the following abbreviations are used: PK (primary key), FK (foreign key), and NN (not null).

The member table is the master table, while the adult and juvenile tables store subtype information. All three tables use the member_no column as a primary key. Because the value in this column is different for each member and uniquely identifies each row of information, the member_no column is a good choice for a primary key.

These entities could have been modeled in several different ways: for example, as a single table or as a member table and a juvenile table. If a single table had been used for all members, many addresses would have been duplicated, because juveniles in this model have the same address as their parents.

Librarians must be able to track birth dates of juveniles only, so splitting the membership information into several tables eliminates the null column values that would have resulted for the birth dates of adults.

Dividing the tables in this fashion also models the scenario in a way that reflects the membership of the library: member-to-adult is a one-to-one relationship; adult-to-juvenile is a one-to-many relationship.

Item Information

Item (book) information in the library database is stored in the title, item, and copy tables, as shown in Figure 3.5. The key table of this group is the title table.

Figure 3.5: Item information tables.

For each listing in the title table, one or more entries exist in the item table because a book may be available in several languages, may be available in softback or hardback, and may be loanable or not loanable. Therefore, title-to-item is a one-to-many relationship. Furthermore, for each listing in the item table, one or more copies of that item can exist. Therefore, item-to-copy is also a one-to-many relationship.

The item table has a loanable column. Rather than placing the information from this column in the copy table, the database designer assumes that all copies of a particular item are either loanable or not loanable.

Notice that the copy table has a primary key made up of two columns. This type of primary key is called a composite key. The combination of isbn and copy_no uniquely identifies each row in the table.

The copy table contains a duplicate title_no column. This group of tables has been denormalized in order to reduce the number of joins needed to retrieve information. Duplicating the title_no column makes it possible to find the title of a copy by joining the copy table directly to the title table. Without the duplicate title_no column, it would be necessary to join through the item table as well in order to find the title of a copy.

The on_loan column in the copy table is derived data—that is, data that could be generated with a query each time the information is needed. The information is kept in the table in order to make it readily available and to reduce the number of calculations that must be performed. The on_loan column is populated with information from the loan table, as shown in Figure 3.6. Because the loan table is changed frequently, locks could prevent a user from getting this information. The copy table is more likely to be used in a read-only fashion, so it would not be necessary to prevent users from accessing information stored there.
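A statement along the following lines could maintain the derived column. This is a sketch only; it assumes a single-character flag for on_loan, which may differ from the actual schema.

```sql
-- Illustrative sketch: refresh the derived on_loan flag from the loan table.
UPDATE copy
SET on_loan = CASE
      WHEN EXISTS (SELECT * FROM loan AS l
                   WHERE l.isbn = copy.isbn
                     AND l.copy_no = copy.copy_no)
      THEN 'Y' ELSE 'N'
   END
```

In practice the application would more likely set the flag on each checkout and return rather than recomputing it in bulk.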


Figure 3.6: Loan information tables.

Loan Information

The reservation, loan, and loanhist tables contain the loan information for the library database, as shown in Figure 3.6. The reservation table tracks current reservations for each book, the loan table tracks information on currently loaned books, and the loanhist table stores information on books that have been loaned and returned.

Note: In Figure 3.6, FK1 denotes a composite foreign key and FK denotes a single-column foreign key.

It is possible to combine the loan and loanhist tables to reduce redundancy, but combining them may create other problems. The loanhist table is essentially a history of all loans and could become unwieldy. Over time, librarians may want to back up information from this table, so it makes sense to keep all this information in its own table. In addition, this business model requires several queries to be made against the loanhist table. These queries would be easier to implement and faster to run if the history information were kept separately from the loan information.

The loan and loanhist tables also represent different functions of the application. When a book is checked out, an entry is made to the loan table. When a book is returned, an entry is made to the loanhist table, and the corresponding entry is deleted from the loan table. By maintaining separate tables for each function and denormalizing the tables, users can access the information more quickly. However, because the tables are denormalized, they require more maintenance. For example, when item.title_no is updated, the title_no column must also be updated in the loan, loanhist, and copy tables. Because updates to the title_no column are likely to be infrequent, the extra maintenance cost is small compared with the query speedup that denormalization provides.

A foreign key constraint is not created on the member_no column of the loanhist table because the source data comes from the loan table and the loan table already maintains referential integrity. Leaving out the foreign key constraint has been done intentionally so that a member can be deleted without having to delete the member history information. You can address this scenario in other ways. For example, you could use an inactive member table instead of deleting the member.

To create the library database

In this exercise, you will write and execute a statement to create the library database. You will find the entire script for this exercise in c:\sqlimpl\exercise\ch03\creabase.sql.

  1. Log on to your computer as Administrator or as another user who is a member of the Windows NT Administrators group.

  2. Open SQL Server Query Analyzer and log on to the (local) server with Microsoft Windows NT authentication. Your account is a member of the Windows NT Administrators group, which is automatically mapped to the SQL Server sysadmin role.

  3. In the DB list box on the toolbar, click master, which makes master the current database for your connection. You can also change your database context by executing the USE statement.

    USE master
    
  4. Write and execute a statement that creates the library database. Use the values in the following table:

    For this parameter                   Use this value
    ---------------------------------    --------------------------
    Database Logical File Name           library_data
    Operating System Data File Name      c:\mssql7\data\library.mdf
    Data File Initial Size               15 MB
    Data File Maximum Size               30 MB
    Data File Growth Increment           1 MB
    Log Logical File Name                library_log
    Operating System Log File Name       c:\mssql7\data\library.ldf
    Log File Initial Size                2 MB
    Log File Maximum Size                7 MB
    Log File Growth Increment            1 MB

    CREATE DATABASE library
    ON PRIMARY
      ( NAME = library_data,
        FILENAME = 'c:\mssql7\data\library.mdf',
        SIZE = 15MB,
        MAXSIZE = 30MB,
        FILEGROWTH = 1MB )
    LOG ON
      ( NAME = library_log,
        FILENAME = 'c:\mssql7\data\library.ldf',
        SIZE = 2MB,
        MAXSIZE = 7MB,
        FILEGROWTH = 1MB )
    
  5. Execute the sp_helpdb system stored procedure to view information on the library database and to verify that the database was created.

    EXEC sp_helpdb library
    

To increase the size of the library log file

In this exercise, you will write and execute a statement to increase the maximum size of the library_log to 10 MB and the current log size to 4 MB. You will find the script for this exercise in c:\sqlimpl\exercise\ch03\altebase.sql.

  1. Write and execute a statement that increases the maximum size of the library_log file to 10 MB.

    USE master
    GO
    ALTER DATABASE library
      MODIFY FILE (NAME = library_log,
        MAXSIZE = 10MB)
    GO
    
  2. Write and execute a statement that increases the current size of the library_log file to 4 MB.

    USE master
    GO
    ALTER DATABASE library
      MODIFY FILE (NAME = library_log,
        SIZE = 4MB)
    GO
    
  3. Execute the sp_helpdb system stored procedure to view information on the library database and to verify the changes.

    EXEC sp_helpdb library
    

To set a database option

In this exercise, you will write and execute a statement to set a database option for the library database transaction log. You will find the script for this exercise in c:\sqlimpl\exercise\ch03\logopt.sql.

  1. Execute the following statement to turn on the option that clears the transaction log automatically each time that SQL Server performs a checkpoint:

    EXEC sp_dboption library, 'trunc. log on chkpt.', true
    
  2. Execute the sp_helpdb system stored procedure to view information on the library database in order to verify that the option has been changed.

    EXEC sp_helpdb library
    

Lesson Summary

The library database is used for examples and exercises throughout this training kit. You should familiarize yourself with the library database schema and the library database case study in Appendix B.

Review

The following questions are intended to reinforce key information presented in this chapter. If you are unable to answer a question, review the appropriate lesson and then try the question again. Answers to the questions can be found in Appendix A, "Questions and Answers," located at the back of the book.


  1. You are creating a database that you expect will have a high level of INSERT, UPDATE, and DELETE activity. Should you accept the default transaction log size of 25 percent of the total database size? What must you consider if the database will be used primarily for queries?

  2. You are creating a database on multiple disks that will be queried intensively by users. What are some steps that you can take to improve performance and avoid disk contention?

  3. On a routine monitoring of the data files and transaction log, you notice that the transaction log is extremely close to being full. What would happen if the log filled? What steps can you take to avoid running out of transaction log space?

  4. Your company's mission-critical accounting records are in your database. Which transaction log modification option is not advisable to use?

The above article is courtesy of Microsoft Press http://www.microsoft.com/mspress/. Copyright 1999, Microsoft Corporation.

We at Microsoft Corporation hope that the information in this work is valuable to you. Your use of the information contained in this work, however, is at your sole risk. All information in this work is provided "as is", without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by Microsoft Corporation. Microsoft Corporation shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if it has been advised of the possibility of such damages. All prices for products mentioned in this document are subject to change without notice.
