The File System
Each database has one or more files used to store indexes and data. In many database scenarios, you will not need more than one data file and one log file. In some instances, however, you might want to implement one or more filegroups, and you might gain performance through the appropriate placement of objects within those groups.
The first file created for this purpose is referred to as the primary file. The primary file contains the information needed to start up a database and is also used to store some or all of the data. If desired, secondary files can be created to hold some of the data and other objects; databases that are large or complex enough in their design might use multiple secondary files for storage. Normally, the log is maintained in a single file. The log file stores changes to the database before those changes are recorded in the data files themselves, which makes the log an important part of the SQL Server recovery process. Every time SQL Server is started, it uses the log file of each database to determine which units of work were still being handled at the time the server was stopped.
Data and log files can be given any filename, although it is recommended that you select names that indicate the content of each file. The file extensions for the primary data file, secondary data file(s), and log files can also be any chosen set of characters, but for consistency and standardization the recommended extensions are .mdf, .ndf, and .ldf for the primary, secondary, and log files, respectively.
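For example, the following CREATE DATABASE statement is a minimal sketch of this layout; the Sales database name, the logical file names, the sizes, and the drive paths are all hypothetical. It creates a primary file, one secondary file, and a log file using the standard extensions:

CREATE DATABASE Sales
ON PRIMARY
   ( NAME = SalesPrimary,                            -- primary data file (.mdf)
     FILENAME = 'C:\MSSQL\Data\SalesPrimary.mdf',
     SIZE = 50MB ),
   ( NAME = SalesData1,                              -- secondary data file (.ndf)
     FILENAME = 'D:\MSSQL\Data\SalesData1.ndf',
     SIZE = 100MB )
LOG ON
   ( NAME = SalesLog,                                -- transaction log (.ldf)
     FILENAME = 'E:\MSSQL\Log\SalesLog.ldf',
     SIZE = 25MB )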
Creating Files and Filegroups
Filegroups enable a group of files to be handled as a single unit, making implementations that require multiple files easier to accommodate. With filegroups, SQL Server provides an administrative mechanism for grouping files within a database. You might want to implement filegroups to spread data across more than one logical disk partition or physical disk drive. In some cases, this provides increased performance, as long as the hardware can read from and write to multiple drives concurrently.
You can create filegroups when a database is created, or you can add them later when more files are needed or desired. After a file has been assigned to a filegroup, however, it cannot be moved to a different filegroup; a file cannot be a member of more than one filegroup. SQL Server provides a lot of flexibility in the implementation of filegroups. Tables, indexes, text, ntext, and image data can each be associated with a specific filegroup, allocating all of their pages to that group. Filegroups can contain only data files; log files cannot be part of a filegroup.
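As a sketch of adding a filegroup after the database exists (again using the hypothetical Sales database; the filegroup and file names are illustrative), you first create the filegroup and then add a file to it:

-- Create the new filegroup
ALTER DATABASE Sales
ADD FILEGROUP OrderHistoryGroup
GO

-- Add a secondary file and assign it to the new filegroup
ALTER DATABASE Sales
ADD FILE
   ( NAME = SalesHistory1,
     FILENAME = 'F:\MSSQL\Data\SalesHistory1.ndf',
     SIZE = 100MB )
TO FILEGROUP OrderHistoryGroup
GO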
If you place indexes into their own filegroup, the index and data pages can be handled as separate physical read elements. If the associated filegroups are placed on separate physical devices, each can be read without interfering with the reading of the other: while an index is read through sequentially, the data can be accessed randomly without moving the physical arm of a hard drive back and forth between the index and the data. This can improve performance and at the same time save on hardware wear and tear.
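To direct an index to a particular filegroup, name the filegroup in the ON clause of CREATE INDEX. This sketch assumes a hypothetical Orders table and an IndexGroup filegroup that has already been created:

CREATE NONCLUSTERED INDEX ix_Orders_CustomerID
ON Orders (CustomerID)
ON IndexGroup   -- index pages are allocated from IndexGroup, not the table's filegroup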
Placing an entire table into its own filegroup offers many benefits. If you do so, you can back up the table without having to perform a much larger backup operation. Archived or seldom-used data can be separated from the data that is more readily needed. Of course, the reverse is also true: A table that needs to be more readily available within a database can be placed into its own filegroup to enable quicker access. In many instances, planned denormalization (the purposeful creation of redundant data) can be combined with this feature to obtain the best response.
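A table is assigned to a filegroup with the ON clause of CREATE TABLE. This sketch places a hypothetical archive table on the OrderHistoryGroup filegroup created earlier:

CREATE TABLE OrderArchive
   ( OrderID   int      NOT NULL,
     OrderDate datetime NOT NULL,
     Amount    money    NOT NULL )
ON OrderHistoryGroup   -- all data pages for this table come from OrderHistoryGroup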
Placing text, ntext, and image data in their own filegroup can improve application performance. Consider an application design that allows the data for these column types to be fetched only upon user request; frequently, it is not necessary for a user to view pictures and extensive notes within a standard query. Not only does this make better use of the hardware, but it also can provide faster query responses and less bandwidth saturation, because data that is not required is not sent across the network.
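The TEXTIMAGE_ON clause of CREATE TABLE controls where text, ntext, and image pages are stored. In this sketch (the table, its columns, and the ImageGroup filegroup are all hypothetical), the row data stays on the primary filegroup while the large object pages go elsewhere:

CREATE TABLE ProductInfo
   ( ProductID   int   NOT NULL,
     Description text  NULL,
     Photo       image NULL )
ON [PRIMARY]                -- row data on the primary filegroup
TEXTIMAGE_ON ImageGroup     -- text and image pages on their own filegroup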
Filegroups can provide for a more effective backup strategy in larger database environments. If a large database is placed across multiple filegroups, the database can be backed up in smaller pieces. This is important when a full backup of the entire database would take too long.
After you have decided to use a filegroup strategy for storing data, always ensure that when a backup is performed against a filegroup, the associated indexes are backed up at the same time. This is easily accomplished if the data and indexes are stored in the same filegroup. If they are located on separate filegroups, ensure that both the data and the index filegroups are included in a single backup operation. Be aware that SQL Server does not enforce backup of data and index filegroups in a single operation; you must ensure that the files associated with the indexes of a particular table are backed up along with the data during a filegroup backup.
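A filegroup backup names each filegroup explicitly, so the data and index filegroups can be captured in one operation. A sketch, with hypothetical filegroup names and backup path:

BACKUP DATABASE Sales
   FILEGROUP = 'OrderHistoryGroup',   -- the data
   FILEGROUP = 'IndexGroup'           -- its indexes, backed up in the same operation
TO DISK = 'G:\Backups\SalesFilegroups.bak'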
Objects can easily be moved from one filegroup to another. Using the appropriate property page, you simply select the new filegroup into which you want to move the object. Logs are not stored in filegroups. You can, however, use multiple log files and place them in different locations to ease maintenance and provide more storage space for log content.
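From T-SQL, one way to move a table is to rebuild its clustered index on the target filegroup, because the leaf level of a clustered index is the table's data. This is only a sketch; it assumes the table already has a clustered index named ix_OrderArchive that was created with CREATE INDEX rather than through a constraint:

CREATE CLUSTERED INDEX ix_OrderArchive
ON OrderArchive (OrderID)
WITH DROP_EXISTING      -- rebuild the existing index rather than create a duplicate
ON OrderHistoryGroup    -- the table's data pages move with the clustered index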
File Placement for Performance and Reliability
The proper placement of the files that make up a SQL Server 2000 database environment helps to ensure optimum performance while minimizing administration. Recoverability can also be improved in the event of data corruption or hardware failures if appropriate measures are taken. On the exam, you must be prepared to respond to these requirements and properly configure the interactions with the file system.
It is essential to understand the basics of the file system and its use by SQL Server. Know when to split off a portion of the database structure and storage to a separate physical disk drive. Many processes performed within SQL Server can be classified as sequential or random. In a sequential process, the data or file can be read in a forward progression without having to seek for the next piece of data to be read. In a random process, the data is typically more spread out, and retrieving it requires multiple physical accesses.
Where possible, it is desirable to keep sequential processes running without physical interruption from other processes contending for the device. Using file placement to keep random processes separate from sequential ones minimizes contention over the positioning of the read/write heads.
As a minimum requirement for almost any implementation, you should separate the normally sequential processing of the log files from the random processing of the data. You also improve recoverability by separating the data from the log and placing them on separate physical volumes. If the volume where the data is stored is damaged and must be restored from backup, you still have access to the final log entries. That tail of the log can be backed up and restored against the database, which gives something very close to 100% recoverability, right to the point of failure.
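The tail of the log is captured with a log backup using the NO_TRUNCATE option, which works even when the data files are inaccessible. A sketch against the hypothetical Sales database:

-- Capture the log records written since the last log backup,
-- even though the data volume has failed
BACKUP LOG Sales
TO DISK = 'G:\Backups\SalesTailLog.bak'
WITH NO_TRUNCATE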
An interesting and flexible strategy is to provide a separate drive solely for the log. This single volume does not have to participate in a RAID architecture, although RAID might be desired for full recoverability. Giving the log an entire volume provides more room for it to grow and accumulate over time without the need for periodic emptying; fewer log backups are needed, and the best possible log performance is achieved.
Two primary concerns in most data environments are recoverability in the event of the inevitable failures and minimal downtime. In the industry, one of the optimum ratings to strive for is the elusive "five nines" (99.999%). This rating means that over a given period (generally measured across at least 365 days), the server remained online and serving end users 99.999% of the time. In other words, the total downtime for an entire year would be a little more than five minutes.
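The arithmetic behind that figure is straightforward:

365 days x 24 hours x 60 minutes = 525,600 minutes per year
525,600 x (1 - 0.99999) = 5.256 minutes of allowable downtime per year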