SQL Server 2012 : Delivering A SQL Server Health Check (part 8)


DATABASE-LEVEL QUERIES

After running all the server- and instance-level queries, you should have a fairly good idea of which database or databases are the most resource intensive on a particular instance of SQL Server. To get more details about a particular database, you need to switch your database context to that database and run a set of database-specific queries. The code in Listing 32 shows how to switch your database context using T-SQL. Be sure to change the database name to the one you want to investigate.

LISTING 32: Switching to a user database

-- Database specific queries ******************************************************

-- **** Switch to a user database *****
USE YourDatabaseName;
GO

This code merely switches your database context to the named database. Many people make the mistake of running these queries while connected to the master system database; if you do that, you will get a lot of mostly useless information about the master database.

After you are sure you are pointing at the correct database, you can find out how large it is with the query shown in Listing 33.

LISTING 33: Database file sizes and space available

-- Individual File Sizes and space available for current database
SELECT name AS [File Name], physical_name AS [Physical Name],
       size/128.0 AS [Total Size in MB],
       size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0
       AS [Available Space In MB], [file_id]
FROM sys.database_files WITH (NOLOCK) OPTION (RECOMPILE);

-- Look at how large and how full the files are and where they are located
-- Make sure the transaction log is not full!!

This query shows you where the data and log files for your database are located. It also returns how large and how full they are. It is a good way to help monitor and manage your data and log file sizing and file growth. I don’t like to see my data files getting too close to being 100% full, and I don’t like to see my log files getting more than 50% full. You should manually manage the growth of your data and log files, leaving autogrow enabled only for an emergency.
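If you want to turn those rules of thumb into something you can check at a glance, here is a minimal sketch (my own variation on Listing 33, not one of the numbered listings) that returns only the files above a fullness threshold. The 90% data file and 50% log file thresholds are assumptions you should adjust:

-- Sketch: flag files in the current database above a fullness threshold
-- The 90% and 50% threshold values are assumptions; adjust them as needed
DECLARE @DataPctFull decimal(5,1) = 90.0;
DECLARE @LogPctFull  decimal(5,1) = 50.0;

SELECT name AS [File Name], type_desc AS [File Type],
       CAST(FILEPROPERTY(name, 'SpaceUsed') * 100.0 / size AS decimal(5,1))
       AS [Pct Used]
FROM sys.database_files WITH (NOLOCK)
WHERE FILEPROPERTY(name, 'SpaceUsed') * 100.0 / size >
      CASE type_desc WHEN 'LOG' THEN @LogPctFull ELSE @DataPctFull END
OPTION (RECOMPILE);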

The next query, shown in Listing 34, uses a DMV that enables you to focus solely on your transaction log size and space used.

LISTING 34: Transaction log size and space used

-- Get transaction log size and space information for the current database
SELECT DB_NAME(database_id) AS [Database Name], database_id,
       CAST((total_log_size_in_bytes/1048576.0) AS DECIMAL(10,1))
       AS [Total_log_size(MB)],
       CAST((used_log_space_in_bytes/1048576.0) AS DECIMAL(10,1))
       AS [Used_log_space(MB)],
       CAST(used_log_space_in_percent AS DECIMAL(10,1)) AS [Used_log_space(%)]
FROM sys.dm_db_log_space_usage WITH (NOLOCK) OPTION (RECOMPILE);

-- Another way to look at transaction log file size and space

This query, using a DMV introduced in SQL Server 2008 R2 Service Pack 1, enables you to directly query your log file size and the space used, as a percentage. It would be relatively easy to use this DMV to write a query that triggers an alert when a log file usage percentage that you specify is exceeded. Of course, if you are properly managing the size of your transaction log, along with how often you take transaction log backups when you are in the FULL recovery model, you should not run into problems that often. The obvious exceptions are when something goes wrong with database mirroring or replication, or a long-running transaction causes your transaction log to fill up despite frequent transaction log backups.
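As a concrete starting point, here is a minimal sketch of such an alert query, which you could schedule as a SQL Server Agent job. The 80% threshold and severity level are assumptions to tune for your environment:

-- Sketch: raise an error when log space used exceeds a specified threshold
-- The 80% threshold is an assumption; pick a value that suits your system
DECLARE @LogUsedPctThreshold decimal(10,1) = 80.0;
DECLARE @UsedPct decimal(10,1);

SELECT @UsedPct = CAST(used_log_space_in_percent AS decimal(10,1))
FROM sys.dm_db_log_space_usage WITH (NOLOCK);

IF @UsedPct > @LogUsedPctThreshold
    RAISERROR(N'Transaction log usage is above the specified threshold.', 16, 1);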

The next query, shown in Listing 35, will enable you to gather some I/O statistics by file for the current database.

LISTING 35: I/O statistics by file for the current database

-- I/O Statistics by file for the current database
SELECT DB_NAME(DB_ID()) AS [Database Name], [file_id], num_of_reads, num_of_writes,
       io_stall_read_ms, io_stall_write_ms,
       CAST(100. * io_stall_read_ms/(io_stall_read_ms + io_stall_write_ms)
            AS DECIMAL(10,1)) AS [IO Stall Reads Pct],
       CAST(100. * io_stall_write_ms/(io_stall_write_ms + io_stall_read_ms)
            AS DECIMAL(10,1)) AS [IO Stall Writes Pct],
       (num_of_reads + num_of_writes) AS [Writes + Reads],
       num_of_bytes_read, num_of_bytes_written,
       CAST(100. * num_of_reads/(num_of_reads + num_of_writes)
            AS DECIMAL(10,1)) AS [# Reads Pct],
       CAST(100. * num_of_writes/(num_of_reads + num_of_writes)
            AS DECIMAL(10,1)) AS [# Writes Pct],
       CAST(100. * num_of_bytes_read/(num_of_bytes_read + num_of_bytes_written)
            AS DECIMAL(10,1)) AS [Read Bytes Pct],
       CAST(100. * num_of_bytes_written/(num_of_bytes_read + num_of_bytes_written)
            AS DECIMAL(10,1)) AS [Written Bytes Pct]
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) OPTION (RECOMPILE);

-- This helps you characterize your workload better from an I/O perspective

This query returns the number of reads and writes for each file in your database. It also returns the number of bytes read and written for each file in the database, and the number of read I/O and write I/O stalls for each file in the database. Finally, it breaks down the read/write ratio and read/write I/O stall ratio into percentage terms. The point of all this information is to help you better characterize your I/O workload at the database-file level. For example, you might discover that you are doing a lot more writes to a particular data file than you expected, which might be a good reason to consider using RAID 10 instead of RAID 5 for the logical drive where that data file is located. Seeing a lot of I/O stalls for a particular database file might mean that the logical drive where that file is located is not performing very well or simply that the database file in question is particularly active. It is definitely something to investigate further.
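If the stall percentages point at a suspect file, the average stall time per operation is often a more intuitive number to look at. Here is a small sketch along those lines (my own addition, not one of the numbered listings; NULLIF guards against files with no activity yet):

-- Sketch: average stall time per read and per write for each database file
-- NULLIF avoids a divide-by-zero error on files with no reads or writes yet
SELECT DB_NAME(DB_ID()) AS [Database Name], [file_id],
       CAST(1. * io_stall_read_ms/NULLIF(num_of_reads, 0)
            AS DECIMAL(10,1)) AS [Avg Read Stall (ms)],
       CAST(1. * io_stall_write_ms/NULLIF(num_of_writes, 0)
            AS DECIMAL(10,1)) AS [Avg Write Stall (ms)]
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) OPTION (RECOMPILE);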

Next, with the query shown in Listing 36, you are going to take a look at the transaction log Virtual Log File (VLF) count.

LISTING 36: Virtual Log File count

-- Get VLF count for transaction log for the current database,
-- number of rows equals the VLF count. Lower is better!
DBCC LOGINFO;

-- High VLF counts can affect write performance
-- and they can make database restore and recovery take much longer

This query simply tells you how many VLFs you have in your transaction log file. Having a large number of VLFs in your transaction log can affect write performance to your transaction log. More important, it can have a huge effect on how long it takes to restore a database, and on how long it takes a database to become available in a clustering failover. It can also affect how long it takes to recover a database when your instance of SQL Server is started or restarted.
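DBCC LOGINFO returns one row per VLF, so counting the rows gives you the number. If you would rather get the count as a single value, one approach (a sketch, assuming the SQL Server 2012 column layout, which added the RecoveryUnitId column) is to capture the output into a temporary table:

-- Sketch: capture DBCC LOGINFO output to get the VLF count as one number
-- Column layout assumes SQL Server 2012 (RecoveryUnitId was added in 2012)
CREATE TABLE #VLFInfo
    (RecoveryUnitId int, FileId int, FileSize bigint, StartOffset bigint,
     FSeqNo bigint, [Status] bigint, Parity bigint, CreateLSN numeric(38,0));

INSERT INTO #VLFInfo
EXEC (N'DBCC LOGINFO');

SELECT COUNT(*) AS [VLF Count] FROM #VLFInfo;

DROP TABLE #VLFInfo;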

So what is considered a large number of VLFs? I don’t like to see more than a couple of hundred VLFs in a transaction log. For the most part, fewer VLFs is better than more, but I don’t worry too much until the count exceeds 200 to 300. The most common way to get a high VLF count is to create a database in the FULL recovery model with the default size and autogrowth increment for the transaction log file, and then fail to take frequent transaction log backups. By default, you start out with a 1MB transaction log file that is set to grow by 10% when autogrow kicks in after the file fills up completely. The now 1.1MB file quickly fills up again, and autogrow makes it 10% larger. This happens repeatedly, and each time the transaction log file grows, more VLFs are added to it. If the growth amount is less than 64MB, 4 VLFs are added to the transaction log. If the growth amount is between 64MB and 1GB, 8 VLFs are added. Finally, if the growth amount is over 1GB, 16 VLFs are added.

Knowing this, you can see how a 1MB transaction log file can grow and end up with tens of thousands of VLFs. The way to avoid this is to manually manage your transaction log file size, and to change the autogrowth increment to a more reasonable value. That way you will have fewer growth events (whether manual or autogrows), and therefore a lower VLF count. For a relatively large and active database, I recommend setting the autogrowth increment to 8000MB. This way, you need only a few growth events to grow the transaction log file to a sufficiently large size, which keeps the VLF count much lower.
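Setting the size and autogrowth increment is a simple ALTER DATABASE statement. This sketch assumes a log file with the logical name YourDatabaseName_log; substitute the actual logical file name, which you can find in the output of Listing 33:

-- Sketch: pre-size the log file and set a fixed autogrowth increment
-- The logical name YourDatabaseName_log is an assumption; check
-- sys.database_files for the actual name on your system
ALTER DATABASE YourDatabaseName
MODIFY FILE (NAME = N'YourDatabaseName_log', SIZE = 8000MB, FILEGROWTH = 8000MB);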

Picking a good size for your transaction log file depends on a number of factors. First, how much write activity do you think your database will see with its normal workload? You want to figure out how much transaction log activity is generated in an hour, in terms of MB or GB. One easy way to determine this is to take an uncompressed transaction log backup every hour for a full day, which gives you a good idea of your average and peak log generation rates. Make sure that your transaction log file is large enough to hold at least eight hours of normal activity, and consider when and how often you do maintenance such as reorganizing or rebuilding indexes, which generates a lot of log activity. Creating new indexes on large tables and loading or deleting a lot of data also generate heavy transaction log activity.
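Rather than inspecting each backup file by hand, you can pull the log backup sizes for the day out of the msdb backup history. This is a sketch of that idea, restricted to the current database:

-- Sketch: recent log backup sizes for the current database from msdb history
-- backup_size is in bytes, so divide by 1048576.0 to get MB
SELECT bs.database_name, bs.backup_start_date,
       CAST(bs.backup_size/1048576.0 AS DECIMAL(10,1)) AS [Log Backup Size (MB)]
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'L'  -- log backups only
AND bs.database_name = DB_NAME()
ORDER BY bs.backup_start_date DESC;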

You should also consider how often you are going to run transaction log backups (in order to help meet your Recovery Point Objective [RPO] and Recovery Time Objective [RTO]). If you run very frequent transaction log backups, you may be able to get by with a somewhat smaller transaction log file. This also depends on how large your database is and how long it takes to do a full database backup: while a full database backup is running, transaction log backups will not clear the log file, so if you have a very slow I/O subsystem and a very large database, your full database backups may take a long time to complete. You want to size your transaction log file to be large enough that it never has to autogrow. One disadvantage of an extremely large transaction log file (besides wasting some disk space) is that it takes quite a bit longer to restore a copy of your database, as SQL Server cannot use Windows Instant File Initialization on log files.

If you discover that you have a very high number of VLFs in your transaction log file, you should take a transaction log backup and then immediately shrink the transaction log file (not the entire database, just the log file). After you do this, check your VLF count. It may not have gone down by much, depending on the prior activity and state of your transaction log. If that is the case, simply repeat the transaction log backup and log file shrink sequence several times until the VLF count comes down. By this time, your transaction log file will probably be very small, so you will want to immediately grow it back, in reasonable increments, to a reasonable size based on the factors previously discussed. Keep in mind that if you decide to grow your transaction log file in 8000MB increments and you have a slow I/O subsystem, you may see a performance impact during the file growth operation, as Windows Instant File Initialization does not work on transaction log files.
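Put together, the sequence looks something like the following sketch. The backup path and the assumption that the log file is file_id 2 (check the output of Listing 33) are placeholders for your own values:

-- Sketch: back up the log, shrink just the log file, then regrow it
-- The backup path and file_id 2 are assumptions; verify both first
BACKUP LOG YourDatabaseName
TO DISK = N'X:\SQLBackups\YourDatabaseName_Log.trn';

DBCC SHRINKFILE (2, 1);  -- shrink only the log file (file_id 2) to about 1MB

-- Repeat the backup/shrink pair until the VLF count drops, then grow the
-- log back out in increments, as discussed above
ALTER DATABASE YourDatabaseName
MODIFY FILE (NAME = N'YourDatabaseName_log', SIZE = 8000MB);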

 