Next, the query shown in Listing 41 will provide some information about your cached stored procedures from a logical reads perspective.
LISTING 41: Top cached stored procedures by total logical reads
-- Top Cached SPs By Total Logical Reads (SQL Server 2012)
-- Logical reads relate to memory pressure
SELECT TOP(25) p.name AS [SP Name], qs.total_logical_reads AS [TotalLogicalReads],
qs.total_logical_reads/qs.execution_count AS [AvgLogicalReads], qs.execution_count,
ISNULL(qs.execution_count/DATEDIFF(SECOND, qs.cached_time, GETDATE()), 0)
AS [Calls/Second], qs.total_elapsed_time,
qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time], qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_logical_reads DESC OPTION (RECOMPILE);
-- This helps you find the most expensive cached stored procedures
-- from a memory perspective
-- You should look at this if you see signs of memory pressure
This query returns the top cached procedures
ordered by total logical reads. Logical reads equate to memory
pressure, and indirectly to I/O pressure. A logical read occurs when a
query finds the data that it needs in the buffer pool (in memory). Once
data is initially read off of the I/O subsystem, it goes into the SQL
Server buffer pool. If you have a large amount of physical RAM and your
instance-level max server memory setting is at an appropriately high
level, you will have a relatively large amount of space for the SQL
Server buffer pool, which means that SQL Server is much more likely to
subsequently find what it needs there, rather than access the I/O
subsystem with a physical read.
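If you want to verify how that space is actually configured and used on your instance, a quick sketch like the following works; it only reads standard catalog views and DMVs, and the 8 in the second query simply converts 8KB buffer pages to megabytes.

```sql
-- Check the instance-level max server memory setting (value is in MB)
SELECT [name], value_in_use
FROM sys.configurations WITH (NOLOCK)
WHERE [name] = N'max server memory (MB)' OPTION (RECOMPILE);

-- See how much of the buffer pool each database is currently using
SELECT DB_NAME(database_id) AS [Database Name],
COUNT(*) * 8 / 1024 AS [Cached Size (MB)]
FROM sys.dm_os_buffer_descriptors WITH (NOLOCK)
GROUP BY database_id
ORDER BY [Cached Size (MB)] DESC OPTION (RECOMPILE);
```

The second query can take a few seconds on a server with a very large buffer pool, so avoid running it repeatedly on a busy production system.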
If you are seeing signs of memory pressure, such
as persistently low page life expectancy values, high memory grants
outstanding, and high memory grants pending, look very closely at the
results of this query. Again, you need to pay close attention to the cached_time column to ensure that you are really looking at the most expensive stored procedures from a memory perspective.
After I have identified the top several stored
procedure offenders, I like to run them individually (with appropriate
input parameters captured from SQL Server Profiler) in SSMS with
SET STATISTICS IO ON and the graphical execution plan enabled. This enables
me to start troubleshooting why the queries in the stored procedure are
generating so many logical reads. Perhaps the queries are doing
implicit conversions that cause them to ignore a perfectly valid index,
or maybe they are using T-SQL functions on the left side of a WHERE
clause. Another common issue is a clustered index or table scan due to
a missing index. There are many possible reasons why a query has a very
large number of logical reads.
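The "function on the left side of a WHERE clause" problem deserves a quick illustration. The sketch below uses a hypothetical dbo.Orders table with an index on its OrderDate column; the table and index names are invented for the example.

```sql
-- Non-sargable: wrapping the column in a function prevents an index seek
-- on OrderDate, typically forcing a clustered index or table scan
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CONVERT(date, OrderDate) = '2012-06-15';

-- Sargable rewrite: leave the column alone and express the same filter
-- as a range, so the optimizer can seek on an index on OrderDate
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20120615'
  AND OrderDate <  '20120616';
```

The same principle applies to implicit conversions: if a parameter's data type forces SQL Server to convert the column rather than the parameter, the index on that column is effectively ignored.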
If you are using SQL Server 2008 or later and you
have Enterprise Edition, you should take a look at SQL Server data
compression. Data compression is usually touted as a way to reduce your
I/O utilization requirements in exchange for some added CPU
utilization. While it does work very well for that purpose (with
indexes that are good candidates for compression), it can also reduce
your memory pressure in many cases. An index that has been compressed
will stay compressed in the buffer pool, until the data is updated.
This can dramatically reduce the space required in the buffer pool for
that index.
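Before compressing anything, it is worth estimating the savings. The sketch below assumes a hypothetical dbo.Orders table with a clustered index named PK_Orders; substitute your own object names.

```sql
-- Estimate the space savings for PAGE compression before committing to it
EXEC sp_estimate_data_compression_savings
    @schema_name = N'dbo',
    @object_name = N'Orders',       -- hypothetical table name
    @index_id = 1,                  -- 1 = the clustered index
    @partition_number = NULL,
    @data_compression = N'PAGE';

-- Rebuild the index with PAGE compression
-- (data compression and ONLINE rebuilds both require Enterprise Edition)
ALTER INDEX PK_Orders ON dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);
```

Indexes with highly repetitive data and relatively static, read-heavy access patterns tend to be the best compression candidates; volatile indexes pay the CPU cost more often.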
The next query, shown in Listing 42, looks at the most expensive stored procedures from a physical reads perspective.
LISTING 42: Top cached stored procedures by total physical reads
-- Top Cached SPs By Total Physical Reads (SQL Server 2012)
-- Physical reads relate to disk I/O pressure
SELECT TOP(25) p.name AS [SP Name], qs.total_physical_reads AS [TotalPhysicalReads],
qs.total_physical_reads/qs.execution_count AS [AvgPhysicalReads], qs.execution_count,
qs.total_logical_reads, qs.total_elapsed_time,
qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time], qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
AND qs.total_physical_reads > 0
ORDER BY qs.total_physical_reads DESC,
qs.total_logical_reads DESC OPTION (RECOMPILE);
-- This helps you find the most expensive cached stored procedures
-- from a read I/O perspective
-- You should look at this if you see signs of I/O pressure or of memory pressure
This query returns the top cached stored
procedures ordered by total physical reads. Physical reads equate to
disk I/O cost. A physical read happens when SQL Server cannot find what
it needs in the SQL Server buffer pool, so it must go out to the
storage subsystem to retrieve the data. No matter what kind of storage
you are using, it is much slower than physical memory.
If you are seeing signs of I/O pressure, such as
I/O-related wait types in your top cumulative wait types query, or high
Avg. Disk sec/Read latency values in Windows Performance Monitor, examine the
results of this query very closely. Don’t forget to consider how long a
stored procedure has been in the cache by looking at the cached_time
column. A very expensive stored procedure that was just recently cached
will probably not show up at the top of the list compared to other
stored procedures that have been cached for a long period of time.
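You can also measure read latency per database file directly from SQL Server, which is a useful cross-check against Performance Monitor. This sketch uses sys.dm_io_virtual_file_stats, whose counters are cumulative since the last instance restart.

```sql
-- Average read latency (ms) per database file since the last restart
SELECT DB_NAME(vfs.database_id) AS [Database Name], mf.physical_name,
CASE WHEN vfs.num_of_reads = 0 THEN 0
     ELSE vfs.io_stall_read_ms / vfs.num_of_reads
END AS [Avg Read Latency (ms)],
vfs.num_of_reads
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf WITH (NOLOCK)
ON vfs.database_id = mf.database_id
AND vfs.[file_id] = mf.[file_id]
ORDER BY [Avg Read Latency (ms)] DESC OPTION (RECOMPILE);
```

Files showing sustained average read latency well above what your storage class should deliver are a strong hint that the physical reads surfaced by Listing 42 are actually hurting you.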
After identifying the top several stored
procedure offenders, run them individually (with appropriate input
parameters captured from SQL Server Profiler) in SSMS with
SET STATISTICS IO ON and the graphical execution plan enabled. This will
help you determine why the queries in the stored procedure are
generating so many physical reads. Again, after you have exhausted
standard query-tuning techniques to improve the situation, you should
consider using SQL Server data compression (if you have Enterprise
Edition) to further reduce the amount of data being read off of the I/O
subsystem. Other options (besides standard query tuning) include adding
more physical RAM to your server and improving your I/O subsystem.
Perhaps you can add additional spindles to a RAID array, change the
RAID level, change the hardware cache policy, and so on.
Next, take a look at the most expensive cached stored procedures for logical writes. To do that, use the query shown in Listing 43.
LISTING 43: Top cached stored procedures by total logical writes
-- Top Cached SPs By Total Logical Writes (SQL Server 2012)
-- Logical writes relate to both memory and disk I/O pressure
SELECT TOP(25) p.name AS [SP Name], qs.total_logical_writes AS [TotalLogicalWrites],
qs.total_logical_writes/qs.execution_count AS [AvgLogicalWrites], qs.execution_count,
ISNULL(qs.execution_count/DATEDIFF(SECOND, qs.cached_time, GETDATE()), 0)
AS [Calls/Second], qs.total_elapsed_time,
qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time], qs.cached_time
FROM sys.procedures AS p WITH (NOLOCK)
INNER JOIN sys.dm_exec_procedure_stats AS qs WITH (NOLOCK)
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_logical_writes DESC OPTION (RECOMPILE);
-- This helps you find the most expensive cached stored procedures
-- from a write I/O perspective
-- You should look at this if you see signs of I/O pressure or of memory pressure
This query returns the most expensive cached
stored procedures ordered by total logical writes, meaning simply the
stored procedures that are generating the most write activity in your
database. You might be surprised to see SELECT-type stored procedures show up in this list, but that often happens when a procedure inserts intermediate results into a temporary table or table variable before performing a later SELECT operation.
Especially with OLTP workloads that see a lot of
intensive write activity, you should pay attention to the results of
this query. As always, consider the cached_time
column before making any judgments. After you have identified the
actual top offenders in this query, talk to your developers to see if
perhaps they are updating too much information, or updating information
too frequently. I would also look at the index usage on your most
frequently updated tables. You might discover that you have a number of
nonclustered indexes that have a high number of writes, but no reads.
Having fewer indexes on a volatile, write-intensive table will
definitely help write performance. After some further investigation and
analysis, you might want to drop some of those unused indexes.
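Finding those write-only nonclustered indexes is straightforward with sys.dm_db_index_usage_stats; a sketch follows. Keep in mind that these usage statistics are reset when the instance restarts, so make sure the server has been running through a representative workload before trusting the numbers.

```sql
-- Nonclustered indexes in this database with writes but no reads
SELECT OBJECT_NAME(i.[object_id]) AS [Table Name], i.[name] AS [Index Name],
s.user_updates AS [Total Writes],
s.user_seeks + s.user_scans + s.user_lookups AS [Total Reads]
FROM sys.indexes AS i WITH (NOLOCK)
INNER JOIN sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
ON i.[object_id] = s.[object_id]
AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
AND i.index_id > 1        -- nonclustered indexes only
AND s.user_updates > 0
AND s.user_seeks + s.user_scans + s.user_lookups = 0
ORDER BY s.user_updates DESC OPTION (RECOMPILE);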
From a hardware perspective, adding more physical
RAM to your server might help even out your write I/O workload a little
bit. If SQL Server has more RAM in the buffer pool, it will not have to
issue automatic checkpoints to write to the data file(s) quite as
often. Going longer between automatic checkpoints can help reduce total
write I/O somewhat because more data in the same data pages might have
been modified over that longer period of time. A system that is under
memory pressure will also be forced to have the lazy writer write dirty
pages in memory to the disk subsystem more often.
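You can get a rough sense of checkpoint and lazy writer activity from the Buffer Manager counters. Note that these are cumulative per-second counters, so a single snapshot shows totals since the last restart; to get a true per-second rate, sample the values twice and compute the delta over the interval.

```sql
-- Checkpoint and lazy writer write activity (cumulative counter values)
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters WITH (NOLOCK)
WHERE [object_name] LIKE N'%Buffer Manager%'
AND counter_name IN (N'Checkpoint pages/sec', N'Lazy writes/sec')
OPTION (RECOMPILE);
```

A high rate of lazy writes relative to checkpoint writes is one more corroborating sign of memory pressure.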
Finally, improving your I/O subsystem,
especially the LUN where your transaction log is located, would be an
obvious step. Again, adding more spindles to the RAID array, changing
from RAID 5 to RAID 10, and making sure your RAID controller hardware
cache is used for writes instead of reads will all help write
performance.