This section evaluates common tools for
managing, manipulating, and interpreting PerfMon logs. Because PerfMon
logs can be saved or converted to comma-separated value (CSV) files,
there are many options for data analysis, including loading the file
into SQL Server, or analyzing it with Microsoft Excel or almost any
other data manipulation tool.
Using SQL Server to Analyze PerfMon Logs
Analyzing large quantities of
performance logs with SQL Server can be a useful solution when data
analysis through other methods could be cumbersome and labor intensive.
The data load process from CSV files can simply use the ad
hoc Import/Export Wizard launched from SQL Server Management Studio, or
alternatively the process can be automated and scheduled.
SQL Server can’t read the native binary log
(BLG) file type, so you should either write PerfMon logs
directly to a CSV file or use the Relog utility to convert the file
post-capture (more detail to follow) from BLG to CSV. It is also
possible for PerfMon to log directly to a SQL Server database through a
DSN, although there is additional overhead with this process, which can
be avoided by logging to file.
Analyzing PerfMon logs from within a database has
the benefit of data access through the familiar language of T-SQL,
which means problems should be easier to identify, and you can write
queries looking for specific problem conditions. Here’s an example
where three counters could be used to identify a low-memory condition:
- Available memory less than 100MB
- Page life expectancy less than 60 seconds
- Buffer cache hit ratio less than 98%
If the PerfMon logs have already been
imported into SQL Server, the following query could be used to identify
any instance during the data capture window when the low memory
condition existed:
SELECT *
FROM subset
WHERE Mem_Avail_Bytes < 104857600 -- 100MB
AND Buff_Mgr_PLE < 60
AND Buff_Cache_Hit_Ratio < 98
This example should be modified to
reflect the table and column names specified during the data import,
but the concept could be adapted for any number of scenarios.
Additionally, this method could be used to manage performance data
across a number of servers, and Reporting Services could be used to
present the data.
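The same three-counter filter can also be applied outside the database, directly against a relog-converted CSV. The following is a minimal sketch; the column names mirror the hypothetical names used in the query above, whereas a real PerfMon CSV uses full counter paths (such as \\Server001\Memory\Available Bytes) as its headers, so adjust them to match your export:

```python
import csv

# Hypothetical column names matching the SQL example above; a real
# PerfMon CSV export uses full counter paths as its column headers.
LOW_MEMORY = {
    "Mem_Avail_Bytes": lambda v: v < 100 * 1024 * 1024,  # under 100MB
    "Buff_Mgr_PLE": lambda v: v < 60,                    # PLE under 60 seconds
    "Buff_Cache_Hit_Ratio": lambda v: v < 98,            # hit ratio under 98%
}

def low_memory_rows(path):
    """Return the timestamps of samples where all three conditions hold at once."""
    hits = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                if all(test(float(row[col])) for col, test in LOW_MEMORY.items()):
                    hits.append(row["Timestamp"])
            except (KeyError, ValueError):
                continue  # skip malformed or partial samples
    return hits
```

As with the T-SQL version, the thresholds and column names are the parts to adapt for other problem conditions.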
Combining PerfMon Logs and SQL Profiler Traces
A feature first available in SQL Server
2005 was the capability to combine PerfMon logs with SQL Profiler
traces. Using Profiler to combine logs in this way enables the viewing
of T-SQL code that’s running on the server, combined with the hardware
impact of running the code, such as high CPU or low memory.
The combined view presents a time axis that can
be navigated; selecting a moment when a CPU spike occurred causes the
Profiler trace to relocate automatically to the T-SQL that was executing
at the time of the spike.
Using Relog
Relog can be used to create new log
files with a new sampling rate or a different file format than existing
PerfMon logs. Relog was first included in Windows XP, and it can be
useful when handling large logs or logs that include many surplus counters.
Additionally, there are situations when a log contains data for many
hours but the time frame of interest is much shorter; Relog can assist
in extracting the interesting time window for easier analysis. Table 1 shows a summary of Relog parameters.
TABLE 1: Summary of Relog Parameters

OPTION | DESCRIPTION
-? | Displays context-sensitive help
-a | Appends output to the existing binary file
-c <path [path ...]> | Filters counters from the input log
-cf <filename> | File listing the performance counters to filter from the input log. The default is all counters in the original log file.
-f <CSV|TSV|BIN|SQL> | Specifies the output file format
-t <value> | Writes only every nth record into the output file. The default is to write every record.
-o | Specifies the output file path or SQL database
-b <dd/MM/yyyy HH:mm:ss[AM|PM]> | Begin time for the first record to write into the output file
-e <dd/MM/yyyy HH:mm:ss[AM|PM]> | End time for the last record to write into the output file
-config <filename> | Settings file containing command options
-q | Lists performance counters in the input file
-y | Answers yes to all questions without prompting
The following sections demonstrate three example scenarios in which Relog would be useful, including the syntax used.
Extracting Performance Data for a Specific Time Window
This technique can be useful when using
PerfMon to log over many hours or days. Were a problem to occur, for
example, at 10:30 a.m. on March 15, it would be useful to extract the
time frame from 10:00 to 11:00 to provide a manageable log size,
without losing any data points. The command looks as follows:
Relog Server001_LOG.blg -b 15/03/2012 10:00:00 -e 15/03/2012 11:00:00 -o
Server001_LogExtract.blg
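When only a CSV export of the log is available, the same window can be cut with a short script instead of Relog. The following sketch assumes PerfMon's usual CSV layout, with the sample timestamp in the first column; TS_FORMAT may need adjusting for your locale's date format:

```python
import csv
from datetime import datetime

# Assumed timestamp layout of a PerfMon CSV export's first column;
# adjust if your locale writes dates differently.
TS_FORMAT = "%m/%d/%Y %H:%M:%S.%f"

def extract_window(src, dst, begin, end):
    """Copy only the samples whose timestamp falls inside [begin, end]."""
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader, writer = csv.reader(fin), csv.writer(fout)
        writer.writerow(next(reader))  # keep the header row
        for row in reader:
            try:
                ts = datetime.strptime(row[0], TS_FORMAT)
            except ValueError:
                continue  # skip rows with unparsable timestamps
            if begin <= ts <= end:
                writer.writerow(row)
```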
Extracting Specific Performance Counters
Sometimes monitoring tools or other
engineers gather logs containing extraneous counters. In these
situations, you can extract specific counters for analysis using Relog.
The Relog parameter -c enables
counters to be specified. In the following example only the
memory-related counters would be extracted to a newly created log file:
Relog Server001_Log.blg -c "\Memory\*" -o Server001Memory_Log.blg
Furthermore, it is possible to perform
more complex filtering by passing Relog a text file containing a subset
of counters from the original performance log. The following command
can be used to extract the counters specified in the filter file from the
original log:
Relog Server001_Log.blg -cf CounterList.txt -o Server001Overview_Log.blg
The preceding example requires CounterList.txt to contain a single counter per line with the counters to be extracted.
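The same filter-file idea can be applied to a CSV export when Relog isn't available: keep the timestamp column plus only the columns whose headers are listed, one per line, in the filter file. A minimal sketch:

```python
import csv

def filter_counters(src, dst, counter_file):
    """Keep the timestamp column plus only the counters named in counter_file."""
    with open(counter_file) as f:
        wanted = {line.strip() for line in f if line.strip()}
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader, writer = csv.reader(fin), csv.writer(fout)
        header = next(reader)
        # Column 0 is the sample timestamp; keep it unconditionally.
        keep = [0] + [i for i in range(1, len(header)) if header[i] in wanted]
        writer.writerow([header[i] for i in keep])
        for row in reader:
            writer.writerow([row[i] for i in keep])
```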
Converting Log Files to New Formats
PerfMon creates log files in a binary
log format (BLG) by default. In some situations it can be desirable to
convert a performance log to a new format to enable applications other
than PerfMon to read the log. For example, this can be useful when
importing the data to SQL Server or analyzing performance in Excel. The
following example shows how to convert the BLG file to a CSV file:
Relog Server001_Log.blg -f CSV -o Server001_Log.csv
Using LogMan
LogMan can be used to schedule the
starting and stopping of logs. This can be a useful alternative to
using the Windows AT scheduler or the scheduler functions available
within PerfMon. The great benefit of using LogMan is that you can
centrally control the start and stop of Performance monitoring. Using
LogMan, it’s possible to define a data collector and copy that
collector to multiple servers from a single, central location. Table 2 summarizes the LogMan command-line actions, and the basic syntax appears after the table.
TABLE 2: Summary of LogMan Usage

VERB | DESCRIPTION
Create | Creates a new data collector
Query | Queries data collector properties. If no name is given, all data collectors are listed.
Start | Starts an existing data collector and sets the begin time to manual
Stop | Stops an existing data collector and sets the end time to manual
Delete | Deletes an existing data collector
Update | Updates properties of an existing data collector
Import | Imports a Data Collector Set from an XML file
Export | Exports a Data Collector Set to an XML file
logman [create|query|start|stop|delete|update|import|export] [options]
The following example creates a collector named
DBOverviewLog, which contains all Processor, Memory, and LogicalDisk
counters with a sample interval of 30 seconds and a max log file size
of 254MB:
Logman create counter "DBOverviewLog" -si 30 -v nnnn -max 254 -o
"D:\logs\DBOverview" -c "\Processor(*)\*" "\Memory(*)\*" "\LogicalDisk(*)\*"
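To roll the collector out across an environment, the same command can be generated per server using the -s remote-system option. A sketch that only builds the command lines (it does not execute them); the server names, collector name, and output path are placeholders for your own environment:

```python
def logman_commands(servers, name="DBOverviewLog", interval=30, max_mb=254):
    """Build logman create/start command lines targeting each remote server via -s."""
    # Same counter set as the single-server example above.
    counters = r'"\Processor(*)\*" "\Memory(*)\*" "\LogicalDisk(*)\*"'
    cmds = []
    for server in servers:
        cmds.append(f'logman create counter "{name}" -s {server} -si {interval} '
                    f'-max {max_mb} -o "D:\\logs\\DBOverview" -c {counters}')
        cmds.append(f'logman start "{name}" -s {server}')
    return cmds
```

The resulting strings could then be run from a batch file or remoting session against each server in turn.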
Table 3 describes the four options available with LogMan, including the useful -s parameter, which enables the collector to be created, started, and stopped on remote computers.
TABLE 3: LogMan Options

OPTION | DESCRIPTION
-? | Displays context-sensitive help
-s <computer> | Performs the command on the specified remote system
-config <value> | Settings file containing command options
-ets | Sends commands to Event Trace Sessions directly without saving or scheduling
Using LogMan it’s possible to script collection
for a baseline data set from an entire application environment. This
could be incredibly useful when doing performance testing, baselining
application performance, or troubleshooting live problems.
Using LogParser
LogParser is a simple-to-use yet
powerful tool for log file analysis, popularized for analyzing logs
from IIS web servers. LogParser can be used to examine a range of log
types and can provide output in various forms. Once installed,
LogParser enables pseudo-SQL querying of log files! This can be great
when searching Windows Event Logs, IIS logs, or PerfMon logs.