Tempdb Initial Sizing and Autogrowth
A default installation of any SQL Server edition will create a tempdb database with an 8MB data file and a 1MB transaction log file. For many SQL Server installations these file sizes won't be enough, but they are configured to autogrow by 10% as needed. You can see the properties window for tempdb on a default installation of SQL Server 2012 Developer Edition in Figure 1.
Although the autogrow feature enables a more hands-off approach to maintaining many SQL Server installations, it's not necessarily desirable: the files cannot be used while they are autogrowing, and repeated growth can fragment the files on the hard disk, which hurts performance.
Sizing the files appropriately yourself, rather than relying on autogrow, is a recommendation that applies to any SQL Server database, but for tempdb it's even more relevant. When you restart your SQL Server instance, tempdb is re-created (files will be reused if they already exist) and sized to the value specified in the database properties, which, as you've just seen, is only 8MB for the data file and 1MB for the log file by default.
We’ve reviewed many SQL Server installations with
tempdb files of tens of GBs that have autogrown to that size and have
the default properties set. The next time SQL Server is restarted,
tempdb will be just 8MB and will have to start autogrowing all over
again.
Figure 2 illustrates an example scenario of tempdb sizing.
In this case, you can see the size of the initial
files, which the DBA has set to 200MB and 50MB. The workload running
against SQL Server has then caused the tempdb files to autogrow to
2450MB and 560MB.
SQL Server is then restarted and tempdb returns
to 200MB and 50MB, as set by the DBA, and would have to autogrow again
to fulfill the workload.
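Changing these initial sizes is a simple ALTER DATABASE operation. The following is a minimal sketch using the sizes from this example and assuming the default logical file names, tempdev and templog (check sys.master_files if yours differ); after a restart, tempdb then starts at these sizes rather than at 8MB and 1MB:

    -- Set persisted initial sizes for tempdb's data and log files
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 200MB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 50MB);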
To What Size Should Tempdb Be Set?
This is obviously a difficult question
to answer without more details about the workload, but there is still
some guidance that you can use. First of all, unless you’re running SQL
Server Express, set tempdb to be bigger than the default; that’s an
easy one.
Next, if you can give tempdb its own disk, then
configure it to almost fill the drive. If nothing else will ever be on
the drive, then you’re better off setting it to be larger than you’ll
ever need. There’s no performance penalty, and you’ll never have to
worry about autogrow again.
If you can’t put tempdb on its own disk, then
you’ll need to manage size and autogrow a bit more closely. You could
just let it autogrow for a while and then manually set it to be a bit
larger than what it grows to, or you could just make it a reasonable
size in relation to your other databases and set large autogrow amounts.
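If you take the "let it autogrow for a while" approach, you can check what tempdb has actually grown to by querying its file metadata. A minimal sketch (size is stored in 8KB pages, hence the conversion to MB):

    -- Current size and growth settings of each tempdb file
    SELECT name,
           size * 8 / 1024 AS size_mb,
           growth,
           is_percent_growth
    FROM   tempdb.sys.database_files;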
To What Size Should Autogrow Be Set?
If you’ve moved tempdb to its own drive
and configured it to almost fill the disk, then arguably you don’t need
to enable autogrow. That would be a reasonable choice in this scenario,
but it may be worth leaving it on if you still have a small amount of
disk space left over.
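If you do choose to turn autogrow off, setting FILEGROWTH to 0 is how it's done. A one-line sketch, assuming the default logical file name:

    -- Disable autogrow for the tempdb data file
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 0);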
The best way to think of autogrow for any database, not just tempdb, is as a last resort. Your databases should be sized appropriately so they don't need to autogrow, but you should still configure it just in case you need it.
Using fixed-growth amounts is generally a better approach for autogrow because it makes autogrow events more predictable. Autogrowing a 10GB transaction log by 10%, for example, means writing a 1GB growth increment, which takes a long time and affects the availability of the database.
The Instant File Initialization (IFI) feature in Windows Server 2003 and later can make autogrowing the data files a bit easier, but it doesn't work for log files, which must be zero-initialized because of the way the transaction log is written and reused.
IFI is used automatically by SQL Server if the service account is a local administrator (which it shouldn't be, as a security best practice) or if the account has the Manage Volume Maintenance Tasks advanced user right. To grant the service account the necessary right, you can use the Local Group Policy Editor, shown in Figure 3, by running gpedit.msc.
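To confirm that IFI is actually working, one common approach is to trace file-zeroing activity to the error log using trace flags 3004 and 3605. A sketch of the technique; the test database name is purely illustrative:

    -- Write file-zeroing messages to the SQL Server error log
    DBCC TRACEON(3004, 3605, -1);

    CREATE DATABASE IFITest;   -- hypothetical throwaway database

    -- With IFI working, zeroing messages appear only for the log file;
    -- without it, the data file is zeroed as well
    EXEC sys.xp_readerrorlog;

    DROP DATABASE IFITest;
    DBCC TRACEOFF(3004, 3605, -1);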
Once IFI is working, you can set autogrow to large, fixed amounts for data files. 50MB or 500MB are good values depending on the size of the database; with IFI, any size is allocated virtually instantly, so you avoid any downtime.
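Setting a fixed growth increment is another MODIFY FILE change; a minimal sketch using 500MB, again assuming the default logical file name:

    -- Fixed 500MB autogrow increment for the tempdb data file
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 500MB);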
NOTE
If you’ve
configured multiple data files and you want to allow autogrow, consider
enabling trace flag 1117, which will force all data files to grow
uniformly so you don’t break the load balancing between files.
For transaction log files, however, you need to
be a lot more conservative and use a figure that balances the time it
takes to autogrow and the usefulness of the extra space. Autogrowing by
1MB, for example, is quick, but you might need to do it so often that
it becomes a bottleneck. Autogrowing by at least 10MB for the
transaction log is a good place to start, but you may need it to be
higher to provide enough space to avoid autogrowing again quickly. The
best option is to avoid autogrowing in the first place by correctly
sizing the files.
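As with the data files, the log's growth increment is set with MODIFY FILE. A minimal sketch using 50MB as an example value, assuming the default logical file name:

    -- Fixed 50MB autogrow increment for the tempdb log file
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILEGROWTH = 50MB);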
Configuring Multiple Tempdb Data Files
The use of multiple data files as an aid to reducing allocation contention problems for tempdb was discussed earlier. Another reason you might want to use multiple data files is to increase the I/O throughput to tempdb, especially if it's running on very fast storage.
When you create multiple data files, they will all be in the primary filegroup, and SQL Server uses a proportional fill algorithm to determine which file to use for each request to create an object. If all the files are exactly the same size, then SQL Server uses the files in a "round robin" fashion, spreading the load equally across the files. This is, of course, exactly what you want.
Microsoft recommends up to
a 1:1 mapping between the number of files and logical CPUs because
during testing of massive workloads they’ve seen performance benefits,
even with hundreds of data files.
A more pragmatic approach, however, is to have a 1:1 mapping between files and logical CPUs up to eight, and then add files only if you continue to see allocation contention or if you're looking to push the I/O subsystem harder. The performance benefit from adding files diminishes each time, and in our experience, eight is the sweet spot, especially if you're implementing this as a proactive measure.
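Putting this into practice means checking the logical CPU count and then adding files that match the existing one exactly, because equal sizes are what keep the proportional fill algorithm in round-robin mode. A sketch; the path and logical file name are examples only:

    -- How many logical CPUs does this instance see?
    SELECT cpu_count FROM sys.dm_os_sys_info;

    -- Add a second data file, sized and grown identically to the first
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\TempDB\tempdb2.ndf',
              SIZE = 200MB,
              FILEGROWTH = 500MB);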
Whether you configure multiple data files as a best practice on all your SQL Servers or just on those for which you've detected issues is a choice only you can make. However, you might want to configure them on all the servers you work with as a proactive measure, as it's hard to see a downside.