1. IMPROVE EFFICIENCY WITH SQL SERVER MANAGEABILITY FEATURES
In many organizations, the number and variety of
SQL Server instances, combined with a lack of proper management, mean
the operational DBA team spends much of its time on reactive tasks. In these
situations, it can be difficult to invest the time required to address
root causes, standardize the environment, and reduce the flow of
break/fix support incidents.
The topic of manageability is broad and means
different things to different groups of people. Manageability can mean
developing build and deployment standards, rationalizing
high-availability technologies, and implementing standardized database
maintenance procedures. The benefits of effective manageability are
fewer problems, quicker resolution when problems do occur, and faster
response to new business requests.
2. MANAGEABILITY ENHANCEMENTS IN SQL SERVER 2012
This section provides a brief overview
of manageability enhancements in SQL Server 2012. The first important
change to mention is a set of usability enhancements to database restore,
including the addition of a visual timeline for point-in-time restore.
You can point the Restore Database wizard at a folder containing full,
differential, and log backups and, using a sliding timescale bar, select
the point to which you want to restore. The wizard constructs the correct
restore commands, based on the sequence and precedence required to
complete the point-in-time restore. Furthermore, the Page Restore dialog
provides the capability to easily restore corrupt pages from a database
backup and to roll forward the transaction logs.
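Behind the scenes, the Page Restore dialog generates standard RESTORE statements. As a rough sketch (the database name, backup paths, and the page ID 1:57 are purely illustrative), the sequence looks something like this:

-- Restore the damaged page from the most recent full backup.
RESTORE DATABASE AdventureWorks
    PAGE = '1:57'
    FROM DISK = N'D:\Backups\AdventureWorks_Full.bak'
    WITH NORECOVERY;

-- Apply existing log backups to bring the restored page forward.
RESTORE LOG AdventureWorks
    FROM DISK = N'D:\Backups\AdventureWorks_Log1.trn'
    WITH NORECOVERY;

-- Back up and restore the tail of the log to complete the roll forward.
BACKUP LOG AdventureWorks
    TO DISK = N'D:\Backups\AdventureWorks_TailLog.trn';

RESTORE LOG AdventureWorks
    FROM DISK = N'D:\Backups\AdventureWorks_TailLog.trn'
    WITH RECOVERY;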
Another important manageability enhancement can be found in the Database Engine Tuning Advisor (DTA).
In previous releases, the DTA required a query or Profiler trace in
order to provide recommendations to improve performance. In SQL Server
2012, the DTA can also use the plan cache as a source for tuning, which
saves effort and may improve the usefulness of the recommendations.
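To get a sense of what the plan cache holds before pointing the DTA at it, you can query the standard execution-related DMVs. The following query is illustrative only (it is not part of the DTA itself) and lists the most CPU-intensive cached statements:

-- Top cached statements by average CPU time, taken from the plan cache.
SELECT TOP (25)
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_time DESC;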
A common manageability problem with database
migrations and database mirroring has been resolved through a new
concept introduced in this version: contained databases.
Contained databases address the issue whereby SQL Server logins can
become orphaned when a database is migrated between servers or SQL
Server instances, by enabling users to connect to the database without
authenticating against a login at the database engine level. This
provides a layer of abstraction from the SQL Server instance, and
therefore mobility. Similarly, the concept of partially contained databases
separates application functionality from instance-level functionality.
This provides mobility, but some features are unavailable; for example,
replication, change tracking, and change data capture cannot be used,
because they require interaction with instance- or management-level
objects, which sit outside the database and cannot currently be
contained within it.
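As a minimal sketch of how this looks in T-SQL (the database and user names here are illustrative), a partially contained database and a contained user can be created as follows:

-- Enable contained database authentication at the instance level.
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO

-- Create a partially contained database; for an existing database,
-- use ALTER DATABASE ... SET CONTAINMENT = PARTIAL instead.
CREATE DATABASE SalesApp CONTAINMENT = PARTIAL;
GO

-- Create a contained user that authenticates at the database,
-- with no corresponding instance-level login.
USE SalesApp;
CREATE USER SalesAppUser WITH PASSWORD = N'Str0ng!Passw0rd';
GO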
The data-tier application (DAC or DACPAC),
first introduced with SQL Server 2008 R2, did not enjoy widespread adoption.
One reason uptake was limited was that the deployment method for schema
upgrades was cumbersome and not practical for anything beyond a small
database. The DACPAC schema upgrade used a side-by-side approach: a new
database was created (with a unique name) alongside the existing
database, the data was migrated, the original database was dropped, and
the new, temporary database was renamed to the proper database name.
This process has been improved, and the new DAC upgrade uses an in-place
approach that greatly simplifies the old method.
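Assuming the SQL Server 2012 data-tier tooling (DacFx) is installed, one way to apply such an in-place schema upgrade from a .dacpac is the SqlPackage.exe utility; the file, server, and database names below are illustrative:

SqlPackage.exe /Action:Publish /SourceFile:"C:\Builds\SalesApp.dacpac" /TargetServerName:"SQLPROD01" /TargetDatabaseName:"SalesApp"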
Finally, there are a number of enhancements in
SQL Server Management Studio that improve functionality and usability,
including improvements to IntelliSense and new breakpoint functionality.
3. POLICY-BASED MANAGEMENT
The Policy-Based Management (PBM)
feature, introduced in SQL Server 2008, enables DBAs to enforce
standards and automate health-check-type activities across an entire
SQL Server environment. The PBM feature provides a framework for DBAs
to enforce organizational standards for naming conventions, security,
and configuration settings, and to provide regular reports and alerts
on these conditions.
The PBM feature requires an initial investment to
understand the mechanics and implementation, but the benefits can be
realized quickly through rapid deployment and automation across an
entire organization, so the return on investment (ROI) can come quickly.
In addition, many DBAs carry out morning checks, and automating a
lightweight 15-minute morning check could save more than 65 hours per
year. Clearly, the benefits of automation, including scalability and
consistency, present a strong business case for investing effort in a
solution such as PBM.
3.1 Overview
Policy-Based Management provides a
mechanism for DBAs to manage configuration and deployment standards and
compliance within the SQL Server environment. Managing compliance
reduces variation within an organization, which in turn reduces the
complexity and effort required to support and maintain the environment;
the benefits include shorter resolution times for issues and less effort
expended on them.
The types of policy that can be implemented by
PBM include database-level checks, such as ensuring that Auto Close
and Auto Shrink are disabled, enforcing object-naming conventions, and
ensuring that instance-level configuration options, such as Max Degree
of Parallelism and Max Server Memory, are correctly configured.
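Before automating these checks with PBM, it can help to see how you would perform them manually. The following T-SQL is a simple, illustrative equivalent of two such checks, using the standard catalog views:

-- Database-level check: databases with Auto Close or Auto Shrink enabled.
SELECT name, is_auto_close_on, is_auto_shrink_on
FROM sys.databases
WHERE is_auto_close_on = 1 OR is_auto_shrink_on = 1;

-- Instance-level check: current Max Degree of Parallelism and Max Server Memory.
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'max degree of parallelism', N'max server memory (MB)');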
Three key aspects of PBM are required to get started:
- Facet — Object properties on which checks are based (e.g., database, login, or server). Facets are fixed and cannot be added or changed.
- Condition — Evaluates to true or false, and contains the logic to validate a setting or option (e.g., to confirm that AutoClose is false).
- Policy — Applies a condition to a target and determines the evaluation mode, such as evaluate or prevent.
In addition, using conditions can be a powerful
way to refine the targets for policies. This can be useful in
situations where different policies or best practices apply to
different servers within an environment. A good example is the database
data and log file autogrow settings. It’s a common best practice to
specify the growth increment based on a fixed size, rather than a
percentage, to avoid disk fragmentation and minimize the impact of the
synchronous file-growth operation. However, it can be difficult to build a
one-size-fits-all policy for the optimal growth increment, as many
organizations host databases with files ranging from a couple of
megabytes to several terabytes.
To account for these variations, you can use
conditions to create policies that ensure best practice compliance for
data and log file growth, as shown in Table 1.
TABLE 1: PBM Conditions
DATA AND LOG FILE SIZE    GROWTH INCREMENT
<100MB                    100MB
>100MB and <10GB          500MB
>10GB                     1GB
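To see which database files would currently fall outside these guidelines, a simple query against sys.master_files lists each file's size and growth setting. This query is illustrative only and is independent of PBM itself:

-- size is stored in 8KB pages; growth is in 8KB pages unless is_percent_growth = 1.
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       size * 8 / 1024 AS file_size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files
ORDER BY database_name, logical_file_name;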
When defining each policy, it’s possible to choose an evaluation mode that determines the effect of the policy; Table 2 summarizes the options.
TABLE 2: Policy Evaluation Modes
EVALUATION MODE       DESCRIPTION
On Demand             Policies are evaluated manually by a DBA, as required.
On Schedule           A predefined schedule controls when policies are evaluated.
On Change Prevent     The policy actively prevents an action that would cause a condition to evaluate as false (only where rollback is possible).
On Change Log Only    Allows a change that causes a condition to evaluate as false, but logs the change.
3.2 Getting Started with PBM
This section describes the steps
required to get a PBM deployment up and running. Three phases are
required: defining a condition, creating a policy, and evaluating this
policy against a local machine. The following steps establish a
condition and policy:
1. Launch SQL Server Management Studio and select Management → Policy Management.
2. Right-click on Conditions and choose New Condition.
3. Type the condition name Auto Close Disabled.
4. Using the Facet drop-down list, select Database Performance.
5. In the
Expression pane, select the field name @AutoClose, verify that the
operator shows the equals sign (=), and choose the value False (as
shown in Figure 1). Click OK.
6. Right-click on Policies and choose New Policy.
7. Specify the policy name Database – Auto Close.
8. Using the Check condition drop-down list, select the Auto Close Disabled condition.
9. Verify that the Against Targets option shows a check alongside Every Database.
10. Ensure that the Evaluation Mode shows On demand and that Server restriction is set to None.
11. Click OK.
Now expand the policies folder, right-click on
the policy named Database – Auto Close, and choose Evaluate. The report
will display a list containing one row for each database on the
instance, and hopefully each will display a green check indicating
compliance. Enable the Auto Close option for one database and then
reevaluate the policy to confirm it is functioning correctly.
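If you prefer to make this test change in T-SQL rather than through the SSMS UI, the equivalent statement is shown below (the database name is illustrative):

-- Deliberately violate the policy on a test database.
ALTER DATABASE TestDB SET AUTO_CLOSE ON;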
Now you should see a single database listed with
a red cross, indicating noncompliance. Alongside the noncompliant
database is a checkbox; select it, and then click the Apply button in
the lower-right corner of the dialog. Clicking Apply does two things:
it disables Auto Close on the database, and it reevaluates the policy,
which now shows the database as compliant (green).
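The property change that Apply makes is equivalent to the following T-SQL (again with an illustrative database name):

-- What the Apply button does for the noncompliant database.
ALTER DATABASE TestDB SET AUTO_CLOSE OFF;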
This example demonstrates how effective
PBM can be in identifying and resolving configuration issues within an
environment. If this policy were scheduled, it could find and fix any
sub-optimal configuration in the environment. A clear benefit of this
level of automation is that if a new configuration issue is introduced,
whether through a change to an existing database or through a new
database added to the environment, compliance can be ensured on the
next policy evaluation.