2. Deploying Mailbox servers: The essentials
The
underlying functionality of a Mailbox server is similar to that of a
database server. Every mailbox-enabled recipient defined in the
organization has a mailbox that is used to store messaging data. Groups
of related mailboxes are organized using databases, and each database
can have one or more database copies associated with it.
With Exchange Server 2007, you needed dedicated hardware
for clustered Mailbox servers, those servers could not run other roles,
and failover occurred at the server level. Microsoft re-engineered
Exchange 2010 and Exchange 2013 to provide continuous availability
while eliminating these restrictions. For Exchange 2013 specifically,
this means:
- You do not need dedicated clustering hardware for highly available Mailbox servers. Key components of Windows clustering are managed automatically by Exchange Server.
- You do not need to use Local Continuous Replication (LCR), Cluster Continuous Replication (CCR), or Standby Continuous Replication (SCR). LCR has been discontinued. Key features of CCR and SCR have been combined, enhanced, and made available through database availability groups.
- You can combine Exchange roles on highly available Mailbox servers, provided you don’t plan to use Windows Network Load Balancing. This means you could create a fully redundant Exchange organization using only two Exchange servers, each with the Mailbox and Client Access roles. You would also need a witness server for the database availability group, which doesn’t have to be an Exchange server (see the sketch after this list).
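As a rough sketch of that two-server scenario, the following Exchange Management Shell commands create a database availability group whose witness is an ordinary file server rather than an Exchange server. The names used here (DAG1, the multi-role servers MBX1 and MBX2, the file server FS1, and the witness directory) are placeholders, not values taken from this text.

  # Create the group; the witness server does not have to run Exchange.
  New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -WitnessDirectory C:\DAG1Witness
  # Add the two multi-role (Mailbox plus Client Access) servers as members.
  Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX1
  Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX2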
The underlying technology built into database availability groups is
the key ingredient that makes high availability possible. The related
framework ensures failover clustering occurs in the background and
doesn’t normally require administrator intervention. As a result,
Exchange Server 2013 doesn’t need or use a cluster resource
dynamic-link library (DLL) and uses only a small portion of the Windows
clustering components, including heartbeat capabilities and the cluster
database.
Database availability groups use continuous replication to achieve
high availability. With continuous replication, Exchange Server 2013
uses its built-in asynchronous replication technology to create copies
of mailbox databases and then keeps the copies up to date using
transaction log shipping and replay. Lagged copies can automatically play down log files to recover from certain types of issues. For example, if Exchange detects that a low disk space threshold has been reached, it automatically replays the logs into the lagged copy to play down the log files. If Exchange detects that page patching is required, it automatically replays the logs into the lagged copy to perform the page patching. And if Exchange detects that fewer than three healthy copies (whether active or passive) have been available for more than 24 hours, it automatically replays the logs into the lagged copy to play down the log files.
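To make the lagged-copy behavior concrete, here is a minimal Exchange Management Shell sketch that adds a copy with a one-day replay lag; the database name DB01, the server MBX3, and the lag value are placeholder choices. The automatic play-down for the low-copy-count case is governed by the group’s ReplayLagManagerEnabled setting.

  # Add a copy of DB01 on MBX3 that replays logs a day behind the active copy.
  Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer MBX3 -ReplayLagTime 1.00:00:00
  # Allow Exchange to play the lag down automatically when copy redundancy drops.
  Set-DatabaseAvailabilityGroup -Identity DAG1 -ReplayLagManagerEnabled $true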
Any server in a group can host a copy of a mailbox database from any
other server in the group. When a server is added to a group, it works
with other servers in the group to provide automatic recovery from
failures that affect mailbox databases, including server failure,
database corruption, disk failure, and network connectivity failure.
Although Exchange 2010 used a scheduled script to alert you that only a
single copy of a database was available, this functionality is now
integrated into Exchange along with other managed availability features
for internal monitoring and recovery.
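You can review what continuous replication and managed availability are tracking by querying copy status directly; a quick sketch follows, in which DB01 and MBX1 are placeholder names.

  # Health, queue lengths, and index state for every copy of DB01.
  Get-MailboxDatabaseCopyStatus -Identity "DB01\*" | Format-Table Name,Status,CopyQueueLength,ReplayQueueLength,ContentIndexState
  # Or review every database copy hosted on a particular member server.
  Get-MailboxDatabaseCopyStatus -Server MBX1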
When you create a
database availability group, Exchange adds an object to Active
Directory representing the group. This object stores information about
the group, including details about servers that are members of the
group. When you add the first server to the group, a failover cluster
is created automatically and the heartbeat is initiated. As you add
member servers to the group, the heartbeat components and the cluster
database are used to track and manage information about the group and
its member servers, including server status, database mount status,
replication status, and mount location.
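For reference, here is a small sketch of viewing what the group tracks about its members; DAG1 is again a placeholder name.

  # -Status gathers live information from the underlying cluster in addition to Active Directory.
  Get-DatabaseAvailabilityGroup -Identity DAG1 -Status | Format-List Name,Servers,WitnessServer,OperationalServers,PrimaryActiveManager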
Because Exchange Server 2013 databases are represented at the
organization level, they are effectively disconnected from the servers
on which they are stored, which makes it easier to move databases from
one server to another. However, it also means you can work with databases in many different ways and that several requirements apply. Keep the following in mind when working with databases in Exchange Server 2013:
- Database names must be unique throughout your Exchange organization. This means you cannot name two databases identically, even if they are on two different Mailbox servers.
- Every mailbox database has its own globally unique identifier (GUID). Copies of a database share the same GUID.
- Mailbox servers that are part of the same database availability group do not require cluster-managed shared storage. However, the full paths for all database copies must be identical on the host Mailbox servers (see the sketch after this list).
- Exchange 2013 no longer has public folder databases. Instead, special mailboxes are now used to store the public folder hierarchy and content. Like traditional mailboxes, special mailboxes for public folders are stored in mailbox databases and are replicated as part of any database availability group you configure.
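As a sketch of these rules, the commands below create a database with explicit file paths, add a copy (which will use the same paths on the second server, per the rule above), and create a public folder mailbox in it. Every name and path shown is a placeholder.

  # Database and log paths chosen on MBX1; the copy on MBX2 must use identical paths.
  New-MailboxDatabase -Name DB02 -Server MBX1 -EdbFilePath D:\DB02\DB02.edb -LogFolderPath D:\DB02\Logs
  Mount-Database -Identity DB02
  Add-MailboxDatabaseCopy -Identity DB02 -MailboxServer MBX2
  # Public folder content now lives in a special mailbox stored in a mailbox database.
  New-Mailbox -PublicFolder -Name "PF-Hierarchy" -Database DB02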
For a successful deployment of a Mailbox server, the storage
subsystem must meet the storage capacity requirements and must be able
to perform the expected number of input/output (I/O) operations per
second. Storage capacity requirements are determined by the number of
mailboxes hosted on a server and the total storage size allowed per
mailbox. For example, if a server hosts 2,500 mailboxes that you allow
to store up to 2 gigabytes (GB) each, you need to ensure there are at
least 5 terabytes of storage capacity above and beyond the storage
needs of the operating system and Exchange itself.
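The capacity arithmetic can be checked quickly in the shell, and mailbox limits of roughly this size could be enforced at the database level; the database identity and the exact quota values below are illustrative assumptions, not requirements from this text.

  # 2,500 mailboxes at 2 GB each, ignoring operating system and Exchange overhead.
  $mailboxes = 2500
  $quotaGB = 2
  "{0} TB of database capacity needed" -f ($mailboxes * $quotaGB / 1000)
  # One way to cap mailboxes in DB02 at about 2 GB.
  Set-MailboxDatabase -Identity DB02 -IssueWarningQuota 1945MB -ProhibitSendQuota 2GB -ProhibitSendReceiveQuota 2560MB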
I/O performance of the storage subsystem is measured in relation to
the latency (delay) for each read/write operation to be performed. The
more mailboxes you store on a specific drive or drive array, the more read/write operations are performed and the greater the potential delay. To improve performance, you can use multiple mailbox databases on separate disks. You might also want to store each database together with its transaction log files on a separate disk drive, such that database A and its logs are on disk 1, database B and its logs are on disk 2, and so on. In some scenarios, you might instead want a database and its logs on separate disks.
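For example, paths can be rearranged with Move-DatabasePath; the drive letters and database names below are placeholders, the database is dismounted while its files are moved, and a database that already has copies can’t simply be moved this way.

  # Keep database A together with its logs on one disk...
  Move-DatabasePath -Identity DB-A -EdbFilePath D:\DB-A\DB-A.edb -LogFolderPath D:\DB-A\Logs
  # ...or split a database (E:) from its logs (F:) across separate disks.
  Move-DatabasePath -Identity DB-B -EdbFilePath E:\DB-B\DB-B.edb -LogFolderPath F:\DB-B-Logs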
I/O performance in
Exchange Server 2013 running on 64-bit architecture is improved
substantially over 32-bit architecture. On Mailbox servers, a 64-bit
architecture enables a database cache size of up to approximately 90
percent of total random access memory (RAM). A larger cache increases
the probability that data requested by a client will be serviced out of
memory instead of by the storage subsystem.
Unlike Exchange 2010, which required separate volumes for each database copy, whether passive or active, Exchange 2013 allows a server to host multiple databases on the same volume. This allows you to have a mix of active and passive copies on the same volume. As part of your
planning, look closely at the input/output per second (IOPS)
capabilities of your storage architecture and place database copies
appropriately. Because active copies will use more IOPS than passive
copies, you’ll typically want no more than one active database copy on
a volume while allowing multiple passive copies. For example, if you’re
configuring a four-server database availability group, you might want
to configure storage so that each server has a large volume with its
active database copy and passive copies of the databases on the other
servers.
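One way to express that layout, assuming a hypothetical four-member group MBX1 through MBX4 in which DB1 is active on MBX1:

  # Passive copies of DB1 land on the other three servers' volumes.
  Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX2 -ActivationPreference 2
  Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX3 -ActivationPreference 3
  Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer MBX4 -ActivationPreference 4
  # Repeating the pattern for DB2, DB3, and DB4 (active on MBX2, MBX3, and MBX4) leaves
  # each volume with one active copy plus passive copies of the other servers' databases.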
Like Exchange 2010, Exchange 2013 is optimized so that servers can
use large disks with 2 to 8 terabytes of storage efficiently. However,
as part of your planning, you need to understand how Exchange 2013 uses
automatic reseed to recover from disk failure, database corruption
events, and other issues that require a reseed of a database copy. With
automatic reseed, Exchange can automatically restore database
redundancy using spare disks that have been pre-provisioned.
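Automatic reseed relies on a handful of group settings that describe where database volumes and spare disks are mounted; the sketch below mirrors the default folder names but is only an illustration for a placeholder group DAG1.

  Set-DatabaseAvailabilityGroup -Identity DAG1 -AutoDagAutoReseedEnabled $true `
      -AutoDagVolumesRootFolderPath C:\ExchangeVolumes `
      -AutoDagDatabasesRootFolderPath C:\ExchangeDatabases `
      -AutoDagDatabaseCopiesPerVolume 4
  # Spare disks are mounted under the volumes root so Exchange can bring one into
  # service and reseed a failed copy without administrator intervention.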
The larger the database, the longer it takes Exchange to reseed it.
If a database is too large, it can’t be reseeded in a reasonable amount
of time. With a typical reseed rate of 20 MB per second, it would take
Exchange:
- About 28 hours to reseed a 2-terabyte database.
- About 42 hours to reseed a 3-terabyte database.
- About 56 hours to reseed a 4-terabyte database.
Because of this, the total reseed time may be the most important limiting factor for sizing databases.
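The arithmetic above is easy to reproduce, assuming the 20 MB per second rate and treating 1 terabyte as 1,000,000 MB:

  $rateMBperSec = 20
  foreach ($tb in 2, 3, 4) {
      $hours = ($tb * 1000000) / $rateMBperSec / 3600
      "{0}-terabyte database: about {1:N0} hours to reseed" -f $tb, $hours
  }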