In some circumstances it becomes necessary to migrate users quickly between servers in Database Availability Groups. In some instances, a mailbox move is not an option. When the existing storage can be retained, we can use the swing method to complete the migration.
Some instances where these instructions have been implemented include:
- Replacement of off-lease hardware (for example, DAG nodes) where additional storage does not exist to support mailbox moves or to replicate database copies.
- Upgrading nodes to a new operating system by migrating storage and databases from servers running a previous version of Windows to servers running a newer version of Windows with the same Exchange version.
The swing method involves identifying a database copy to be moved between nodes, migrating the storage between nodes, and then migrating additional copies of the database between nodes.
There are several considerations when deciding to implement the storage swing migration. Some of these considerations include:
- The complexity of the steps and the ability to test prior to implementing against production users.
- The loss of all lagged database copies.
- The need for end user downtime to complete the transition.
- Maintaining enough storage to hold all log files during the transition, because log file truncation (whether driven by circular logging or by backups) is blocked while the migration is in progress.
- Content indexes must be completely rebuilt once databases and storage are migrated.
In this article, the following architecture is used for testing and documentation: a source DAG whose members (MBX-1A, MBX-1B, and MBX-1C) host databases DB0 and DB1, and a target DAG whose members are MBX-2A, MBX-2B, and MBX-2C.
Step 0: Ensure all support teams are aware of the actions to be performed.
It is important that all support teams are prepared for the actions to be taken in these steps. Ensuring that storage can be migrated quickly is paramount to reducing downtime. Also, depending on your hardware, additional steps may be required to ensure that the storage can be imported. For example, in DAS environments where storage chassis are moved between nodes, RAID configurations must be imported into the new controllers (as opposed to SAN environments, where LUNs can simply be mapped to different servers).
Step 1: Ensure storage has appropriate labels.
When creating partitions, you can assign labels. Maintaining meaningful labels on the storage helps you identify the volumes you are working with. You cannot rely on disk numbers, because different disk numbers may be assigned as storage is migrated between servers; volume labels, however, are preserved when disks move between servers.
In this example, I have generic volume names on my server, such as Data0, Data1, Data2, and so on.
These volume names are generic and do not help identify the data contained within these volumes. Using the Disk Management tool, I can change the volume names to something more meaningful, such as an indication of the data being stored.
Naming volumes in this manner can reduce potential confusion when migrating storage between servers.
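If you prefer to script the relabeling rather than use the Disk Management console, volume labels can also be set from PowerShell. The following is a minimal sketch that assumes Windows Server 2012 or later (the Storage module); the drive letters and label values are examples only. On older operating systems, the label.exe utility accomplishes the same thing.
[PS] C:\>Set-Volume -DriveLetter F -NewFileSystemLabel "DB1-Logs"
[PS] C:\>Set-Volume -DriveLetter G -NewFileSystemLabel "DB1-EDB"
[PS] C:\>Get-Volume -DriveLetter F,G | fl DriveLetter,FileSystemLabel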
Step 2: Create new database objects in the new database availability group for the databases that will be migrated.
In this step, you create new database objects on the target DAG. I recommend that these new databases be created on the same DAG member; this member will serve as the node where the initial storage migration occurs and where you should be able to quickly restore service. In larger DAGs, where no single node will host all database copies, you can spread the new databases across the DAG members. Be sure to keep track of these database locations so that storage is migrated to the appropriate servers (a simple way to record this is shown after the validation output below).
In the example below, I create the new mailbox databases, but I don’t specify a path for log files or databases. Thus, these databases are created at the default location (%ProgramFiles%\Microsoft\Exchange Server\v14\Mailbox). The databases are not mounted at this point, and therefore no storage is being used. I am creating the databases and then allowing sufficient time for Active Directory replication and for the Microsoft Exchange Replication and Information Store services to detect the databases.
[PS] C:\>New-MailboxDatabase -Name NEW-DB0 -Server MBX-2A
Name Server Recovery ReplicationType
---- ------ -------- ---------------
NEW-DB0 MBX-2A False None
[PS] C:\>New-MailboxDatabase -Name NEW-DB1 -Server MBX-2A
Name Server Recovery ReplicationType
---- ------ -------- ---------------
NEW-DB1 MBX-2A False None
I can validate the configuration with the following command:
[PS] C:\>Get-MailboxDatabase -Server MBX-2A -Status | fl name,*mounted*,*path*
Name : NEW-DB0
MountedOnServer : MBX-2A.exchange.msft
Mounted : False
EdbFilePath : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB0\NEW-DB0.edb
LogFolderPath : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB0
TemporaryDataFolderPath :
Name : NEW-DB1
MountedOnServer : MBX-2A.exchange.msft
Mounted : False
EdbFilePath : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB1\NEW-DB1.edb
LogFolderPath : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB1
TemporaryDataFolderPath :
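If you spread the new databases across multiple DAG members, it can help to record the database-to-server mapping now so that the storage is later presented to the correct hosts. A quick sketch (the NEW-* naming filter and the CSV path are just examples for this lab):
[PS] C:\>Get-MailboxDatabase | Where-Object {$_.Name -like "NEW-*"} | Select-Object Name,Server,EdbFilePath,LogFolderPath | Export-Csv C:\Temp\NewDatabaseLayout.csv -NoTypeInformation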
Step 3: Remove all truncation and replay lag settings from database copies.
In this step, you disable truncation and replay lag settings from all database copies that have them configured. It is necessary to have all databases up to date before migrating them to new storage. By disabling replay and truncation lag settings, the administrator can decrease the amount of downtime required to move storage between nodes. Truncation and replay lag settings are dynamic, and when disabled, log file replay and log file truncation will start on lagged copies at the next Replication service update cycle. Sufficient time should be allowed between this step and the day of migration in order to allow any lagged copies to replay outstanding log files.
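To confirm which databases currently have lag configured before changing anything, you can inspect the lag properties directly; a quick check:
[PS] C:\>Get-MailboxDatabase | fl Name,ReplayLagTimes,TruncationLagTimes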
In the following example, the database copies assigned to MBX-1C are the lagged copies:
[PS] C:\>Get-MailboxDatabaseCopyStatus *
Name Status CopyQueue ReplayQueue LastInspectedLogTime ContentIndex
Length Length State
---- ------ --------- ----------- -------------------- ------------
DB0\MBX1-A Mounted 0 0 Healthy
DB1\MBX1-A Healthy 0 0 1/28/2014 1:59:43 PM Healthy
DB1\MBX-1B Mounted 0 0 Healthy
DB0\MBX-1B Healthy 0 0 1/28/2014 1:36:57 PM Healthy
DB0\MBX-1C Healthy 0 267 1/28/2014 1:36:57 PM Healthy
DB1\MBX-1C Healthy 0 253 1/28/2014 1:59:43 PM Healthy
NEW-DB0\MBX-2A Dismounted 0 0 Unknown
NEW-DB1\MBX-2A Dismounted 0 0 Unknown
To disable the lag settings, use the Set-MailboxDatabaseCopy cmdlet:
Set-MailboxDatabaseCopy DB0\MBX-1C -ReplayLagTime 0.0:0:0 -TruncationLagTime 0.0:0:0
Set-MailboxDatabaseCopy DB1\MBX-1C -ReplayLagTime 0.0:0:0 -TruncationLagTime 0.0:0:0
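If a server hosts many lagged copies, the same change can be applied in one pass. This is a hedged sketch that clears both lag settings on every copy hosted on MBX-1C; review the list returned by Get-MailboxDatabaseCopyStatus before running it.
[PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX-1C | ForEach-Object { Set-MailboxDatabaseCopy -Identity $_.Name -ReplayLagTime 0.0:0:0 -TruncationLagTime 0.0:0:0 }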
Eventually, the replay queues should drain to zero.
[PS] C:\>Get-MailboxDatabaseCopyStatus *
Name Status CopyQueue ReplayQueue LastInspectedLogTime ContentIndex
Length Length State
---- ------ --------- ----------- -------------------- ------------
DB0\MBX1-A Mounted 0 0 Healthy
DB1\MBX1-A Healthy 0 0 1/28/2014 1:59:43 PM Healthy
DB1\MBX-1B Mounted 0 0 Healthy
DB0\MBX-1B Healthy 0 0 1/28/2014 1:36:57 PM Healthy
DB0\MBX-1C Healthy 0 0 1/28/2014 1:36:57 PM Healthy
DB1\MBX-1C Healthy 0 0 1/28/2014 1:59:43 PM Healthy
NEW-DB0\MBX-2A Dismounted 0 0 Unknown
NEW-DB1\MBX-2A Dismounted 0 0 Unknown
Depending on the duration of the original lag settings, this procedure could take several hours or days to complete. Once the replay queues are at zero, the lag has been successfully removed.
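Because the drain can take a long time, you may prefer to poll the replay queues rather than watch the console. A minimal sketch; the 900-second (15-minute) interval is arbitrary:
[PS] C:\>while (Get-MailboxDatabaseCopyStatus *\MBX-1C | Where-Object {$_.ReplayQueueLength -gt 0}) { Start-Sleep -Seconds 900 }
[PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX-1C | ft Name,Status,CopyQueueLength,ReplayQueueLength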
Step 4: Validate database copy health
This step needs to be performed immediately before migrating storage. It is imperative that all database copies be healthy prior to proceeding with further steps. You can use Get-MailboxDatabaseCopyStatus to validate that all database copies are healthy.
[PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX1-A
Name Status CopyQueue ReplayQueue LastInspectedLogTime ContentIndex
Length Length State
---- ------ --------- ----------- -------------------- ------------
DB0\MBX1-A Mounted 0 0 Healthy
DB1\MBX1-A Healthy 0 0 1/29/2014 5:32:30 AM Healthy
[PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX-1B
Name Status CopyQueue ReplayQueue LastInspectedLogTime ContentIndex
Length Length State
---- ------ --------- ----------- -------------------- ------------
DB1\MBX-1B Mounted 0 0 Healthy
DB0\MBX-1B Healthy 0 0 1/29/2014 5:32:22 AM Healthy
[PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX-1C
Name Status CopyQueue ReplayQueue LastInspectedLogTime ContentIndex
Length Length State
---- ------ --------- ----------- -------------------- ------------
DB0\MBX-1C Healthy 0 0 1/29/2014 5:32:22 AM Healthy
DB1\MBX-1C Healthy 0 0 1/29/2014 5:32:30 AM Healthy
If any database copy is unhealthy, it needs to be fixed; in some instances that may require a reseed, which can take several hours. Allot appropriate time to ensure that any remediation steps can be completed.
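If a reseed is required, Suspend-MailboxDatabaseCopy and Update-MailboxDatabaseCopy are the cmdlets involved. A sketch for a single copy is shown below; note that -DeleteExistingFiles removes the existing database and log files for that copy, so use it deliberately.
[PS] C:\>Suspend-MailboxDatabaseCopy DB0\MBX-1C -Confirm:$False
[PS] C:\>Update-MailboxDatabaseCopy DB0\MBX-1C -DeleteExistingFiles -Confirm:$False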
Step 5: Dismount databases and validate database dismount
Next, the databases that will be migrated are dismounted. They will need to remain dismounted until the storage has been successfully migrated, and services are restored in the new database availability group.
Use Dismount-Database to dismount the databases:
[PS] C:\>Dismount-Database DB0 -Confirm:$False
[PS] C:\>Dismount-Database DB1 -Confirm:$False
Use Get-MailboxDatabase -Status to verify the databases are dismounted.
[PS] C:\>Get-MailboxDatabase DB0 -Status | fl *mounted*
MountedOnServer : MBX1-A.exchange.msft
Mounted : False
[PS] C:\>Get-MailboxDatabase DB1 -Status | fl *mounted*
MountedOnServer : MBX-1B.exchange.msft
Mounted : False
Step 6: Ensure log file copy and replay has completed.
Conditions can arise between validating copy status and dismounting the databases that prevent all log files from being copied to the passive nodes. So in this step, you manually copy any remaining log files to the passive node.
Using an administrative command prompt on the server hosting the passive copy, navigate to the log file directory for a database you are migrating.
Execute a command similar to the following (this example assumes the command prompt is already in the target log file directory): robocopy \\<ActiveNode>\<Drive$>\<log folder path> . /E /XF *.chk. The /XF *.chk switch excludes the checkpoint file, which each copy maintains on its own and which should not be copied between nodes.
F:\DB1>robocopy \\mbx-1b\f$\DB1 . /e /xf *.chk
------------------------------------------------------------------------------
ROBOCOPY :: Robust File Copy for Windows
------------------------------------------------------------------------------
Started : Wednesday, January 29, 2014 9:50:13 AM
Source : \\mbx-1b\f$\DB1\
Dest : F:\DB1\
Files : *.*
Exc Files : *.chk
Options : *.* /S /E /DCOPY:DA /COPY:DAT /R:1000000 /W:30
------------------------------------------------------------------------------
364 \\mbx-1b\f$\DB1\
*EXTRA Dir -1 F:\DB1\incseedInspect\
100% New File 1.0 m E01.log
0 \\mbx-1b\f$\DB1\IgnoredLogs\
0 \\mbx-1b\f$\DB1\inspector\
------------------------------------------------------------------------------
Total Copied Skipped Mismatch FAILED Extras
Dirs : 3 0 0 0 0 1
Files : 364 1 363 0 0 0
Bytes : 362.00 m 1.00 m 361.00 m 0 0 0
Times : 0:00:00 0:00:00 0:00:00 0:00:00
Speed : 22310127 Bytes/sec.
Speed : 1276.595 MegaBytes/min.
Ended : Wednesday, January 29, 2014 9:50:13 AM
This will ensure that any missing log files, as well as the updated ENN.log, are available on all copies for replay.
Step 7: Replay all log files into databases.
Next, use eseutil to replay all log files into all database copies. This will ensure that all copies are up to date prior to migrating storage to the remote nodes. This step will be performed on all servers hosting a passive or active database copy.
Launch an administrative command prompt and navigate to the log file directory.
Run eseutil /r ENN, where ENN is the three-character log file prefix for that log sequence (E01 in this example).
F:\DB1>eseutil /r e01
Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 14.03
Copyright (C) Microsoft Corporation. All Rights Reserved.
Initiating RECOVERY mode...
Logfile base name: e01
Log files: <current directory>
System files: <current directory>
Performing soft recovery...
Restore Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Operation completed successfully in 21.938 seconds.
When this has completed on all database copies, proceed to the verification in the next step.
Step 8: Validate database headers.
Once all log files have been copied and replayed, you must ensure that all database copies reflect this work. Compare the database headers of each copy to ensure that they are equal. Specifically, compare the Last Consistent and Last Detach values, which you can view using eseutil.
To dump the header of a database, use eseutil /mh.
E:\DB0>eseutil /mh DB0.edb
Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 14.03
Copyright (C) Microsoft Corporation. All Rights Reserved.
Initiating FILE DUMP mode...
Database: DB0.edb
DATABASE HEADER:
Checksum Information:
Expected Checksum: 0x1c1c8028
Actual Checksum: 0x1c1c8028
Fields:
File Type: Database
Checksum: 0x1c1c8028
Format ulMagic: 0x89abcdef
Engine ulMagic: 0x89abcdef
Format ulVersion: 0x620,17
Engine ulVersion: 0x620,17
Created ulVersion: 0x620,17
DB Signature: Create time:01/27/2014 12:18:49 Rand:3626382 Computer:
cbDbPage: 32768
dbtime: 962549 (0xeaff5)
State: Clean Shutdown
Log Required: 0-0 (0x0-0x0)
Log Committed: 0-0 (0x0-0x0)
Log Recovering: 0 (0x0)
GenMax Creation: 00/00/1900 00:00:00
Shadowed: Yes
Last Objid: 7494
Scrub Dbtime: 0 (0x0)
Scrub Date: 00/00/1900 00:00:00
Repair Count: 0
Repair Date: 00/00/1900 00:00:00
Old Repair Count: 0
Last Consistent: (0x1E6,1,2CA) 01/29/2014 10:57:52
Last Attach: (0x16A,1,270) 01/29/2014 10:12:49
Last Detach: (0x1E6,1,2CA) 01/29/2014 10:57:52
Dbid: 1
Log Signature: Create time:01/27/2014 12:18:48 Rand:3652040 Computer:
OS Version: (6.2.9200 SP 0 NLS ffffffff.ffffffff)
Previous Full Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Incremental Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Copy Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Differential Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Current Full Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Current Shadow copy backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
cpgUpgrade55Format: 0
cpgUpgradeFreePages: 0
cpgUpgradeSpaceMapPages: 0
ECC Fix Success Count: none
Old ECC Fix Success Count: none
ECC Fix Error Count: none
Old ECC Fix Error Count: none
Bad Checksum Error Count: none
Old bad Checksum Error Count: none
Last checksum finish Date: 00/00/1900 00:00:00
Current checksum start Date: 00/00/1900 00:00:00
Current checksum page: 0
Operation completed successfully in 0.125 seconds.
After dumping the header of each copy of the same database, compare the Last Consistent and Last Detach times. If these times are equal across all copies of the same database, then log file copy and log file replay were successful.
DB0\MBX-1A
Last Consistent: (0x1E6,1,2CA) 01/29/2014 10:57:52
Last Detach: (0x1E6,1,2CA) 01/29/2014 10:57:52
DB0\MBX-1B
Last Consistent: (0x1E6,1,2CA) 01/29/2014 10:57:53
Last Detach: (0x1E6,1,2CA) 01/29/2014 10:57:53
DB0\MBX-1C
Last Consistent: (0x1E6,1,2CA) 01/29/2014 10:57:52
Last Detach: (0x1E6,1,2CA) 01/29/2014 10:57:52
If the values on any copy do not match for any reason, mount the database on the source server and start again at Step 4 of this document. If all database headers are equal, proceed with the storage migration.
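When checking multiple copies, the header dump can be filtered down to just these two fields; a convenience sketch:
E:\DB0>eseutil /mh DB0.edb | findstr /c:"Last Consistent" /c:"Last Detach"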
Step 9: Migrate storage to the new node.
When moving storage to the new servers, start by migrating from a server hosting a passive database copy. In this example, I will focus on DB0, which was passive on server MBX-1B. The steps to move storage between servers depend on your storage implementation and are not covered in this article; they should have been tested and validated before this point.
I recommend migrating the storage from a single node first. This allows the original active database and storage to remain intact in case there are any issues with the storage migration or with the database on the target server. After services have been established on the target server, the additional databases and storage can be migrated.
In this example, the new databases were created on server MBX-2A. I am moving the storage from MBX-1B to MBX-2A. After bringing the disks online on MBX-2A using the Disk Management tool, appropriate drive letters or mount points can be assigned. IMPORTANT: note the drive letters and paths used in this procedure. You will need to repeat this step on other servers using the exact same paths. Failure to implement the same drive letters or paths will result in failure of subsequent steps.
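If you prefer the command line to the Disk Management console for this step, the imported disks can be brought online from PowerShell on Windows Server 2012 or later; the disk number below is an example only, and the volume labels applied in Step 1 help confirm that the correct disks arrived.
[PS] C:\>Get-Disk | ft Number,FriendlyName,OperationalStatus,IsOffline
[PS] C:\>Set-Disk -Number 3 -IsOffline $False
[PS] C:\>Set-Disk -Number 3 -IsReadOnly $False
[PS] C:\>Get-Volume | ft DriveLetter,FileSystemLabel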
Step 10: Mount the migrated database on the new node.
In Step 2 above, you created your database objects. In this step, match one of those databases to the files that were moved from the original DAG. First, use Set-MailboxDatabase to set the AllowFileRestore flag on each database.
[PS] C:\>Set-MailboxDatabase -Identity NEW-DB1 -AllowFileRestore:$TRUE
Once the AllowFileRestore flag has been set, change the database and log file paths of the new database object to match the migrated storage. It is very important that you use the correct file name when setting the EDB file path. The paths do not have to match (and may not match) those on the original server, depending on the configuration used in Step 9.
Use Move-DatabasePath to set the database and log file paths, as shown below.
[PS] C:\>Move-DatabasePath NEW-DB1 -LogFolderPath f:\DB1 -EdbFilePath g:\DB1\DB1.edb -ConfigurationOnly:$TRUE -Confirm:$FALSE
Confirm
This operation will skip the safety check and make the change to Active Directory directly. Do you want to continue?
Be sure to allow ample time for Active Directory replication to occur. Then, mount the database using Mount-Database.
[PS] C:\>Mount-Database NEW-DB1
If the command completes successfully, the database mount status can be verified with Get-MailboxDatabase –Status.
[PS] C:\>Get-MailboxDatabase -Identity NEW-DB1 -Status | fl *mounted*
MountedOnServer : MBX-2A.exchange.msft
Mounted : True
Although the database is mounted, mailboxes still reference the original dismounted database in the original database availability group.
Step 11: Move mailboxes to reference the migrated database.
Begin the process of restoring mailbox access by re-homing the mailboxes from the original database to the new database. This updates only the mailboxes' Active Directory configuration, which is appropriate here because the mailbox content was migrated with the storage. This is accomplished using Get-Mailbox and Set-Mailbox.
[PS] C:\>Get-Mailbox -Database DB1 | Set-Mailbox -Database NEW-DB1
Confirm
Rehoming mailbox "exchange.msft/LoadGen Objects/Users/MBX-1B/DB1/MBX-1B 0B63EF06-LGU000001" to database "NEW-DB1". This
operation will only modify the mailbox's Active Directory configuration. Be aware that the current mailbox content
will become inaccessible to the user.
[Y] Yes [A] Yes to All [N] No [L] No to All [?] Help (default is "Y"): a
After allowing sufficient time for Active Directory replication, users should be able to access their mailboxes. The transport services may need to be restarted to force re-categorization of queued messages so that they are delivered to the new servers.
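To confirm that no mailboxes still reference the original database, you can compare the mailbox counts on the old and new databases; a quick check:
[PS] C:\>Get-Mailbox -Database DB1 -ResultSize Unlimited | Measure-Object
[PS] C:\>Get-Mailbox -Database NEW-DB1 -ResultSize Unlimited | Measure-Object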
Step 12 (Optional): Migrate storage associated with the other database copies.
This step is not necessary if the storage for all database copies was already migrated in Step 9. Otherwise, complete the migration of storage from the original servers to the new servers that will host the database copies. It is important that all paths on the new servers match, so pay careful attention to how the disks are presented and how drive letters and mount points are assigned.
Step 13: Add database copies of the new databases to additional DAG nodes using the migrated storage.
After the storage has been completely migrated, the original database files should now be available on servers in the new DAG. Using Add-MailboxDatabaseCopy, you can reinstate passive copies of each database using the database files that were migrated from the original DAG. The Replication service will match these databases to the new log file stream and begin log file replay. If truncation or replay lag was previously configured, the copies can be added with the lag at this time.
[PS] C:\>Add-MailboxDatabaseCopy NEW-DB1 -MailboxServer MBX-2B
[PS] C:\>Add-MailboxDatabaseCopy NEW-DB1 -MailboxServer MBX-2C -ReplayLagTime 7.0:0:0
The success of these operations can be validated with Get-MailboxDatabaseCopyStatus.
[PS] C:\>Get-MailboxDatabaseCopyStatus NEW-DB1\*
Name Status CopyQueue ReplayQueue LastInspectedLogTime ContentIndex
Length Length State
---- ------ --------- ----------- -------------------- ------------
NEW-DB1\MBX-2A Mounted 0 0 Healthy
NEW-DB1\MBX-2B Healthy 0 0 1/29/2014 8:59:19 PM Crawling
NEW-DB1\MBX-2C Healthy 0 162 1/29/2014 8:59:19 PM Crawling
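As noted in the considerations above, the content indexes are rebuilt from scratch after the migration, which is why the copies report a Crawling content index state. You can monitor the rebuild with a command such as:
[PS] C:\>Get-MailboxDatabaseCopyStatus NEW-DB1\* | ft Name,Status,ContentIndexState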
Once this procedure has been successfully completed on all database copies, the original servers can be decommissioned, if necessary.