Modern Auth and Unattended Scripts in Exchange Online PowerShell V2
Today, we are happy to announce the Public Preview of a Modern Auth unattended scripting option for use with Exchange Online PowerShell V2. This feature gives customers the ability to run non-interactive scripts using Modern Authentication.
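While the announcement post has the full setup details, the end state looks something like the following sketch: the AppId, certificate thumbprint, and organization values are placeholders for an Azure AD app registration you create yourself.

# Unattended sign-in with certificate-based auth - no credential prompt
# (placeholder values shown; register the app and certificate first).
Connect-ExchangeOnline -AppId "00000000-0000-0000-0000-000000000000" `
    -CertificateThumbprint "0123456789ABCDEF0123456789ABCDEF01234567" `
    -Organization "contoso.onmicrosoft.com"
Get-AcceptedDomain   # any EXO V2 cmdlet now runs non-interactively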
Mailbox Migration Performance Analysis

When you're migrating on-premises mailboxes to Office 365, there are a lot of factors that can impact overall mailbox migration speed and performance. This post will help you investigate and correct the possible causes by using the AnalyzeMoveRequestStats.ps1 script to analyze the performance of a batch of move requests and identify reasons for slower performance. The AnalyzeMoveRequestStats script produces important performance statistics from a given set of move request statistics. It also generates two files: one for the failure list, and one for individual move statistics.

Step 1: Download the script (see attachment on this post) and import it using the following syntax:

> . .\AnalyzeMoveRequestStats.ps1

Step 2: Select the move requests that you want to analyze. In this example, we're retrieving all currently executing requests that are not in a queued state.

> $moves = Get-MoveRequest | ?{$_.Status -ne 'queued'}

Step 3: Get the move reports for each of the move requests you want to analyze. Note: This may take a few minutes, especially if you're analyzing a large number of moves.

> $stats = $moves | Get-MoveRequestStatistics -IncludeReport

Step 4: Run the ProcessStats function to generate the statistics.

> ProcessStats -stats $stats -name ProcessedStats1

The output should look similar to the following:

StartTime: 2/18/2014 19:57
EndTime: 3/3/2014 17:15
MigrationDuration: 12 day(s) 19:10:55
MailboxCount: 50
TotalGBTransferred: 2.42
PercentComplete: 95
MaxPerMoveTransferRateGBPerHour: 1.11
MinPerMoveTransferRateGBPerHour: 0.43
AvgPerMoveTransferRateGBPerHour: 0.66
MoveEfficiencyPercent: 86.36
AverageSourceLatency: 123.55
AverageDestinationLatency:
IdleDuration: 1.16%
SourceSideDuration: 78.93%
DestinationSideDuration: 19.30%
WordBreakingDuration: 9.63%
TransientFailureDurations: 0.00%
OverallStallDurations: 4.55%
ContentIndexingStalls: 1.23%
HighAvailabilityStalls: 0.00%
TargetCPUStalls: 3.32%
SourceCPUStalls: 0.00%
MailboxLockedStall: 0.00%
ProxyUnknownStall: 0.00%

How to read the results

The first step in understanding the results is to understand the definitions of the report items:

StartTime - Timestamp of the first injected request.
EndTime - Timestamp of the last completed request. If there isn't a completed/autosuspended move, this is set to the current time.
MigrationDuration - EndTime minus StartTime.
MailboxCount - Number of mailboxes.
TotalGBTransferred - Total amount of data transferred.
PercentComplete - Completion percentage.
MaxPerMoveTransferRateGBPerHour - Maximum per-mailbox transfer rate.
MinPerMoveTransferRateGBPerHour - Minimum per-mailbox transfer rate.
AvgPerMoveTransferRateGBPerHour - Average per-mailbox transfer rate. For onboarding to Office 365, any value greater than 0.5 GB/h represents a healthy move rate. The normal range is 0.3-1 GB/h.
MoveEfficiencyPercent - Transfer size is always greater than the source mailbox size due to transient failures and other factors. This percentage shows how close these numbers are and is calculated as SourceMailboxSize/TotalBytesTransferred. A healthy range is 75-100%.
AverageSourceLatency - The duration calculated by making no-op WCF web service calls to the source MRSProxy service. It's not the same as a network ping; 100 ms or less is desirable for better throughput.
AverageDestinationLatency - Similar to AverageSourceLatency, but applies to off-boarding from Office 365. This value isn't applicable in this example scenario.
IdleDuration - Amount of time the request waits in the MRSProxy service's in-memory queue due to limited resource availability.
SourceSideDuration - Amount of time spent on the source side, which is the on-premises MRSProxy service for onboarding and the Office 365 MRSProxy service for off-boarding. Higher average latency and transient failure rates will increase this value. A healthy range for onboarding is 60-80%.
DestinationSideDuration - Amount of time spent on the destination side, which is the Office 365 MRSProxy service for onboarding and the on-premises MRSProxy service for off-boarding. Target stalls such as CPU, ContentIndexing, and HighAvailability will increase this value. A healthy range for onboarding is 20-40%.
WordBreakingDuration - Amount of time spent separating words for content indexing. A healthy range is 0-15%.
TransientFailureDurations - Amount of time spent in transient failures, such as intermittent connectivity issues between the MRS and MRSProxy services. A healthy range is 0-5%.
OverallStallDurations - Amount of time spent waiting for system resources to become available, such as CPU, CI (ContentIndexing), and HA (HighAvailability). A healthy range is 0-15%.
ContentIndexingStalls - Amount of time spent waiting for Content Indexing to catch up.
HighAvailabilityStalls - Amount of time spent waiting for High Availability (replication of the data to passive databases) to catch up.
TargetCPUStalls - Amount of time spent waiting for CPU availability on the destination side.
SourceCPUStalls - Amount of time spent waiting for CPU availability on the source side.
MailboxLockedStall - Amount of time spent waiting for mailboxes to be unlocked. In some cases, such as connectivity issues, the source mailbox can be locked for some time.
ProxyUnknownStall - Amount of time spent waiting for availability of remote on-premises resources such as CPU. The specific resource can be identified by looking at the generated failures log file.

Next, you need to identify which side of the migration is slower by looking at the SourceSideDuration and DestinationSideDuration values. Note: The SourceSideDuration value plus the DestinationSideDuration value is usually, but not always, equal to 100%. If the SourceSideDuration value is greater than the normal range of 60-80%, the source side is the bottleneck. If the DestinationSideDuration value is greater than the normal 20-40%, the destination side of the migration is the bottleneck.

Causes of source side slowness in onboarding scenarios

There are several possible reasons the source side of the migration in an onboarding scenario may be causing slower than normal performance.

High transient failures

The most common reason for transient failures is a connectivity issue to the on-premises MRSProxy web service on your Mailbox servers. Check the TransientFailureDurations and MailboxLockedStall values in the output, and also check the failure log generated by this script to verify any failures. The source mailbox may also get locked when a transient failure occurs, which will lower migration performance.
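As a quick way to see which failures dominate, you can also group the failures recorded in the move reports themselves. A sketch, assuming $stats was gathered with -IncludeReport as in Step 3 so the Report property is populated:

# Summarize failures across all analyzed moves by failure type.
$stats | ForEach-Object { $_.Report.Failures } |
    Group-Object FailureType |
    Sort-Object Count -Descending |
    Select-Object Count,Name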
Misconfigured network load balancers

Another common reason for connectivity issues is misconfigured load balancers. If you're load balancing your servers, the load balancer needs to be configured so that all calls for a specific migration request are directed to the same server hosting the MRSProxy service instances. Some load balancers use the ExchangeCookie to associate all the migration requests with the same Mailbox server where the MRSProxy service is hosted. If your load balancers are not configured correctly, migration calls may be directed to the "wrong" MRSProxy service instance and will fail. This causes the source mailbox to be locked for some time and lowers migration performance.

High network latency

The Office 365 MRSProxy service makes periodic dummy web service calls to the on-premises MRSProxy service and collects statistics from these calls. The AverageSourceLatency value represents the average duration of these calls. If you see high values for this parameter (greater than 100 ms), it can be caused by the following:

Network latency between the Office 365 and on-premises MRSProxy services is high. In this case, you can try to reduce the impact of network latency in a few ways:

- Migrate mailboxes from servers closer to Office 365 datacenters. This is usually not feasible, but can be preferred if the migration project is time critical.
- Delete empty mailbox folders or consolidate mailbox folders. High network latency affects the move rate more if there are too many folders.
- Increase the export buffer size. This reduces the number of migration calls, especially for larger mailboxes, and reduces the time spent in network latency. You can increase the export buffer size by adding the ExportBufferSizeOverrideKB parameter in the MSExchangeMailboxReplication.exe.config file (see the config sketch below). Example: ExportBufferSizeOverrideKB="7500". Important: You need to have Exchange 2013 SP1 installed on your Client Access server to increase the export buffer size. This will also disable cross-forest downgrade moves between Exchange 2013 and Exchange 2010 servers.

Source servers are too busy and not very responsive to the web service calls. In this case, you can try to release some of the system resources (CPU, memory, disk IO, etc.) on the Mailbox and Client Access servers.
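For reference, a sketch of what that config edit might look like. Treat it as an illustration only: the exact element that carries the attribute in MSExchangeMailboxReplication.exe.config can vary by version, and the Mailbox Replication service typically needs a restart to pick up the change.

<!-- %ExchangeInstallPath%\Bin\MSExchangeMailboxReplication.exe.config (fragment, illustrative only) -->
<MRSConfiguration
    ExportBufferSizeOverrideKB="7500" />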
Scale issues

The resource consumption on the on-premises Mailbox or Client Access servers may be high if you're not load balancing the migration requests or if you're running other services on the same servers. You can try distributing the source mailboxes across multiple Mailbox servers and moving mailboxes to different databases located on separate physical hard drives.

Causes of destination side slowness in onboarding scenarios

There are several possible reasons the destination side of the migration in an onboarding scenario may be causing slower than normal performance. Since the destination side of the migration is Office 365, there are limited options for you to try to resolve bottlenecks. The best solution is to remove the migration requests and re-insert them so that the migration requests are assigned to less busy Office 365 servers.

Office 365 system resources: This is likely due to insufficient system resources in Office 365. Office 365 destination servers may be too busy handling other requests associated with normal support of services for your organization.

Stalls due to word breaking: Any mailbox content that is migrated to Office 365 is separated into individual words so that it can be indexed later on. This is performed by the Office 365 search service and coordinated by the MRS service. Check the WordBreakingDuration values to see how much time is spent in this process. If it's more than 15%, it usually indicates that the content indexing service running on the Office 365 target server is busy.

Stalls due to Content Indexing: The content indexing service on the Office 365 servers is too busy.

Stalls due to High Availability: The high availability service that is responsible for copying the data to multiple Office 365 servers is too busy.

Stalls due to CPU: The Office 365 server's CPU consumption is too high.

Karahan Celikel and the Migration Team
Announcing Deprecation of Remote PowerShell (RPS) Protocol in Exchange Online PowerShell

Today, we are announcing that starting June 1, 2023, we will start blocking RPS connections to Exchange Online, and we will block RPS for all tenants by July 1, 2023. Please switch to the REST cmdlets.
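For those still on RPS-based connections, the switch is brief: version 3.0 and later of the ExchangeOnlineManagement module uses REST-backed cmdlets by default (the UPN below is a placeholder).

# Install or update the module, then connect; v3+ connects over REST unless
# you explicitly request the deprecated protocol with -UseRPSSession.
Install-Module ExchangeOnlineManagement -Scope CurrentUser
Connect-ExchangeOnline -UserPrincipalName admin@contoso.onmicrosoft.com
Get-Mailbox -ResultSize 10   # now served by the REST-based cmdlet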
Running PowerShell cmdlets for large numbers of users in Office 365

Update 11/11/2019: for a significant update on this subject, please see Updated: Running PowerShell cmdlets for large numbers of users in Office 365.

When PowerShell was introduced back in Exchange 2007, it was a boon to all of us Exchange administrators. It allowed us to manage large numbers of objects quickly and seamlessly, and we have come to rely on it for updating users, groups, and other sets of objects. With Office 365, things have changed a bit. No longer do we have a local Exchange server to talk to for running our cmdlets; instead, we have to communicate over the internet to a shared resource. This introduces a number of challenges for managing large numbers of objects that did not exist previously.

PowerShell Throttles

Office 365 introduces many throttles to ensure that one tenant or user can't negatively impact large swaths of users by overutilizing resources. In PowerShell this shows up primarily as Micro Delays.

WARNING: Micro delay applied. Actual delayed: 21956 msecs, Enforced: True, Capped delay: 21956 msecs, Required: False, Additional info: .;
PolicyDN: CN=[SERVER]-B2BUpgrade-2015-01-21T03:07:53.3317375Z,CN=Global Settings,CN=Configuration,CN=company.onmicrosoft.com,CN=ConfigurationUnits,DC=FOREST,DC=prod,DC=outlook,DC=com;
Snapshot: Owner: Sid~EURPR05A003\a-ker531361849849165~PowerShell~false
BudgetType: PowerShell
ActiveRunspaces: 0/20
Balance: -1608289/2160000/-3000000
PowerShellCmdletsLeft: 384/400
ExchangeCmdletsLeft: 185/200
CmdletTimePeriod: 5
DestructiveCmdletsLeft: 120/120
DestructiveCmdletTimePeriod: 60
QueueDepth: 100
MaxRunspacesTimePeriod: 60
RunSpacesRemaining: 20/20
LastTimeFrameUpdate: 1/23/2015 10:39:08 AM
LastTimeFrameUpdateDestructiveCmdlets: 1/23/2015 10:38:48 AM
LastTimeFrameUpdateMaxRunspaces: 1/23/2015 10:38:48 AM
Locked: False
LockRemaining: 00:00:00

Throttles like this one in Office 365 can be thought of like a bucket. The service is always pouring more time into the top of the bucket, and we as administrators are pulling time out of the bottom. As long as the service is adding more time than we are using, we don't run into any issues. The problem comes when we are running an intensive command like Get-MobileDeviceStatistics, Set-Clutter, or Get-MailboxFolderStatistics. These take a significant amount of time and a fair bit of resources to run for each user. In that scenario we are pulling more out than the service is putting in, so we end up getting throttled.

Another way to look at it is to examine the warning and do the math to find out how much time we can spend running commands. If we examine the Micro Delay warning, we can find our recharge rate here:

ActiveRunspaces: 0/20
Balance: -1608289/2160000/-3000000
PowerShellCmdletsLeft: 384/400

In this case mine is 2,160,000 milliseconds: that is how many milliseconds per hour I can spend consuming resources. The value you see here will vary depending on the number of mailboxes in your tenant. If we take our recharge rate and divide it by the number of milliseconds in one hour, we get the fraction of each hour we can spend actively consuming resources in our session:

2,160,000 milliseconds recharge rate / 3,600,000 milliseconds per hour = 0.6

Or in other words: 0.6 * 60 minutes = 36 minutes

When we are using the Shell for interactive daily tasks we are never going to come anywhere close to this limit, since it takes a bit of time to type each command and most of us aren't typing PowerShell commands as fast as we can for hours at a time.
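To put the same arithmetic in reusable form, a small sketch that derives your usable minutes per hour from the middle value of the Balance line (the 2,160,000 figure above is just this tenant's value; substitute your own):

# Middle value of the Balance field = milliseconds recharged per hour.
$rechargeMsPerHour = 2160000
$dutyCycle = $rechargeMsPerHour / 3600000        # fraction of each hour you can be active
$activeMinutes = [math]::Round($dutyCycle * 60)  # here: 36 minutes per hour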
A quick solution if you are getting Micro Delayed is to introduce a sufficient pause between each command so that you don't exceed your usage percentage:

$(Get-Mailbox) | foreach { Get-MobileDeviceStatistics -mailbox $_.identity; Start-Sleep -milliseconds 500 }

Session stability

The next common issue I see is problems with session stability. Since we are connecting over the internet, the stability of our session becomes a major concern. If we are going to have a script running Get-InboxRule against 180,000 users for four days, then the chance of us dropping the HTTPS session at some point in that time period is pretty high. These session drops can be caused by any number of things:

- A firewall dropping a long running session
- An O365 CAS server getting upgraded or rebooted
- Upstream network issues

Most of the reasons for session drops we as admins have no control over. This issue manifests itself as you coming back to a PowerShell window that is asking for your credentials. Overcoming this one quickly isn't easy: we would need to monitor the status of the connection and rebuild it if it encountered any errors. This can be done, but it takes more than a few lines of code and could be a challenge to integrate every time we needed to use it.
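To give a feel for what that monitoring looks like, here is a minimal sketch (not necessarily how Start-RobustCloudCommand.ps1 does it internally) that checks the session before each batch of work and rebuilds it if it has dropped; $cred is assumed to hold your admin credentials from Get-Credential:

# Rebuild the Exchange Online session if it is no longer in an Opened state.
function Repair-CloudSession {
    $session = Get-PSSession | Where-Object { $_.ConfigurationName -eq "Microsoft.Exchange" }
    if ($null -eq $session -or $session.State -ne "Opened") {
        Get-PSSession | Remove-PSSession
        $session = New-PSSession -ConfigurationName Microsoft.Exchange `
            -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
            -Credential $cred -Authentication Basic -AllowRedirection
        Import-PSSession $session -AllowClobber | Out-Null
    }
}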
Data return

Finally, we have the issue of data return. Before we can run a Set command on a large number of mailboxes, we must get an array of objects to execute against. Most of us do this with something similar to $mbx = Get-Mailbox -ResultSize Unlimited. But if we are running this against thousands of mailboxes, just the amount of data coming back from the server can take a significant amount of time. This one we can easily get around by using two simple PowerShell tricks.

First, we make use of Invoke-Command to run our Get command on the remote server instead of on the local host. This means that we can send our command across the wire and have the O365 server execute it for us, taking a good bit of the client out of the loop. This is great since it puts the onus to do the work a lot closer to the source of the data. But it isn't a perfect solution, because we are not able to run anything complex using a remote session and Invoke-Command. Instead, we must restrict ourselves to fairly straightforward cmdlets and simple filtering; anything else will result in the server returning an error and rejecting our command. This is where our second trick, Select-Object, comes in. By coupling Invoke-Command with Select-Object we can rapidly get our objects and return only a limited amount of data.

Invoke-Command -Session (Get-PSSession) -ScriptBlock {Get-Mailbox -ResultSize Unlimited | Select-Object -Property DisplayName,Identity,PrimarySMTPAddress}

This command invokes a session on the server to run Get-Mailbox but uses Select-Object to return only the properties that I need to identify each object to operate on. In my testing with 5000 mail contacts, the Invoke-Command approach was on average 35% faster and returned significantly less data for my PowerShell client to deal with.

A scripted solution

Now that we know what challenges we face, can we come up with a consistent, reusable solution to overcome them when we do have to run a cmdlet against a large collection of objects? To that end we have developed a highly generic wrapper script, Start-RobustCloudCommand.ps1, that will take your PowerShell command and execute it against the service in a robust manner that seeks to avoid throttles and actively deal with session issues. Couple this with Invoke-Command for quickly getting the list of objects to operate on, and we can start getting back most of our pre-service PowerShell functionality. You can download the script here.

First, a key disclaimer: this script will NOT make your commands complete faster. If you are running a command that takes 5 seconds per user and you have 7,200 users to run it against, it will still take at least 10 hours to complete. Using this wrapper script will actually slow it down a bit. What it will do is try very hard to ensure that the PowerShell command is able to run uninterrupted, without failure, and without admin interaction for those 10+ hours.

How to use the script

Using the wrapper involves three steps:

1. Build the PowerShell script block you want to run.
2. Collect the objects you want to run against.
3. Wrap your script block and execute.

Build the PowerShell Script Block

This is pretty much the same as building any PowerShell command. I like to build and test my commands against a single user before I try to use them in Start-RobustCloudCommand.ps1. Our first step is nice and easy: just write the PowerShell that we want against a single user's data returned from Invoke-Command.

$mbx = Invoke-Command -Session (Get-PSSession) -ScriptBlock {Get-Mailbox $mailboxstring | Select-Object -Property DisplayName,Identity,PrimarySMTPAddress}

Get-MobileDeviceStatistics -Mailbox $mbx.PrimarySMTPAddress.ToString() | Select-Object @{Name="DisplayName";Expression={$mbx.DisplayName}},Status,DeviceOS,DeviceModel,LastSuccessSync,FirstSyncTime

Here I populated a variable $mbx with a single mailbox's information, but I made sure to do it exactly like I am going to do when I gather the full data set. So I used Invoke-Command and Select-Object to pull only the minimum set of properties that I wanted for running Get-MobileDeviceStatistics. Using Invoke-Command can at times result in unexpected data types in the results, which means that when I send the resulting object into my command, I can get errors where it is unable to deal with the data type presented. Most of these cases can be resolved by using .ToString() to convert the results into a string so that the cmdlet can understand them.

Next, I ran my Get-MobileDeviceStatistics command and again used Select-Object to get just the output information that I wanted. I also used the ability of Select-Object (Example 4 here) to create a property on the fly so that I could populate the display name of the mailbox in my output, something that isn't in the output of Get-MobileDeviceStatistics by default.

Now that I have my script working and giving me the data that I want, I just need to make a minor alteration so that it will work in the script block portion of Start-RobustCloudCommand.ps1. The script block isn't able to read the $mbx variable, since inside the script we use a local Invoke-Command with -InputObject to pass each object into the command one at a time. To read the values of these objects we have to use the automatic variable $input. Therefore, our command becomes the following:

Get-MobileDeviceStatistics -Mailbox $input.PrimarySMTPAddress.ToString() | Select-Object @{Name="DisplayName";Expression={$input.DisplayName}},Status,DeviceOS,DeviceModel,LastSuccessSync,FirstSyncTime

An easy change to make, but highly critical to the script functioning.

Collect the objects you want to run against

Now that we have the command that we want to run, we need to gather all of the objects that we want to run it against.
Here we use what we learned above to quickly and efficiently gather the objects that we are going to operate on.

Invoke-Command -Session (Get-PSSession) -ScriptBlock {Get-Mailbox -ResultSize Unlimited | Select-Object -Property DisplayName,Identity,PrimarySMTPAddress} | Export-Csv c:\temp\users.csv

$mbx = Import-Csv c:\temp\users.csv

You will notice that I exported the output to a CSV file and then imported it back into a variable. When I am operating on a large collection of objects, I prefer this method because it gives me a written-out record of the objects that I am working with. Later, if something beyond my control goes wrong, I can use the CSV file to restart the script with the objects that I have already run against removed from the file, giving me a manual "resume" functionality (see the sketch below).
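For example, a sketch of that manual resume, assuming the output CSV from the run (c:\temp\devices.csv in the next section) records a DisplayName for every mailbox that completed:

# Build a new input list containing only the mailboxes not yet processed.
$done = Import-Csv c:\temp\devices.csv | Select-Object -ExpandProperty DisplayName
$mbx = Import-Csv c:\temp\users.csv | Where-Object { $done -notcontains $_.DisplayName }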
Wrap your script block and execute

Finally, we put everything together and feed it into the wrapper script.

$cred = Get-Credential

.\Start-RobustCloudCommand.ps1 -Agree -LogFile c:\temp\10012015.log -Recipients $mbx -ScriptBlock { Get-MobileDeviceStatistics -Mailbox $input.PrimarySMTPAddress.ToString() | Select-Object @{Name="DisplayName";Expression={$input.DisplayName}},Status,DeviceOS,DeviceModel,LastSuccessSync,FirstSyncTime } -Credential $cred

Here we can see the script iterating through each of my test users, executing the command, and sending the resulting output to the screen. We can also see that the script logs everything it does to the screen and to the log file that I specified. Output to the screen is great for a demo, but generally you would want this information in a file so that you can review it later. All we have to do is add | Export-Csv c:\temp\devices.csv -Append to the end of our script block, and it will push the output of each command into our CSV. Remember that since this is essentially a very complex foreach loop, you need the -Append; otherwise the file will overwrite itself for every user.

.\Start-RobustCloudCommand.ps1 -Agree -LogFile c:\temp\10012015.log -Recipients $mbx -ScriptBlock { Get-MobileDeviceStatistics -Mailbox $input.PrimarySMTPAddress.ToString() | Select-Object @{Name="DisplayName";Expression={$input.DisplayName}},Status,DeviceOS,DeviceModel,LastSuccessSync,FirstSyncTime | Export-Csv c:\temp\devices.csv -Append } -Credential $cred

Conclusion

This solution is designed for customers who have to run cmdlets against large numbers of users in the service (>5000) and are running into issues. It is a bit complex to get working exactly how you want, but it has been designed to be highly generic so it can handle most anything you throw at it. Hopefully our folks with large user sets will be able to start utilizing this to make changes across their tenant, or to run reports more easily.

Matthew Byrd

Exchange 2010 and 2013 Database Growth Reporting Script

Introduction

Often in Exchange Support we get cases reporting that the size of one or more Exchange databases is growing abnormally. The questions or comments we get range from "The database is growing in size but we aren't reclaiming white space" to "All of the databases on this one server are rapidly growing in size but the transaction log creation rate is normal". This script is aimed at helping collect the data necessary to determine what exactly is happening. For log growth issues, you should also reference Kevin Carker's blog post here. Please note that when working with Microsoft Support, there may still be additional data that needs to be captured; for instance, the script does not capture things like mailbox database performance monitor logging. Depending on the feedback we get, we can always look at building in additional functionality in the future. Test it, use it, but please understand it is NOT officially supported by Microsoft. Most of the script doesn't modify anything in Exchange; it just extracts and compares data.

Note: The space dump function will stop (and then restart) the Microsoft Exchange Replication service on the target node and replay transaction logs into a passive copy of the selected database, so use this with caution. We put this function in place because the only way to get the true white space of a database is with a space dump. People often think that AvailableNewMailboxSpace is the equivalent of whitespace, but as Ross Smith IV notes in his 2010 Database Maintenance blog: "Note that there is a status property available on databases within Exchange 2010, but it should not be used to determine the amount of total whitespace available within the database. AvailableNewMailboxSpace tells you how much space is available in the root tree of the database. It does not factor in the free pages within mailbox tables, index tables, etc. It is not representative of the white space within the database." So again, use caution when executing that function of the script, as you probably don't want to bring a lagged database copy into a clean shutdown state, etc.

Before we get into an example of the script, I wanted to point out something you should always check when you are troubleshooting database growth cases: what is the total deleted item size in the database, and are any users on litigation hold? The following set of commands will export the mailbox statistics for any user on litigation hold for a specific database, and will then give you the sum of items in the Recoverable Items folder for those users (remember, the Versions and Purges subfolders are used when litigation hold is enabled).

1. Export the mailbox statistics for the users on the database that have litigation hold enabled:

get-mailbox -database <database name> -Filter {LitigationHoldEnabled -eq $true} | get-mailboxstatistics | Export-CSV LitHoldUsers.csv

2. Import the new CSV as a variable:

$stats = Import-Csv .\LitHoldUsers.csv

3. Get the sum of TotalDeletedItemSize for the litigation hold users in the spreadsheet:

# TotalDeletedItemSize round-trips through CSV as a string like
# "1.5 GB (1,610,612,736 bytes)", so extract the byte count between
# the "(" and the following space before summing.
$stats | foreach {
    $bytesStart = $_.TotalDeletedItemSize.IndexOf("(")
    $bytes = $_.TotalDeletedItemSize.Substring($bytesStart + 1)
    $bytesEnd = $bytes.IndexOf(" ")
    $bytes = $bytes.Substring(0, $bytesEnd)
    $bytes
} | Measure-Object -Sum

This will give you, for the specific database, the sum of recoverable items for users on litigation hold. I've seen cases where this amount represented more than 75% of the total database size.
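If you are working against live objects rather than a CSV round-trip, the string parsing isn't needed. A sketch, assuming an Exchange 2010 shell where TotalDeletedItemSize is a live (non-deserialized) ByteQuantifiedSize value:

# Sum deleted item sizes directly; .ToBytes() is available on live values.
$sum = (Get-Mailbox -Database "<database name>" -Filter {LitigationHoldEnabled -eq $true} |
    Get-MailboxStatistics |
    ForEach-Object { $_.TotalDeletedItemSize.Value.ToBytes() } |
    Measure-Object -Sum).Sum
"{0:N2} GB" -f ($sum / 1GB)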
You also want to confirm which version of Exchange you are on. There was a known store leak fix that was ported to Exchange 2010 SP3 RU1. I don't believe the KB is updated with the fix information, but the fix was put in place, so before you start digging in too deep with the script, make sure to install SP3 RU1 and see if the issue continues.

OK, moving on to the script. What can the script do, you ask? The script can do the following:

- Collects mailbox statistics across the specified database, adding mutable note properties for future use in differencing
- Collects database statistics for the specified database, adding mutable note properties for later differencing
- Collects mailbox folder statistics for all mailboxes on the specified database, adding mutable properties for later differencing
- Compares size and item count attributes of the input database from the differencing database, returning a database type object with the modified attributes
- Compares size and item count attributes of the input mailbox from the differencing mailbox, returning a mailbox type object with the modified attributes
- Compares size and item count attributes of the input folder from the difference folder, returning a folder type object with the modified attributes
- Compares size and item count attributes of the input report from the difference report, returning a report type object with the modified attributes
- Exports a copy of a report (database, mailbox, and folder statistics) to the specified path or current directory in *.XML format
- Imports an *.XML report and exports it to *.CSV format
- Imports the report details from the specified file path (database, mailbox, and folder statistics)
- Outputs database details and the top 25 mailboxes by size and top 25 folders by size
- Collects a space dump, ESEUTIL /MS, from a passive copy of the specified database and writes it to *.TXT
- Searches for events concerning Online Maintenance Overlap and Possible Corruption, outputting them to the screen
- Collects and exports current Store Usage Statistics to *.CSV

You can download the ExchangeDatabaseGrowthReporting.PS1 script as an attachment to this blog post.

Sample script run

Issue reported: "Mailbox Database 0102658021" is rapidly growing in size.

List the options available with the -mode switch, and choose Collect and Export the data for the database that is growing in size (enter mode 1). Specify a path or use the current working directory; quotes around the path are optional, but the path must already exist. Then specify the database name (quotes are optional); the script will run against the active copy. Depending on the size of the database, folder counts, etc., this could take some time to run. Once the report is generated, you will be prompted to select the top number of items to display from each report; 25 is the default if you just press Enter. The onscreen reports will now generate. Note that the DB size on disk here is 1.38 GB. The onscreen reports that you can scroll through include the database size details and individual reports for the top 25 of the following: mailboxes by item size, mailboxes by deleted item size, mailboxes by item count, mailboxes by deleted item count, mailboxes by associated items, folders by size, folders by item count, and folders by deleted item count. The full XML report will be stored in the location you specified. If you close out of the PowerShell window and wish to review the reports again, just run the script in mode 2 (quotes are optional; see the sketch below). Now we have a valid report at a single point in time of what's going on in the database.
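To make the mode usage concrete, a hypothetical invocation based on the walkthrough above; only the -mode switch is named by the post, and the remaining inputs are collected through the script's own prompts:

.\ExchangeDatabaseGrowthReporting.ps1 -mode 2
# Mode 2 re-loads a previously saved *.XML report for on-screen review;
# as with mode 3, expect to be prompted for the report's file path.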
Since we are troubleshooting a "database growth" issue, we will need to wait some time for the database to grow. If you have ample space on the database drive, I would run the report every 24 hours. Once you are ready, compile a second report of the database (the same way you did the first, above). Press Enter for the top 25 items, and the onscreen report will start scrolling through. As you can see below, our database size on disk increased from 1.38 GB to 1.63 GB. So what grew? Now we will use mode 3 of the script to compare the two XML reports. Note the second XML report in the directory.

Run the script with -mode 3. You will be prompted to enter the full file path for the original report, and then for the second report taken after the DB growth was recognized. Once the differential is completed, you will see a report similar to the first two. Keep in mind this is a DIFFERENTIAL report, so it shows how much each item in a particular folder grew, how much the DB grew, and so on. As you can see above, the size on disk shows 256 MB; this is how much the database grew, as we know it went from 1.38 GB to 1.63 GB. If I scroll through the reports, I can see that the Administrator mailbox is where most of the growth took place (which is where I added the content). This data can be used to tell which user(s) might be causing the additional growth. As noted earlier, we have had some "phantom" growth cases where we had known store leaks, which is why it is imperative to make sure you have installed Exchange 2010 SP3 RU1. It's possible that you could run into that type of scenario here, but the data should support it: you would see the DB on disk grow with no real growth in the mailboxes, at which point you would need to engage Microsoft Support.

A quick note on the Actual Overhead value: this is calculated by taking the physical size of the database and subtracting the AvailableNewMailboxSpace, TotalItemSize, and TotalDeletedItemSize. Remember that AvailableNewMailboxSpace is not the true amount of whitespace, so the actual number may be a little higher than what is reported here.
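Spelled out as a calculation (a sketch; the variable names are placeholders for the byte values the report works with):

# Actual Overhead = physical DB size minus everything otherwise accounted for.
$actualOverhead = $physicalDbSize - ($availableNewMailboxSpace + $totalItemSize + $totalDeletedItemSize)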
Other script parameters

The remaining modes of the script should be pretty self-explanatory:

- Mode 4 - Export Store Usage Statistics. Uses the built-in Get-StoreUsageStatistics function, allowing you to run it at a server or database level.
- Mode 5 - Searches the application log for events concerning Online Maintenance Overlap and Possible Corruption, outputting them to the screen. We probably didn't get every event listed here, so we can add events as we see them.
- Mode 6 - Searches the server it is run on for passive copies of databases, and alerts you to any that are configured as lagged copies. If you choose to run this against a passive copy to get the true white space, it will stop the Microsoft Exchange Replication service, do a soft replay of the logs needed to bring the passive copy into a clean shutdown state, and then run ESEUTIL /MS against the passive copy. Once completed, it will restart the Replication service.
- Mode 7 - Reads in one of the XML reports created by mode 1 and breaks it out into its individual component reports in CSV format.

Jesse and I decided to build this because we continue to see cases on database growth, so a special thanks to him for running with the idea and compiling the core components of the script. We both had been running our own versions of this while troubleshooting cases but, alas, his core script was better (I still got to add some of the fun ancillary components). We'd like to thank Bill Long for planting the idea in our heads, as he worked so many of these cases from a debugging standpoint, as well as David Dockter and Rob Whaley for their technical review. Hopefully this helps you troubleshoot any database growth issues you run across. We look forward to your comments and are definitely open to suggestions on how we can make this better for you. Happy troubleshooting!

Jesse Newgard and Charles Lewis, Sr. Support Escalation Engineers