Friday, January 30, 2015

How to determine the SSL TPS of your workload?

The KEMP Technologies LoadMaster range of load balancers goes from very affordable entry-level models up to real workhorses. To choose the right option you need to think about the number of network interfaces you need, how many real or virtual servers you want to be able to use, and your expected throughput and SSL TPS.


Of all those parameters, SSL TPS is the one that confuses people the most.

What is SSL TPS?

SSL TPS is the number of SSL (Secure Sockets Layer) transactions per second. First we need to understand what a transaction is. An SSL transaction consists of three phases:

[Image: the three phases of an SSL transaction: Session Establishment, Data Transfer and Session Closure]

The Session Establishment phase is the most expensive from a performance point of view. This is where the handshake takes place: the parties authenticate, exchange keys and establish the encrypted session. The Data Transfer phase is where the actual data is transferred, and during the Session Closure phase the client and server tear down the connection.

So TPS is the number of new SSL sessions per second, not to be confused with the number of concurrent (already established) SSL sessions.
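To make those phases concrete, here is a minimal sketch of a single SSL transaction, driven from PowerShell with the standard .NET classes (the target host is just an example):

    # Session Establishment: TCP connect, then the TLS handshake in
    # AuthenticateAsClient() - authentication and key exchange happen here.
    $client = New-Object System.Net.Sockets.TcpClient("www.example.com", 443)
    $ssl = New-Object System.Net.Security.SslStream($client.GetStream())
    $ssl.AuthenticateAsClient("www.example.com")

    # Data Transfer: application data flows over the encrypted channel.
    $request = [Text.Encoding]::ASCII.GetBytes("GET / HTTP/1.1`r`nHost: www.example.com`r`n`r`n")
    $ssl.Write($request, 0, $request.Length)

    # Session Closure: client and server tear down the connection.
    $ssl.Dispose()
    $client.Close()

Every run of the first block above counts as one new SSL session and therefore as one transaction against the TPS budget.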

SSL and ADCs

Creating an SSL session requires CPU resources, and common x86 processors are not particularly good at this task. This is why certain ADCs have a dedicated chip to perform it, called an ASIC (Application Specific Integrated Circuit). The LoadMaster LM-2600, LM-3600 and LM-5400 are examples of ADCs with an SSL ASIC. Traditionally, an ADC with an SSL ASIC was used to offload the SSL traffic and pass it to the real server over unencrypted HTTP.

Today SSL offloading enables the ADC to perform L7 tasks such as content switching and intrusion prevention (IPS). And with the power of modern hardware it is common practice to re-encrypt the traffic before it leaves the ADC for the real server.

Calculate the TPS

To calculate the expected SSL TPS you need to understand both the traffic characteristics of your application and the load your users are expected to generate.

For a typical HTTP application you need to understand:

  • the number of unique visitors
  • the number of HTML pages loaded per user session
  • the number of requests made to the web-server per HTML page

Plan for peak usage; burst load can be up to three or four times the average load.
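As a back-of-the-envelope example, the calculation could look like this; all numbers below are made-up assumptions, not measurements:

    # Illustrative input values - replace with your own figures.
    $visitorsPerHour    = 2000   # unique visitors in the peak hour
    $pagesPerSession    = 8      # HTML pages loaded per user session
    $connectionsPerPage = 4      # new SSL connections per page; browsers
                                 # reuse SSL sessions, so this is much lower
                                 # than the number of requests per page
    $peakFactor         = 3      # burst load of three times the average

    $avgTps  = ($visitorsPerHour * $pagesPerSession * $connectionsPerPage) / 3600
    $peakTps = $avgTps * $peakFactor

    "Average SSL TPS: {0:N0}" -f $avgTps    # ~18
    "Peak SSL TPS: {0:N0}" -f $peakTps      # ~53

With these assumed numbers even a modest workload stays well under 100 SSL TPS, but the peak factor shows why sizing on the average alone is risky.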

Measure the TPS

A more hands-on and practical way to determine SSL TPS is to simply measure it in a production or lab deployment. If you don't have an existing solution in place to measure it, I suggest you download a trial version of the KEMP LoadMaster VLM. The VLM comes with a 30-day temporary license, which should be sufficient to perform some tests in your environment.

After you have created the Virtual Service and directed users to the LoadMaster, you can read the TPS and throughput in real time in the System Metrics section of the Home page.

[Screenshot: real-time TPS and throughput in the System Metrics section]

This screenshot was taken from a small Exchange 2013 environment with roughly 700 active users, Outlook Anywhere in Online Mode and an average of 1.5 ActiveSync devices per user.

This customer plans to use the LoadMaster for several other applications in the near future. The choice for the VLM-2000, with its 2 Gbps throughput and up to 1,000 SSL TPS, seems to be the right one: this unit offers more than enough performance, with sufficient headroom for peak usage.

An alternative approach would be to enable SNMP on the LoadMaster:

[Screenshot: SNMP options on the LoadMaster]

The MIB can be found under the Tools section of the LoadMaster documentation site. Then use your favorite SNMP tool to collect and log the data, for instance Paessler's PRTG.

Friday, January 16, 2015

Soon: Import PST files to Office 365

In an on-premises environment an admin can use the New-MailboxImportRequest cmdlet to import a batch of PST files into mailboxes, or even directly into the user's In-Place Archive mailbox with the -IsArchive switch. Currently this is not possible with Exchange Online.
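For reference, an on-premises batch import might look something like this; the share path and the one-PST-per-alias naming convention are illustrative assumptions:

    # Import every PST from a share, assuming each file is named after the
    # mailbox alias. -IsArchive sends the data straight into the user's
    # In-Place Archive instead of the primary mailbox.
    Get-ChildItem "\\fileserver\pstshare\*.pst" | ForEach-Object {
        New-MailboxImportRequest -Mailbox $_.BaseName -FilePath $_.FullName -IsArchive
    }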

Of course there are some alternatives, such as an Outlook-based manual import.

[Screenshot: manual PST import in Outlook]

However, if your organization wants to move away from PST files (and you should), a manual process may not be the best solution.

When Microsoft bought the PST Importer tool from Red Gate and re-released it in 2012 as PST Capture (and more recently as PST Capture 2.0), it looked like this would be the perfect tool to locate and import PST files. Unfortunately the tool has severe shortcomings, most importantly in the areas of features, stability and performance, and the fact that the tool is not supported through Office 365 Support.

So it is great news that Microsoft is working on providing...

The ability to import data into Office 365 in a quick and easy manner has been a known constraint of Office 365, and a solution for this issue has emerged as a key request from customers.  The engineering team has been working on a solution that will allow quicker imports of data into Exchange Online Archive Mailboxes.  You will now be able to import Exchange Online data through PST files into the service without using third party tools.

The announcement continues with the mention of Drive Shipping and Network Based Ingestion:

Drive Shipping and Network Based Ingestion options will use Azure-based services to import data.  Over time we will be extending this to other data types across Office 365.

Imagine you would be able to ship a 4TB USB drive to Microsoft and have them import your files to Exchange Online or SharePoint Online!

Expect the experience to be quite different from what you would do on-premises. Because the actual import process is handled by the Mailbox Replication Service (MRS), it won't be possible to have your local files imported into Exchange Online with the New-MailboxImportRequest cmdlet. Instead, expect an interface to upload (or ship) your files to an Azure datacenter and start the import process from there.

Note that the announcement specifically mentions Exchange Online Archive Mailboxes. I hope it will be possible to import the data into the primary mailbox too, to facilitate scenarios where that makes more sense.

If you want to be the first to know what Microsoft has in the pipeline for Office 365, make sure to keep an eye on the Office 365 roadmap.

[Image: Office 365 roadmap]

Thursday, January 15, 2015

Update: Confusion around the new Office 365 150 MB onboarding limit

January 16th 2015: Added an update below this article...

Earlier this week, Microsoft announced a change in the maximum supported item size to migrate to Exchange Online.

Office 365 Exchange Online message size onboarding limit increase — We are making a change to allow customers to migrate larger mail messages to Exchange Online. We now allow messages up to 150MB to be migrated to the service. The change is available immediately to all customers and is published as a new limit in the Exchange Online limits page in the Office 365 service description. We are not changing other limits such as mailbox size or maximum send/receive limit for messages. This change enables customers with large messages to easily migrate their existing content to the service.

The previous limit was 25 MB. Customers needed to check whether items larger than 25 MB existed in their mailboxes before those mailboxes could be migrated to Exchange Online. Users then had to be instructed to export the offending items from their mailbox and store them on a file share. Alternatively, administrators could perform the export from the Exchange side.
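For MRS-based moves there is also a middle ground: a move request can be allowed to skip a limited number of oversized items instead of failing outright. A sketch for a hybrid onboarding move (identity, endpoint and domain names are illustrative):

    # Allow up to 10 items above the size limit to be skipped during the move.
    $cred = Get-Credential contoso\admin
    New-MoveRequest -Identity "jdoe" -Remote -RemoteHostName "mail.contoso.com" `
        -RemoteCredential $cred -TargetDeliveryDomain "contoso.mail.onmicrosoft.com" `
        -LargeItemLimit 10 -AcceptLargeDataLoss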

Microsoft already updated the Exchange Online Limits document to include the new 'Message size limit - migration' value.

[Screenshot: the new 'Message size limit - migration' value in the Exchange Online limits documentation]

There has been some confusion on the subject of migration. Some people, including me, assumed that this new limit applied only to mailboxes being moved with the Mailbox Replication Service, which would restrict the improvement to Hybrid migrations. What about Outlook Anywhere based Cutover and Staged migrations, or IMAP migrations? Or third-party migration tools that use EWS to migrate the data over to Exchange Online mailboxes?

Exchange MVP Henrik Walther is very clear:

Does the new 150 MB message size limit apply to third party tools?

That depends, but typically no. The reason for this is that most third-party tools provision the mailbox (meaning it will have the limit of the mailbox plan enforced) prior to migration, unlike MRS-based moves.

MCM/MCSM Gary Steere believes the change applies to both MRS-based moves and IMAP migrations. That leaves Staged and Cutover migrations with the current 25 MB limit.

My expectation is that the changed limit will apply to MRS moves for sure; that makes perfect sense because this is the most 'enterprise' friendly way to migrate mailboxes, and I would expect Microsoft to keep trying to deliver the best experience for this migration method. The other native migration tools are used for small-scale migrations and already have severe limitations; I think Microsoft will give less priority to improving these methods.

At this time it's not possible to share a definitive answer, because there's no official statement other than the initial announcement. Why don't we just test it then? Because changes like this are rolled out across Office 365 over a period of time and are not immediately available to all tenants. So if we test and the test fails, it may simply be because the change has not been applied to our tenant yet.

In the meantime I will post when I have more information. If you have more information, please let me know in the comments section!

Update:

My sources tell me (how cool does that sound!) that there is much confusion and discussion within Microsoft internally. The internal communication gives the impression this only applies to MRS moves, but it remains unclear how exactly this will work. If the limit is applied at the store level, can a user move the large items between folders?

And then there's this issue with hybrid moves some people reported in December:

The value of property 'MaxReceiveSize' exceeds the maximum allowed for user *****. The allowed maximum receive size is 150 MB.
    + CategoryInfo          : InvalidArgument: (afeucht@xxxxxx.com:MailboxOrMailUserIdParameter) [New-MoveRequest], RuleValidationException
    + FullyQualifiedErrorId : [Server=BLUPR02MB147,RequestId=99b351c5-8fae-4372-a3c4-8575ab1e16d2,TimeStamp=12/18/2014 9:19:03 PM] [FailureCategory=Cmdlet-RuleValidationException] 6264EEEB,Microsoft.Exchange.Management.RecipientTasks.NewMoveRequest

Microsoft confirmed this issue and had it fixed in less than a week; however, I am sure that this error has something to do with the changed message size limit. Is this a confirmation that the new limit is enforced by the MRS during the mailbox move process? Interesting...

Monday, January 12, 2015

Considering an Exchange 2013 DAG without AAP? Careful!

Exchange 2013 SP1 can now benefit from a couple of new clustering features in Windows Server 2012 R2; read all about them in Scott Schnoll's blog post Windows Server 2012 R2 and Database Availability Groups.

My personal favorite is the option to create a DAG without a Cluster Administrative Access Point (AAP). This feature allows Exchange to use a cluster without an assigned IP address, without IP Address and Network Name cluster resources, and without a Computer Name Object (CNO). Windows Server 2012 R2 and Exchange 2013 SP1 no longer need those to manage the cluster and can talk to the cluster API directly.

A DAG without an AAP reduces complexity and simplifies DAG management. Everyone who has worked with Exchange 2000/2003 clusters will agree that reducing complexity can greatly improve the stability and availability of Exchange.
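For reference, creating such an IP-less DAG is a one-liner; the DAG and witness server names below are illustrative:

    # Create a DAG without an Administrative Access Point
    # (requires Exchange 2013 SP1 on Windows Server 2012 R2).
    New-DatabaseAvailabilityGroup -Name "DAG1" `
        -WitnessServer "FS01" -WitnessDirectory "C:\DAG1" `
        -DatabaseAvailabilityGroupIpAddresses ([System.Net.IPAddress]::None)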

Unfortunately there are many third-party solutions that still require the legacy cluster objects, for instance backup software trying to access the databases through the DAG CNO. An example of such software is Backup Exec 2012-2014:

Symantec states in HOWTO99184, Backing up Exchange data, that:

Backup Exec requires an Exchange DAG to be configured with a Cluster Administrator Access Point to facilitate connectivity to the Cluster Name and Cluster IP address.

Symantec NetBackup has a similar issue; however, it can be tricked into talking to a static server by editing the hosts file, as described in Backing up an Exchange 2013 IP less DAG. Another example is NetApp SnapManager, which currently does not support a DAG without an AAP.
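The hosts-file trick essentially pins the DAG name to one specific node; a hypothetical entry on the backup server (name and address made up) would be:

    # C:\Windows\System32\drivers\etc\hosts on the backup server (illustrative)
    10.0.0.21    DAG1.contoso.local    DAG1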

Unfortunately there's no (supported) way to convert your DAG to a DAG with an AAP, so you would need to destroy and rebuild your DAG to correct such an issue. Check any dependencies carefully before you opt to deploy a DAG without an AAP.