Friday, May 1, 2015

Migrating On-premise .Net Web Applications to Microsoft Azure - Part 4


Content management systems, document management systems, and other solutions built around user-generated content deal with huge amounts of data. In addition, a website's integral files, such as JavaScript, CSS, images, and other media, are downloaded on every page request and consume a large share of your origin server's bandwidth.

On-premise applications usually neglect page load performance and bandwidth issues, but if you offload these static resources from your application server to the purpose-built Azure Content Delivery Network (CDN), you can significantly reduce the origin server's bandwidth usage and improve page load performance.

Another reason to adopt Azure CDN is proximity: keeping these static files close to your customers reduces network hops, which yields a significant page performance gain. As of this writing, Microsoft Azure has around 31 point-of-presence (POP) locations spread across the United States, Europe, Asia, Australia, and South America.

Adopting the CDN is a two-step process:

  1. Create a storage account, if one does not already exist.
  2. Create a CDN endpoint, mapping the storage account as the origin domain; the endpoint URL is then used to access the static assets.
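Once the endpoint from step 2 exists, page templates reference static assets through it instead of the origin server. A minimal sketch of such a URL helper (the endpoint name below is hypothetical, not from any real setup):

```python
# Hypothetical CDN endpoint URL; real endpoints follow the
# <name>.vo.msecnd.net pattern assigned when the endpoint is created.
CDN_ENDPOINT = "https://az12345.vo.msecnd.net"

def cdn_url(asset_path, use_cdn=True):
    """Return the CDN URL for a static asset, or the relative path as a
    fallback (e.g. for local development against the origin server)."""
    if not use_cdn:
        return asset_path
    return CDN_ENDPOINT + asset_path
```

A `use_cdn` switch like this keeps local and staging environments working against the origin server while production traffic goes to the CDN.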

Not all applications require a CDN. A CDN is best suited for heavy-traffic websites with a lot of static assets and target users spread across many different regions. Analyze your IIS logs to understand how much bandwidth your static assets consume compared to dynamic content; if your static-content bandwidth is low, a CDN will probably not make sense.
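One way to do that analysis can be sketched as follows: a small script that totals response bytes from a W3C-format IIS log, split into static and dynamic requests. It assumes the log was configured to include the `cs-uri-stem` and `sc-bytes` fields; the extension list is illustrative.

```python
# Illustrative sketch (not an official tool): split IIS W3C log bandwidth
# into static vs. dynamic requests by file extension.
STATIC_EXTENSIONS = {".js", ".css", ".png", ".jpg", ".gif", ".ico", ".woff"}

def bandwidth_by_content_type(lines):
    fields = []
    totals = {"static": 0, "dynamic": 0}
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]      # column names for the data rows
            continue
        if line.startswith("#") or not line.strip():
            continue                       # skip other directives and blanks
        row = dict(zip(fields, line.split()))
        uri = row.get("cs-uri-stem", "")
        size = int(row.get("sc-bytes", 0))
        ext = uri[uri.rfind("."):].lower() if "." in uri else ""
        totals["static" if ext in STATIC_EXTENSIONS else "dynamic"] += size
    return totals
```

If the resulting static total is only a small fraction of overall bandwidth, the CDN's benefit is correspondingly small.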

Active Directory

Many internal and enterprise line-of-business applications leverage Windows Active Directory for authenticating and authorizing their users. These applications carry complex configuration related to integrating with the on-premise Windows Active Directory setup. ASP.Net 2.0 brought in support for Windows AD integration through the ActiveDirectoryMembershipProvider, which developers used to integrate Windows Active Directory with their applications.
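As a refresher, that on-premise integration typically looked something like the following web.config fragment (the LDAP path and provider name here are hypothetical):

```xml
<!-- web.config fragment; the LDAP path and names are hypothetical examples. -->
<connectionStrings>
  <add name="ADConnectionString"
       connectionString="LDAP://ad.example.com/DC=example,DC=com" />
</connectionStrings>
<system.web>
  <membership defaultProvider="AdMembershipProvider">
    <providers>
      <add name="AdMembershipProvider"
           type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
           connectionStringName="ADConnectionString"
           attributeMapUsername="sAMAccountName" />
    </providers>
  </membership>
</system.web>
```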

Migrating to the cloud doesn’t affect or replace the Windows Active Directory integration in your application in any way, except to simplify the cluttered configuration process. You can bring the same capability along while migrating your applications to the Azure cloud by using Azure Active Directory, with minimal configuration and code changes.

Integrating Azure Active Directory with your cloud application is a three-step process:

  1. Setup Active Directory on Azure
  2. Run the Authentication / Authorization configuration wizard from the website's Configure tab and associate the Active Directory created in Step 1.
  3. Select or create the Azure Active Directory app for the Website

Once you complete the steps above, your application will only allow users from the associated Active Directory. However, there are known limitations:

  • The target framework must be .Net 4.5.
  • All users in the configured directory will have access to the application.
  • The entire site is placed behind the login requirement.
  • Headless authentication/authorization for API scenarios or service-to-service scenarios is not currently supported.
  • There is no distributed log-out, so logging the user out will only do so for this application, not all global sessions.

If your existing application can work within these restrictions, it’s wise to choose the default Authentication/Authorization capability of Azure Web Apps.

Queue/Service Bus

Queues are not new; they have been around ever since multi-tiered and multi-layered applications were first built. Queues help our applications distribute messages between components, servers, and layers. A popular use case is uploading images on the web server and processing them in a distinct component. Before managed queues or Service Bus were available, developers used to build custom queue mechanisms on SQL Server and manage operations like insertion, retrieval, and deletion themselves.
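That hand-rolled pattern can be sketched roughly as follows, here using SQLite purely for illustration (the table and column names are made up, not from any real system):

```python
import sqlite3

# Illustrative sketch of a custom SQL-backed queue: insertion, retrieval,
# and deletion handled by hand, as described above.
class SqlQueue:
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS queue ("
                     "id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT)")

    def enqueue(self, body):                       # insertion
        self.conn.execute("INSERT INTO queue (body) VALUES (?)", (body,))

    def dequeue(self):                             # retrieval + deletion
        row = self.conn.execute(
            "SELECT id, body FROM queue ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        self.conn.execute("DELETE FROM queue WHERE id = ?", (row[0],))
        return row[1]
```

When migrating, each of these operations has a counterpart in Azure Queue storage (adding, getting, and deleting messages through the SDK), which is what makes the replacement largely mechanical.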

While migrating, applications that use a custom queue mechanism can replace it with the Azure Queue storage service. Azure offers SDKs for a wide variety of programming languages, including Java, PHP, and Python, along with .Net.

Azure offers a scalable and reliable messaging queue; however, adopting Azure Queues requires modifying the existing application to work with the Azure Queue SDK.
Carefully analyze:

  • Current Queue infrastructure
  • Complexities involved in adopting Azure Queues
  • Benefits of Azure Queue

Weigh the benefits and challenges involved and choose the suitable solution.

Cloud – On-premise Connectivity

While migrating applications to the cloud, a preliminary step is to identify the dependent systems such as databases, third-party tools, data feeds, etc. There might be concerns around data security, compliance, and legal restrictions; if so, you might be forced to keep the database on your premise and migrate only the application to Azure. Microsoft Azure provides hybrid connectivity tools which help the developer move just the application and still consume data from the on-premise database or other custom sources.

Hybrid Connection Manager is a feature of BizTalk Services provided by Microsoft to bridge connectivity between on-premise services and Azure cloud services. It can be installed on a dedicated or shared server inside your corporate firewall and lets Azure connect to your designated databases, including SQL Server, MySQL, and Oracle.
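From the application's point of view the change is minimal: with a Hybrid Connection in place, the web app keeps using the on-premise server's hostname as if it were on the local network. A sketch of what the configuration might look like (server, database, and user names here are hypothetical):

```xml
<!-- web.config fragment; all names are hypothetical examples. -->
<connectionStrings>
  <add name="OrdersDb"
       connectionString="Data Source=ONPREM-SQL01;Initial Catalog=Orders;User ID=appuser;Password=..."
       providerName="System.Data.SqlClient" />
</connectionStrings>
```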

Hybrid Connection Manager uses Shared Access Signature (SAS) keys to secure the connectivity between your Azure account and the on-premise database. It creates separate security keys for the application and the on-premise database; developers can individually revoke and roll over these keys for security reasons.

Ports to be opened on your on-premise end:

  • TCP 80 (HTTP): used for certificate validation.
  • TCP 5671: used to connect to Azure. If TCP port 5671 is unavailable, TCP port 443 (HTTPS) is used.
  • TCP 9352: used to push and pull data. If TCP port 9352 is unavailable, TCP port 443 (HTTPS) is used.

Search Considerations

Many existing mission-critical LOB applications use the SQL Server Full-Text Search indexing engine for better search performance. Typically, these required significant investment to develop the indexing infrastructure on premise, along with custom code to access it from the application layer. Developers who don’t want to modify the search section of the application and want to bring the same search indexing capability to Azure can implement it using SQL Server 2008 or 2012 editions.

Another alternative is to use Azure Search, a PaaS offering which offloads the search indexing functionality to a scalable and highly available Azure Search service.

Batch Jobs (Background Tasks)

Usually applications process a lot of data in bulk; these runs are also known as batch jobs. Batch execution covers a variety of jobs, including bulk database updates, automated transaction processing, ETL processes, digital image processing (resize, convert, watermark, or otherwise edit image files), file conversion from one format to another, etc. These jobs are executed sequentially, often on a time-based schedule (for example, from 8.00 am to 9.00 am), and depending on the application, the server infrastructure, the size of the data, and the complexity of the logic, they may take anywhere from a few minutes to several hours.

If the volume of data is huge and the system's provisioned capacity is low, a job may take more time; and if other applications depend on the output of this system, the subsequent processes might also get delayed. On-premise applications usually provision limited system resources for such batch systems, and scalability is often forgotten in these scenarios.

With the Azure cloud, developers now have resources at their disposal: instead of running a batch job on one server for 10 hours, a developer can boot 10 servers and complete the job within one hour for roughly the same cost, and the dependent applications can then proceed sooner.
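The cost arithmetic behind that claim can be sketched in a few lines (the server counts and hourly rate are made up for the example):

```python
# Toy sketch of the scale-out arithmetic: splitting a fixed amount of work
# across N servers divides the wall-clock time by N while the total compute
# cost (server-hours x hourly rate) stays the same.
def batch_plan(total_work_hours, servers, hourly_rate):
    wall_clock_hours = total_work_hours / servers
    cost = servers * wall_clock_hours * hourly_rate
    return wall_clock_hours, cost
```

With a hypothetical rate of $0.50 per server-hour, one server takes 10 hours at a cost of $5, while ten servers take 1 hour at the same $5 — the same spend, delivered ten times sooner.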

Migrating these batch processes to Microsoft Azure has various benefits, including on-demand resources, hyper-scale parallel processing, integration with Azure Storage, etc. Azure Batch is a new service from Microsoft that follows a distributed job-execution architecture; hence, an existing custom-built batch processing system cannot be migrated as-is to the new Azure Batch processing service.

Existing batch processing systems have to be rebuilt using the Microsoft.Azure.Batch SDK, which can be obtained from NuGet. If the existing application has a complex batch processing system and you don’t want to rebuild or customize it, we recommend deploying it as-is in a VM and executing it there.