Friday, May 1, 2015

Migrating On-premise .Net Web Applications to Microsoft Azure - Part 5

Schedulers


Schedulers are an important part of batch jobs when they are time bound. Traditionally, developers have used Windows Server's Task Scheduler to execute tasks at fixed intervals. This is good enough for executing a job on the local server or a remote server, and it also provides a basic monitoring window to display current tasks, view task properties and history, etc.

However, this system is not highly scalable or reliable, simply because if the server where the scheduler is configured goes down, all the jobs will ultimately fail. Unless your customers scream or you have a monitoring system for the hosting server, nobody will notice.

You can replace your existing scheduling mechanism with Azure Scheduler, which provides advanced, scalable and reliable scheduling functionality in Azure. Benefits of Azure Scheduler include:


  • Call HTTP/HTTPS services inside your Azure subscription as well as services hosted on-premise
  • Run jobs on any schedule—now, later or recurring
  • Use Azure Storage queues for long-running or offline jobs
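
Jobs can be created from the Azure management portal or programmatically. The minimal sketch below assumes the Microsoft.WindowsAzure.Management.Scheduler NuGet package; the exact type and property names may differ between SDK versions, so treat it as illustrative only (the service, job collection and endpoint names are placeholders):

using System;
using System.Security.Cryptography.X509Certificates;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Scheduler;
using Microsoft.WindowsAzure.Scheduler.Models;

// Management-certificate credentials for the subscription (placeholders).
var credentials = new CertificateCloudCredentials("<subscription-id>",
    new X509Certificate2(@"management-cert.pfx", "<password>"));

// Client scoped to an existing cloud service and job collection.
var schedulerClient = new SchedulerClient("my-cloud-service", "my-job-collection", credentials);

// Invoke an HTTPS endpoint every hour, starting now.
schedulerClient.Jobs.Create(new JobCreateParameters
{
    Action = new JobAction
    {
        Type = JobActionType.Https,
        Request = new JobHttpRequest
        {
            Uri = new Uri("https://myapp.azurewebsites.net/jobs/nightly-cleanup"),
            Method = "GET"
        }
    },
    Recurrence = new JobRecurrence
    {
        Frequency = JobRecurrenceFrequency.Hour,
        Interval = 1
    }
});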


Scalability and High Availability Considerations


A key aspect of application migration is enabling the existing application to leverage the elastic scalability (scale up and scale down) of the Azure cloud. It is important to analyze and refactor the application to adapt to this scalability attribute. Often this doesn't require rewriting or modifying your existing application; it can be achieved through configuration
with little effort. The scalability strategy may differ from one application to another and from time zone to time zone. For example, a multinational company with branches all over the world might see equally heavy traffic during the day and at night, whereas an organization with employees only in a specific country or region might scale down its CRM application during the evening and night due to lower traffic.

Identifying the scalability pattern of the application and configuring the scalability requirements accordingly will help you improve the TCO and ROI of adopting the Azure cloud.

Business Continuity (Backup, Resiliency & Data Recovery)


On-premise applications have suffered from backup deficiencies due to the unavailability of efficient storage, the manual overhead involved, connectivity, the geo-locations involved, etc. Out of the many applications we analyzed, very few had backup and data recovery planned; the rest had only file backups or database backups, of which very few were automated and most were manual.

Whether or not the application you are migrating possesses a backup and data recovery strategy, it's recommended to set up application and database backups, especially when the application is hosted on the cloud. Azure Backup & Site Recovery is a suite of backup solutions for your applications, databases and even your VMs.

TCO & ROI

Microsoft Azure's new breed of services helps enterprises and SMBs move from a Capex to an Opex way of looking at IT investments. Existing applications have suffered from a variety of hardware and software shortages due to Capex burdens and the manual intervention required to acquire and set up machines. Enterprises either over- or under-provisioned their resources, which attracted huge IT investments and ongoing maintenance. With the Azure cloud, a virtually unlimited amount of hardware and software resources is available to developers on demand in a simple pay-as-you-go model, without the need to understand license terms and conditions.

From the TCO & ROI point of view, Migrating to cloud may look like a costly initiative in a short term, but operating wisely by architecting your application to leverage the key promise of cloud i.e Elastic Scalability and Virtually No limit Hardware with bundled licensed options and Enterprise Agreement for Enterprise and SMB customer will help to reap the benefit in the long term.

Below is a simple cost estimation for a medium-sized 3-tier web application with components like compute, database, storage, etc. This is just an indicative estimate; the actual cost may vary depending on your actual application requirements.


| Resource | Description | Cost |
| --- | --- | --- |
| Web Role | Standard D2 (2 cores, 7 GB RAM, 100 GB SSD), count: 2 | $473.19 ($0.636/hr) |
| Worker Role | Standard A2 (2 cores, 3.5 GB RAM) | $238.08 ($0.32/hr) |
| SQL Database | Standard (Performance Level S2, 5 DBs) | $375.00 |
| Redis Cache | Basic (6 GB) | $133.92 ($0.18/hr) |
| Block Blobs | 100 GB (Geo Redundant) | $4.80 |
| Tables & Queues | 100 GB (Geo Redundant) | $9.50 |
| Storage Transactions | 6 million | $0.22 |
| Storage Backup | 200 GB | $9.60 |
| Bandwidth | United States + Europe egress (10 GB) | $0.44 |
| Total | | $1,244.75 |


Deployment/Continuous integration & Maintenance


The biggest benefit of adopting the cloud is the simplification of deployment and continuous integration of solutions to production, releasing new features, enhancements and bug fixes in minutes or hours instead of days or weeks. Conventional deployment mechanisms gave developers very few options, such as FTPing builds to the production servers. Configuring the development, staging and testing/QA environments was a pain and usually took months to
configure and provision. On top of these challenges, synchronizing production to development, or development to staging, was a huge challenge.

By adopting the Azure cloud, developers are empowered with many options and simplified mechanisms to develop and deploy solutions at the click of a button. Getting a development environment with all the software preloaded, replicating production data to the staging environment, and duplicating the development environment for testing are now simple tasks.

Visual Studio IDE is a one-stop solution for developers to develop, debug (locally as well as remotely), stress test and deploy the application to the Azure cloud. Visual Studio natively integrates with Team Foundation Server (TFS). VS + TFS Online lets you collaborate, version control, continuously build, integrate and test your solution in the cloud.

Developers can also choose open-source alternatives like GitHub to build a collaborative development and deployment environment, and use NuGet to consume and distribute packages within your team and beyond.

Overall Migration Approach


Migrating your on-premise applications to the Azure cloud may be a challenge or a breeze; it completely depends on the application, its size, and how it is architected and built. But irrespective of the application, you have to go through the various stages of application migration, from application discovery and dependency identification through migration and testing of the production application. We recommend spending at least 30-40% of the allocated time on the analysis phase, doing a detailed analysis and developing a migration plan.


Conclusion


In this whitepaper, we have captured the various real-world scenarios and use cases usually found in on-premise application migrations and discussed strategies and recommendations for suitable migration approaches. However, not all applications are equal, so we recommend strategizing the best approach based on your business application and its criticality, and taking a step-by-step approach to migrating your applications to the cloud.
If you need assistance with your migrations, analysis, or lift and shift, feel free to reach out to us! We are glad to help you with any kind of migration scenario.


Migrating On-premise .Net Web Applications to Microsoft Azure - Part 4

CDN 

Content management systems, document management systems and other solutions built around user-generated content deal with huge amounts of data. Apart from this, a website's integral files, such as JavaScript files, CSS, images and other media files, which are downloaded on every page request, consume a huge amount of your origin server's bandwidth.

On-premise applications usually neglect page load performance and bandwidth issues, but if you offload these static resources from your application server to the purpose-built Azure Content Delivery Network (CDN), you reduce the origin server's bandwidth usage and significantly improve page loading performance.

Another reason to adopt Azure CDN is that keeping these static files close to your customers reduces network hops, and thereby you gain significant page performance. As of this writing, Microsoft Azure has around 31 POP locations spread across the United States, Europe, Asia, Australia and South America.

Adopting CDN is a 2-step process:

  1. Create a storage account, if one is not already created.
  2. Create a CDN repository, map the storage account as the origin domain, and finally create an endpoint (http://8KMiles.vo.msecnd.net/static_images/thumbnails) to access the static assets.

Not all applications require CDN. CDN is best suited for heavy-traffic websites with a lot of static assets and targeted users spread across many different regions. Analyze your IIS logs to understand how much bandwidth your static assets consume versus dynamic content; if your static content bandwidth is low, CDN will probably not make sense.

Active Directory

Many internal and enterprise line-of-business applications leverage Windows Active Directory for authenticating and authorizing their users. These applications had multiple complex configurations related to integrating with the on-premise Windows Active Directory setup. ASP.Net 2.0 brought in support for Windows AD integration through the ActiveDirectoryMembershipProvider, using which developers integrated Windows Active Directory with their applications.

Migrating to the cloud doesn't affect or replace the Windows Active Directory integration with your application in any way, except to simplify the cluttered configuration process. You can bring the same capability along while migrating your applications to the Azure cloud using Azure Active Directory, with minimal configuration and code changes.

Integrating Azure Active Directory with your cloud application is a 3-step process:


  1. Setup Active Directory on Azure
  2. Run the Authentication/Authorization configuration wizard from the Configure tab of the Website and associate the Active Directory created in Step 1.
  3. Select or create the Azure Active Directory app for the Website


Once you complete the steps mentioned above, your application will only allow users from the associated Active Directory. However, there are known limitations, such as:


  • Target Framework must be .Net 4.5
  • All users in the configured directory will have access to the application.
  • The entire site is placed behind the login requirement.
  • Headless authentication/authorization for API scenarios or service-to-service scenarios is not currently supported.
  • There is no distributed log-out, so logging the user out will only do so for this application and not for all global sessions.


If your existing application can live with these current restrictions, it's wise to choose the default Authentication/Authorization capability of Azure WebApps.

Queue/Service Bus

Queues are not new; they have been around ever since multi-tiered and multi-layered applications were first built. Queues help our applications distribute messages between components, servers and layers. A popular use case is uploading images on the web server and processing them in a distinct component. Prior to managed queues or Service Bus, developers used to build custom queue mechanisms using SQL Server and manage operations like insertion, retrieval and deletion themselves.

Applications that use such a custom queue mechanism can be moved to the Azure Queue storage service during migration. It offers SDKs for a wide variety of programming languages, including Java, PHP and Python, along with .Net.


Azure offers a scalable and reliable messaging queue; however, adopting Azure Queues requires modifying the existing application to use the Azure Queue SDK.
Carefully analyze:

  • Current Queue infrastructure
  • Complexities involved in adopting Azure Queues
  • Benefits of Azure Queue

Weigh the benefits and challenges involved and choose the suitable solution.
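
As an illustration, the sketch below shows the basic produce/consume cycle with the Azure Storage queue SDK (Microsoft.WindowsAzure.Storage); the account credentials, queue name and message payload are placeholders:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Connect to the storage account (connection string is a placeholder).
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=...");
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("image-processing");
queue.CreateIfNotExists();

// Web tier: enqueue a pointer to the uploaded image.
queue.AddMessage(new CloudQueueMessage("uploads/photo-123.jpg"));

// Worker tier: dequeue, process, and delete only after success.
CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5)); // hidden from other workers for 5 minutes
if (message != null)
{
    // ... process the image referenced by message.AsString ...
    queue.DeleteMessage(message);
}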

Cloud – On premise Connectivity

While migrating applications to the cloud, the preliminary step is to identify dependent systems such as databases, third-party tools, data feeds, etc. There might be concerns around data security, compliance and legal restrictions; if so, you might be forced to keep the database on premise and migrate only the application to Azure. Microsoft Azure provides hybrid connectivity tools which help developers move just the application and still consume data from the on-premise database or any custom source.


Hybrid Connection Manager is a feature of BizTalk Services provided by Microsoft to bridge connectivity between on-premise services and Azure cloud services. It can be installed on a dedicated or shared server inside your corporate firewall and lets Azure connect to your designated databases, including SQL Server, MySQL and Oracle databases.

Hybrid Connection Manager uses Shared Access Signatures (SAS) to secure the connectivity between your Azure account and the on-premise database. It creates separate security keys for the application and the on-premise database; developers can individually revoke and roll over these keys for security reasons.
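
Once the connection is configured, the application code itself does not change; it keeps using an ordinary connection string that names the on-premise server (all names below are illustrative):

using System.Data.SqlClient;

// The Hybrid Connection relays this to the on-premise server "ONPREM-SQL01";
// the code is identical to what ran inside the corporate network.
var connectionString = "Data Source=ONPREM-SQL01;Initial Catalog=OrdersDb;" +
                       "User ID=appuser;Password=...;";
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // ... execute commands as usual ...
}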

Ports to be opened on your on-premise end


| Port | Description |
| --- | --- |
| 80 | HTTP port; used for certificate validation. |
| 443 | HTTPS port. |
| 5671 | Used to connect to Azure. If TCP port 5671 is unavailable, TCP port 443 is used. |
| 9352 | Used to push and pull data. If TCP port 9352 is unavailable, TCP port 443 is used. |


Search Considerations

Many existing mission-critical LOB applications use SQL Server Full-Text Search indexing engines for better search performance. Typically, a lot of investment went into developing this indexing infrastructure on premise, along with custom code to access it from the application layer. Developers who don't want to modify the search section of the application and want to bring the same search indexing capability to Azure can do so by running SQL Server 2012 or 2008 editions (e.g. on an Azure VM).

The other alternative is Azure Search, a PaaS offering which offloads the search indexing functionality to the scalable and highly available Azure Search service.
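
For a feel of the service, the hedged sketch below queries an Azure Search index over its REST endpoint; the service name, index name and query key are placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> SearchProductsAsync(string searchText)
{
    using (var client = new HttpClient())
    {
        // Query keys are issued per service in the Azure portal.
        client.DefaultRequestHeaders.Add("api-key", "<your-query-key>");
        string uri = "https://myservice.search.windows.net/indexes/products/docs" +
                     "?search=" + Uri.EscapeDataString(searchText) +
                     "&api-version=2015-02-28";
        return await client.GetStringAsync(uri); // JSON array of matching documents
    }
}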

Batch Jobs (Background Tasks)


Applications usually process a lot of data in bulk, also known as batch jobs. Batch execution includes a variety of jobs: bulk database updates, automated transaction processing, ETL processes, digital image processing (resize, convert, watermark, or otherwise edit image files), file conversion from one format to another, etc. These jobs are executed sequentially, often on a time-based schedule (e.g. 8.00 am to 9.00 am), and depending on the application, the server infrastructure, the size of the data and the complexity of the logic, they may take anywhere from a few minutes to several hours.

If the volume of data is huge and the system's provisioned capacity is low, jobs may take more time, and if other applications depend on the output of this system, the subsequent processes also get delayed. On-premise deployments usually provision limited system resources for such batch systems, and scalability is often forgotten in these scenarios.

With the Azure cloud, developers now have resources at their disposal: instead of running a batch job on 1 server for 10 hours, a developer can boot 10 servers and complete the job within 1 hour for roughly the same cost, letting dependent applications proceed sooner.

Migrating these batch processes to Microsoft Azure has various benefits, including on-demand resources, hyper-scale parallel processing, integration with Azure Storage, etc. Azure Batch is a new service built by Microsoft that follows a distributed job execution architecture; hence an existing custom-built batch processing system cannot be migrated as-is to Azure Batch.


Existing batch processing systems have to be rebuilt using the Microsoft.Azure.Batch SDKs, which can be obtained from NuGet. If the existing application has a complex batch processing system and you don't want to rebuild or customize it, we recommend deploying it as-is in a VM and executing it there.
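
A rough sketch of the programming model, assuming the Microsoft.Azure.Batch SDK; the service is new and its API surface is still evolving, so the exact types may differ (account, pool and task names are placeholders):

using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

var credentials = new BatchSharedKeyCredentials(
    "https://mybatchaccount.region.batch.azure.com", "mybatchaccount", "<account-key>");

using (BatchClient batchClient = BatchClient.Open(credentials))
{
    // A job targets a pool of compute nodes provisioned for parallel work.
    CloudJob job = batchClient.JobOperations.CreateJob();
    job.Id = "image-conversion";
    job.PoolInformation = new PoolInformation { PoolId = "conversion-pool" };
    job.Commit();

    // Each task is a command line executed on one node in the pool.
    var task = new CloudTask("convert-001", "ImageConverter.exe input-001.tif");
    batchClient.JobOperations.AddTask(job.Id, task);
}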

Migrating On-premise .Net Web Applications to Microsoft Azure - Part 3

Application Consideration

.Net versions


Existing applications are built on various older versions of .Net, e.g. .Net 1.1, 2.0 or 3.5. It's important to identify the current version of the .Net runtime of your application, because while deploying your application on the cloud you must choose the target .Net framework. Currently, the default .Net versions supported by Azure WebApps are .Net v3.5 and v4.5.

During the migration, it's recommended to recompile your application to target either v3.5 or v4.5 for performance and security reasons. However, during the recompilation process you may find that certain libraries are obsolete and have been replaced with new libraries in the target framework, e.g. 4.5.

It's highly recommended to refactor the code and replace obsolete binaries with the recommended alternatives, but in certain cases your application might use third-party tools and components which expect the specific older version they were built against. In such cases, you may go back to the respective vendor and get the latest version of the components for your application's target framework; if that is not possible, you may continue to support the old frameworks by including the configuration below in your web.config file.


<configuration>
   <startup>
      <supportedRuntime version="v1.1.4322"/>
      <supportedRuntime version="v1.0.3705"/>
   </startup>
</configuration>

Migrating your application to the latest version of .Net is best accomplished through the Visual Studio IDE. To convert a .Net v2.0 application to .Net v4.5, open your solution in Visual Studio; the Visual Studio Conversion Wizard will pop up and ask you to back up the application; click Yes to back up, or No to proceed straight to the conversion. On the next screen, select the target framework for your application. It's recommended to install the target version of .Net before starting the conversion; if it is missing, the wizard will assist you in downloading and installing the respective .Net framework. After a successful conversion, you may proceed with deploying the application to Azure right from the Visual Studio IDE.

Cache

Caching helps us store frequently accessed data in, and retrieve it from, the cache memory of the server. Existing ASP.Net applications depend heavily on InProc cache memory for storing and retrieving output buffers, ViewState, session state and named caches. InProc is the fastest caching technique as it stores the cached data within the application server, but its major downfall is that it loses the cached objects whenever the server or IIS restarts.
Similarly, if your application is hosted on more than one server, InProc is not an ideal solution, as it depends on the cache memory of the host server.

Azure offers 3 different Caching solutions 

1. Azure Redis Cache (Preferred)
2. Azure AppFabric Managed Cache
3. Azure AppFabric InRole Cache

Of the three, Azure Redis Cache is the newest offering, based on the popular open-source Redis cache and fully managed and serviced by Microsoft. Azure Redis Cache has a lot of advantages over the other two options because it supports running atomic operations on rich data types, like appending to a string, incrementing the value in a hash, pushing to a list, computing set intersection, union and difference, or getting the member with the highest ranking in a sorted set. Other features include support for transactions, pub/sub, keys with a limited time-to-live, and configuration settings that make Redis behave more like a traditional cache.

Azure AppFabric Managed Cache offers providers for storing ViewState, session state, output buffers, etc. Azure AppFabric InRole Cache is a self-hosted alternative where developers must take care of patching and updates themselves.

Store SessionState in Redis Cache

Web applications that depend on session state can easily move from InProc cache to Azure Redis Cache in 2 steps:

  1. Create an Azure Redis Cache node
  2. Update your web.config to refer to Azure Redis Cache instead of the InProc cache


Comment out the following configuration in the web.config file:



<!-- <sessionState mode="InProc" customProvider="DefaultSessionProvider">
  <providers>
    <add name="DefaultSessionProvider" type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" connectionStringName="DefaultConnection" />
  </providers>
</sessionState> -->


Include the following configuration in the web.config file:


<sessionState mode="Custom" customProvider="8KMilesSessionStateStore">
  <providers>
    <!-- Syntax
      <add name="MySessionStateStore"  host = "127.0.0.1" [String] port = "" [number]  accessKey = "" [String]
        ssl = "false" [true|false] throwOnError = "true" [true|false] retryTimeoutInMilliseconds = "0" [number]
        databaseId = "0" [number] applicationName = "" [String]  connectionTimeoutInMilliseconds = "5000" [number]
        operationTimeoutInMilliseconds = "5000" [number]  />
    -->
    <add name="8KMilesSessionStateStore " type="Microsoft.Web.Redis.RedisSessionStateProvider" host="8KMilesSessionStateStore.redis.cache.windows.net" 
    accessKey="..." ssl="true" />
  </providers>
</sessionState>


To programmatically read and write objects in Redis Cache, download the StackExchange.Redis provider from NuGet via the Visual Studio IDE and refer to the example below:

using StackExchange.Redis;

ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("8KMilesSessionStateStore.redis.cache.windows.net,ssl=true,password=...");
IDatabase cache = connection.GetDatabase();

// Perform cache operations using the cache object...
// Simple put of integral data types into the cache
cache.StringSet("key1", "value");
cache.StringSet("key2", 25);

// Simple get of data types from the cache
string key1 = cache.StringGet("key1");
int key2 = (int)cache.StringGet("key2");

Logs


Error handling and the management of events and logs is a cumbersome process, but it's required to ensure the health of the application stays good, as well as to analyze errors and resolve them in a timely fashion. Most existing ASP.Net applications use one of the out-of-the-box logging libraries and SDKs, or have built a custom logging mechanism and monitoring infrastructure to capture the logs. These logs are usually retained for a couple of weeks or a month to reduce storage costs. Usually developers store the logs in text files in a centralized location, with the overhead of parsing and converting them to feed a log monitoring application; the other option is to store the log data in a SQL Server database.

When you migrate to Microsoft Azure, you get native support for log tracing and diagnostics. Azure provides 3 different ways to collect and store log information.

They are


  1. File System Storage
  2. Table Storage
  3. Blob Storage


File System storage stores the logs in text format on the host server, from which you can configure the data to be moved to Table storage or Azure SQL at regular intervals for advanced monitoring and analysis. The Table Storage option stores the data directly in Azure Table storage without having to store it on the host server. Thirdly, Blob Storage is very similar to File System storage, but saves the log files in CSV format.
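
For example, an application deployed to an Azure WebApp can emit diagnostics through the standard System.Diagnostics tracing API; the storage option configured above determines where the entries land (the values below are illustrative):

using System.Diagnostics;

int orderId = 1042;     // illustrative values
string sku = "SKU-778";

// Routed to file, table or blob storage depending on the
// Application Logging configuration of the WebApp.
Trace.TraceInformation("Order {0} submitted", orderId);
Trace.TraceWarning("Inventory low for {0}", sku);
Trace.TraceError("Payment gateway timeout for order {0}", orderId);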

Another option is Azure Operational Insights, exclusively for applications hosted on VMs/physical machines on premise or in private or hybrid clouds. It's a cloud-based log/security/capacity-planning service fully managed by Microsoft, with which you can completely automate the storage and analysis of your logs.

Note: While migrating applications to Azure, it's recommended not to move old log files into Azure, to avoid confusion between the Azure server logs and the application's on-premise logs.

Static File Consideration (Image/Scripts/CSS/Video/Audio etc)

Static assets like images, JavaScript and CSS are the bulky, bandwidth-intensive part of your web applications. Hosting these static files within your application environment has a number of problems, including:

  1. Increasing the overhead of the application server to serve static contents as well
  2. Increasing the bandwidth cost of the application hosting server
  3. Increasing the size of the application packages during frequent deployment

Offloading these static assets from your application host to a dedicated storage account is the recommended approach. While migrating your application to Microsoft Azure, you can move these static files to Azure Storage and serve the static content directly from the dedicated storage account instead of your application server. This separation of concerns overcomes all the disadvantages mentioned above, along with benefits like boosting performance, reducing your existing hosting costs, and the ability to update static files by replacing them in the CDN without having to deploy the entire website.
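
A minimal sketch of the offload using the Microsoft.WindowsAzure.Storage client library; the account credentials, container and file names are placeholders:

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=...");
CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("static");
container.CreateIfNotExists();

// Static assets must be publicly readable for pages to reference them directly.
container.SetPermissions(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob
});

// Upload a stylesheet with a long cache lifetime for browsers (and the CDN).
CloudBlockBlob blob = container.GetBlockBlobReference("css/site.css");
blob.Properties.ContentType = "text/css";
blob.Properties.CacheControl = "public, max-age=604800"; // one week
using (FileStream fs = File.OpenRead(@"wwwroot\css\site.css"))
{
    blob.UploadFromStream(fs);
}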

However, when you adopt Blob storage you have to update all the physical paths in your pages and stylesheets to fully qualified URLs pointing at your storage account (or CDN endpoint), for example replacing /static_images/logo.png with a URL of the form https://youraccount.blob.core.windows.net/static_images/logo.png.

Take a look at the community-developed Azure Storage Explorer (Version 6); it can help you connect to your storage account and manage your static files.

Migrating On-premise .Net Web Applications to Microsoft Azure - Part 2

SQL Datetime Considerations

SQL datetime manipulation is one of the most-questioned topics when dealing with applications that handle time-sensitive data. Applications hosted on a single server, or on multiple servers located within the same datacenter, do not have datetime processing problems, since everything sits in a single time zone, but this is not the case in the cloud. As Microsoft offers datacenters in many different regions across many different time zones, it's important to handle time-sensitive information carefully.

Conventionally, developers used the DateTime object of .Net 2.0 and SQL Server to store datetime data, with the additional overhead of serializing, parsing and converting datetime values.

The DateTime object provides the date and time according to the calendar of the particular server it runs on, and doesn't give any additional information like the time zone, hence it is ambiguous. Some applications store datetimes in UTC format and handle the conversion at the application layer, which is a good practice to follow.

The alternative is DateTimeOffset, which represents a point in time, typically expressed as a date and time of day relative to Coordinated Universal Time (UTC), and which uniquely and unambiguously identifies a single point in time. It's advisable to update all DateTime objects in the application and database to DateTimeOffset for better handling of datetime values in Azure SQL, for example:

ALTER TABLE [8kmiles].[Employees]
ALTER COLUMN JoinedDate DATETIMEOFFSET;
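
On the application side the change is equally mechanical; a small illustration:

using System;

// DateTime carries no time zone information, so "now" depends on the server's region.
DateTime ambiguous = DateTime.Now;

// DateTimeOffset pins an exact instant together with its offset from UTC.
DateTimeOffset joinedIst = new DateTimeOffset(2015, 5, 1, 15, 30, 0, TimeSpan.FromHours(5.5)); // IST
DateTimeOffset joinedUtc = joinedIst.ToUniversalTime(); // 2015-05-01 10:00:00 +00:00

// Equality compares the underlying instant, so comparisons stay unambiguous across regions.
bool sameInstant = joinedIst == joinedUtc; // true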

SQL database tables usually get a primary key, a.k.a. clustered index, on each table when the database is designed, but sometimes these are left out due to lack of awareness or of understanding of database design principles. If Azure SQL is the target database, note that it does not accept tables without a clustered index, so it's important to make sure all tables have one before migrating to Azure SQL.

Along with the clustered index, it is also good practice to create non-clustered indexes on the columns that are usually queried. Indexes help you improve the performance of OLTP queries.

While migrating the database, we also have to consider index fragmentation, i.e. rebuilding and reorganizing the indexes in each and every table. The best practice is to measure fragmentation as a percentage (for example via sys.dm_db_index_physical_stats), reorganizing lightly fragmented indexes and rebuilding heavily fragmented ones.

Syntax for reorganizing and rebuilding indexes:

--Index Rebuild
USE <database name>
GO
ALTER INDEX ALL ON [table name] REBUILD
GO
--Index Reorganize
USE <database name>
GO
ALTER INDEX ALL ON [table name] REORGANIZE
GO

Pricing & SLA Considerations

Azure SQL is a no-frills database (SQL Server) as a service on a pay-as-you-go model, priced very competitively compared to the regular SQL Server licensing model. Azure SQL is offered in three service tiers, namely the Basic, Standard and Premium tiers, which are in general availability, plus SQL Database service version V12, which is in preview. Irrespective of the tier, all editions provide a 99.99% uptime SLA and security; the primary differences are database size, replication capability and Database Throughput Units (DTUs).

The Standard tier is recommended for most applications with moderate to large I/O, because it provides 250 GB of maximum storage, up to 100 DTUs and standard geo-replication capability. However, if your database is huge and requires more DTUs, it's recommended to choose the Premium tier or the new edition, SQL Database service version V12 (in preview), which offers up to 5x better performance than the Premium edition.

A few benefits of SQL Database service version V12:


  • T-SQL support with common language runtime 
  • XML Indexing support
  • Support for In-memory column store for better performance

Azure SQL Security

Azure SQL is inherently secure in all respects. By default only TCP port 1433 is open, and the internal firewall must be instructed to accept connections from your local desktop or from specific servers. All SQL connections go over an encrypted channel; if your application tries to connect without encryption, intruders can mount a man-in-the-middle attack. To make sure you use only encrypted connections, set "Encrypt=True" and "TrustServerCertificate=False" in the connection string.

As a security best practice, it is generally recommended to use the latest Azure SDKs and libraries to avoid known vulnerabilities. It is also advisable to protect your application from cross-site scripting issues, and the use of parameterized queries is advised to avoid SQL injection.
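
A brief illustration of both recommendations; the server, database and credential values are placeholders:

using System.Data.SqlClient;

// Encrypt=True with TrustServerCertificate=False forces a validated, encrypted channel.
var connectionString =
    "Server=tcp:yourserver.database.windows.net,1433;Database=OrdersDb;" +
    "User ID=appuser@yourserver;Password=...;" +
    "Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;";

string userSuppliedCity = "Chennai"; // imagine this came from a form field

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT Name, Email FROM dbo.Customers WHERE City = @city", connection))
{
    // Parameterized queries keep user input out of the SQL text, preventing injection.
    command.Parameters.AddWithValue("@city", userSuppliedCity);
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... consume rows ...
        }
    }
}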

User Defined Datatypes


If the application you are migrating contains user-defined common language runtime (CLR) data types or objects, you must update your application to use the Azure SQL supported data types. Please refer to the list of data types supported by Azure SQL.

Many applications suffer due to this. Developers have created user-defined data types for cases like phone numbers in a specific format, alphanumeric employee IDs, IP addresses, etc., for better consistency in the application, but Azure SQL doesn't support these user-defined data types at this moment. If you don't want to change the application code, the alternative solution is SQL Server installed on a VM.

Other Database options

Generally, existing applications used SQL Server irrespective of the type of data, be it relational data, non-relational data, map data, object data, graphs, etc., due to the unavailability of the database technologies that exist now. Hence all the different varieties of data were dumped into SQL Server, or any other relational database for that matter.

But this is not the case today, with services like:

  • DocumentDB, a document-based non-relational store, ideal for building the next generation of scalable web applications
  • Azure Table storage, a columnar store for key-value pairs
  • Azure Queue storage, a large-scale message storage service

While migrating the database to the cloud, it is also best to analyze the kind of data residing in the SQL store and find the right storage solution for better performance and scalability. NoSQL-type databases like Azure DocumentDB and Table storage are best suited for traffic-heavy, user-generated-content web applications which require massive scale and performance; on the other hand, if your application heavily depends on the ACID principles (Atomicity, Consistency, Isolation and Durability) and is fine with the current performance, it's wise to stick with a SQL store like Azure SQL or SQL Server.

Database Backup and Restore

Back on premise, you might have configured different kinds of complex manual and automated database backup and recovery mechanisms. Moving to Azure SQL will certainly help you achieve higher resiliency and advanced database replication and redundancy. By default, Azure SQL creates 1 primary DB and 2 secondary replicas of your database on separate physical nodes away from the primary server. In the event of a primary DB outage, Azure SQL will automatically promote a secondary DB to primary without you noticing the outage and without any data loss.

Along with the default backup and recovery options, Azure provides additional database recovery solutions for risk-averse, data-oriented customers. They are:

1. Standard Geo Replication
2. Active Geo Replication

These solutions help you build highly available and resilient data-driven web solutions. Unlike the Basic tier, both the Standard and Premium tiers offer these options. To understand the difference between the 2 options, it is important to understand the terminology below.

Estimated Recovery Time (ERT) - the time taken to restore your database to an active state when a disaster happens at your primary datacenter.
Recovery Point Objective (RPO) - the amount of the most recent data changes (time interval) the application could lose after recovery.

Note: Standard Geo Replication and Active Geo Replication offer ERT < 30 seconds & RPO < 5 seconds.

Standard Geo Replication allows us to create a non-readable secondary replica in another region, with automatic/manual failover in case the primary goes down. Active Geo Replication, on the other hand, allows us to create up to 4 readable replicas, which can serve 2 major purposes: 1. database load balancing (read scale-out) and 2. database failover.

To summarize, Standard Geo Replication is targeted at medium to large applications with moderate update rates and cost-conscious customers; Active Geo Replication is for intensive applications with heavy write loads, at a somewhat higher price. Based on your application, you can evaluate these pointers and choose the right disaster recovery solution for Azure SQL.

SQL Server to Azure Migration Wizard

Finally, the migration of schema and data is the most important part of the Azure migration process. Database migration is a 3-step process:

1. Create the schema objects in Azure SQL
2. Load or move the data from the local database to Azure SQL
3. Make sure the schema and data in Azure SQL are 100% in sync with the local database

There are a handful of tools to help developers with the data migration process, but it is up to the developer to make sure the schema and the data conform to the Azure SQL guidelines and limitations. Make sure there are no custom user-defined data types, that clustered indexes are created, etc. before moving the schema objects to Azure SQL.

With respect to data migration and sync tools, there are several ways to do the data movement:

1. Using SQL Server Management Studio to create scripts for the schema and insert scripts for the data, and executing them against Azure SQL.
2. SQL Database Migration Wizard (SQLAzureMW), a community-developed open-source tool designed to help you migrate your SQL Server 2005/2008/2012/2014 databases to Azure SQL Database. As a word of precaution, this tool doesn't validate UDTs or the existence of clustered indexes, hence it is highly advisable to make sure those attributes are defined and validated before using the tool.

Note: the SQLAzureMW community has also released a cookbook for migrating databases to Azure SQL Database update V12.
3. SQL Server 2008 Integration Services (SSIS)
4. The bulk copy utility (BCP.exe)
5. The System.Data.SqlClient.SqlBulkCopy class (see the sketch below)
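
Option 5 is often the quickest to wire up from .Net code; a minimal sketch (connection strings and table names are placeholders):

using System.Data.SqlClient;

// Stream rows from the on-premise database straight into Azure SQL.
using (var source = new SqlConnection(
    "Data Source=ONPREM-SQL01;Initial Catalog=OrdersDb;Integrated Security=True"))
using (var target = new SqlConnection(
    "Server=tcp:yourserver.database.windows.net,1433;Database=OrdersDb;" +
    "User ID=appuser@yourserver;Password=...;Encrypt=True;TrustServerCertificate=False"))
{
    source.Open();
    target.Open();

    var select = new SqlCommand("SELECT * FROM dbo.Orders", source);
    using (SqlDataReader reader = select.ExecuteReader())
    using (var bulkCopy = new SqlBulkCopy(target))
    {
        bulkCopy.DestinationTableName = "dbo.Orders"; // schema must already exist in Azure SQL
        bulkCopy.BatchSize = 10000;                   // commit in batches to keep transactions small
        bulkCopy.WriteToServer(reader);
    }
}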

Once the SQL DB migration is complete, make sure the schema, objects and data are identical to the on-premise database setup.

To summarize, we highly recommend that you review the limitations below and see if they affect your current SQL Server implementation. Often these limitations are relaxed or removed with the introduction of new versions of Azure SQL.

| Attribute | Azure SQL | SQL Server on Azure VM |
| --- | --- | --- |
| Size | Max 500 GB | 50+ TB |
| Ports | Only 1433 | User-based customization is also available |
| Backup database | Script the database or tables using standard/customized SQL scripts | By default, backup and restore via SSMS |
| Restore database | Create a database, then execute the series of scripts | By default, backup and restore via SSMS |
| Distributed transactions | Not supported | On demand, via MSDTC |
| Scheduling and automation | Not available | By default, the SQL Agent service can execute a series of T-SQL commands |
| Logins and securables | Basic server-level security; Windows authentication not allowed | By default, server-level and database-level security |
| Database HA | Highly available | Log shipping / mirroring |
| Table-level HA | Highly available | Replication (transactional/merge/peer-to-peer) |
| Instance-level HA | Highly available | Clustering (active/active, active/passive) |
| Disaster recovery solution | Active Geo Replication with up to 4 online secondaries (Premium tier only) | Multiple availability zones or native database technology |
| Max number of databases | 150, including master | 32,767, including system databases |
| Collation support | Database-level collation only (SQL_LATIN1_GENERAL_CP1_CI_AS) | Available at instance/database/table/column level |

Migrating On-premise .Net Web Applications to Microsoft Azure - Part 1

Microsoft first released Classic ASP, a server-side scripting language, in 1996 with IIS 3.0 as an add-on, which opened many gates for developers to build truly dynamic websites for businesses. Later came ASP.Net 1.0, an advanced platform built on top of the .Net common language runtime with full support for SOAP messages, XML, object-oriented programming, Web Forms, etc. From 2002 till 2009, Microsoft released enhancements and new versions of the ASP.Net runtime; finally, in 2009, Microsoft officially released ASP.Net MVC, a framework for building applications in the Model-View-Controller architecture, which promoted a distributed application development methodology. ASP.Net MVC helps developers build scalable applications because it gives full control of the rendered HTML, supports TDD (Test Driven Development) and REST, and works without state management components like the ViewState and SessionState mechanisms.

Although there are a lot of advantages to the new ASP.Net MVC platform, there are plenty of internal portals, public-facing applications and commercial SaaS applications built using older versions of ASP.Net, such as ASP.Net 1.1, 2.0, 3.5 and 4.0 Web Forms, which rely heavily on the ViewState and SessionState components. This article doesn't claim that applications built on older platforms are not scalable or are slow, but the attributes mentioned above are not cloud friendly; they are stateful and apply additional load on the web server.

This article concentrates on the important considerations that developers have to bear in mind while migrating on-premise-hosted ASP.Net applications to Microsoft Azure.

 Background 

Enterprises and businesses of various sizes approach us with a common set of similar business problems, i.e.:

  1. We have a public-facing web application built on ASP.Net 1.1, hosted on premise, that is having scalability and performance issues. Can you please help us?
  2. We have an internal Learning Management System built on ASP.Net 2.0 which we would like to host on Windows Azure Web Hosting Plans.
  3. We have a hybrid app (intranet + public) built using Classic ASP that has security issues, and we would like to re-engineer it on the latest ASP.Net platform.
  4. We have apps built on Classic ASP which have issues with interoperability and cross-platform integrations; how do we address them?

These are a very few scenarios, but there are plenty of other cases that we come across on a daily basis. Usually, while consulting on migration or upgrade engagements, the first phase is to sit with the business stakeholders and understand the criticality of the apps, and then dive deep into application study, dependency identification, etc. At 8KMiles, with our deep expertise in handling such projects, we have consolidated 18 key considerations that everyone should keep in mind while working on application migrations to Microsoft Azure.

 Environment Considerations (PaaS/IaaS) 

After deciding to migrate self-hosted on-premise applications to Microsoft Azure, the next critical decision you must make is the type of cloud service to host your applications, i.e. PaaS or IaaS. In a nutshell, PaaS provides the complete hardware and software stack, including the runtime, and lets the developer focus strictly on the application rather than the underlying infrastructure. IaaS, on the other hand, provides just the virtualized hardware, and hence the responsibility of installing and patching the OS, along with managing the runtime components, falls on the shoulders of the developer, which is a painful process unless you have an internal/external IT team who can take care of this automatically or manually.

From our experience consulting on many enterprise LOB applications, we have found PaaS suitable for most requirements, with benefits like automatic OS update management, integrated deployment management, scalability and elasticity, cost, etc. However, some customers have chosen Azure VMs, which is truly the customer's individual preference.

It is very important to understand the basic differences between these options. The Microsoft Azure team has documented the common differences between these offerings to help you choose the best one for your app; refer to their comparison chart to decide between PaaS and IaaS.

The general recommendation is to carefully analyze the feature comparison between the three different offerings, WebApps, Cloud Services (Web & Worker Roles) and VMs, and wisely choose the one which best suits your requirements.


Database Considerations

A database is a key part of any data-driven business solution. Usually, SQL Server has been the default choice for legacy Classic ASP or ASP.Net applications, though some might have worked with an Oracle or MySQL database. Whatever the database backend may be, Microsoft Azure has native database solutions for your migration to the Azure cloud.

As far as SQL Server is concerned, developers have 2 major options to choose from:

1. Azure SQL (SQL server as a service)
2. Self-Hosted SQL Server on VM (IaaS)

Azure SQL is a fully functional SQL Server database as a service in the cloud, fully managed and serviced by Microsoft. Azure SQL offers loads of benefits and is a natural fit for applications that were using SQL Server on premise. The primary benefits of Azure SQL include elasticity on demand, no installation, automated patching, zero maintenance, high availability, a pay-as-you-go pricing model, unlimited storage (with elastic sharding), etc.

Along with the benefits of Azure SQL, significant differences exist due to the inherently distributed nature of this service. They are discussed in detail in the upcoming sections.

Self-hosted SQL Server on a VM is the other option, where the customer hosts SQL Server on their own, taking on the operational overhead that Azure SQL otherwise handles as described above. With respect to pricing, customers can opt for a bundled license model, BYOL (Bring Your Own License), etc. Self-hosted SQL on a VM is ideal for situations like lift-and-shift deployments and a quick migration path to Azure.

Below are some of the key considerations you have to bear in mind during the SQL Server migration to Azure.

Distributed Transactions

Distributed transactions are not yet fully supported in Azure SQL. If your application heavily depends on them, Azure SQL is not feasible unless you are ready to rewrite the data access layer of your application and build a custom transactions module to handle data integrity on your own. If you don't want to mess with the application and you want a simple lift-and-shift-and-run deployment, then SQL Server on a VM is the best option.

SQL Server Federation/Sharding

To overcome the limitation of single-hardware failure and to achieve a higher level of query performance, many applications use SQL Server federation, also known as horizontal sharding. Sharding is primarily used in scenarios where there is heavy growth of data in a particular database, or in multi-tenant SaaS scenarios where each customer's data must be stored in a separate database for better data isolation and security.

Many existing applications have implemented custom sharding mechanisms for their unique business scenarios. If the application you are migrating has sharding implemented, you can achieve the same using Azure SQL Elastic Scale. It's technically the same as SQL Server federation or sharding, but it's built exclusively for Azure SQL and doesn't support SQL Server on a VM.

There are 3 different sharding techniques available:

  1. Federation on regular SQL Server
  2. Azure SQL Federation, introduced in the Web & Business tiers (deprecated from Sep 2015)
  3. Azure SQL Elastic Scale (Preview)
If your choice is Azure SQL, your existing sharding infrastructure has to be entirely rewritten using the Azure Elastic Scale API, because it provides a new implementation and SDK built around the elastic scalability of Azure SQL.
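
For orientation, here is a hedged sketch against the Elastic Scale client library (Microsoft.Azure.SqlDatabase.ElasticScale.Client); the map name, key type and connection strings are placeholders, and the preview API may change:

using System.Data.SqlClient;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

// The shard map manager lives in its own catalog database.
string catalogConnectionString =
    "Server=tcp:yourserver.database.windows.net,1433;Database=ShardMapCatalog;" +
    "User ID=appuser@yourserver;Password=...;Encrypt=True";
ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
    catalogConnectionString, ShardMapManagerLoadPolicy.Lazy);

// Look up the map that routes tenant keys to shard databases.
ListShardMap<int> shardMap = smm.GetListShardMap<int>("TenantShardMap");

// Open a connection routed to whichever shard holds tenant 42;
// the server and database names are filled in by the map.
string shardUserCredentials = "User ID=appuser;Password=...;";
using (SqlConnection conn = shardMap.OpenConnectionForKey(42, shardUserCredentials, ConnectionOptions.Validate))
{
    // ... run tenant-scoped queries as usual ...
}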

Similarly, if you use the sharding infrastructure of the Web and Business tiers of Azure SQL, Microsoft recommends that you adopt the Azure SQL Elastic Scale API.

If rewriting or modifying your existing code base is not an option, you obviously have to bank on running SQL Server on a VM.