Performance of .Net reflection on mobile devices (via Xamarin and C#)

.Net reflection can be very handy to automate development tasks involving objects, without typing (or even knowing) the names of the properties.

For instance, if you need to copy properties between different types of objects (think of domain-object-to-view-model copies – and vice versa – in MVC projects), it is hard these days to resist the comforts of frameworks like AutoMapper, EmitMapper, ValueInjecter, BLToolkit and others.

But how does reflection perform on mobile with Xamarin? In some cases, it can’t perform at all, because Apple doesn’t allow dynamic code generation (think System.Reflection.Emit). In other cases, it performs reasonably well, but only if we don’t ask the simulator (and, even more so, the device) to crunch very large numbers of objects.

We created a little project to test how copying C# objects via reflection performs on mobile. The project contains a class (“Narcissus Copier”) that uses reflection to copy properties between objects.

With Narcissus, we can do two things:

  • Copy values between two objects based on common property names and types (we check what these common properties are each time we copy two objects);
  • Copy values between two objects based on equal property names, with the premise that the corresponding property names and types are “registered” beforehand in the utility (we check what these common properties are only once in the app).

This is a link to the overall solution that includes both the “Narcissus” copier and the iOS project:

https://github.com/RickCSharp/NarcissusCopieriOStest

This is the method that copies the properties of a “source” object onto a “destination” object:

// This method copies one object onto another object, based on the common properties they have.
// Syntax:
// NarcissusCopier<TypeOfSourceObject, TypeOfDestinationObject>.CopyAnyObject(SourceObjectInstance, DestinationObjectInstance);
// To improve performance when the same pair of types is copied more than once,
// the RegisterObjectProperties / CopyRegisteredObject method pair is better suited.
public static void CopyAnyObject(TSource source, TDestination destination)
{
    var propertiesOfSource = source.GetType().GetProperties();
    var propertiesOfDestination = destination.GetType().GetProperties();
    var propertiesInCommon =
        from a in propertiesOfSource
        join b in propertiesOfDestination
            on new { a.Name, a.PropertyType } equals new { b.Name, b.PropertyType }
        select a;
    foreach (var propertyInCommon in propertiesInCommon)
    {
        var valueOfPropertyToCopy = propertyInCommon.GetValue(source, null);
        PropertyInfo propertyOfDestinationToReplace = destination.GetType().GetProperty(propertyInCommon.Name);
        propertyOfDestinationToReplace.SetValue(
            destination,
            Convert.ChangeType(valueOfPropertyToCopy, propertyOfDestinationToReplace.PropertyType),
            null);
    }
}
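For reference, here is a minimal sketch of what the “registered” variant mentioned in the comment might look like: the common properties are resolved once per type pair, cached in a static field, and reused on every subsequent copy. The method names RegisterObjectProperties and CopyRegisteredObject come from the comment above (CopyAnyObject would live in the same class); the caching details are an assumption, and the actual implementation in the GitHub repository may differ.

using System;
using System.Linq;
using System.Reflection;

public static class NarcissusCopier<TSource, TDestination>
{
    // Pairs of (source property, destination property) that share name and type,
    // computed once per type pair and reused by CopyRegisteredObject.
    private static Tuple<PropertyInfo, PropertyInfo>[] registeredProperties;

    public static void RegisterObjectProperties()
    {
        registeredProperties =
            (from s in typeof(TSource).GetProperties()
             join d in typeof(TDestination).GetProperties()
                 on new { s.Name, s.PropertyType } equals new { d.Name, d.PropertyType }
             select Tuple.Create(s, d)).ToArray();
    }

    public static void CopyRegisteredObject(TSource source, TDestination destination)
    {
        // No GetProperties() call and no join here: the reflection work was done once in RegisterObjectProperties.
        foreach (var pair in registeredProperties)
        {
            // The types already match, so no Convert.ChangeType is needed.
            pair.Item2.SetValue(destination, pair.Item1.GetValue(source, null), null);
        }
    }
}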

The idea for testing how the reflection copy performs is the following:

  • We create 1,000, 10K, 100K, 200K and 500K instances of two pairs of mildly compatible complex classes (i.e., each pair has some properties in common, while others are not shared);
  • We copy the values of the properties of the instances of the first type of object onto the properties of the second type of object;
  • First we do it without reflection (“direct” copy);
  • Then we do it with reflection, but without “registering” the object pairs (that is to say, the common properties are evaluated every time we ask the method to perform a copy);
  • Last, we do the copy with reflection but more “prepared”, that is to say, by first “registering” the object pairs (every time we ask the method to perform a copy, the common properties are already known). A sketch of a timing harness for these three passes follows this list.
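Here is a rough sketch of the kind of timing harness used for the three passes, based on System.Diagnostics.Stopwatch. The class names and property lists below are placeholders (the real test classes live in the GitHub project), so treat this as an illustration of the measuring approach rather than the actual benchmark code.

using System;
using System.Diagnostics;

class SourcePerson { public int Id { get; set; } public string Name { get; set; } public DateTime Born { get; set; } }
class DestinationPerson { public int Id { get; set; } public string Name { get; set; } public decimal Score { get; set; } }

class CopyBenchmark
{
    static void Run(int count)
    {
        var sources = new SourcePerson[count];
        var destinations = new DestinationPerson[count];
        for (int i = 0; i < count; i++)
        {
            sources[i] = new SourcePerson { Id = i, Name = "item " + i, Born = DateTime.Now };
            destinations[i] = new DestinationPerson();
        }

        // 1) "direct" copy, no reflection
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
        {
            destinations[i].Id = sources[i].Id;
            destinations[i].Name = sources[i].Name;
        }
        Console.WriteLine("Direct copy: " + sw.ElapsedMilliseconds + " ms");

        // 2) reflection, common properties re-evaluated on every copy
        sw.Restart();
        for (int i = 0; i < count; i++)
            NarcissusCopier<SourcePerson, DestinationPerson>.CopyAnyObject(sources[i], destinations[i]);
        Console.WriteLine("Reflection, unregistered: " + sw.ElapsedMilliseconds + " ms");

        // 3) reflection, common properties registered once up front
        NarcissusCopier<SourcePerson, DestinationPerson>.RegisterObjectProperties();
        sw.Restart();
        for (int i = 0; i < count; i++)
            NarcissusCopier<SourcePerson, DestinationPerson>.CopyRegisteredObject(sources[i], destinations[i]);
        Console.WriteLine("Reflection, registered: " + sw.ElapsedMilliseconds + " ms");
    }
}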

 

We will also take advantage of this test to see how differently the iOS simulator and the actual device perform (or don’t perform, in one case).

Performance in the simulator (real device tests are below)

Here are the results of the direct copy / reflection copy on an iPhone 6 Plus (iOS 9.3) simulator (important to underline, because on the real device it will be a totally different story) running on a MacBook Pro (i7, 8 GB RAM):

Test 1:
1,000 object copies in the simulator. Not a lot of difference between “direct” copy and copy via reflection.


Test 2:
10,000 object copies in the simulator. The difference begins to be important (5x slower in the best reflection case).


Test 3:
100,000 object copies in the simulator. The non-reflection methods continue to perform well. Code is reused very well by the runtime. Reflection code is not.


Test 4:
200,000 object copies in the simulator. Performance degrades in the reflection part of the copy.


Test 5:
500,000 object copies in the simulator. This is an extreme case (it wouldn’t be a good idea to modify 500,000 objects in a mobile app, would it?), but this example does show some pitfalls of reflection.


Performance on a real device (iPhone 6 Plus)

Comparing the simulator to the real device is interesting because there is one particular case (the 500,000 objects) that the device cannot even handle because of insufficient memory. This reminds us once again that simulator results must be taken with a grain of salt.

Test 1 on iPhone:
1,000 object copies on the real device. Not a lot of difference between “direct” copy and copy via reflection, nor between the simulator and the real device.


Test 2 on iPhone:
10,000 object copies on the real device. The performance of the copy without reflection is still good. The performance of the copy via reflection is comparable to that of the simulator.


Test 3 on iPhone:
100,000 object copies on the real device. The performance of the copy without reflection is still good. The reflection copy, however, begins to degrade, both compared to the “simple” copy and compared to the simulator.


Test 4 on iPhone:
200,000 object copies on the real device. The performance of the copy without reflection is still good. The device is three times slower than the simulator in this case.


Test 5 on iPhone:
…last, 500,000 object copies on the real device. There is no performance to show: creating 500,000 complex objects on an iPhone 6 Plus crashes the app.

Syncfusion’s free “Succinctly” ebooks

When we discuss technologies on this blog, we generally try to present different commercial alternatives to reach a certain goal. This time we’ll make an exception and present a set of books offered (for free, or better: in exchange for your willingness to subscribe to their newsletter) by one specific company: Syncfusion, with which we’re not affiliated in any way.

Why do we do this? Because they have given us for free (as they have everyone else) a lot of good eBooks that give you the lowdown on interesting development/system admin topics. These books are professional and well written. Everyone can get them for free at www.syncfusion.com

In particular, I liked the “Git Succinctly” eBook. I have used Git in the past via dubious GUIs (Visual Studio – which has no “staging” concept up to VS 2013; this may change with VS 2015 Update 2 if I’m not mistaken – and Xamarin Studio).

This little book, written very concisely by Ryan Hodson (“Succinctly” books are rarely more than 100 pages), tells you everything you have to know to hit the ground running with Git (even professionally) via the command line, something I hadn’t done in the past.


And… it’s free.

Other cool titles:

  • NancyFX Succinctly (for microservice fans)
  • Cryptography in .Net

What does Syncfusion actually sell? They sell (with an interesting business model that really rewards independent developers) components for web and mobile development. You may want to check them out.

Renewing an SSL certificate for a website hosted in Azure

Managing resources in Azure has become easier (well, at least the interface looks better) since Microsoft launched the new portal (the one at portal.azure.com).

Let us see today how you upload, in the new portal, a renewed IP-based SSL certificate for your Azure web app.

Prerequisites

  1. Needless to say, to upload a renewed certificate in Azure you need to have a renewed certificate. You don’t have to wait for the old certificate to expire before installing the new one, though: you can buy the new certificate in advance (one/two months is a pretty safe choice) and use it immediately. However, watch out that some third parties (for example: the bank that allows your eCommerce payments) may need to install the intermediate certificates of your new certificate in their “certificate store” before you replace the certificate in your web server. Check with them if this is the case.
  2. To renew an SSL certificate, you can talk to the issuer of the existing certificate. There are also DNS providers that issue SSL certificates for you via a Certification Authority they trust, so you don’t have to speak to another party.
  3. The new certificate must be in the .pfx format (password-protected) to get along with IIS (Azure also runs Apache actually, but I think most Azure websites are IIS presently. I may be wrong already and I will definitely be wrong in the future).
    I explained how to create the .pfx certificate in this post. However, if your Certificate Authority or DNS provider is very kind, you won’t have to go through any of that: they will create a .pfx for you, thank you very much. For instance, dnsimple has an interface that creates the pfx for you when you buy a certificate through them (they buy it at Comodo’s). Dnsimple also provides a matching password you will have to use in Azure in conjunction with the certificate:

Download a pfx format certificate and its password from dnsimple, or any provider that is “IIS-friendly”

The actual work

  1. Go to portal.azure.com
  2. Choose the blade (new portal terminology for a dynamic window) corresponding to your web app
  3. In the app’s settings, choose “Custom domains and SSL”
  4. Choose “Upload certificate”. Don’t be scared if you’re doing this ahead of time: before you bind the certificate to your site, nothing will change in the configuration. Plus, as we said, you can use the renewed certificate before the old one expires, unless a third party needs the intermediate certificates.

upload renewed pfx certificate in Azure

5. Once you upload the new certificate, the list of available certificates is incremented by one (see the “Certificates” section in the screenshot below: there is a “2017” certificate below the “2016”).


As you can see in the “certificates” section, I have a new one

6. Now you would be tempted to ADD a new binding between your hostname and the new certificate. You would want to do that in the SSL bindings configuration (see “SSL bindings” in the screenshot above). Azure will allow you to do that; however, once you save and re-enter the blade, you will see that the hostname is still bound to the old certificate.

7. This is why you don’t ADD a new binding between the hostname and the new certificate: you update the existing binding. In the row corresponding to the existing binding, select the new certificate you just uploaded to replace the old one, as you see below:


Choose the new certificate in the SSL binding

8. If your SSL is already IP-based, you won’t have to set the IP binding again: the old configuration is kept.

9. To check that the new certificate chain is working, you can use an online tool like SSL Shopper’s checker.

Just make sure that you are seeing the latest, non-cached situation in the tool!


Check your SSL certificate in Azure via an SSL checker

Encrypting a SQL Azure Database to comply with the EU data protection laws

Back in 2014, Microsoft’s president and chief legal officer Brad Smith wrote a note in the company’s blog (you can read it here: http://blogs.microsoft.com/blog/2014/04/10/privacy-authorities-across-europe-approve-microsofts-cloud-commitments/) stating that Azure was the only cloud provider meeting the renewed data protection regulations of the European Union. This recognition stemmed from policies that were already in place and others that Microsoft committed to implementing in the future.

It has to be noted that, by “data protection”, one does not refer only to possible hackers stealing customer data, but also, as Microsoft puts it, “protecting customer data from government snooping” (read here: http://blogs.microsoft.com/blog/2013/12/04/protecting-customer-data-from-government-snooping/).

The Court of Justice of the European Union, on October 6, 2015, ruled that the “Safe Harbor” decision of the year 2000 (which affirmed that data were by definition protected when exchanged between EU countries and the US) is invalid. The ruling followed a complaint by an Austrian Facebook user who affirmed that the company does not protect his data from the US authorities: http://curia.europa.eu/jcms/upload/docs/application/pdf/2015-10/cp150117en.pdf.

Here is how Amazon interprets data protection for its very widely used Web Services cloud: https://aws.amazon.com/compliance/eu-data-protection/

There are also a lot of additional cloud service providers that have addressed the EU’s regulations; here is a good white paper by Cloudera: https://www.cloudera.com/content/dam/cloudera/Resources/PDF/solution-briefs/eu-data-protection-directive-compliance-solution.pdf

Here is how Rackspace comments on the Safe Harbor ruling: http://blog.rackspace.com/eu-ruling-on-safe-harbor-rackspace-stands-prepared/

If your company is in a European country and you want to use cloud storage, chances are that you may want to ask your cloud provider to keep your data in the EU datacenters. Most providers allow you to choose where your database is located and replicated.

However, this is not enough. One of the requirements of data protection is encryption. Customer data should be encrypted not only when it leaves the EU for backup and geo-replication: you must ensure a certain level of security if you allow your customers to store personal data in a database.

Starting in October 2015, if you have a SQL Azure database, you can take advantage of transparent data encryption. What does that mean? It means your customer data is encrypted with a server-level cryptographic key, but you don’t have to change your SQL queries.

I will try to show now how simple this is. I will replicate the same info you find here: https://msdn.microsoft.com/library/dn948096, but with a real-case scenario.

Before encrypting: Back up the DB

SQL Azure DBs are automatically backed up (frequency is set by you: watch out for Azure bandwidth costs!), but it is good practice to back up your data before any important DDL operation. You can back up your DB to an Azure container.

To do so, from the Azure portal choose “SQL Server”, then select the server that contains the DB you want to back up before encrypting.

Then, choose the “export” feature.


Exporting a SQL Azure DB to an Azure container

 

You have to choose the Azure “blob” where your backup file (.bacpac) will be stored. You also need to provide your server’s username and password (by “server”, I mean a DB server: being a DB “as a service”, it is not really a dedicated machine).

 


Configuring an Azure container to export a SQL Azure DB

 

Encrypt the DB

The cool DDL command to encrypt the DB is:

ALTER DATABASE [MYDatabaseName] SET ENCRYPTION ON;

If you are not cool and don’t like writing command lines (I absolutely don’t), you can achieve the same result via the portal (see the screenshots below):

  1. Select the Server
  2. Select the DB
  3. Choose “all settings”
  4. “Transparent data encryption”
  5. “ON”
  6. “Save”

7. Wait a few seconds (depending on the size of the DB, it could also be minutes)
8. You are done.

Keep on querying

Encryption is totally transparent. Keep on querying your DB!
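As a small illustration, a plain ADO.NET query keeps working exactly as before, and the sys.databases catalog view lets you double-check that encryption is on. The connection string and database name below are placeholders:

using System;
using System.Data.SqlClient;

class CheckEncryption
{
    static void Main()
    {
        // placeholder connection string: point it at your own server and DB
        var connectionString = "Server=tcp:myserver.database.windows.net;Database=MYDatabaseName;User ID=myuser;Password=mypassword;Encrypt=True;";
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // the same queries you ran before encryption keep working unchanged;
            // is_encrypted tells you whether transparent data encryption is active
            var cmd = new SqlCommand(
                "SELECT is_encrypted FROM sys.databases WHERE name = 'MYDatabaseName'", conn);
            bool isEncrypted = (bool)cmd.ExecuteScalar();
            Console.WriteLine("Transparent data encryption on: " + isEncrypted);
        }
    }
}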

Calling WebServices Asynchronously in .Net: Unique state object is required for multiple asynchronous simultaneous operations

When you call a web service asynchronously, you may want to make sure that responses arrive in the same order in which they were requested.

Is this always the case? Not necessarily. At times you just don’t care if the responses to requests A and B come in order B and A. Other times, order is crucial. One would argue that in these cases synchronous methods are better than asynchronous, but – especially when you are using third party web services, or particular technologies (PCLs are an example) – you cannot always choose to “go synchronously”.

.Net helps you enforce the correct request–response order by throwing the error that is the subject of this post. If a client invokes multiple async methods in a short period of time, .Net tries to prevent inconsistencies by complaining thusly:

ERROR: System.ArgumentException: There was an error during asynchronous processing. Unique state object is required for multiple asynchronous simultaneous operations to be outstanding. ---> System.ArgumentException: Item has already been added. Key in dictionary: 'System.Object' Key being added: 'System.Object' at System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add)

When does this happen? Is .Net mean to us?

It happens when you call an async web service method before the previous call has received its “completed” event.

Let us reproduce the issue.
Let us imagine we have a web service that we reference in a Console project with a myWSRef reference.
Let us imagine the service exposes an async method called getProductDataAsync(manufactorCode, productId).

Our client repeatedly calls the async service in a while-loop, like this:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace testwcf
{
    class Program
    {
       static void Main(string[] args)
        {
            string manufactorCode="blahblahCode";
            string productId = "blahblahCode2";

            // we define the client for the WS
            myWSRef.RemoteWS wsClient = new myWSRef.RemoteWS ();

            // we attach a handler to the "Completed" event
            wsClient.getProductDataResponseCompleted += callToProductDataCompleted;
            
            int prodNumb=0;
            while (prodNumb<=100)
            {
                try
                {
                    prodNumb++;
                    string artificialSuffix=new Random().Next().ToString();
                    wsClient.getProductDataAsync(manufactorCode,productId+artificialSuffix);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                }
            }
            Console.ReadLine();
        }

        public static void callToProductDataCompleted(object sender, myWSRef.GetProductDataCompletedEventArgs ea)
        {
            //this is the handler to the webserver reply

            if (ea.Error != null){
                Console.WriteLine("ERROR: " + ea.Error);
                Debug.WriteLine("ERROR: " + ea.Error);
             }
            else {
                Console.WriteLine("Call OK: ");
                Console.WriteLine(ea.Result);
            }
        }
    }
}

What happens if we run this? .Net will throw the runtime error that is the subject of this post.

Let us try to space out the calls by asking the thread to sleep for one second.

try
{
    prodNumb++;
    string artificialSuffix = new Random().Next().ToString();
    wsClient.getProductDataAsync(manufactorCode, productId + artificialSuffix);
    Thread.Sleep(1000);
}

What happens? Temporarily, the error goes away. In fact, one second is enough for the web service response to trigger the callToProductDataCompleted routine, and we’re safe… unless the next response takes three seconds rather than one. And we’re back to square one.

How to solve this issue for good? Many suggest giving every call a unique GUID.

Stack Overflow offers this suggestion: every call passes its own Guid as the asynchronous state object.
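As a sketch of that idea, assuming the generated proxy exposes the usual overload with a trailing userState parameter (event-based async proxies normally do), the loop from the example above would pass a fresh Guid with every request:

while (prodNumb <= 100)
{
    prodNumb++;
    string artificialSuffix = new Random().Next().ToString();
    // each outstanding call now has its own state object, so the internal tracking no longer collides
    wsClient.getProductDataAsync(manufactorCode, productId + artificialSuffix, Guid.NewGuid());
}

In the Completed handler, ea.UserState then carries the Guid of the request being answered, which also lets you match responses to requests.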

What about when the web service is done by someone else and you cannot pass it any uniqueID?

One way to solve this is with a semaphore: while the response to a web service call has not been received, you do not issue another one.

This is the code, with the IsBusy semaphore implemented:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace testwcf
{
    class Program
    {
        static Boolean IsBusy = false;  // this will be our semaphore

        static void Main(string[] args)
        {
            string manufactorCode = "blahblahCode";
            string productId = "blahblahCode2";

            // we define the client for the WS
            myWSRef.RemoteWS wsClient = new myWSRef.RemoteWS();

            // we attach a handler to the "Completed" event
            wsClient.getProductDataResponseCompleted += callToProductDataCompleted;

            int prodNumb = 0;
            while (prodNumb <= 100)
            {
                // if IsBusy, it means we are in the middle of a call
                if (IsBusy)
                {
                    continue; // wait until the previous call has been served
                }
                try
                {
                    prodNumb++;
                    string artificialSuffix = new Random().Next().ToString();
                    IsBusy = true; // "take" the semaphore before issuing the call
                    wsClient.getProductDataAsync(manufactorCode, productId + artificialSuffix);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                    // treat the exception, then
                    IsBusy = false; // free the "semaphore" for another call
                }
            }
            Console.ReadLine();
        }

        public static void callToProductDataCompleted(object sender, myWSRef.GetProductDataCompletedEventArgs ea)
        {
            // this is the handler for the web service reply
            IsBusy = false; // when the WS call has been served, we "free" the loop to issue another one
            if (ea.Error != null)
            {
                Console.WriteLine("ERROR: " + ea.Error);
                Debug.WriteLine("ERROR: " + ea.Error);
            }
            else
            {
                Console.WriteLine("Call OK: ");
                Console.WriteLine(ea.Result);
            }
        }
    }
}

There are other ways to avoid the overlapping requests, of course, such as issuing the next call from inside the Completed callback (iteration), as sketched below. Just remember that the callToProductDataCompleted method is called even when the web server returns an error (e.g. 400 Bad Request), so that method is the place to handle those exceptions. By contrast, client-side errors (such as a timeout or network error) will be caught by the catch block around the getProductDataAsync call.
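A minimal sketch of that iteration approach follows. The variable names mirror the example above; moving wsClient and the product codes into static fields is just a convenience for the sketch:

static myWSRef.RemoteWS wsClient = new myWSRef.RemoteWS();
static string manufactorCode = "blahblahCode";
static string productId = "blahblahCode2";
static int prodNumb = 0;

static void CallNextProduct()
{
    if (prodNumb > 100) return;
    prodNumb++;
    string artificialSuffix = new Random().Next().ToString();
    wsClient.getProductDataAsync(manufactorCode, productId + artificialSuffix);
}

public static void callToProductDataCompleted(object sender, myWSRef.GetProductDataCompletedEventArgs ea)
{
    if (ea.Error != null)
        Console.WriteLine("ERROR: " + ea.Error); // server-side errors (e.g. 400) also land here
    else
        Console.WriteLine("Call OK: " + ea.Result);

    CallNextProduct(); // only now do we issue the next request, so calls can never overlap
}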

SSL sites in Azure: get an inexpensive certificate, upload it to Azure and make it work

Why https?

SSL certificates have become very inexpensive.

There are at least three reasons why you may want to serve a secure (httpS) version of your site (or even serve only the httpS version of your site):

  1. with httpS, it is almost impossible that a “man in the middle” impersonates your website
  2. search engines reward sites that have an httpS version
  3. conversion rates are higher because clients take their money’s safety very seriously (as do you)

In order to serve a secure version of your site, you need an SSL certificate granted by a Certificate Authority, or by a company that is trusted by a Certificate Authority.

Here are some suggestions to implement SSL in Azure, using a certificate issued by one of the most inexpensive Certificate providers (also the most commonly used as of March 2015, their site says): Comodo.

Comodo has lots of resellers. To use it with Azure, you should make sure that you get the certificate from a reseller that provides:

  1. the Certificate for your website (a file that will be called something very similar to www_yoursite_com.crt)
  2. the private key with which that certificate was created (a file that will have a name very similar to www_yourwebsite_com.key)
  3. the “chain” of certificates that will allow the browser to recognize that Comodo is a trusted Authority and that it can guarantee the name of your site (this file will have a name very similar to www_yourwebsite_com.pem)

The KEY file is essential. Without the key file, you will not be able to create the .pfx file that Azure needs. If you have no key file, ask your reseller to re-issue the certificate (they might need your .csr request, depending on your provider) and give you also the .key file.

What is in the certificate files?
The CRT file has this format:

-----BEGIN CERTIFICATE-----

(a lot of digits and letters)

-----END CERTIFICATE-----

The KEY file has this format:

-----BEGIN RSA PRIVATE KEY-----

(a lot of digits and letters)

-----END RSA PRIVATE KEY-----

The PEM file has this format:

-----BEGIN CERTIFICATE-----
(a lot of digits and letters)
-----END CERTIFICATE-----

-----BEGIN CERTIFICATE-----
(a lot of digits and letters)
-----END CERTIFICATE-----

(i.e.: the PEM contains multiple certificates)

An example of this?

If you buy a certificate via dnsimple, they will get from Comodo all of the above mentioned files, in this simple format:


obtain certificate and keys from dnsimple

Are these SSL certificates ready for my Azure website?

Hold your horses. Two things must happen before you can upload.

  1. You have to have at least a “Basic” Azure website plan (“shared” will not do)
  2. You have to create a .pfx file, the Microsoft-specific format for SSL files (the main difference from .crt files is that it also contains the private key; plus, it is password-protected).

To satisfy condition (1), one thing is sufficient: pay. “Basic” websites are more expensive than “shared” websites and, of course, than “free” websites. If you want to upgrade from “Shared” to “Basic”, choose your site in the Azure portal and look for the “scale” button. The upgrade takes 30 seconds at worst.

Condition number (2) takes a bit more time. You have two options:

The easy way to convert a .cer file to a .pfx

Use an online tool. This is the fastest way. However, you have to trust the online tool, because you’re basically giving them the identity of your site. This is a very handy tool:

https://www.thesslstore.com/ssltools/ssl-converter.php


an online pfx converter

Another tool is this:

https://www.sslshopper.com/ssl-converter.html

Are these tools secure? WHO KNOWS. The second tool specifically tells you that you’re not doing a very secure thing, although both tools do use https 🙂

The secure way to convert a .cer file to a .pfx

If you do not like to use a web tool to manage your SSL certificates (I cannot blame you), you need OpenSSL and have to be ready to do some command-line work.

OpenSSL for Windows can be downloaded here, for example:

https://slproweb.com/products/Win32OpenSSL.html

Once you install OpenSSL, open the command-line tool as an admin, launch openssl and type this line:

OpenSSL> pkcs12 -in www_yoursite_com.cer -in www_yoursite_com.pem -out www_yoursite_com.pfx -export -inkey www_yoursite_com.key -password pass:the_password_you_will_use_in_Azure


openssl to create pfx

Pay attention: the .pem file is the one that includes your site’s certificate. If you use only the intermediate chain certificate files, you will get this error:

No certificate matches private key
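Once the .pfx has been generated, you can sanity-check it from .Net before uploading it, using X509Certificate2. This is just a quick verification sketch; the file name and password are placeholders:

using System;
using System.Security.Cryptography.X509Certificates;

class PfxCheck
{
    static void Main()
    {
        // same password you will later type in the Azure portal
        var cert = new X509Certificate2("www_yoursite_com.pfx", "the_password_you_will_use_in_Azure");

        Console.WriteLine("Subject:         " + cert.Subject);
        Console.WriteLine("Expires:         " + cert.NotAfter);
        Console.WriteLine("Has private key: " + cert.HasPrivateKey); // must be True, or Azure cannot use it
    }
}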

Now let us upload the SSL certificate to Azure, shall we?

Now the easy part. This is done in two steps:

  1. upload your certificate to Azure
  2. tell Azure it should use the certificate with your site

To do (1), go to the “Configure” section of the “classic” Azure portal or to the “Custom domains and SSL” menu item of the new portal, then upload your .pfx.

To do (2), go to the SSL bindings section of the same page where you uploaded the .pfx file and choose the name of the site you are linking the SSL certificate to.

Both steps are well shown in this MSDN article:

http://azure.microsoft.com/it-it/documentation/articles/web-sites-configure-ssl-certificate/

However, please note that the MSDN article underestimates the importance of binding the certificate to the IP of your Azure website rather than using the “SNI” option.

The note says that only “older browsers” will have some difficulties accepting an SNI certificate as opposed to an IP-bound one. 

You will have issues not only with older browsers: if your pages are called by an HTTP client (as is the case with most eCommerce services), it is very likely that that client will not be able to negotiate the SNI certificate (if the client is Java, it often gets this error: javax.net.ssl.SSLHandshakeException).

This is why there are numerous cases in which SNI installation is not enough and you will have to:

1. Choose the IP-based installation of your SSL certificate

2. Check if Azure changed the static IP of your site after such installation 

3. Add/edit an A record in your DNS configuration to point to the new IP address of your Azure website

Good luck with SSL and Azure!

Migrating your SQL Azure servers from v11 to v12

Since January 29, 2015, version 12 of SQL Server “in the cloud” (Azure SQL Server) has been in “General Availability” in West Europe. At the beginning of March of the same year, we decided to migrate our Azure databases to this newer version of the popular DB engine.

Version 12 offers:

  • better compatibility with tools and features of “on-premise” SQL Server
  • better performance (25%)
  • the same price as v11 (or even a discount until the end of March 2015).

There are two ways to update, according to this Microsoft article:

  • either you create a copy of your server, upgrade it and change the connection string when the copy is done
  • or you directly upgrade the v11 server in place, which is a bit scarier but definitely easier (to make you feel safer, the article says the upgrade is transactional, meaning: if it fails, v11 is restored)

In both cases, the new server and DB instances will have to be managed via the new portal (portal.azure.com, which, as of today, March 8, 2015, is still in “preview” mode) rather than the “classic” portal (manage.windowsazure.com).

How we migrated our Azure SQL from v11 to v12

Before the migration, we backed up the databases contained in the server we wanted to migrate. We did it using the new portal’s “export” feature (from the “browse” icon, you go to “SQL Databases”, choose the database and click on the “export” icon. The destination of the backup is an Azure storage area you already have or create contextually.)


Export SQL Azure Database before migration to v 12

Next up, we upgraded the whole server. We went back to the “browse” icon and chose the SQL server to upgrade (you don’t upgrade a single database: you upgrade the server). We clicked on “settings”, then “Latest update”, and there the option to upgrade became available:


Migration of SQL Azure Server to v12

For the portal to be sure you want to do the migration, you have to re-type the server name. After that, the upgrade begins:


Upgrade of SQL Azure to v 12 in progress

After ten minutes (the server only hosts a 100 MB DB), the server was correctly upgraded to v12. All client functionality seems OK now.

Pingdom says the site is slightly faster. Of course this might be due to a bevy of other reasons, but one of them might be the better performance of v12, mightn’t it?


The new SQL Azure v 12 DB