SSL sites in Azure: get an inexpensive certificate, upload it to Azure and make it work

Why https?

SSL certificates have become very inexpensive.

There are at least three reasons why you may want to serve a secure (https) version of your site (or even serve only the https version of your site):

  1. with https, it is almost impossible for a “man in the middle” to impersonate your website
  2. search engines reward sites that have an https version
  3. conversion rates are higher, because clients take their money’s safety very seriously (as you do)

In order to serve a secure version of your site, you need an SSL certificate granted by a Certificate Authority, or by a company that is trusted by a Certificate Authority.

Here are some suggestions for implementing SSL in Azure, using a certificate issued by one of the most inexpensive certificate providers (also the most commonly used as of March 2015, according to their site): Comodo.

Comodo has lots of resellers. To use it with Azure, you should make sure that you get the certificate from a reseller that provides:

  1. the Certificate for your website (a file that will be called something very similar to www_yoursite_com.crt)
  2. the private key with which that certificate was created (a file that will have a name very similar to www_yourwebsite_com.key)
  3. the “chain” of certificates that allows the browser to recognize that Comodo is a trusted Authority that can vouch for the name of your site (this file will have a name very similar to www_yourwebsite_com.pem)

The KEY file is essential. Without it, you will not be able to create the .pfx file that Azure needs. If you have no key file, ask your reseller to re-issue the certificate (they might need your .csr request, depending on the provider) and also give you the .key file.
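If you are unsure whether the .key file you received actually belongs to the .crt, OpenSSL can tell you: the certificate and the key must share the same RSA modulus. A quick sketch (the demo generates a throwaway pair; with a real certificate you would point the two commands at your own .crt and .key files instead):

```shell
# Generate a throwaway key + self-signed certificate just for the demo;
# with a real certificate, skip this step and use your own .crt/.key files
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=www.yoursite.com" -keyout demo.key -out demo.crt

# The two hashes must be identical: same modulus means the key matches the cert
openssl x509 -noout -modulus -in demo.crt | openssl md5
openssl rsa  -noout -modulus -in demo.key | openssl md5
```

If the two hashes differ, the key does not belong to that certificate, and the .pfx creation described later will fail.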

What is in the certificate files?
the CRT file has this format:

-----BEGIN CERTIFICATE-----

(a lot of digits and letters)

-----END CERTIFICATE-----

the KEY file has this format:

-----BEGIN RSA PRIVATE KEY-----

(a lot of digits and letters)

-----END RSA PRIVATE KEY-----

the PEM file has this format:

-----BEGIN CERTIFICATE-----
(a lot of digits and letters)
-----END CERTIFICATE-----

-----BEGIN CERTIFICATE-----
(a lot of digits and letters)
-----END CERTIFICATE-----

(i.e.: the PEM contains multiple certificates)
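To see what a PEM bundle actually contains, you can ask OpenSSL to list every certificate inside it. A sketch (it builds a demo bundle out of two self-signed certificates; with a real file you would pass your www_yourwebsite_com.pem instead):

```shell
# Build a demo PEM bundle from two self-signed certificates
# (a real .pem from your reseller would contain Comodo's chain instead)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-root" \
  -keyout a.key -out a.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-intermediate" \
  -keyout b.key -out b.crt
cat a.crt b.crt > demo.pem

# Print the subject of every certificate found in the bundle
openssl crl2pkcs7 -nocrl -certfile demo.pem | openssl pkcs7 -print_certs -noout
```

With a real chain file you should see the intermediate authorities between Comodo’s root and your own site.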

An example of this?

If you buy a certificate via dnsimple, they will obtain all of the above-mentioned files from Comodo and present them in a simple download page:

obtain certificate and keys from dnsimple

Are these SSL certificates ready for my Azure website?

Hold your horses. Two things must happen before you can upload.

  1. You have to have at least a “Basic” Azure website plan (“shared” will not do)
  2. You have to create a .pfx file, Microsoft’s own format for SSL certificates (the main difference from a .crt file is that it also contains the private key; plus, it is password-protected).

To satisfy condition (1), one thing is sufficient: pay. “Basic” websites are more expensive than “shared” websites and, of course, than “free” websites. If you want to upgrade from “Shared” to “Basic”, choose your site in the Azure portal and look for the “scale” button. The upgrade takes 30 seconds at worst.

Condition number (2) takes a bit more time. You have two options:

The easy way to convert a .cer file to a .pfx

Use an online tool. This is the fastest way. However, you have to trust the online tool, because you are basically handing them the “identity” of your site. This is a very handy tool:

https://www.thesslstore.com/ssltools/ssl-converter.php

an online pfx converter

Another tool is this:

https://www.sslshopper.com/ssl-converter.html

Are these tools secure? Who knows. The second tool explicitly warns you that what you are doing is not very secure, although both tools do use https themselves. :)

The secure way to convert a .cer file to a .pfx

If you do not like using a web tool to manage your SSL certificates (I cannot blame you), you need OpenSSL and some command-line work.

OpenSSL for Windows can be downloaded here, for example:

https://slproweb.com/products/Win32OpenSSL.html

Once you have installed OpenSSL, open the command prompt as an administrator, launch openssl and type this line:

OpenSSL> pkcs12 -in www_yoursite_com.cer -in www_yoursite_com.pem -out www_yoursite_com.pfx -export -inkey www_yoursite_com.key -password pass:the_password_you_will_use_in_Azure

openssl to create pfx

Pay attention: the .pem file must be the one that includes your site’s own certificate. If you use only the intermediate chain certificates, you will get this error:

No certificate matches private key
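Before uploading, it is worth checking that the .pfx really opens with the password you are going to type into Azure. A self-contained sketch (it generates a throwaway certificate standing in for your reseller’s files; in real life you would pass your .cer, .pem and .key files exactly as in the command above):

```shell
# Throwaway key + certificate standing in for the files from your reseller
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=www.yoursite.com" -keyout site.key -out site.crt

# Pack certificate + private key into the .pfx format Azure wants
openssl pkcs12 -export -in site.crt -inkey site.key \
  -out site.pfx -password pass:the_password_you_will_use_in_Azure

# Verify: this must succeed with the same password (and fail with any other)
openssl pkcs12 -in site.pfx -passin pass:the_password_you_will_use_in_Azure \
  -noout -info
```

If the last command fails, fix the .pfx now: Azure will reject it with a far less readable error message.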

Now let us upload the SSL certificate to Azure, shall we?

Now the easy part. This is done in two steps:

  1. upload your certificate to Azure
  2. tell Azure it should use the certificate with your site

To do (1), go to the “Configure” tab of the “classic” Azure portal, or to the “Custom domains and SSL” menu item of the new portal, then upload your .pfx file.

To do (2), go to the SSL bindings section of the same page where you uploaded the .pfx file and choose the name of the site you are linking the SSL certificate to.

Both steps are well shown in this MSDN article:

http://azure.microsoft.com/it-it/documentation/articles/web-sites-configure-ssl-certificate/

However, please note that the article underestimates the importance of binding the certificate to the IP of your Azure website rather than using the “SNI” binding.

The article says that only “older browsers” will have difficulties accepting an SNI certificate as opposed to an IP-bound one.

You will have issues with more than older browsers: if your pages are called by an HTTP client (as is the case with most eCommerce services), it is very likely that the client will not be able to negotiate the SNI certificate (Java clients, for instance, often get this error: javax.net.ssl.SSLHandshakeException).

This is why there are numerous cases in which SNI installation is not enough and you will have to:

1. Choose the IP based installation of your SSL certificate 

2. Check if Azure changed the static IP of your site after such installation 

3. Add/edit an A record in your DNS configuration to point to the new IP address of your Azure website
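You can see the difference between the two kinds of handshake locally with OpenSSL’s toy TLS server. A rough sketch (the port number and the self-signed certificate are demo assumptions, not anything Azure-specific):

```shell
# Start a local TLS server with a throwaway certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" -keyout srv.key -out srv.crt
openssl s_server -accept 4433 -key srv.key -cert srv.crt -quiet &
SERVER_PID=$!
sleep 1

# Modern browser: the host name travels inside the TLS handshake (SNI)
echo | openssl s_client -connect localhost:4433 -servername www.yoursite.com 2>/dev/null | grep "subject="

# This handshake omits SNI entirely; the demo server still answers, but an
# SNI-only Azure binding would not return your certificate to such clients
echo | openssl s_client -connect localhost:4433 2>/dev/null | grep "subject="

kill $SERVER_PID
```

Old browsers and many HTTP client libraries behave like the second call, which is exactly why the IP-based binding is sometimes unavoidable.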

Good luck with SSL and Azure!

Migrating your SQL Azure servers from v11 to v12

Since January 29, 2015, version 12 of SQL Server “in the cloud” (Azure SQL Server) has been in “General Availability” in West Europe. At the beginning of March of the same year, we decided to migrate our Azure databases to this newer version of the popular DB engine.

Version 12 offers:

  • better compatibility with tools and features of “on premise” SQL Server
  • better performance (25%)
  • the same price as v11 (or even a discount until the end of March 2015)

There are two ways to upgrade, according to this Microsoft article:

  • either you create a copy of your server, upgrade the copy and change the connection string when it is done
  • or you directly upgrade your version 11 server, which is a bit more scary but definitely easier (to make you feel safer, the article says that the upgrade is transactional, meaning: if it fails, v11 is restored)

In both cases, the new server and DB instances will have to be managed via the new portal (portal.azure.com, which, as of today, March 8 2015, is still in “preview” mode) rather than the “classic” portal (manage.windowsazure.com).

How we migrated our Azure SQL from v11 to v12

Before the migration, we backed up the databases contained in the server we wanted to migrate. We did it using the new portal’s “export” feature: from the “browse” icon, you go to “SQL Databases”, choose the database and click on the “export” icon. The destination of the backup is an Azure storage account you already have or create contextually.

Export SQL Azure Database before migration to v12

Next up, we upgraded the whole server. We went back to the “browse” icon and chose the SQL server to upgrade (you don’t upgrade a single database: you upgrade the server). We clicked on “settings”, then “Latest update”, and here the option to upgrade became available:

Migration of SQL Azure Server to v12

To make sure you really want to do the migration, the portal asks you to re-type the server name. After that, the upgrade begins:

Upgrade of SQL Azure to v12 in progress

After ten minutes (the server only hosts a 100 MB DB), the server was correctly upgraded to v12. All client functionalities seem OK now.

Pingdom says the site is slightly faster. Of course this might be due to a bevy of other reasons, but one of these might be the better performance of v12, mightn’t it?

The new SQL Azure v12 DB

The “URL” DNS record that dnsimple made

Some time ago, in one of his Azure videos (generally given on a Friday), Scott Hanselman of MSFT showed a nice domain registrar / manager that works well with Azure websites: dnsimple.

To be honest, almost all registrars work well with Azure. However, dnsimple has a “semi-graphical” UI that helps you add the particular DNS records needed for Azure websites (or Amazon’s, for that matter… or many other cloud services…). For those of us who don’t like typing outside of Visual Studio, a semi-GUI is bliss.

dnsimple (www.dnsimple.com) is a small company, which means quick support, and it has some good, original ideas. I am not affiliated with them in any way. They’re not cheap, in either the good or the bad sense of the term: they give good service, but they are a bit more expensive than your average registrar.

One of their good ideas is the URL record for your DNS.

What is a URL record, in addition to being the subject of this post?

Suppose you have registered two domains for the same site. For instance: you have a .com site in addition to a .fr, .it, .de, .co.uk… what have you.

For example: one of my customers has the domain www.bvevents.com in addition to the domain www.bvevents.it. Both domains “serve” the same content. Everyone knows search engines do not like duplicate content.

You basically have three ways to prevent search engines from thinking one site is “stealing” content from the other:

1. You fill your pages with the “canonical” metadata to advise search engines that you are not flooding the web with the same content on different sites (“canonical” indicates which page is the “original”)

2. You ask your web server to rewrite the URL to only one of the sites; for instance: whatever does not match ^www\.yoursite\.com$ gets rewritten to http://www.yoursite.com/{R:1}

3. You use dnsimple’s “special” “URL” records.

dnsimple URL records

A URL record redirects one domain to another with a 301 code you don’t have to set up on your server: dnsimple does it for you.

In our case, we wanted all the “.it” content to be served by the “.com” domain, so we set up a URL entry that redirects all .it pages to the corresponding .com pages. See in the image below what Chrome registers for our .it request.

Automatic redirection with dnsimple URL records

This is an idea that saves you some duplicate content hassles.

Static IP addresses for Azure websites that are not hosted as “cloud services” or “VM”s: still impossible for outbound, but workarounds possible

I hope this article does not stay valid for long and that static IPs will soon also be applicable to Azure websites. At this moment (October 2014), this is not the case.

Microsoft announced at the end of July that you can finally have reserved IPs for VMs and cloud services. Cloud services CAN host websites (the “web” role) but they’re not as easy to deploy as Azure website services (which are elementary).

The details of the procedure to obtain a static IP (for inbound AND outbound traffic) are in this MSDN article.

The procedure is not very friendly yet: you have to use PowerShell or Azure’s API. I haven’t seen a graphical interface yet. Moreover, static IPs can – today – only be assigned to newly deployed services, not to already-deployed ones.

What happens if you still have an Azure “website”, that is, the simplest (and most agile) way to deploy your own website to the Azure cloud?

Inbound traffic

You CAN have a static IP address for inbound traffic. Here, in an MSDN blog entry, Benjamin Perkins shows how to do it with the help of SSL certificates.

Outbound traffic: there’s the rub

Why would you want your outbound traffic IP to be static? Because there are cases in which your website, in the background, has to call web services which only accept calls from whitelisted IPs. When is this the case?
– financial services (for instance: online payment)
– other paid web services

Should we give up Azure if we need a static outbound IP? Not really. There are two ways to overcome the issue of outbound IPs not being static for Azure websites.

1. Azure websites’ IP addresses are not totally dynamic. There IS a range of IPs that your outbound traffic can use. The list is here. If your remote web server needs to know what IP addresses you’re going to call from, you can give them the Azure datacenter IP ranges.

What is the problem with this approach? The list is long, whereas web service providers may accept only a few IP addresses.

In October 2014, the West Europe data center list is tens of lines long. Chances are your web service provider only lets you whitelist, say, ten IPs: how could you communicate them all?

2. You use a static-IP proxy for your website’s calls. I have tested a British service called QuotaGuard, which I pay for and with which I have no affiliation whatsoever. It works.

What do they do? They provide you with a proxy server that does have two static IPs, which you can communicate to your provider as your “whitelisted” IPs. The Azure traffic that needs whitelisting can pass via QuotaGuard.

They have a lot of implementation examples. For .NET, they focus on web forms and HTTP requests that have a proxy property. In case you are using (as was my case) objects that have no “proxy” property, you can create a proxy object yourself and attach it to the .NET framework’s WebRequest class, like this:

using System.Net;

// You may want to store this URI in the application's config in Azure, rather than hardcoding it
var proxy = new WebProxy("http://quotaguard_THE_CODE_QUOTAGUARD_GIVES_YOU@eu-west-1-babbage.quotaguard.com:9293");

// You may want to store credentials in secure config files rather than hardcoding them
proxy.Credentials = new NetworkCredential("YourQuotaGuardAccount", "yourQuotaguardPassword");

// Set the "global" proxy: from now on, every WebRequest goes through QuotaGuard
WebRequest.DefaultWebProxy = proxy;

Now you can use your whitelisted web service call…
Another version of the same code can be found here:
Enjoy!