Performance of .Net reflection on mobile devices (via Xamarin and C#)

.Net reflection can be very handy to automate development tasks involving objects, without typing (or even knowing) the names of the properties.

For instance, if you need to copy properties between different types of objects (think of domain object to view model copies – and vice versa – in MVC projects), it is difficult, today, to resist the comforts of frameworks like AutoMapper, EmitMapper, ValueInjecter, BLToolkit and others.
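As a reminder of the comfort these mappers provide, here is a minimal sketch of the kind of call they expose (this uses AutoMapper's classic static API; the PersonDomain and PersonViewModel types are hypothetical):

using AutoMapper;

// Hypothetical domain and view-model types with matching property names
public class PersonDomain { public string Name { get; set; } public int Age { get; set; } }
public class PersonViewModel { public string Name { get; set; } public int Age { get; set; } }

class MappingDemo
{
    static void Main()
    {
        // One-time configuration: declare the source -> destination pair
        Mapper.Initialize(cfg => cfg.CreateMap<PersonDomain, PersonViewModel>());

        var domain = new PersonDomain { Name = "Ada", Age = 36 };
        PersonViewModel vm = Mapper.Map<PersonViewModel>(domain); // properties copied by name
        System.Console.WriteLine(vm.Name + ", " + vm.Age);
    }
}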

But how does reflection perform on mobile with Xamarin? In some cases, it can’t perform at all, because Apple doesn’t allow dynamic code creation (think System.Reflection.Emit). In other cases, it performs reasonably well only if we don’t ask the simulator (and even more so the device) to crunch very large numbers of objects.

We created a little project to test how copying C# objects via reflection performs on mobile. We created a class (“Narcissus Copier”) that uses reflection to copy properties between objects.

With Narcissus, we can do two things:

  • Copy values between two objects based on common property names and types (we check what these common properties are each time we copy two objects);
  • Copy values between two objects based on equal property names, with the premise that the corresponding property names and types are “registered” beforehand in the utility (we check what these common properties are only once in the app). A usage sketch follows the method below.

This is a link to the overall solution that includes both the “Narcissus” copier and the iOS project:

https://github.com/RickCSharp/NarcissusCopieriOStest

This is the method that copies the properties of a “source” object onto a “destination” object.

// This method allows you to copy an object onto another object, based on the common properties they have.
// Syntax:
// NarcissusCopier<TypeOfSourceObject, TypeOfDestinationObject>.CopyAnyObject(SourceObjectInstance, DestinationObjectInstance);
// To improve performance when the copy between two object types is executed more than once,
// the method pair RegisterObjectProperties and CopyRegisteredObject is more indicated.
public static void CopyAnyObject(TSource source, TDestination destination)
{
    var propertiesOfSource = source.GetType().GetProperties();
    var propertiesOfDestination = destination.GetType().GetProperties();
    var propertiesInCommon =
        from a in propertiesOfSource
        join b in propertiesOfDestination
            on new { a.Name, a.PropertyType } equals new { b.Name, b.PropertyType }
        select a;
    foreach (var propertyInCommon in propertiesInCommon)
    {
        // read the value on the source...
        var valueOfPropertyToCopy = propertyInCommon.GetValue(source, null);
        // ...find the matching property on the destination...
        PropertyInfo propertyOfDestinationToReplace =
            destination.GetType().GetProperty(propertyInCommon.Name);
        // ...and write the (converted) value onto the destination
        propertyOfDestinationToReplace.SetValue(
            destination,
            Convert.ChangeType(valueOfPropertyToCopy, propertyOfDestinationToReplace.PropertyType),
            null);
    }
}
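A usage sketch (the DomainPerson and PersonViewModel types are hypothetical; the RegisterObjectProperties / CopyRegisteredObject names are taken from the comments above, and their exact signatures may differ in the repository):

// Hypothetical mildly compatible types: Name and Age match by name and type.
var source = new DomainPerson { Name = "Ada", Age = 36 };
var destination = new PersonViewModel();

// Mode 1: the common properties are re-discovered on every call.
NarcissusCopier<DomainPerson, PersonViewModel>.CopyAnyObject(source, destination);

// Mode 2: discover ("register") the common properties once, then copy cheaply.
NarcissusCopier<DomainPerson, PersonViewModel>.RegisterObjectProperties();
NarcissusCopier<DomainPerson, PersonViewModel>.CopyRegisteredObject(source, destination);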

The idea of the test is:

  • We create 1,000, 10K, 100K, 200K and 500K instances of two pairs of mildly compatible complex classes (i.e. each pair shares some common properties, while others differ);
  • We copy the values of the properties of the instances of the first type of object onto the properties of the second type of object;
  • First we do it without reflection (“direct” copy);
  • Then we do it with reflection, but without “registering” the object pairs (that is to say, the common properties are evaluated every time we ask the method to perform a copy);
  • Last, we do the copy with reflection in a more “prepared” way, that is to say by first “registering” the object pairs (every time we ask the method to perform a copy, the common properties are already known).

 

We will take advantage of this test to also see how differently the iOS simulator and an actual device perform (or fail to perform, in one case).

Performance in the simulator (real device tests are below)

Here are the results of the direct copy / reflection copy on an iPhone 6 Plus (iOS 9.3) simulator (this is important to underline because on the real device it will be a totally different story) running on a MacBook Pro i7 with 8 GB RAM:

Test 1:
1,000 object copies in the simulator. Not a lot of difference between “direct” copy and copy via reflection

Simulator Screen Shot 02 Jul 2016 19.38.21

Test 2:
10,000 object copies in the simulator. The difference begins to be important (5x slower in the best reflection case).

Simulator Screen Shot 02 Jul 2016 19.39.00

Test 3:
100,000 object copies in the simulator. The non-reflection methods continue to perform well. Code is reused very well by the runtime. Reflection code is not.

Simulator Screen Shot 02 Jul 2016 19.39.38

Test 4:
200,000 object copies in the simulator. Performance degrades in the reflection part of the copy.

Simulator Screen Shot 02 Jul 2016 19.40.41

Test 5:
500,000 object copies in the simulator. This is an extreme case (it wouldn’t be a good idea to modify 500,000 objects in a mobile app, would it?), but this example does show some pitfalls of reflection.

Simulator Screen Shot 02 Jul 2016 20.11.40

Performance on a real device (iPhone 6plus)

Comparing the simulator to the real device is interesting because there is one particular case (the 500,000 objects) that the device cannot even handle because of insufficient memory. This reminds us once again that simulator results must be taken with a grain of salt.

Test 1 on iPhone:
1,000 object copies on the real device. Not a lot of difference between “direct” copy and copy via reflection, nor between the simulator and the real device.

IMG_1630

Test 2 on iPhone:
10,000 object copies on the real device. The performance of the copy without reflection is still good. The performance of the copy via reflection is comparable to that of the simulator.

IMG_1631

Test 3 on iPhone:
100,000 object copies on the real device. The performance of the copy without reflection is still good. The reflection copy, however, begins to degrade, both compared to the “simple” copy and compared to the simulator.

IMG_1632

Test 4 on iPhone:
200,000 object copies on the real device. The performance of the copy without reflection is still good. The device is three times slower than the simulator in this case.

IMG_1633

Test 5 on iPhone:
…last, 500,000 object copies on the real device. The performance cannot be shown, as creating 500,000 complex objects on an iPhone 6 Plus crashes the app.

Syncfusion’s free “Succinctly” ebooks

When we discuss technologies on this blog, we generally try to present different commercial alternatives to reach a certain goal. This time we’ll make an exception and present a set of books offered (for free, or better: in exchange for subscribing to their newsletter) by one specific company: Syncfusion, to which we’re not affiliated in any way.

Why do we do this? Because they have given us for free (as they have everyone else) a lot of good eBooks that give you the lowdown on interesting development and system admin topics. These books are professional and well written. Everyone can get them for free at www.syncfusion.com.

In particular, I liked the “Git Succinctly” eBook. I have used Git in the past via dubious GUIs (Visual Studio – which had no “staging” concept until VS 2013; this may change with VS 2015 Update 2 if I’m not mistaken – and Xamarin Studio).

This little book, written very concisely by Ryan Hodson (“Succinctly” books are rarely more than 100 pages), tells you everything you need to know to hit the ground running with Git (even professionally) via the command line, something I hadn’t done in the past.


And… it’s free.

Other cool titles:

  • NancyFX Succinctly (for microservice fans)
  • Cryptography in .NET

What does Syncfusion actually sell? They sell (with an interesting business model that really rewards independent developers) components for web and mobile development. You may want to check them out.

Renewing an SSL certificate for a website hosted in Azure

Managing resources in Azure has become easier (well, at least the interface looks better) since Microsoft launched the new portal (the one at portal.azure.com).

Let us see today how you upload, in the new portal, a renewed IP-based SSL certificate for your Azure web app.

Prerequisites

  1. Needless to say, to upload a renewed certificate to Azure you need to have a renewed certificate. You don’t have to wait for the old certificate to expire before installing the new one, though: you can buy the new certificate in advance (one or two months is a pretty safe margin) and use it immediately. However, watch out: some third parties (for example, the bank that handles your eCommerce payments) may need to install the intermediate certificates of your new certificate in their “certificate store” before you replace the certificate in your web server. Check with them if this is the case.
  2. To renew an SSL certificate, you can talk to the issuer of the existing certificate. There are also DNS providers that issue SSL certificates for you via a Certification Authority they trust, so you don’t have to speak to another party.
  3. The new certificate must be in the .pfx format (password-protected) to get along with IIS (Azure also runs Apache, actually, but I think most Azure websites are IIS presently. I may be wrong already, and I will definitely be wrong in the future).
    I explained how to create the .pfx certificate in this post. However, if your Certificate Authority or DNS provider is very kind, you won’t have to go through any of that: they will create a .pfx for you, thank you very much. For instance, dnsimple has an interface that creates the .pfx for you when you buy a certificate through them (they buy it at Comodo’s). Dnsimple also provides a matching password you will have to use in Azure in conjunction with the certificate:

Download a pfx format certificate and its password from dnsimple, or any provider that is “IIS-friendly”

The actual work

  1. Go to portal.azure.com
  2. Choose the blade (new portal terminology for a dynamic window) corresponding to your web app
  3. In the app’s settings, choose “Custom domains and SSL”
  4. Choose “Upload certificate”. Don’t be scared if you’re doing this ahead of time: before you bind the certificate to your site, nothing will change in the configuration. Plus, as we said, you can use the renewed certificate before the old one expires, unless a third party needs the intermediate certificates.

upload renewed pfx certificate in Azure

5. Once you upload the new certificate, the list of available certificates grows by one (see the “Certificates” section in the screenshot below: there is a “2017” certificate below the “2016” one).


As you can see in the “certificates” section, I have a new one

6. Now you might be tempted to ADD a new binding between your hostname and the new certificate. You would do that in the SSL bindings configuration (see “SSL bindings” in the screenshot above). Azure will allow you to do it; however, once you save and re-enter the blade, you will see that only the old certificate still has a binding to the hostname.

7. This is why you don’t ADD a new binding between the hostname and the new certificate: you UPDATE the existing binding. In the row corresponding to the existing binding, select the new certificate you just uploaded to replace the old one, as you see below:


Choose the new certificate in the SSL binding

8. If your SSL is already IP-based, you won’t have to set the IP binding again: the old configuration is kept.

9. However, to check that the new certificate chain is working, you can use an online tool like SSL Shopper’s checker.

Just make sure that you are seeing the latest, non-cached situation in the tool!


Check your SSL certificate in Azure via an SSL checker

Encrypting a SQL Azure Database to comply with the EU data protection laws

Back in 2014, Microsoft’s president and chief legal officer Brad Smith wrote a note on the company’s blog (you can read it here: http://blogs.microsoft.com/blog/2014/04/10/privacy-authorities-across-europe-approve-microsofts-cloud-commitments/) stating that Azure was the only cloud provider meeting the renewed data protection regulations of the European Union. This recognition stemmed from policies that were already in place and others that Microsoft committed to implementing in the future.

It has to be noted that “data protection” does not refer only to possible hackers stealing customer data, but also, as Microsoft puts it, to “protecting customer data from government snooping” (read here: http://blogs.microsoft.com/blog/2013/12/04/protecting-customer-data-from-government-snooping/).

The Court of Justice of the European Union, on October 6, 2015, ruled that the “Safe Harbor” decision of the year 2000 (which affirmed that data were adequately protected when exchanged between EU countries and the US) is invalid. The ruling followed a complaint by an Austrian Facebook user who claimed the company does not protect his data from the US authorities: http://curia.europa.eu/jcms/upload/docs/application/pdf/2015-10/cp150117en.pdf.

This is how Amazon interprets data protection as regards their very widely used Web Services cloud: https://aws.amazon.com/compliance/eu-data-protection/

Many other cloud service providers have addressed the EU’s regulations as well; here is a good white paper by Cloudera: https://www.cloudera.com/content/dam/cloudera/Resources/PDF/solution-briefs/eu-data-protection-directive-compliance-solution.pdf

And here is how Rackspace comments on the ruling: http://blog.rackspace.com/eu-ruling-on-safe-harbor-rackspace-stands-prepared/

If your company is in a European country and you want to use cloud storage, chances are that you may want to ask your cloud provider to keep your data in the EU datacenters. Most providers allow you to choose where your database is located and replicated.

However, this is not enough. One of the requirements of data protection is encryption. Customer data should be encrypted not only when it leaves the EU for backup and geo-replication: you must also assure a certain level of security if you allow your customers to store personal data in a database.

Starting in October 2015, if you have a SQL Azure database, you can take advantage of Transparent Data Encryption (TDE). What does this mean? It means your customer data is encrypted with a server-level cryptographic key, but you don’t have to change your SQL queries.

I will now try to show how simple this is. I will replicate the same info you find here: https://msdn.microsoft.com/library/dn948096, but with a real-case scenario.

Before encrypting: Back up the DB

SQL Azure DBs are automatically backed up (the frequency is set by you: watch out for Azure bandwidth costs!), but it is good practice to back up your data before any important DDL operation. You can back up your DB to an Azure container.

To do so, from the Azure portal choose “SQL Server”, then select the server that contains the DB you want to back up before encrypting.

Then, choose the “export” feature.


Exporting a SQL Azure DB to an Azure container

 

You have to choose the Azure “blob” where your backup file (.bacpac) will be stored. You also need to provide your server’s username and password (by “server”, I mean a DB server: since this is a DB “as a service”, it is not a real server).

 


Configuring an Azure container to export a SQL Azure DB

 

Encrypt the DB

The cool DDL command to encrypt the DB is:

ALTER DATABASE [MYDatabaseName] SET ENCRYPTION ON;

If you are not cool and don’t like writing command lines (I absolutely don’t), you can achieve the same result via the portal (see the screenshots below):

  1. Select the Server
  2. Select the DB
  3. Choose “all settings”
  4. “Transparent data encryption”
  5. “ON”
  6. “Save”

set-encryption-sql-azure
7. Wait a few seconds (depending on the size of the DB, it could also be minutes)
8. You are done.
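If you want to double-check the result from code, here is a minimal sketch (the connection string is a placeholder; sys.dm_database_encryption_keys is the DMV that tracks the TDE state):

using System;
using System.Data.SqlClient;

class TdeCheck
{
    static void Main()
    {
        // Placeholder: point this at your own SQL Azure DB
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=MYDatabaseName;User ID=user;Password=pass;Encrypt=True;";
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT encryption_state FROM sys.dm_database_encryption_keys WHERE database_id = DB_ID()",
                conn);
            object state = cmd.ExecuteScalar();
            // encryption_state 2 = encryption in progress, 3 = encrypted
            Console.WriteLine(state == null
                ? "No encryption key found: the DB is not encrypted"
                : "Encryption state: " + state);
        }
    }
}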

Keep on querying

Encryption is totally transparent. Keep on querying your DB!

Calling WebServices Asynchronously in .Net: Unique state object is required for multiple asynchronous simultaneous operations

When you call a web service asynchronously, you may want to make sure that responses arrive in the same order in which they were requested.

Is this always the case? Not necessarily. At times you just don’t care if the responses to requests A and B come in order B and A. Other times, order is crucial. One could argue that in these cases synchronous methods are better than asynchronous ones, but – especially when you are using third-party web services, or particular technologies (PCLs are an example) – you cannot always choose to “go synchronous”.

.Net helps you enforce the correct request/response order by throwing the error that is the subject of this post. If a client invokes multiple async methods in a short period of time, .Net tries to prevent inconsistencies by complaining as follows:

ERROR: System.ArgumentException: There was an error during asynchronous processing. Unique state object is required for multiple asynchronous simultaneous operations to be outstanding. ---> System.ArgumentException: Item has already been added. Key in dictionary: 'System.Object' Key being added: 'System.Object' at System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add)

When does this happen? Is .Net mean to us?

It happens when you call an async web service method before the previous call has received its “completed” event.

Let us reproduce the issue. Let us imagine we have a web service that we reference in a Console project with a myWSRef reference. Let us also imagine the service exposes an async method called getProductDataAsync(manufactorCode, productId).
Our client repeatedly calls the async service in a while-loop, like this:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace testwcf
{
    class Program
    {
       static void Main(string[] args)
        {
            string manufactorCode="blahblahCode";
            string productId = "blahblahCode2";

            // we define the client for the WS
            myWSRef.RemoteWS wsClient = new myWSRef.RemoteWS ();

            // we attach a handler to the "Completed" event
            wsClient.getProductDataResponseCompleted += callToProductDataCompleted;
            
            int prodNumb=0;
            while (prodNumb<=100)
            {
                try
                {
                    prodNumb++;
                    string artificialSuffix=new Random().Next().ToString();
                    wsClient.getProductDataAsync(manufactorCode,productId+artificialSuffix);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                }
            }
            Console.ReadLine();
        }

        public static void callToProductDataCompleted(object sender, myWSRef.GetProductDataCompletedEventArgs ea)
        {
            //this is the handler to the webserver reply

            if (ea.Error != null){
                Console.WriteLine("ERROR: " + ea.Error);
                Debug.WriteLine("ERROR: " + ea.Error);
             }
            else {
                Console.WriteLine("Call OK: ");
                Console.WriteLine(ea.Result);
            }
        }
    }
}

What happens if we run this? .Net will throw the runtime error that is the subject of this post.

Let us try to space out the calls by asking the thread to sleep for one second.

                try
                {
                    prodNumb++;
                    string artificialSuffix = new Random().Next().ToString();
                    wsClient.getProductDataAsync(manufactorCode, productId + artificialSuffix);
                    Thread.Sleep(1000); // give the previous call time to complete
                }

What happens? The error temporarily goes away. In fact, one second is enough for the web service response to invoke the callToProductDataCompleted routine, and we’re safe… unless the next call takes three seconds rather than one. And we’re back to square one.

How do we solve this issue for good? Many suggest giving every call a unique GUID.

Stack Overflow offers this suggestion: every call carries its own Guid as its state object.
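A minimal sketch of that approach, assuming the generated proxy exposes the standard event-based-async overload with a trailing userState parameter (that parameter is the “unique state object” the exception message refers to):

// Inside the calling loop: each call carries its own Guid as the state object.
Guid callId = Guid.NewGuid();
wsClient.getProductDataAsync(manufactorCode, productId + artificialSuffix, callId);

// In the Completed handler, ea.UserState tells you which request this response answers.
public static void callToProductDataCompleted(object sender, myWSRef.GetProductDataCompletedEventArgs ea)
{
    Guid thisCallId = (Guid)ea.UserState;
    Console.WriteLine("Response received for call {0}", thisCallId);
}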

What about when the web service is written by someone else and you cannot pass it any unique ID?

One way to solve this is with a semaphore: while the response to a web service call has not yet been received, you cannot place another call.

This is the code, with the IsBusy semaphore implemented:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace testwcf
{
    class Program
    {

        static Boolean IsBusy = false;  // this will be our semaphore
        static void Main(string[] args)
        {
            string manufactorCode="blahblahCode";
            string productId = "blahblahCode2";

            // we define the client for the WS
            myWSRef.RemoteWS wsClient = new myWSRef.RemoteWS ();

            // we attach a handler to the "Completed" event
            wsClient.getProductDataResponseCompleted += callToProductDataCompleted;
            
            int prodNumb=0;
            while (prodNumb<=100)
            {
                // if IsBusy, it means I am in the middle of a call
                if (IsBusy)
                {
                    continue; // wait until the pending call has received its Completed event
                }
                try
                {
                    prodNumb++;
                    IsBusy = true; // take the "semaphore": no new call until this one completes
                    string artificialSuffix = new Random().Next().ToString();
                    wsClient.getProductDataAsync(manufactorCode, productId + artificialSuffix);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                    // treat the exception, then
                    IsBusy = false; // free the "semaphore" for another call
                }
            }
            Console.ReadLine();
        }

        public static void callToProductDataCompleted(object sender, myWSRef.GetProductDataCompletedEventArgs ea)
        {
            //this is the handler to the webserver reply
             IsBusy = false; // when the WS call has been dutifully served, we "free" the loop to serve another one
            if (ea.Error != null){
                Console.WriteLine("ERROR: " + ea.Error);
                Debug.WriteLine("ERROR: " + ea.Error);
             }
            else {
                Console.WriteLine("Call OK: ");
                Console.WriteLine(ea.Result);

            }
        }
    }
}

There are other ways to avoid the overlapping of requests, of course, like issuing the next call from inside the Completed callback (chaining), as sketched below. Just remember that the callToProductDataCompleted method is called even when the web server returns an error (e.g. 400 Bad Request), so that method is the place to handle those errors. By contrast, client-side errors (such as a timeout or a network error) will be caught by the catch block around the getProductDataAsync call.
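For completeness, here is a sketch of the chaining idea, reusing the same hypothetical service reference as above: the next request is only issued from inside the Completed handler, so two calls can never be outstanding at once.

static int prodNumb = 0;

static void CallNextProduct(myWSRef.RemoteWS wsClient)
{
    if (prodNumb > 100) return; // we are done
    prodNumb++;
    string artificialSuffix = new Random().Next().ToString();
    try
    {
        wsClient.getProductDataAsync("blahblahCode", "blahblahCode2" + artificialSuffix);
    }
    catch (Exception e)
    {
        Console.WriteLine(e); // client-side errors (timeouts, network) land here
    }
}

public static void callToProductDataCompleted(object sender, myWSRef.GetProductDataCompletedEventArgs ea)
{
    if (ea.Error != null)
        Console.WriteLine("ERROR: " + ea.Error); // server errors (e.g. 400 Bad Request) land here
    else
        Console.WriteLine("Call OK: " + ea.Result);

    CallNextProduct((myWSRef.RemoteWS)sender); // only now do we issue the next call
}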

Controller versus view in MVC .net: is the code in the view as fast as that in the controller? Is it slower?

One of the basic rules of MVC is that views should be only – exactly – views, that is to say: objects that present to the user something that is already “worked and calculated”.

They should perform little calculation, if any at all. All the significant code should be in the controllers. This allows better testability and maintainability.

Is this, in Microsoft’s interpretation of MVC, also justified by performance?

We tested this with a very simple code that does this:

– creates 200,000 “cat” objects and adds them to a List

– creates 200,000 “owner” objects and adds them to a List

– creates 200,000 “catowner” objects (the many-to-many relation between cats and owners) and adds them to a List

– navigates through each cat, finds his/her owner, and removes the owner from the list of owners (we don’t know if the cats really wanted this, but their freedom suits our purposes).

We’ve run this code in a controller and in a razor view.

The results seem to suggest that code in views runs just as fast as in controllers, even if we don’t pre-compile the views (the compilation time in our test is negligible).

The average result for the code with the logic in the controller is 18.261 seconds.

The average result for the code with the logic in the view is 18.621 seconds.

The performance seems therefore very similar.

Here is how we got to this result.

Case 1: Calculations are in the CONTROLLER

Models:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace WebPageTest.Models
{
public class Owner
{
public string Name { get; set; }
public DateTime DOB { get; set; }
public virtual CatOwner CatOwner { get; set; }
}
public class Cat
{
public string Name { get; set; }
public DateTime DOB { get; set; }
public virtual CatOwner CatOwner { get; set; }
}
public class CatOwner
{
public virtual Cat Cat { get; set; }
public virtual Owner Owner { get; set; }
}
}

Controller:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using WebPageTest.Models;

namespace WebPageTest.Controllers
{
public class HomeController : Controller
{
public ActionResult Index()
{
Stopwatch howLongWillItTake = new Stopwatch();
howLongWillItTake.Start();
List<Owner> allOwners = new List<Owner>();
List<Cat> allCats = new List<Cat>();
List<CatOwner> allCatOwners = new List<CatOwner>();
// create lists with 200000 cats, 200000 owners, 200000 relations
for (int i = 0; i < 200000; i++)
{
//Cat
Cat CatX = new Cat();
CatX.Name = "Cat " + i.ToString();
CatX.DOB = DateTime.Now.AddDays(i / 10);
//Owner
Owner OwnerX = new Owner();
OwnerX.Name = "Owner " + i.ToString();
OwnerX.DOB = DateTime.Now.AddDays(-i / 10);
//Relationship "table"
CatOwner CatOwnerXX = new CatOwner();
CatOwnerXX.Cat = CatX;
// Relations
CatOwnerXX.Owner = OwnerX;
CatX.CatOwner = CatOwnerXX;
OwnerX.CatOwner = CatOwnerXX;
//add to list
allCats.Add(CatX);
allOwners.Add(OwnerX);
allCatOwners.Add(CatOwnerXX);
}
// now I remove all the items
foreach (Cat CatToDelete in allCats)
{
Owner OwnerToRemove = CatToDelete.CatOwner.Owner;
allOwners.Remove(OwnerToRemove);
}
// now all cats are free
int numberOfCats = allCats.Count();
int numberOfOwners = allOwners.Count();
howLongWillItTake.Stop();
long elapsedTime = howLongWillItTake.ElapsedMilliseconds;
// give info to the view
ViewBag.numberOfCats = numberOfCats;
ViewBag.numberOfOwners = numberOfOwners;
ViewBag.elapsedTime = elapsedTime;
return View();
}
}
}

View:

<div class="row">
<div class="col-md-12">
<hr />
<b>Results</b>
<br/>
Cats: @ViewBag.numberOfCats
<br/>
Owners: @ViewBag.numberOfOwners
<br/>
ElapsedTime in milliseconds: @ViewBag.elapsedTime
<hr />
</div>
</div>

Case 2: Calculations are in the VIEW (pre-compiled)

Models: same as above

Controller:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace WebPageTest.Controllers
{
public class HomeBisController : Controller
{
public ActionResult Index()
{
return View();
}
}
}

View:

@using System;
@using System.Collections.Generic;
@using System.Diagnostics;
@using System.Linq;
@using System.Web;
@using WebPageTest.Models;
@using System.Web.Mvc;
@{
Stopwatch howLongWillItTake = new Stopwatch();
howLongWillItTake.Start();
List<Owner> allOwners = new List<Owner>();
List<Cat> allCats = new List<Cat>();
List<CatOwner> allCatOwners = new List<CatOwner>();
//create lists with 200000 cats, 200000 owners, 200000 relations
for (int i = 0; i < 200000; i++)
{
//Cat
Cat CatX = new Cat();
CatX.Name = "Cat " + i.ToString();
CatX.DOB = DateTime.Now.AddDays(i / 10);
//Owner
Owner OwnerX = new Owner();
OwnerX.Name = "Owner " + i.ToString();
OwnerX.DOB = DateTime.Now.AddDays(-i / 10);
//Relationship "table"
CatOwner CatOwnerXX = new CatOwner();
CatOwnerXX.Cat = CatX;
// Relations
CatOwnerXX.Owner = OwnerX;
CatX.CatOwner = CatOwnerXX;
OwnerX.CatOwner = CatOwnerXX;
//add to list
allCats.Add(CatX);
allOwners.Add(OwnerX);
allCatOwners.Add(CatOwnerXX);
}
// now I remove all the items
foreach (Cat CatToDelete in allCats)
{
Owner OwnerToRemove = CatToDelete.CatOwner.Owner;
allOwners.Remove(OwnerToRemove);
}
// now all cats are free
int numberOfCats = allCats.Count();
int numberOfOwners = allOwners.Count();
howLongWillItTake.Stop();
long elapsedTime = howLongWillItTake.ElapsedMilliseconds;
// give info to the view

}
<div class="row">
<div class="col-md-12">
<hr />
<b>Results</b>
<br />
Cats: @numberOfCats
<br />
Owners: @numberOfOwners
<br />
ElapsedTime in milliseconds: @elapsedTime
<hr />
</div>
</div>

How fast is classic ADO.net compared to Entity Framework?

Or maybe I should write: how much slower is Entity Framework compared to ADO.Net?

By Entity Framework I mean Microsoft’s open source package that allows you to manage DB objects via strongly-typed classes and collections.

By ADO.Net I mean peeking into the DB using the old ADO objects: SqlConnection, SqlCommand, SqlParameter.

This is the little test (note that it is a very particular test, because in real life you will rarely insert, update and delete objects one by one: more massive operations are more likely):

– we create two tables: Books and Authors. They are related via Author_Id, which is on the Books table.

– we insert 1000 authors and 1000 books

– we update 1000 books with a new title (one by one)

– we delete 1000 books (one by one)

– the DB is SQL Server version 11, running on a quad-core i5 @ 1.9 GHz under Windows 8

– the server is a Windows 8 machine with 8 GB RAM

The code for Entity Framework?

Book Model

namespace FastEF.Models
{
 public class Book
 {
 public int Id { get; set; }
 public string Title { get; set; }
 public Author Author { get; set; }
 
 }
}

Author Model

using System.Collections.Generic; // needed for ICollection<Book>

namespace FastEF.Models
{
 public class Author
 {
 public int Id { get; set; }
 public string Name { get; set; }
 public string Address { get; set; }
 public ICollection<Book> Books { get; set; }
}
}

DbContext

using System.Data.Entity; // needed for DbContext and DbSet

namespace FastEF.Models
{
 public class DBCreator:DbContext
 {
 public DbSet<Book> Books { get; set; }
 public DbSet<Author> Authors { get; set; }
}
}

Then, the action for the Entity Framework test, which:

– inserts 1000 authors and 1000 books related to the authors

– updates the 1000 books

– deletes the 1000 books


 public ActionResult EF()
        {
            Book bookToCreate = new Book();
            Author authorToCreate = new Author();
            Stopwatch tellTime = new Stopwatch();
            long insertingTime = 0;
            long updatingTime = 0;
            long deletingTime = 0;
            List<int> generatedBookIds = new List<int>();

            // let us delete the table contents (thisDB is our DBCreator context)
            try
            {
                var objCtx = ((System.Data.Entity.Infrastructure.IObjectContextAdapter)thisDB).ObjectContext;
                objCtx.ExecuteStoreCommand("DELETE FROM Books");
                objCtx.ExecuteStoreCommand("DELETE FROM Authors");

            }


            catch (Exception e)
            {
                // write exception. Maybe it's the first time we run this and have no tables
                Debug.WriteLine("Error in truncating tables: " + e.Message);

            }

            // let us start the watch
            tellTime.Start();

            // INSERTING!
            // we create 1000 authors with name="John Doe nr: " + a GUID
            // and address ="5th Avenue nr: " + a GUID
            // we create a book called "The Cronicles of: " + a GUID and attach it to the author
            // we save the book, so the author is also automatically created

            for (int i = 0; i < 1000; i++)
            {

                // creating author
                authorToCreate = new Author();
                authorToCreate.Name = "John Doe nr. " + Guid.NewGuid();
                authorToCreate.Address = "5th Avenue nr. " + Guid.NewGuid();

                //creating book and linking it to the author
                bookToCreate = new Book();
                bookToCreate.Title = "The Chronicles of: " + Guid.NewGuid();
                bookToCreate.Author = authorToCreate;

                //saving the book. Automatically, the author is saved
                thisDB.Books.Add(bookToCreate);
                thisDB.SaveChanges();
                generatedBookIds.Add(bookToCreate.Id);
            }

            insertingTime = tellTime.ElapsedMilliseconds; // how did I do with inserting?

            tellTime.Restart(); // restart timer

            // We update the 1000 books by changing their title
            foreach (int bookId in generatedBookIds)
            {

                Book bookToUpdate = thisDB.Books.Find(bookId);
                bookToUpdate.Title = "New chronicles of: " + Guid.NewGuid();

                thisDB.SaveChanges();

            }

            updatingTime = tellTime.ElapsedMilliseconds; // how did I do with updating?
            tellTime.Restart(); // restart timer

            // We delete 1000 books, one by one
            foreach (int bookId in generatedBookIds)
            {

                Book bookToDelete = thisDB.Books.Find(bookId);
                thisDB.Books.Remove(bookToDelete);
                thisDB.SaveChanges(); // persist each delete, as in the insert and update loops

            }

            deletingTime = tellTime.ElapsedMilliseconds; // how did I do with deleting?
            tellTime.Stop(); // stop timer


            //printing the results

            string returnedMessage = "Results with Entity Framework 6.1: ";
            returnedMessage += "<br/>1000 Insert operations in ms.: " + insertingTime.ToString();
            returnedMessage += "<br/>1000 Update operations in ms.: " + updatingTime.ToString();
            returnedMessage += "<br/>1000 Delete operations in ms.: " + deletingTime.ToString();
            return Content(returnedMessage);
        }

The code for ADO.Net?

 public ActionResult SQLClient()
        {

            string insertAuthorSQL = "INSERT INTO Authors (Name, Address) VALUES (@name, @address)";
            string insertBookSQL = "INSERT INTO Books(Title, Author_Id) VALUES (@Title, @Author_Id)";
            string updateBookSQL = "UPDATE Books Set Title=@Title where Id=@Id";
            string deleteBookSQL = "DELETE Books where Id=@Id";

            Book bookToCreate = new Book();
            Author authorToCreate = new Author();
            Stopwatch tellTime = new Stopwatch();

            // SQL Objects we will use
            SqlConnection connAntiEF = new SqlConnection(WebConfigurationManager.ConnectionStrings["DefaultConnection"].ToString());
            SqlCommand cmdAntiEF = new SqlCommand();

            // Open Connection
            connAntiEF.Open();

            long insertingTime = 0;
            long updatingTime = 0;
            long deletingTime = 0;
            List<int> generatedBookIds = new List<int>();

            // let us delete table contents
            try
            {
                cmdAntiEF = new SqlCommand("DELETE FROM Books", connAntiEF);
                cmdAntiEF.ExecuteNonQuery();
                cmdAntiEF = new SqlCommand("DELETE FROM Authors", connAntiEF);
                cmdAntiEF.ExecuteNonQuery();
            }


            catch (Exception e)
            {
                // write exception. 
                Debug.WriteLine("Error in truncating tables: " + e.Message);

            }

            // let us start the watch
            tellTime.Start();

            // INSERTING!
            // we create 1000 authors with name="John Doe nr: " + a GUID
            // and address ="5th Avenue nr: " + a GUID
            // we create a book called "The Cronicles of: " + a GUID and attach it to the author
            // we save the book, so the author is also automatically created

            for (int i = 0; i < 1000; i++)
            {

                // creating author
                authorToCreate = new Author();
                authorToCreate.Name = "John Doe nr. " + Guid.NewGuid();
                authorToCreate.Address = "5th Avenue nr. " + Guid.NewGuid();

                //creating book and linking it to the author
                bookToCreate = new Book();
                bookToCreate.Title = "The Chronicles of: " + Guid.NewGuid();
                bookToCreate.Author = authorToCreate;

                // INSERT book with SQL and get its Id


                SqlParameter parmName = new SqlParameter("Name", authorToCreate.Name);
                SqlParameter parmAddress = new SqlParameter("Address", authorToCreate.Address);
                cmdAntiEF.CommandText = insertAuthorSQL;
                cmdAntiEF.Parameters.Add(parmName);
                cmdAntiEF.Parameters.Add(parmAddress);
                cmdAntiEF.ExecuteNonQuery();

                cmdAntiEF.Parameters.Clear();
                cmdAntiEF.CommandText = "SELECT @@IDENTITY";

                int insertedAuthorID = Convert.ToInt32(cmdAntiEF.ExecuteScalar());

                // INSERT book with SQL and get its Id


                parmName = new SqlParameter("title", bookToCreate.Title);
                parmAddress = new SqlParameter("author_id", insertedAuthorID);

                cmdAntiEF.CommandText = insertBookSQL;
                cmdAntiEF.Parameters.Add(parmName);
                cmdAntiEF.Parameters.Add(parmAddress);
                cmdAntiEF.ExecuteNonQuery();

                // we need the book's Id to iterate through the Ids later
                cmdAntiEF.CommandText = "SELECT @@IDENTITY";
                int insertedBookID = Convert.ToInt32(cmdAntiEF.ExecuteScalar());
                generatedBookIds.Add(insertedBookID);


                parmName = null;
                parmAddress = null;
                cmdAntiEF.Parameters.Clear();

            }


            insertingTime = tellTime.ElapsedMilliseconds; // how did I do with inserting?

            tellTime.Restart(); // restart timer

            // We update 1000 books by changing their title
            cmdAntiEF.CommandText = updateBookSQL;
            foreach (int bookId in generatedBookIds)
            {

                //parameters are loaded with the book's new data
                SqlParameter parmTitle = new SqlParameter("Title", "New chronicles of: " + Guid.NewGuid());
                SqlParameter parmId = new SqlParameter("Id", bookId);
                cmdAntiEF.Parameters.Add(parmTitle);
                cmdAntiEF.Parameters.Add(parmId);

                cmdAntiEF.ExecuteNonQuery();
                parmTitle = null;
                cmdAntiEF.Parameters.Clear();

            }

            updatingTime = tellTime.ElapsedMilliseconds; // how did I do with updating?
            tellTime.Restart(); // restart timer

            // We delete 1000 books one by one
            cmdAntiEF.CommandText = deleteBookSQL;
            foreach (int bookId in generatedBookIds)
            {
                SqlParameter parmId = new SqlParameter("Id", bookId);
                cmdAntiEF.Parameters.Add(parmId);
                cmdAntiEF.ExecuteNonQuery();
                parmId = null;
                cmdAntiEF.Parameters.Clear();
            }

            connAntiEF.Close();

            deletingTime = tellTime.ElapsedMilliseconds; // how did I do with deleting?
            tellTime.Stop(); // stop timer

            // printing the results
            string returnedMessage = "Results with SQL Connection: ";
            returnedMessage += "<br/>1000 Insert operations in ms.: " + insertingTime.ToString();
            returnedMessage += "<br/>1000 Update operations in ms.: " + updatingTime.ToString();
            returnedMessage += "<br/>1000 Delete operations in ms.: " + deletingTime.ToString();
            return Content(returnedMessage);
        }

How did they do?

Entity Framework

Results with Entity Framework 6.1:
1000 Insert operations in ms.: 11355
1000 Update operations in ms.: 20833
1000 Delete operations in ms.: 18117

Entity framework performance

Adding, updating, deleting 1000 sqlserver objects via EF

CPU average use: 35%

Memory average use: 65%

ADO.Net

Results with SQL Connection:
1000 Insert operations in ms.: 921
1000 Update operations in ms.: 309
1000 Delete operations in ms.: 311

ado.net insert and update and delete

Inserting, updating, deleting sql server objects via ado

How to interpret the results?

The two cannot be compared directly, because using EF means using strongly-typed objects rather than untyped records.

So, I keep on thinking ORMs are the way to go.

However, if one day I were asked to speed up parts of an application that is slow when reading or writing data, I would know where to look for possible improvements.

Entity Framework: Database first or code first? Some non-conceptual, very practical differences in real life scenarios

In recent years, Microsoft has promoted Code First as a very comfortable way to make your web (or even client-server) application communicate with your database (and I am not talking only about SQL Server: I have had good experience of Entity Framework with Oracle databases as well).

Code First, in contrast with Database First.

Database First is how it has always worked in the IT world:

  1. first you create a DB
  2. then you create objects in your application that are a representation of your DB, and you modify the DB contents through those objects.

Code First works the other way around:

  1. first you create (business?) classes in your application
  2. then Entity Framework creates the DB tables to hold those objects and keeps track of the DB modifications.

There is a third approach (Model First), but I have never really given it a chance because the other two were really sufficient for what I do.

What is better? The practical approach

Let us see how the DB-classes link is created in Database First and how this changes in Code First.

The problem:

I am a tie salesperson. I have two entities that are linked:

  1. ties
  2. racks

A tie can be linked to one rack. Racks can hold many ties.

Managing Related Tables in Entity Framework Database First

These are my Racks

CREATE TABLE [dbo].[Rack] (
[Id] INT NOT NULL IDENTITY,
[RackPosition] NVARCHAR (MAX) NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);

These are my Ties (linked to my Racks via the RackId, which is a foreign key):

CREATE TABLE [dbo].[Tie] (
[Id] INT IDENTITY (1, 1) NOT NULL,
[TieSerial] INT NULL,
[RackId] INT NOT NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);

ALTER TABLE [dbo].[Tie] WITH CHECK ADD CONSTRAINT [FK_Tie_Rack] FOREIGN KEY([RackId])
REFERENCES [dbo].[Rack] ([Id])
GO

These are the tables as you see them in Sql Management Studio:

Image of the two DB tables

The two tables created in the DB

In order to create the classes out of this Database, in Visual Studio we:

  1. Add or update the entity framework package to our web project (why not via NuGet, and why not 6.1, at the beginning of November 2014?)
  2. Add the ADO.NET Entity Data Model to our project (we choose the option “EF Designer from DB”)
  3. We specify the connection string and finally import the DB objects

In Sql Management studio, we add some data to the Rack table, so that – when we create new ties – they can be hung on something!

Racks in the Rack table, Database first

Let us add some racks

Database first: choose what tables you want to import

DB Object to import in database first

We build the solution. At the end, the EF scripts create these class files, which we take good care of because we will reuse them in the Code First approach:

namespace DatabaseFirst
{
using System;
using System.Collections.Generic;

public partial class Tie
{
public int Id { get; set; }
public Nullable<int> TieSerial { get; set; }
public int RackId { get; set; }

public virtual Rack Rack { get; set; }
}
}

and

namespace DatabaseFirst
{
using System;
using System.Collections.Generic;

public partial class Rack
{
public Rack()
{
this.Ties = new HashSet<Tie>();
}

public int Id { get; set; }
public string RackPosition { get; set; }

public virtual ICollection<Tie> Ties { get; set; }
}
}

Please note: since the foreign key is on the Tie, a Tie has one Rack, and a Rack has multiple Ties (thus the ICollection<Tie> in the Rack object).

Now, let us see what happens when we create an MVC controller and “scaffold” views

Let us create the views to edit these objects

MVC scaffolding of Database first objects

Below is the code we get for the Ties controller. Note how the scaffolding templates have recognized that, when we create or show a Tie, we also create or show the Rack it is bound to.

Note also that the templates make use of the RackId field to create and modify the link between the Tie and the Rack.

public class TiesController : Controller
{
private DatabaseFirstDBEntities db = new DatabaseFirstDBEntities();

// GET: Ties
public ActionResult Index()
{
var ties = db.Ties.Include(t => t.Rack);
return View(ties.ToList());
}

// GET: Ties/Details/5
public ActionResult Details(int? id)
{
if (id == null)
{
return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
Tie tie = db.Ties.Find(id);
if (tie == null)
{
return HttpNotFound();
}
return View(tie);
}

// GET: Ties/Create
public ActionResult Create()
{
ViewBag.RackId = new SelectList(db.Racks, "Id", "RackPosition");
return View();
}

// POST: Ties/Create
// To protect from overposting attacks, please enable the specific properties you want to bind to, for
// more details see http://go.microsoft.com/fwlink/?LinkId=317598.
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Create([Bind(Include = "Id,TieSerial,RackId")] Tie tie)
{
if (ModelState.IsValid)
{
db.Ties.Add(tie);
db.SaveChanges();
return RedirectToAction("Index");
}

ViewBag.RackId = new SelectList(db.Racks, "Id", "RackPosition", tie.RackId);
return View(tie);
}

// GET: Ties/Edit/5
public ActionResult Edit(int? id)
{
if (id == null)
{
return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
Tie tie = db.Ties.Find(id);
if (tie == null)
{
return HttpNotFound();
}
ViewBag.RackId = new SelectList(db.Racks, "Id", "RackPosition", tie.RackId);
return View(tie);
}

// POST: Ties/Edit/5
// To protect from overposting attacks, please enable the specific properties you want to bind to, for
// more details see http://go.microsoft.com/fwlink/?LinkId=317598.
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Edit([Bind(Include = "Id,TieSerial,RackId")] Tie tie)
{
if (ModelState.IsValid)
{
db.Entry(tie).State = EntityState.Modified;
db.SaveChanges();
return RedirectToAction("Index");
}
ViewBag.RackId = new SelectList(db.Racks, "Id", "RackPosition", tie.RackId);
return View(tie);
}

// GET: Ties/Delete/5
public ActionResult Delete(int? id)
{
if (id == null)
{
return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
Tie tie = db.Ties.Find(id);
if (tie == null)
{
return HttpNotFound();
}
return View(tie);
}

// POST: Ties/Delete/5
[HttpPost, ActionName("Delete")]
[ValidateAntiForgeryToken]
public ActionResult DeleteConfirmed(int id)
{
Tie tie = db.Ties.Find(id);
db.Ties.Remove(tie);
db.SaveChanges();
return RedirectToAction("Index");
}

protected override void Dispose(bool disposing)
{
if (disposing)
{
db.Dispose();
}
base.Dispose(disposing);
}
}

The scaffolding creates not only the controller, but also the views.

This is the Create view (you will notice the dropdown list that allows us to choose the rack the tie is hung on):

DB First: creation of a new record

Creation of a new record with Entity Framework DB first

And last, the index page, which shows what we just created:

Tie Table in database first

The ties in our table

Noteworthy:

When the model is sent from the view to the controller, the Rack object linked to the Tie is null (see the breakpoint screenshot). However, the RackId key is not. This allows the DB to keep the link between the new Tie and the chosen Rack.

model of tie table entry

screenshot of the tie model

Managing Related Tables in Entity Framework Code First

To test how all of this works in the “Code First” world, I will do the following:

  1. Create a new Visual Studio project (web application, MVC)
  2. Upgrade EF to 6.1
  3. Prepare a new DB, called CodeFirst
  4. Create model classes from the same classes that were generated automatically by EF in Database First
  5. Add to the project an “Entity Framework Code First” ADO.NET object. This doesn’t do a lot: basically, it creates a new connection string for you [that you will have to change to make it point to your real DB].
  6. The ADO.NET object also adds a DbContext class where you have to specify which classes will be written to the DB (this is another difference from Database First: naturally, Database First asks you where to read data from and what data it should read. Code First does not ask where it should write data and what it should write. You have to write additional code for that. But it’s not a lot.)

This is how the DbContext class looks after our intervention; the two DbSet lines at the end are the code we added.

public class CodeFirstModel : DbContext
{
// Your context has been configured to use a 'CodeFirstModel' connection string from your application's
// configuration file (App.config or Web.config). By default, this connection string targets the
// 'CodeFirst.CodeFirstModel' database on your LocalDb instance.
//
// If you wish to target a different database and/or database provider, modify the 'CodeFirstModel'
// connection string in the application configuration file.
public CodeFirstModel()
: base("name=DefaultConnection")
{
}

// Add a DbSet for each entity type that you want to include in your model. For more information
// on configuring and using a Code First model, see http://go.microsoft.com/fwlink/?LinkId=390109.

// public virtual DbSet<MyEntity> MyEntities { get; set; }

public virtual DbSet<Tie> Ties { get; set; }
public virtual DbSet<Rack> Racks { get; set; }
}

Now, we ask the scaffolding engine to generate the controller exactly as we did with Database first

Code first in Entity framework: code creation

Code First Controller creation

The created controller is exactly like the Database First one.

public class TiesController : Controller

{
private CodeFirstModel db = new CodeFirstModel();

// GET: Ties
public ActionResult Index()
{
var ties = db.Ties.Include(t => t.Rack);
return View(ties.ToList());
}

// GET: Ties/Details/5
public ActionResult Details(int? id)
{
if (id == null)
{
return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
Tie tie = db.Ties.Find(id);
if (tie == null)
{
return HttpNotFound();
}
return View(tie);
}

// GET: Ties/Create
public ActionResult Create()
{
ViewBag.RackId = new SelectList(db.Racks, "Id", "RackPosition");
return View();
}

// POST: Ties/Create
// To protect from overposting attacks, please enable the specific properties you want to bind to, for
// more details see http://go.microsoft.com/fwlink/?LinkId=317598.
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Create([Bind(Include = "Id,TieSerial,RackId")] Tie tie)
{
if (ModelState.IsValid)
{
db.Ties.Add(tie);
db.SaveChanges();
return RedirectToAction("Index");
}

ViewBag.RackId = new SelectList(db.Racks, "Id", "RackPosition", tie.RackId);
return View(tie);
}

// GET: Ties/Edit/5
public ActionResult Edit(int? id)
{
if (id == null)
{
return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
Tie tie = db.Ties.Find(id);
if (tie == null)
{
return HttpNotFound();
}
ViewBag.RackId = new SelectList(db.Racks, "Id", "RackPosition", tie.RackId);
return View(tie);
}

// POST: Ties/Edit/5
// To protect from overposting attacks, please enable the specific properties you want to bind to, for
// more details see http://go.microsoft.com/fwlink/?LinkId=317598.
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Edit([Bind(Include = "Id,TieSerial,RackId")] Tie tie)
{
if (ModelState.IsValid)
{
db.Entry(tie).State = EntityState.Modified;
db.SaveChanges();
return RedirectToAction("Index");
}
ViewBag.RackId = new SelectList(db.Racks, "Id", "RackPosition", tie.RackId);
return View(tie);
}

// GET: Ties/Delete/5
public ActionResult Delete(int? id)
{
if (id == null)
{
return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
Tie tie = db.Ties.Find(id);
if (tie == null)
{
return HttpNotFound();
}
return View(tie);
}

// POST: Ties/Delete/5
[HttpPost, ActionName("Delete")]
[ValidateAntiForgeryToken]
public ActionResult DeleteConfirmed(int id)
{
Tie tie = db.Ties.Find(id);
db.Ties.Remove(tie);
db.SaveChanges();
return RedirectToAction("Index");
}

protected override void Dispose(bool disposing)
{
if (disposing)
{
db.Dispose();
}
base.Dispose(disposing);
}
}

The database is not created yet.

To create it, either you enable Migrations (in the Package Manager Console), or – more simply – you launch the application and, via the automatically generated views, create a DB entry that does not depend on other objects (you might as well seed some objects in the database via code, but we want to keep this example as simple as possible).

So, we create new Racks with a Racks controller (we do not start by creating Ties because you cannot have a Tie without a Rack). The DB is automatically created. After we create some racks, we can add ties to them.

filling db items in code first

Create racks in Code first

Db created by code first approach

Codefirst database

What does the “automatic” DB look like? Well, it looks identical to the Database First one: Entity Framework has indeed created the two tables (with plural names, though there is an option to keep them singular) and the foreign keys.

Now we create our Ties

Tie table creation

We create Ties in CodeFirst now

We have obtained exactly the same result as with Database First.

Bottom line: what is better?

Rarely have I felt as entitled as on this occasion to say: it’s the same; it depends on your inclinations and on what you have at hand.

Database First is better if you have large databases that your code needs to mirror (for smaller DBs, you can still go Code First by importing part of the DB and creating the rest via .NET classes).

Code First is better if you have an empty DB and don’t want to spend too much time switching between two different environments (the code IDE and the DB IDE).

With Code First, you have to learn some new attributes that you will use in your code to specify DB-related details such as: a certain column has an index, a certain column should not go to the DB at all, and so on (see the sketch below).
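For example, a small sketch of what those attributes look like in EF 6.1 (Index and NotMapped live in System.ComponentModel.DataAnnotations.Schema; the DisplayLabel property is a hypothetical addition):

using System.ComponentModel.DataAnnotations.Schema;

public class Tie
{
    public int Id { get; set; }

    [Index] // ask Code First to create a DB index on this column
    public int? TieSerial { get; set; }

    public int RackId { get; set; }
    public virtual Rack Rack { get; set; }

    [NotMapped] // computed in code only: this property never reaches the DB
    public string DisplayLabel
    {
        get { return "Tie #" + TieSerial; }
    }
}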

What do I prefer?

Lately I have gone with Code First, because investments in technologies that automate certain processes have always repaid me, even if at the beginning they seem to tackle problems in a simplistic way.

Usually, these technologies improve and take away a lot of programming hassle. Take ORMs: how many developers would have bet they would almost totally replace field-level programming one day? Code First gives you a more centralized view of your application, with potentially fewer bugs to take care of in the future. That did the trick for me.

C# 5 polymorphism notes: interface implementations at subclass level

When we want to show how polymorphism works, we often use the superclass -> subclass example and verify how overridden methods of the subclass are dynamically invoked when the subclass is instantiated using the superclass’s type (in this fashion: SuperClassType x = new SubClassType()).
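For reference, a minimal sketch of that classic case (hypothetical Animal and Dog types):

class Animal
{
    public virtual string Speak() { return "..."; }
}

class Dog : Animal
{
    public override string Speak() { return "Woof"; }
}

class Demo
{
    static void Main()
    {
        Animal a = new Dog(); // SuperClassType x = new SubClassType()
        System.Console.WriteLine(a.Speak()); // prints "Woof": the override is dispatched dynamically
    }
}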

Here I would like to show something a bit different: how dynamic invocation of methods works when we create objects using the type of an interface they implement, and what changes if the subclasses inherit from a superclass and implement the interface itself.

The interface and objects we’ll use are very simple.

We have an interface called IAreaOrVolume, which contains a “blueprint” method AreaOrVolumeSize.

We have a “Quad” class that implements this interface.

Quad gives back the Area by multiplying its own width by its own height.

Then, we have a PossibleSquare that subclasses Quad. We call it “PossibleSquare” because we will see that, given the inheritance mechanisms, at times it is a Square but works as a Quad.

We have a PossibleCube that subclasses PossibleSquare. We call it “PossibleCube” because it is a Cube BUT its volume is at times (we will see exactly when) calculated as a Square’s or even any Quad’s area. Probably the idea that a “Cube” is a subclass of a “Square” is logically flawed, as a Square is a “slice” of a Cube, but for now let us forget about Aristotelian Ideas and just see what this means in C#.

All objects have a constructor that accepts width, height and depth. Even if a Square does not have a height different from its width, we pass the constructor a width different from the height to show how unexpected results can stem from different implementations of inheritance.

In this first example, the “AreaOrVolumeSize” method is declared as “virtual” in the superclass “Quad” and overridden in the subclasses. Note the modifiers “virtual” and “override”.

using System.Diagnostics; // for Debug.WriteLine

namespace TestInheritanceWithInterfaces
{
    interface IAreaOrVolume
    {
        double AreaOrVolumeSize();
        // whatever object implements this interface will have to define its own AreaOrVolumeSize method
    }

    class Quad : IAreaOrVolume
    {
        protected double _width;
        protected double _height;
        protected double _depth;

        public Quad(double width, double height, double depth)
        {
            _width = width;
            _height = height;
            _depth = depth;
        }

        public virtual double AreaOrVolumeSize() // virtual means: go on, you can override me
        {
            // this is a Quad. In calculating "Area or Volume", we disregard the depth
            // of the object, as a Quad has an area, not a volume
            return _width * _height * 1;
        }
    }

    class PossibleSquare : Quad
    {
        public PossibleSquare(double width, double height, double depth) : base(width, height, depth) { }

        public override double AreaOrVolumeSize()
        {
            // this is a Square. In calculating "Area or Volume", we disregard the depth
            // of the object, as a Square has an area, not a volume;
            // we also disregard the height, as it is equal to the width
            return _width * _width * 1;
        }
    }

    class PossibleCube : PossibleSquare
    {
        public PossibleCube(double width, double height, double depth) : base(width, height, depth) { }

        public override double AreaOrVolumeSize()
        {
            // this is a Cube.
            // In calculating the volume, we disregard depth and height, as they are both equal to the width
            return _width * _width * _width;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // typing our variable with the interface implemented by classes and subclasses
            // allows polymorphism
            IAreaOrVolume thisShape;

            thisShape = new Quad(5, 6, 1);
            Debug.WriteLine("Quad's area or volume: {0}", thisShape.AreaOrVolumeSize());

            thisShape = new PossibleSquare(5, 6, 1);
            Debug.WriteLine("Square's area or volume: {0}", thisShape.AreaOrVolumeSize());

            thisShape = new PossibleCube(5, 6, 2);
            Debug.WriteLine("Cube's area or volume: {0}", thisShape.AreaOrVolumeSize());
        }
    }
}

The results are what we expected:

Quad’s area or volume: 30
Square’s area or volume: 25 (height is ignored in the overridden method; the area is a Square’s area = width squared)
Cubes’s area or volume: 125 (height is ignored in the overridden method; volume is the cube’s volume = width to the power of three)

What happens, instead, if we declare the AreaOrVolumeSize methods as “new”, so that they hide, rather than override, the superclass’s implementation?

In the following example, the AreaOrVolumeSize method is not overridden, but marked as “new”, signaling that we want to hide the superclass’s method.

using System.Diagnostics; // for Debug.WriteLine

namespace TestInheritanceWithInterfaces
{
    interface IAreaOrVolume
    {
        double AreaOrVolumeSize();
        // whatever object implements this interface will have to define its own AreaOrVolumeSize method
    }

    class Quad : IAreaOrVolume
    {
        protected double _width;
        protected double _height;
        protected double _depth;

        public Quad(double width, double height, double depth)
        {
            _width = width;
            _height = height;
            _depth = depth;
        }

        public double AreaOrVolumeSize()
        {
            // this is a Quad. We disregard the depth of the object, as a Quad has an area, not a volume
            return _width * _height * 1;
        }
    }

    class PossibleSquare : Quad
    {
        public PossibleSquare(double width, double height, double depth) : base(width, height, depth) { }

        public new double AreaOrVolumeSize() // new hides the superclass's implementation
        {
            // this is a Square. We disregard the depth of the object,
            // as a Square has an area, not a volume;
            // we also disregard the height, as it is equal to the width
            return _width * _width * 1;
        }
    }

    class PossibleCube : PossibleSquare
    {
        public PossibleCube(double width, double height, double depth) : base(width, height, depth) { }

        public new double AreaOrVolumeSize()
        {
            // this is a Cube.
            // we disregard depth and height, as they are both equal to the width
            return _width * _width * _width;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // typing our variable with the interface implemented by classes and subclasses
            // allows polymorphism
            IAreaOrVolume thisShape;

            thisShape = new Quad(5, 6, 1);
            Debug.WriteLine("Quad's area or volume: {0}", thisShape.AreaOrVolumeSize());

            thisShape = new PossibleSquare(5, 6, 1);
            Debug.WriteLine("Square's area or volume: {0}", thisShape.AreaOrVolumeSize());

            thisShape = new PossibleCube(5, 6, 2);
            Debug.WriteLine("Cube's area or volume: {0}", thisShape.AreaOrVolumeSize());
        }
    }
}

The result is what we expected, and a bit funny:

Quad’s area or volume: 30
Square’s area or volume: 30
Cubes’s area or volume: 30

What happened here is: since we declared AreaOrVolumeSize as “new” in the subclasses, C# assumes we want to opt out of the polymorphism mechanism. The only class bound to the interface is Quad, so when the call goes through the interface, it is Quad’s method that is invoked, not the “new” implementations in the subclasses.
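A quick way to see the hiding at work (a hypothetical snippet reusing the classes of this second example): the static type of the expression decides which implementation runs.

var cube = new PossibleCube(5, 6, 2);
Debug.WriteLine(cube.AreaOrVolumeSize());                  // 125: PossibleCube's "new" method
Debug.WriteLine(((Quad)cube).AreaOrVolumeSize());          // 30: Quad's hidden method
Debug.WriteLine(((IAreaOrVolume)cube).AreaOrVolumeSize()); // 30: the interface maps to Quad's method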

There is another possibility, though: if our subclasses implement the interface themselves, besides inheriting from the superclass, the runtime behavior is again that of invoking the specific class’s implementation, not the superclass’s.

In the following code, our subclasses (PossibleSquare and PossibleCube) inherit from Quad and hide its AreaOrVolumeSize method, but they also implement the interface directly. In this way, even though they have hidden the superclass’s method, .NET knows that it is their implementation it has to call, not the superclass’s:

using System.Diagnostics; // for Debug.WriteLine

namespace TestInheritanceWithInterfaces
{
    interface IAreaOrVolume
    {
        double AreaOrVolumeSize();
        // whatever object implements this interface will have to define its own AreaOrVolumeSize method
    }

    class Quad : IAreaOrVolume
    {
        protected double _width;
        protected double _height;
        protected double _depth;

        public Quad(double width, double height, double depth)
        {
            _width = width;
            _height = height;
            _depth = depth;
        }

        public double AreaOrVolumeSize()
        {
            // this is a Quad. We disregard the depth of the object, as a Quad has an area, not a volume
            return _width * _height * 1;
        }
    }

    class PossibleSquare : Quad, IAreaOrVolume // implementing IAreaOrVolume directly
    {
        public PossibleSquare(double width, double height, double depth) : base(width, height, depth) { }

        public new double AreaOrVolumeSize()
        {
            // this is a Square. We disregard the depth of the object,
            // as a Square has an area, not a volume;
            // we also disregard the height, as it is equal to the width
            return _width * _width * 1;
        }
    }

    class PossibleCube : PossibleSquare, IAreaOrVolume // implementing IAreaOrVolume directly
    {
        public PossibleCube(double width, double height, double depth) : base(width, height, depth) { }

        public new double AreaOrVolumeSize()
        {
            // this is a Cube.
            // we disregard depth and height, as they are both equal to the width
            return _width * _width * _width;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // typing our variable with the interface implemented by classes and subclasses
            // allows polymorphism
            IAreaOrVolume thisShape;

            thisShape = new Quad(5, 6, 1);
            Debug.WriteLine("Quad's area or volume: {0}", thisShape.AreaOrVolumeSize());

            thisShape = new PossibleSquare(5, 6, 1);
            Debug.WriteLine("Square's area or volume: {0}", thisShape.AreaOrVolumeSize());

            thisShape = new PossibleCube(5, 6, 2);
            Debug.WriteLine("Cube's area or volume: {0}", thisShape.AreaOrVolumeSize());
        }
    }
}

The result?

This time, it is more logical:

Quad’s area or volume: 30
Square’s area or volume: 25
Cubes’s area or volume: 125

Alternatives:

Below, we declare PossibleCube as a subclass of PossibleSquare, but do not specify it implements IAreaOrVolume.

class PossibleCube : PossibleSquare // NOT implementing IAreaOrVolume directly

Result?

Quad’s area or volume: 30
Square’s area or volume: 25 (its own implementation)
Cubes’s area or volume: 25 (again, the superclass’s implementation)

Here below, instead, it is PossibleSquare that hides the method’s implementation, and does not implement the interface directly

class PossibleSquare : Quad // NOT implementing IAreaOrVolume directly

class PossibleCube : PossibleSquare, IAreaOrVolume // again implementing IAreaOrVolume directly

The obvious result:

Quad’s area or volume: 30
Square’s area or volume: 30 (again, the superclass’s implementation)
Cubes’s area or volume: 125 (its own implementation)

Static IP addresses for Azure websites that are not hosted as “cloud services” or VMs: still impossible for outbound traffic, but workarounds are possible

I hope this article does not stay valid for long, and that static IPs will soon be applicable to Azure websites as well. At the moment (October 2014), this is not the case.

Microsoft announced at the end of July that you can finally have reserved IPs for VMs and cloud services. Cloud services CAN host websites (the “web” role) but they’re not as easy to deploy as Azure website services (which are elementary).

The details of the procedure to obtain the static IP (for inbound AND outbound traffic) are in this MSDN article.

The procedure is not very friendly yet: you have to use PowerShell or Azure’s API; I haven’t seen a graphical interface for it so far. Moreover, static IPs can – today – only be assigned to newly deployed services, not to already-deployed ones.
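For reference, the reservation itself is a one-liner in PowerShell (a sketch with the Service Management cmdlets available at the time of writing; the name and location are illustrative):

# reserve a static IP in a given region
New-AzureReservedIP -ReservedIPName "MyReservedIP" -Location "West Europe"

# the reservation can then be referenced when deploying a new VM or cloud service,
# for instance via the -ReservedIPName parameter of New-AzureVM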

What happens if you still have an Azure “website”, which is the simplest (and most agile) way to deploy your own site to the Azure cloud?

Inbound traffic

You CAN have a static IP address for inbound traffic. Here, in an MSDN blog entry, Benjamin Perkins shows how to obtain it with the help of SSL certificates.

Outbound traffic: there’s the rub

Why would you want your outbound traffic IP to be static? Because there are cases in which your website, in the background, has to call web services which only accept calls from whitelisted IPs. When is this the case?
– financial services (for instance: online payment)
– other paid web services

Should we give up Azure if we need an outbound static IP? Not really. There are two ways to overcome the issue of outbound IPs not being static for Azure websites.

1. Azure websites’s IP addresses are not totally dynamic. There IS a range of IPs that your outbound traffic can use. The list is here. If your remote web server needs to know what IP address you’re going to use to make the calls, you can give them the Azure datacenter IP ranges.

What is the problem with this approach? The list is long, whereas web service providers may accept only a few IP addresses.

In October 2014, the West Europe datacenter list alone is tens of lines long, while chances are your web service provider lets you whitelist only, say, ten IPs.

2. You use a static-IP proxy for your website’s calls. I have tested a British service called QuotaGuard, which I pay for and with which I have no affiliation whatsoever. It works.

What do they do? They provide you with a proxy server that does have two static IPs, which you can communicate to your provider as your “whitelisted” IPs. The part of your Azure traffic that needs whitelisting can pass via QuotaGuard.

They have a lot of implementation examples. For .NET, they focus on web forms and on HTTP requests that have a Proxy property. In case you are using (as was my case) objects that have no Proxy property, you can create a proxy object yourself and assign it to the .NET framework’s WebRequest.DefaultWebProxy, like this:

using System.Net;

// you may want to store this URI in the application's configuration (e.g. Azure's app settings)
// rather than hardcoding it
var proxy = new WebProxy("http://quotaguard_THE_CODE_QUOTAGUARD_GIVES_YOU@eu-west-1-babbage.quotaguard.com:9293");

// you may want to store credentials in secure config files rather than hardcoding them
proxy.Credentials = new NetworkCredential("YourQuotaGuardAccount", "yourQuotaguardpassword");

// we set the "global" proxy here: from now on, WebRequest-based calls go through it
WebRequest.DefaultWebProxy = proxy;

// now you can make your whitelisted web service call...
Enjoy!