Easy real-time, event-driven and serverless with Azure

One of the most exciting parts of modern cloud platforms is serverless and the event-driven architectures you can build with it. It's exciting because it's a double win: not only do you get all the advantages of the cloud, like easy elastic scale, isolation, easy deployment and low management overhead, but it's also extremely easy to get started. We can also avoid the container and container-management rabbit hole.

Azure Functions

Azure Functions is a serverless offering on the Azure platform. In a nutshell, you create individual functions that you upload into the cloud and run individually. Here's an example of a simple function that you can call via HTTP.

This simple function is triggered via HTTP and returns a string saying Hello {name}. The name is passed in via a query string parameter.
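If you haven't seen one before, an HTTP-triggered function looks roughly like this. This is a sketch using the Functions C# model; the function name, route and parameter names are my own placeholders:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    // Reads ?name=... from the query string and returns "Hello {name}".
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
        ILogger log)
    {
        string name = req.Query["name"];
        return new OkObjectResult($"Hello {name}");
    }
}
```

Calling `https://<yourapp>.azurewebsites.net/api/Hello?name=Michael` would return "Hello Michael".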

Azure Functions are made up of two primary concepts: triggers and bindings.


A trigger is how a function is started; as we can see, the sample above uses an HTTP trigger. Functions has a number of triggers that we can use, including:
HTTP trigger – runs when an HTTP endpoint is hit
Timer trigger – runs on a schedule
Queue trigger – runs when a message is added to a storage queue
Blob trigger – runs whenever a blob is added to a specified container
Event Hub trigger – runs whenever an event hub receives a new message
IoT Hub trigger – runs whenever an IoT hub receives a new event on the event hub endpoint
Service Bus Queue trigger – runs when a message is added to a specified Service Bus queue
Service Bus Topic trigger – runs when a message is added to a specified Service Bus topic
Cosmos DB trigger – runs when documents change in a document collection

In this post we’ll be focusing on the Cosmos DB trigger.


Bindings are a way of declaratively connecting another resource to the function. For example, binding to Cosmos DB provides you with a Cosmos DB connection and allows you to insert data into Cosmos DB. Bindings may be connected as input bindings, output bindings, or both. Data from bindings is provided to the function as parameters.

Cosmos DB

Cosmos DB is a multi-model, globally distributed database with easy single-touch geo-replication. It's fast and has more SLAs than any other cloud database on the market. One of the advanced features we get in Cosmos DB is the change feed: a log of the changes that have occurred on a collection in the database, covering inserts and updates. As mentioned above, Cosmos DB has an Azure Functions trigger that uses the change feed, so in essence we can have a function called every time a document changes and be passed the changed data.

You can see below that we've got a Cosmos DB triggered function; a list of changed documents is provided to the function.
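A Cosmos DB triggered function looks roughly like this (a sketch; the database, collection and connection-setting names are placeholders, and the leases collection is covered later in the post):

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class LocationChangedFunction
{
    // Fires with a batch of changed documents from the collection's change feed.
    [FunctionName("LocationChanged")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "MyDatabase",
            collectionName: "Locations",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases")] IReadOnlyList<Document> documents,
        ILogger log)
    {
        if (documents != null)
            log.LogInformation($"{documents.Count} document(s) changed.");
    }
}
```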

To see how this fits together, the diagram below shows the flow:

CosmosDB Change Feed

SignalR Service with a Real-World Example

SignalR Service is a fully managed real-time communications service.

Let's put this into a real-world scenario. The legendary James Montemagno built a cloud-enabled Xamarin application called Geo Contacts, which we can find on GitHub here. Geo Contacts is a cross-platform mobile contact application sample for iOS and Android, built with Xamarin.Forms, that leverages several services inside Azure including Azure AD B2C, Functions, and Cosmos DB. The application lets you check into locations and see who else has checked in near you, which is awesome, but there's a little bit of functionality missing that I would like: if I've already checked into a location, I'd like to be notified when another person checks in near me.

The process flow will be as follows:
1) A user checks in at a location, like Sydney, Australia
2) An Azure Function is called to store the check-in data inside Cosmos DB
3) The change appears on the collection's change feed
4) The Azure Function that is subscribed to that change feed is called with the changed documents
5) That function then notifies the other users via SignalR.

Setting up the SignalR Service

We first have to set up the SignalR Service and integrate it with the Functions and the mobile app.

Go into Azure and create a new SignalR service.


Then take the keys from the SignalR Service and add them to the Azure Function's configuration.


Azure Functions makes it easy to get the SignalR connection details into the client: we create a simple function that returns them.
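The SignalRConnectionInfo input binding does the heavy lifting here, generating the service URL and access token for us. A sketch (the hub name "checkins" is my assumption):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class NegotiateFunction
{
    // Clients call this endpoint to get the SignalR Service URL and token.
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalRConnectionInfo(HubName = "checkins")] SignalRConnectionInfo connectionInfo)
        => connectionInfo;
}
```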

Then we create the SignalR client in the Xamarin application.
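On the client side, the SignalR client library can talk to the negotiate endpoint for us. A minimal sketch, assuming the Microsoft.AspNetCore.SignalR.Client package and a hypothetical Function app URL and message name:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

public class CheckinClient
{
    HubConnection connection;

    public async Task ConnectAsync()
    {
        // The client library calls {url}/negotiate itself to fetch the
        // connection details our Azure Function returns.
        connection = new HubConnectionBuilder()
            .WithUrl("https://myfunctionapp.azurewebsites.net/api")
            .Build();

        connection.On<string>("checkinHappened", message =>
        {
            // Update the UI / raise a local notification here.
            Console.WriteLine($"New check-in nearby: {message}");
        });

        await connection.StartAsync();
    }
}
```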

Setting up the Change Feed

Now that we have SignalR set up, we can set up the change feed. The only thing we need to create for the change feed is the leases collection.

Now that we've set up the leases collection, building the event-driven function is very simple: we only need to use the CosmosDBTrigger with the correct configuration and then add the SignalR binding.

In the function below we can see that it uses the CosmosDBTrigger, so it gets executed whenever a location is added to the Location collection. We also have the SignalR binding, which means we can easily send messages to the Xamarin clients we set up.
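Putting the two bindings together looks roughly like this (a sketch; the database, collection, hub and target names are my placeholders):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class NotifyCheckinFunction
{
    [FunctionName("NotifyCheckin")]
    public static async Task Run(
        [CosmosDBTrigger("MyDatabase", "Locations",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases")] IReadOnlyList<Document> documents,
        [SignalR(HubName = "checkins")] IAsyncCollector<SignalRMessage> messages)
    {
        // Fan each changed document out to every connected client.
        foreach (var doc in documents)
        {
            await messages.AddAsync(new SignalRMessage
            {
                Target = "checkinHappened",
                Arguments = new object[] { doc.ToString() }
            });
        }
    }
}
```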

That's it: now we have a scalable, real-time, event-driven system.

Introduction to Augmented Reality with ARKit

In this post we're going to dive into ARKit: we'll find out what it is and get started building our first augmented reality experience.

ARKit is Apple’s toolkit for building augmented reality experiences on iOS devices. It was initially released at WWDC 2017, then ARKit 2.0 was released at WWDC 2018.

Before we jump into any code, it's important to understand what ARKit is and why we need it. Augmented reality on mobile devices is hard: it involves heavy calculations, triangulations and mathematics, and it's very hard to do AR without killing the user's battery or tanking the frame rate. ARKit takes care of all the hard parts for you, letting you work with a clean and simple API.

The Basics of ARKit

In order to augment our reality, we first need to be able to track reality: that is, map the world so we know what it looks like in digital form. Devices like HoloLens have special sensors specifically designed for AR tracking. Mobile devices don't have anything specifically designed for world tracking, but they do have enough sensors that, combined with great software, can track the world.


ARKit takes advantage of the sensors (camera, gyroscope, accelerometer, motion) already available on the device. As you can see in the diagram below, ARKit is only responsible for the processing, which essentially means sensor reading and advanced mathematical calculations. The rendering can be handled by any 2D/3D rendering engine, including SceneKit as you see below, but the majority of apps will use a 3D engine like Unreal or Unity.

arkit diagram


Understanding the World

The primary function of ARKit is to take in sensor data, process it, and build a 3D world. To do this ARKit uses a truckload of mathematical calculations; we can simplify and name some of the methods it uses.

The diagram below shows inertial odometry, which takes in motion data for processing. This input data is processed at a high frame rate.


The diagram below shows visual odometry, which takes in video data from the camera for processing. The visual data is processed at a lower frame rate, because processing it is CPU intensive.


ARKit then combines the two odometries into what's called visual-inertial odometry: the motion data is processed at a high frame rate, the visual data at a lower frame rate, and the differences between the two are normalised. You can see visual-inertial odometry in the diagram below.



Triangulation enables world mapping

In very simple terms, triangulation is what allows ARKit to create a model of the world, in a similar way to how humans perceive depth: as the phone is moved around, ARKit calculates the differences between viewpoints, essentially allowing it to see in 3D. A digital map of the world is created.


As you can see below a world map is created within ARKit.

Augmenting Reality (with Anchor Points)

As the world is mapped, ARKit creates and updates anchor points, and these anchor points allow us to add items relative to them. As you can see in the diagram below, ARKit has added anchor points and we've placed an object (a 3D vase) near one. As the device moves around, these anchor points are updated, so it's important that we track these changes and update our augmentations of the world.


As I mentioned before, ARKit only does the processing and provides the data; it's up to us to render objects in the 3D world. Below you can see how the captured video is combined with an overlaid 3D rendering. As we move, both the video capture and the 3D rendering are updated.


Tracking Options and Features

ARKit has a few different tracking options and features that I will go over below.

Orientation Tracking

This is the most basic type of tracking available in ARKit: it tracks your orientation within the world but not your position in physical space. In essence, it's as if you're standing still and can look around 360 degrees.

orientation tracking

World Tracking

This option in ARKit is the most common: ARKit tracks and builds a complete world map and allows you to move freely within the world. It's important to note that world tracking includes the majority of ARKit's features, including plane detection, maps, image tracking and object detection.


Plane Detection

As we move around the 3D world, we need to know about the different surfaces in it; this is where plane detection comes in. The first release of ARKit detected only horizontal planes; in ARKit 2 we have the option of both vertical and horizontal. In the image below you can see the floor being detected as a plane.


Saving & Loading Maps 

In ARKit 2.0 we can now save, load and share the world map. In ARKit 1.0 the map was internal and only ever kept around for a single user's session, which meant sessions (maps) could not be saved to be resumed later, or shared. Saving and sharing maps enables new scenarios, including multiplayer games and persistent world maps.

Below is a video of a multiplayer game that leverages ARKit 2.0's map-sharing feature.


Image Tracking

Image tracking allows your app to easily detect an image in the real world, for example a photo, a business card or a DVD case. Once the image is detected, you can easily augment the reality around it. Normally a task like this would be really difficult, but again ARKit makes it easy: the only steps we need to take are to supply a reference image (the image we want to track) along with its physical size, turn the feature on with a single option, and add the reference images into ARKit.

Below I've included a reference video that leverages image tracking. While it looks very impressive, the application below could be implemented with ARKit 2.0 without a huge amount of effort, around 80-100 lines of code.

Object Detection

Object detection (an ARKit 2.0 feature) allows us to both scan and detect 3D objects. I think the best way to understand it is a simple video.

Building your first ARKit Experience

To give you a feel for how easy it is to build an ARKit experience, I'm going to take you through the simple application you can see in the video below. As you move the phone around, a plane is detected (which ARKit does for you) and we place a node on that surface; then, if a user taps the node, we add a box on top of it where they tapped.

Let’s jump into some code to see how easy it is to get started with ARKit.

The first thing we do in this app is create a SceneView and add it as a subview of the visible ViewController, as we see below.
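In Xamarin.iOS that looks roughly like this (a sketch; the class name is mine, and the full code is in the GitHub repo linked at the end):

```csharp
using ARKit;
using UIKit;

public class ARViewController : UIViewController
{
    protected ARSCNView sceneView;

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        // Create the AR scene view and fill the controller's view with it.
        sceneView = new ARSCNView
        {
            Frame = View.Frame,
            AutoenablesDefaultLighting = true
        };
        View.AddSubview(sceneView);
    }
}
```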


The next step is to call the Run method on the view's session with the world-tracking configuration, as we see below.
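A sketch of that step, inside the same ViewController (the sceneView field is assumed from the previous snippet):

```csharp
public override void ViewWillAppear(bool animated)
{
    base.ViewWillAppear(animated);

    // World tracking with horizontal plane detection turned on.
    var configuration = new ARWorldTrackingConfiguration
    {
        PlaneDetection = ARPlaneDetection.Horizontal
    };
    sceneView.Session.Run(configuration, ARSessionRunOptions.ResetTracking);
}
```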


As we move the phone around and surfaces are detected, the DidAddNode method is called by ARKit. As you can see below, if the anchor is an ARPlaneAnchor we add our PlaneNode, which is the blue surface you see in the video.
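A sketch of the delegate (the colour and sizing choices are mine; the real sample may differ):

```csharp
using System;
using ARKit;
using SceneKit;
using UIKit;

// Adds a semi-transparent plane whenever ARKit reports a newly detected surface.
public class PlaneDetectionDelegate : ARSCNViewDelegate
{
    public override void DidAddNode(ISCNSceneRenderer renderer, SCNNode node, ARAnchor anchor)
    {
        if (anchor is ARPlaneAnchor planeAnchor)
        {
            // Size the plane from the anchor's extent and lay it flat.
            var plane = SCNPlane.Create(planeAnchor.Extent.X, planeAnchor.Extent.Z);
            plane.FirstMaterial.Diffuse.Contents = UIColor.Blue.ColorWithAlpha(0.4f);

            var planeNode = SCNNode.FromGeometry(plane);
            planeNode.Position = new SCNVector3(planeAnchor.Center.X, 0, planeAnchor.Center.Z);
            planeNode.EulerAngles = new SCNVector3((float)(-Math.PI / 2), 0, 0);

            node.AddChildNode(planeNode);
        }
    }
}
```

The delegate is attached with `sceneView.Delegate = new PlaneDetectionDelegate();`.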


Then, if the user touches the PlaneNode, we add a cube on top of where they touched.
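A sketch of the touch handling, again inside the ViewController (the box size is an arbitrary choice of mine):

```csharp
using Foundation;
using SceneKit;
using UIKit;

// Hit-test the touch against the scene and drop a small box where it lands.
public override void TouchesBegan(NSSet touches, UIEvent evt)
{
    base.TouchesBegan(touches, evt);

    if (touches.AnyObject is UITouch touch)
    {
        var point = touch.LocationInView(sceneView);
        var hits = sceneView.HitTest(point, (SCNHitTestOptions)null);

        if (hits?.Length > 0)
        {
            var hit = hits[0];
            var box = SCNNode.FromGeometry(SCNBox.Create(0.05f, 0.05f, 0.05f, 0));
            box.Position = new SCNVector3(
                hit.WorldCoordinates.X,
                hit.WorldCoordinates.Y + 0.025f, // sit on top of the plane
                hit.WorldCoordinates.Z);
            sceneView.Scene.RootNode.AddChildNode(box);
        }
    }
}
```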


That's it; that's all we need to do for our first AR experience. You can see the full code file below or get all the code from https://github.com/rid00z/ARKitExample




XAM’s Favourite (New) C# Features

At XAM Consulting (Xamarin developers), our developers love to stay on top of current trends in all aspects of software and mobile development. Last week we had a Slack conversation geeking out about our favourite new C# features, and I thought it would be nice to share some of the features we love.

The majority of the features are from C# 7, and you can use them right now, in Xamarin and .NET.


My favourite C# feature is default implementations for interfaces (a C# 8 feature). It's my favourite because it solves the problem of developers duplicating code when implementing an interface, while still giving you the power of multiple inheritance.

Jesse & Matthew Robbins

Oh yeah! My favourite C# 7 feature is out variables ('out vars').

I love it because it saves an unnecessary variable declaration; instead you can just declare it inline.

I also love the if (type is SomeOtherType someOtherType) syntax for checking and casting types in one step.
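Both features in a quick sketch (the method names are mine, just for illustration):

```csharp
using System;

public static class CSharp7Demos
{
    // Out variable: 'value' is declared right at the call site.
    public static int ParseOrZero(string text)
        => int.TryParse(text, out var value) ? value : 0;

    // Type pattern: test and cast in a single expression.
    public static int LengthIfString(object o)
        => o is string s ? s.Length : -1;
}
```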

Matthew B

The nameof operator, which we now use extensively. It's subtle but powerful, because it helps rid your codebase of those string literals. Yay.
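The classic Xamarin use case is property-change notification; nameof replaces the magic string "Count", so renaming the property can no longer silently break the notification (the view model here is just an illustration):

```csharp
using System.ComponentModel;

public class CounterViewModel : INotifyPropertyChanged
{
    int count;
    public event PropertyChangedEventHandler PropertyChanged;

    public int Count
    {
        get => count;
        set
        {
            count = value;
            // nameof(Count) is checked by the compiler, unlike "Count".
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Count)));
        }
    }
}
```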


Tuples and Deconstructing
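For example, a C# 7 value tuple lets a method return two named values without out parameters or a helper class (the method is a made-up example):

```csharp
public static class TupleDemo
{
    // Tuple return with named elements.
    public static (int Min, int Max) MinMax(int[] values)
    {
        int min = values[0], max = values[0];
        foreach (var v in values)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return (min, max);
    }
}
```

At the call site you can deconstruct it straight into locals: `var (min, max) = TupleDemo.MinMax(new[] { 3, 1, 4 });`.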


The inline getter arrow (“expression-bodied members”).
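Expression-bodied members turn one-line getters, constructors and methods into a single arrow (the Circle class is just an illustration):

```csharp
using System;

public class Circle
{
    public double Radius { get; }

    public Circle(double radius) => Radius = radius;   // expression-bodied constructor (C# 7)

    public double Area => Math.PI * Radius * Radius;   // expression-bodied getter (C# 6)

    public override string ToString() => $"Circle r={Radius}";
}
```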

Myself (Michael)

Personally I love the new ability to assign variables using the is statement with pattern matching.
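The same patterns also work in C# 7 switch statements, with when guards (a made-up example):

```csharp
public static class PatternDemo
{
    public static string Describe(object o)
    {
        switch (o)
        {
            case int n when n > 0: return $"positive int {n}";
            case int n:            return $"int {n}";
            case string s:         return $"string of length {s.Length}";
            case null:             return "null";
            default:               return o.GetType().Name;
        }
    }
}
```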

Over the years I've met many developers who don't keep up with new language releases, which is a little unfortunate for them, because these new features help make code cleaner, more concise and more readable.

This content originally appeared on the XAM Consulting – Blog.

Beautiful Xamarin – Facebook Clone in Xamarin.Forms

I've actually had this project around since the start of the year, but I never blogged about it. I wrote this code for my speaking session at Microsoft Ignite in Australia.

Warning – This code is a playground; there's lots of play code and it's not reflective of a production application. The project is only for learning about Xamarin.Forms features.

In this blog post I'll go over some of the features I want to demo with this app, with some screenshots. Over the coming weeks I plan to demonstrate that anyone who says you cannot build fast and beautiful apps in Xamarin.Forms is simply wrong, lying or ignorant. The deal is that you just need to know how Xamarin.Forms works.

Post-Layout Animations/Translations – Right Slide Bar

One of the most powerful features of Xamarin.Forms is post-layout translations.

People are so surprised when I tell them this is done without any custom renderers. Let's take a look at how.

In this case I make use of the Grid in Xamarin.Forms; in actual fact the sidebar is layered over the ListView, which is in the background. The Grid only has a single column.

So the friend list normally sits on top of the ListView, but what we actually do is translate its X coordinate so it sits off the screen, just past the page's allocated width.

Then comes the awesomely simple part: when a user taps the menu button, we just translate the views.
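A sketch of what that handler might look like (FriendsList, the widths and the durations are hypothetical; the real project is linked at the end of the post):

```csharp
// Slides the off-screen sidebar in over the list, or back out again.
async void OnMenuTapped(object sender, EventArgs e)
{
    if (sidebarVisible)
        await FriendsList.TranslateTo(Width, 0, 250, Easing.SinInOut);        // slide out
    else
        await FriendsList.TranslateTo(Width - 250, 0, 250, Easing.SinInOut);  // slide in

    sidebarVisible = !sidebarVisible;
}
```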

Disappearing NavigationBar

This one was a little harder and it’s still not working perfectly.


In this case our Grid has two rows: 80 at the top for the header and * for the rest.

In the default state the header just sits in the first row.

The ListView actually uses Grid.RowSpan="2" so it spans both rows; I had to do this because the height of the ListView needs to be the full height of the screen once the header disappears. I also make use of the ability to set post-layout values in XAML, setting the TranslationY to 60 on the ListView (this moves it down into the correct position by default).

So when we want to hide the navigation bar, we hook into the Scrolled event on the ListView and translate the Y values of the views depending on the scroll position. This part is really just for a demo; it's not fully thought out and is very much a prototype.
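A rough sketch of the idea, assuming a Xamarin.Forms version that exposes ListView.Scrolled (NewsList and Header are hypothetical names):

```csharp
// As the user scrolls, push the header off the top and let the
// list slide up from its default TranslationY of 60 to take the full height.
NewsList.Scrolled += (sender, e) =>
{
    var offset = Math.Min(e.ScrollY, 60);
    Header.TranslationY = -offset;
    NewsList.TranslationY = 60 - offset;
};
```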

Like Animations

One other thing I like about this app is the animated Like button.


In this case I cheated a little bit and used an animated GIF, which worked OK. I also used a few other features.

As with all the other cases, I leveraged the Grid. You'll also notice that I used a few extra controls: AdvancedFrame from FreshEssentials, to allow for the rounded edges, and a GifImageViewControl.

As with all the other samples, we simply leverage some translations, and in this case we also leveraged the FadeTo animation.

If you want to check out the code base please find it here: https://github.com/XAM-Consulting/FacebookForms



Why developers should care about CX!

Normally I like to write about deep technical topics, but CX (Customer Experience) is another of my passions. I'm passionate about CX because I've had so many bad experiences with big brands, and the industry's growing focus on customer experience is a welcome change.

I recently wrote an article over on LinkedIn about why you should care about CX.

I think as developers we need to be across these trends; they're often what drives our work, and it's nice to know we can help people have a better experience.

Please take a look at the post and let me know what you think: https://www.linkedin.com/pulse/why-you-should-care-cx-michael-ridland



The Definition of Done (DoD) for Xamarin Developers

A DoD, or Definition of Done, is a software development term common in many agile teams, with origins in Scrum. The basic idea is that a team has a shared understanding of what defines a task as done; what a project manager considers 'done' can be different to what a developer or a customer does. At XAM Consulting we deliver high-quality solutions for our customers, and therefore a solid DoD is essential. It's important all developers understand that just because they've made a code change, that does not mean it's done.

We’ve recently revised our Definition of Done and thought we would share it because it’s a good marker on what it means to be ‘Done’ as a Xamarin developer.

Note: this is only a checklist for what's done as a developer; it's not the complete software lifecycle. For this to be successful we also need to follow modern software engineering practices, including gitflow, CI and TestCloud testing.

So here I present… the Definition of Done (DoD) for Xamarin developers. (This DoD could also apply to mobile developers or cross-platform developers generally.)

Code respects .NET Coding Guidelines
We follow .NET coding guidelines outlined here – https://github.com/dennisdoomen/CSharpGuidelines

Code respects XAM Consulting’s Code Quality and Architecture Guidelines
We’ve not published this in the public domain (yet) but in summary we follow industry best practices for code quality. Our code bases are designed to be maintainable in the long term. Here’s a short summary:

  • Code is Loosely Coupled
  • Code has High Cohesion
  • Code makes use of OOP (avoiding pitfalls with inheritance, prefer composition over inheritance)
  • Interface Driven Development – programming to small interfaces
  • Classes are small
  • Methods are small
  • Code follows SOLID Principles
  • Use and understand design patterns
  • Make use of Reactive patterns

Tested on all target platforms
As our solutions are cross platform it’s essential that all code modifications are tested on all target platforms. If you test on a single platform it’s not done.

Tested on variety of screen sizes
It’s easy to code a UI for a single screen size but much harder to have it work on multiple sizes, especially smaller devices. All code modifications must be tested on different screen sizes.

Tested on a variety of physical devices
Real devices behave differently to simulators in many situations. All code modifications must be tested on a variety of devices; internally at XAM Consulting we have a good variety of physical devices for testing before we go into TestCloud.

All possible side effects tested in system
If you make a modification in a complex system, it's essential that you test for any possible side effects.

Tested in Release Mode
Applications can behave very differently in Debug mode than in Release. We need to make sure we're testing the application in Release mode.

Application must handle intermittent connections
Mobile devices have transient connections; there needs to be a strategy for handling these types of connections.

Unit Tests
Unit tests are an essential component of a high-quality codebase. If it doesn't have unit tests, it's not done.

UITests developed covering features
Xamarin TestCloud is a great tool; it's saved us from production issues a number of times. We now have UITests and CI (with TestCloud) on all of our projects.

Peer reviewed code
This is a no-brainer: peer-reviewed code is essential. We made this part of our lifecycle using gitflow.

Peer reviewed for end-user acceptance
It's important that not only the code is reviewed; we also need to ensure the problem is well understood. Two or more people should discuss and understand the issue, and once it's understood, the developers must test that the solution meets the user acceptance criteria.

Issues must be reproduced
If you don't reproduce an issue, it cannot be confirmed as solved. The reproduced issue must be exactly the same as the one reported; otherwise there might be two issues.


Essentially, the goal is to have no issues when shipping into a UAT environment. If you ship an issue into UAT, you need to review why that issue went into UAT.

At XAM Consulting it’s our commitment to ship world class apps and I hope that you can also make the effort.


Xamarin Plugins / .NET Standard with Martijn van Dijk and Michael Ridland

In this video I chat with Martijn van Dijk about Xamarin plugins and a little bit of .NET Standard. It was filmed at Xamarin Dev Days in Singapore in 2016; sorry it's taken a while to upload.

Martijn is doing some great work in the Xamarin community, and you can follow him on Twitter and GitHub.

Links from the interview:

Plugin For Xamarin Templates


Hacking the Xamarin.Forms Layout System for Fun and Profit

If you're a Xamarin.Forms developer, it's likely you already know about the two great talks Jason Smith gave at Evolve this year (2016). Both talks covered performance in Xamarin.Forms, and they were outstanding because we got a truckload of performance tips that we'd never had previously. If you've watched them once, we'd recommend watching them again; there's a lot of content and you'll learn something new. This insight into Xamarin.Forms performance and the layout system has been a key part of us building performant apps in Xamarin.Forms at XAM Consulting.

But we wanted more: more speed, more performance, native performance. The tips were great, but we wanted to understand why they existed, because if we understand the system we can use that understanding to develop our own tips and techniques for Xamarin.Forms performance.

One of the points Jason makes is about the Xamarin.Forms Grid: the suggestion is to avoid using the 'Auto' setting for columns and rows. Why is this? It turns out that if a view inside an 'Auto' column changes size (or generally does anything to invalidate its measure), the Grid will do a full layout of all its children; and not only the Grid, but the Grid's parent and the parent after that.

If you watch both talks, it's clear the root of performance in Xamarin.Forms is the layout system, so let's look into it. Now that Xamarin.Forms is open source, we can use the code to understand the layout system.

The layout system is broken up into two parts/cycles: 1) the invalidation cycle and 2) the layout cycle.


Invalidation Cycle

To understand the invalidation cycle let’s take a look at the Xamarin.Forms layout code.

As we can see in the code below, each VisualElement has a MeasureInvalidated event; VisualElement is the base class of layouts, views and pages.


Then, each time a child is added to a Page or Layout, the parent subscribes to the MeasureInvalidated event of the child.


As we can see below, the OnChildMeasureInvalidated event handler (for the children) then raises the parent's own MeasureInvalidated event; because the parent of that view is in turn subscribed to this MeasureInvalidated event, its OnChildMeasureInvalidated method is called, and so on. It's important to note there's conditional logic involved: the event will not always be raised.


So here’s an example of what it looks like, with the events.


Here's what happens when a child view becomes invalidated; as you can see, the events bubble to the top.
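The bubbling can be modelled in a few lines of C#. This is a simplified model, not the actual Forms source, and it leaves out the conditional logic that can stop the bubbling:

```csharp
using System;

// Each element exposes MeasureInvalidated, and a parent subscribes to every
// child it contains, so an invalidation bubbles all the way to the root.
public class Element
{
    public event EventHandler MeasureInvalidated;

    public void InvalidateMeasure()
        => MeasureInvalidated?.Invoke(this, EventArgs.Empty);

    public void AddChild(Element child)
        // In the real code this re-raise is conditional; here we always bubble.
        => child.MeasureInvalidated += (s, e) => InvalidateMeasure();
}
```

With a root, a layout and a child wired together, invalidating the child raises the root's event once.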


That’s the invalidation cycle.

Layout Cycle

Let's take a look at the layout cycle. The layout cycle happens in two cases:
1) When a layout is done for the first time, e.g. when a page is first displayed
2) After the invalidation cycle

As you can see in the image below, during the first part of the layout cycle Measure is called on the children. It's important to note that for the most part Measure is called on all children, and in many cases it's called multiple times. After the children have been measured, Layout is called on all of them.


So here’s how the full layout looks.



You might be thinking: why do I care about this? Because we can hack it. It's possible to short-circuit the invalidation cycle at an early stage, as per the image below.


Let's take a look at how we can test this. I've created a page with a few labels and a single label that's updated on a timer 20 times. I've also linked my Xamarin.Forms app against the Forms code base and added performance metrics to the methods inside Forms.


Let's first take a look at the results for a StackLayout, as per below.

The results show that LayoutChildIntoBoundingRegion is called 243 times: every time the text of the label is changed, a full layout cycle happens.


Let’s try the same with a Grid, as per below.


The results show that LayoutChildIntoBoundingRegion is called 6 times; it's only called the first time the view is shown.


Let's try the same Grid but using 'Auto' for a row.


The result is back up to 243 times, and the full layout is happening every time the label text is changed.


OK, now let's put the Grid back to using stars, but I also want to try something different: I want the children of the Grid to change size dynamically, so let's test with a BoxView that changes width.


NB – In order to change the width of the box, I need to change the HorizontalOptions on the BoxView from Fill to Start.


Even though we've changed the height back to a star, the result is still 243 and the full layout happens every time the label text is changed. It seems that when we change the HorizontalOptions of a child, we also change the invalidation cycle behaviour.


OK, so I changed the BoxView back to Fill, meaning the BoxView will not change size dynamically. Let's try something different: let's put the BoxView inside a ContentView.


It's back down again. Great: we still get our children changing dynamically and we also short-circuit the layout system.
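In code, the wrapping trick looks something like this (a sketch; the sizes and colour are arbitrary):

```csharp
using Xamarin.Forms;

// Wrap the dynamically-sizing BoxView in a ContentView. The ContentView
// defaults to Fill inside a star-sized row, so it's fully constrained and
// the invalidation stops there instead of bubbling up through the Grid.
var box = new BoxView
{
    HorizontalOptions = LayoutOptions.Start, // allows the width to change
    WidthRequest = 100,
    Color = Color.Red
};

var grid = new Grid();
grid.RowDefinitions.Add(new RowDefinition { Height = new GridLength(1, GridUnitType.Star) });
grid.Children.Add(new ContentView { Content = box }, 0, 0);
```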



To summarise the findings:
  • A child of a StackLayout will always cause a full layout cycle; there's no way a StackLayout can short-circuit this cycle.
  • A child of a Grid that's inside rows and columns with static or star sizes, AND whose LayoutOptions are set to Fill, is fully constrained, which means the invalidation cycle will be stopped at that child view.
  • In order to have children of a Grid change layout dynamically, they need to be a child of another view which is fully constrained (as per the Grid sample).

After taking a look at the Xamarin.Forms code base, I can see the line that does the short circuit: a view's Constraint needs to have the value LayoutConstraint.Fixed for the short circuit to occur. I did further research and found the only cases where we can short-circuit the invalidation cycle are those proven above.

UPDATE – Originally this blog post said the above cases are the only way to short-circuit the invalidation cycle. That's not entirely true, as an AbsoluteLayout can also be used to short-circuit the invalidation cycle. I will put together some more research on this for another post.



Talk with Rui Marinho XLabs Founder and Xamarin.Forms Developer

Last weekend I was lucky enough to attend and speak at Xamarin Dev Days in Singapore. I think it was the biggest Dev Days so far, with 400 registrations and 250 attendees. It was even more awesome for me as I got to catch up with some friends in the Xamarin world, including Rui Marinho.

In this video I talk with Rui Marinho, the founder of XLabs and now a developer on the Xamarin.Forms team. We delve into some important topics: how he got into Xamarin, the announcement that XLabs is dead, open source, and a little about the future of Xamarin.Forms.


Be more awesome with MFractor for Xamarin Studio

In this video I interview Matthew Robbins, the creator of MFractor, a Xamarin Studio plugin. If you're not using MFractor you're really missing out; it's got some beautiful features which save you truckloads of time when building apps with Xamarin Studio.

If you’ve ever wanted a ReSharper for Xamarin Studio this is it. It’s got static analysis, code generation and advanced navigation features. It’s here right now and will only get more awesome into the future.

You can find out more about MFractor here: www.mfractor.com and find Matthew on twitter.com/matthewrdev