Author Archives: Michael

Serverless Kubernetes with Azure Container Apps

What is Azure Container Apps?

Please note: as of writing, Azure Container Apps is a preview feature. It is not ready for production applications.

Do you love containers but don’t have the time to manage your own Kubernetes platform? Azure Container Apps is for you.

We all love containers: they’re small, portable, and run anywhere. Azure Container Apps is great if you just want to run some containers, but that’s just the beginning. With Container Apps, you enjoy the benefits of running containers while leaving behind the concerns of manual cloud infrastructure configuration and complex container orchestrators.

Azure Container Apps is serverless, so you just need to worry about your app code. You can use Azure Container Apps for public-facing endpoints, background processing apps, event-driven processing, and running microservices.

Azure Container Apps is built on top of Kubernetes, which means some of the open-source foundations are available to you, like Dapr and KEDA. Something to note is that you do not have access to the Kubernetes API in Azure Container Apps. As we love to do in modern architectures, we can run different technologies, like Node.js and .NET, side by side, and support HTTP, microservices communication, event processing, and background tasks. With Container Apps, many of the requirements of modern apps are built in: robust auto-scaling, ingress, versioning, configuration, and observability. With Azure Container Apps you can use all the good parts of Kubernetes without the hard parts.

Where does this fit in Azure?

In Azure, we already have AKS, ACI, Web Apps for containers, and Service Fabric. Why do we need something new? In a very short summary:
-Azure Kubernetes Service (AKS) is a fully managed Kubernetes option in Azure. The full cluster is inside your subscription, which means you also have the power, control, and responsibility; e.g. you still need to maintain the cluster with upgrades and monitoring.
-Azure Container Instances (ACI) lets you run containers without running servers. It’s a lower-level building block, so concepts like scale, load balancing, and certificates are not provided with ACI containers.
-Web Apps for Containers (Azure App Service) is fully managed hosting for web applications run from code or containers. It is optimised for running web applications, not for the same scenarios as Container Apps, e.g. microservices.

So Azure Container Apps is designed for when you want to run multiple containers with Kubernetes-like functionality, without the hassle of managing Kubernetes.

You can find more details on the differences here.

As you can see, Azure Container Apps looks like a compelling offering for many Azure microservices scenarios. At XAM, in our Azure Consulting practice, we are extremely excited about the future of Azure Container Apps.

Getting Started

The great thing about Azure Container Apps is that it’s really easy to get started. In this post we are going to assume you have an Azure subscription.

If you haven’t already you need to install the Azure cli.

brew update && brew install azure-cli

Once you’ve completed the installation then you can login to Azure

az login

Then you can easily install the Azure Container Apps CLI extension:

az extension add --name containerapp

In order to run a container app, we need both a Log Analytics workspace and a Container Apps environment. So first let’s set up the Log Analytics workspace.

az monitor log-analytics workspace create \
  --resource-group InternalProjects \
  --workspace-name demoproject-logs

Then we can get the client ID and shared key of the workspace.

LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show --query customerId -g InternalProjects -n demoproject-logs -o tsv | tr -d '[:space:]'`
LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=`az monitor log-analytics workspace get-shared-keys --query primarySharedKey -g InternalProjects -n demoproject-logs -o tsv | tr -d '[:space:]'`
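Before creating the environment it’s worth checking that both lookups actually returned something; a minimal guard might look like the sketch below (the values shown are placeholders for illustration; in practice they are populated by the az commands above).

```shell
# Placeholder values for illustration; in practice these come from the
# az monitor log-analytics commands shown above.
LOG_ANALYTICS_WORKSPACE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET="fake-shared-key"

# If either lookup failed (e.g. wrong resource group or workspace name),
# the variable will be empty and the env create command would fail cryptically.
if [ -z "$LOG_ANALYTICS_WORKSPACE_CLIENT_ID" ] || [ -z "$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET" ]; then
  echo "workspace lookup failed" >&2
  exit 1
fi
echo "workspace credentials captured"
```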

Now that we have a Log Analytics workspace, we can create our Azure Container Apps environment. An environment is a secure boundary around a group of container apps.

az containerapp env create \
  --name containerappsdemo \
  --resource-group InternalProjects \
  --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --location canadacentral

Now that we have our Container Apps environment, we can easily run a container. Rather than building our own image, we can use something already built; in this case I’m going to use the Docker image of nopCommerce (a .NET eCommerce platform). You can see the image here:

az containerapp create \
  --name nop-app \
  --resource-group InternalProjects \
  --environment containerappsdemo \
  --image <nopcommerce-image> \
  --target-port 80 \
  --ingress 'external' \
  --query configuration.ingress.fqdn

After we run our command, we get the public location of the running container app and can easily browse to it and see it running.
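The --query configuration.ingress.fqdn part of the command is what prints that public location. As a sketch (the FQDN below is a made-up example; yours is whatever the create command returned), the URL you browse to is just the FQDN over HTTPS, since Container Apps terminates TLS for you:

```shell
# A hypothetical FQDN; the real one is returned by `az containerapp create`.
FQDN="nop-app.example-env.canadacentral.azurecontainerapps.io"

# External ingress is served over HTTPS on the environment's domain.
URL="https://$FQDN"
echo "$URL"

# Once deployed, a quick smoke test (requires network access):
# curl -sI "$URL" | head -n 1
```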

Running multiple containers and multiple technologies

Now that we have our environment, we can easily run more containers. For fun, let’s use multiple technologies: why not run a Node.js application next to our .NET app? In this case I’ve also used another pre-built container, a hello-world Node.js app.

az containerapp create \
  --name nodeapp \
  --resource-group InternalProjects \
  --environment containerappsdemo \
  --image <nodejs-hello-world-image> \
  --target-port 80 \
  --ingress 'external' \
  --query configuration.ingress.fqdn

After we run our command, we are returned the public URL of the Container App.

If we then open that URL in our browser, we can see our running Node app, served from a secure endpoint.


Azure Container Apps allows you to build Kubernetes-style applications without the hassle of managing Kubernetes. This technology has huge potential to help companies of all sizes deliver more value to their customers faster and more efficiently, by cutting down on the management and configuration required for these types of applications.

I’m really excited to see Azure Container Apps evolve and I would love to one day use this in a production scenario.

.NET MAUI – Exploring Overlays – Part 2

This is post #4 in a series called ‘.NET MAUI Source of Truth’.

About Source Of Truth – As any developer knows, source code is the purest form of truth in working software. So I’ve decided the best way to get deep into .NET MAUI is to look at the source code.

In my last posts, we explored the .NET MAUI codebase to learn about the new Window functionality, managed to get a demo of it working in Preview 11, and then dived into overlays in .NET MAUI.

After writing my previous post on overlays, I found an interesting piece of code in the .NET MAUI codebase: an advanced implementation of a WindowOverlay called VisualDiagnosticsOverlay. I also managed to find a good sample project called DrasticOverlay.

This sample project is a good example of what can be done with overlays. It has examples of overlays, background overlays, hit-detection overlays, videos, menus, loading, drag and drop, and more.

Looking through this codebase we can see the different overlays that have been created, see below.

In this case, let’s dig into the drag and drop overlay to see how these awesome overlays are working. The basic implementation of an overlay is a partial class that inherits from WindowOverlay. The partial class combined with multi-targeting allows for common code and native implementations to nicely sit side by side.

Below, I’ve included the DragAndDropOverlay code. In this, we can see our partial class and inheritance from WindowOverlay. Overall it’s a fairly simple class that has a drop element, which is a IWindowOverlayElement. In this case the element is using the drawing capabilities available in .NET MAUI. Then a simple event handler and IsDragging property.

public partial class DragAndDropOverlay : WindowOverlay
{
    DropElementOverlay dropElement;
    bool dragAndDropOverlayNativeElementsInitialized;

    internal bool IsDragging
    {
        get => dropElement.IsDragging;
        set => dropElement.IsDragging = value;
    }

    public DragAndDropOverlay(IWindow window)
        : base(window)
    {
        this.dropElement = new DropElementOverlay();
        this.AddWindowElement(dropElement);
    }

    public event EventHandler<DragAndDropOverlayTappedEventArgs>? Drop;

    class DropElementOverlay : IWindowOverlayElement
    {
        public bool IsDragging { get; set; }

        // We are not going to use Contains for this.
        // We're gonna set if it's invoked externally.
        public bool Contains(Point point) => false;

        public void Draw(ICanvas canvas, RectangleF dirtyRect)
        {
            if (!this.IsDragging)
                return;

            // We're going to fill the screen with a transparent
            // color to show the drag and drop is happening.
            canvas.FillColor = Color.FromRgba(225, 0, 0, 100);
            canvas.FillRectangle(dirtyRect);
        }
    }
}

As we can see in the previous image, there are native implementations on Windows, iOS, and Android. Let’s now take a look at the iOS partial class, named DragAndDropOverlay.iOS.cs. You can see the full file over here:

Below is the Initialize() method of the iOS implementation. A native view called DragAndDropView is created and added as a subview of the native window’s root view.

// We're going to create a new view.
// This will handle the "drop" events, and nothing else.
dragAndDropView = new DragAndDropView(this, nativeWindow.RootViewController.View.Frame);
dragAndDropView.UserInteractionEnabled = true;

Below we can see part of the DragAndDropView, implemented in the iOS code. It makes use of native APIs, e.g. UIView and IUIDropInteractionDelegate.

class DragAndDropView : UIView, IUIDropInteractionDelegate
{
    DragAndDropOverlay overlay;

    public DragAndDropView(DragAndDropOverlay overlay, CGRect frame)
        : base(frame)
    {
        this.overlay = overlay;
        this.AddInteraction(new UIDropInteraction(this));
    }

    public bool CanHandleSession(UIDropInteraction interaction, IUIDropSession session)
    {
        Console.WriteLine($"CanHandleSession ({interaction}, {session})");

        return session.CanLoadObjects(new Class(typeof(UIImage)));
    }
}

Looking at the majority of the implementations, they all follow a similar pattern to the DragAndDropOverlay. As mentioned previously, you can implement overlays on multiple levels: you can use the cross-platform drawing methods available in .NET MAUI, and you can also write native implementations for each platform.

If you are interested in overlays, I would recommend taking a look at the cool DrasticOverlay sample project and the VisualDiagnosticsOverlay in the .NET MAUI codebase.

Overall it’s nice to see that we can implement this level of functionality within overlays in .NET MAUI. Ideally I would like to see support for .NET MAUI controls inside overlays, presently I’m not sure how we could implement this but I suspect it would be possible.

.NET MAUI – Exploring Overlays

This is post #3 in a series called ‘.NET MAUI Source of Truth’.

About Source Of Truth – As any developer knows, source code is the purest form of truth in working software. So I’ve decided the best way to get deep into .NET MAUI is to look at the source code.

In my last posts, we explored the .NET MAUI codebase learning about the new Windows functionality, and then managed to get a demo of them working in Preview11.

In my previous research, I noticed a new method on Window called AddOverlay. In this post I’m going to explore WindowOverlays: what are they? How are they implemented natively? How do they maintain state, including during page navigation? When should I use them? How do they relate to the traditional navigation style and Shell? What can we use them for (login screens/flows)? How do we animate them?

Going back to the roots of ‘Source of Truth’ this post is going to focus on the .NET MAUI source code and look at how things are implemented.

To write this post I’ve downloaded the .NET MAUI source code from GitHub and opened it in Visual Studio for Mac Preview.

If we start at the source, we can see that IWindow has two methods and a property relating to overlays.

/// <summary>
/// Provides the ability to create, configure, show, and manage Windows.
/// </summary>
public interface IWindow : ITitledElement
{
	/// <summary>
	/// Gets the read only collection of Window Overlays on top of the Window.
	/// </summary>
	IReadOnlyCollection<IWindowOverlay> Overlays { get; }

	/// <summary>
	/// Adds a Window Overlay to the current Window.
	/// </summary>
	/// <param name="overlay"><see cref="IWindowOverlay"/>.</param>
	/// <returns>Boolean if the window overlay was added.</returns>
	bool AddOverlay(IWindowOverlay overlay);

	/// <summary>
	/// Removes a Window Overlay from the current Window.
	/// </summary>
	/// <param name="overlay"><see cref="IWindowOverlay"/>.</param>
	/// <returns>Boolean if the window overlay was removed.</returns>
	bool RemoveOverlay(IWindowOverlay overlay);
}

As my next step, I searched the codebase to see where AddOverlay is used (hoping there was a sample). I found a test and a sample; using the overlay methods seems simple enough.

void TestAddOverlayWindow(object sender, EventArgs e)
{
	var window = GetParentWindow();
	overlay ??= new TestWindowOverlay(window);
	window.AddOverlay(overlay);
}

void TestRemoveOverlayWindow(object sender, EventArgs e)
{
	if (overlay is not null)
	{
		GetParentWindow().RemoveOverlay(overlay);
		overlay = null;
	}
}

Next, I’ve taken a look at how the AddOverlay method is implemented in the Window(.NET MAUI Window), Window.Impl.cs.

/// <inheritdoc/>
public bool AddOverlay(IWindowOverlay overlay)
{
	if (overlay is IVisualDiagnosticsOverlay)
		return false;

	// Add the overlay. If it's added,
	// initialize the native layer if it wasn't already,
	// and call invalidate so it will be drawn.
	var result = _overlays.Add(overlay);
	if (result)
	{
		overlay.Initialize();
		overlay.Invalidate();
	}

	return result;
}

It’s a simple method, the overlay gets added to the list, then initialised and invalidated. In this method, it doesn’t show how these overlays are rendered onto the screen. In order to find this out, we will need to do a bit more digging.

At this point you might ask: has this functionality even been implemented? I had the same question, so I took the sample code from my previous blog (you can find it here) and added some code to test. I added the following overlay:

public class TestWindowOverlay : WindowOverlay
{
	IWindowOverlayElement _testWindowDrawable;

	public TestWindowOverlay(Window window)
		: base(window)
	{
		_testWindowDrawable = new TestOverlayElement(this);

		AddWindowElement(_testWindowDrawable);

		EnableDrawableTouchHandling = true;
		Tapped += OnTapped;
	}

	async void OnTapped(object sender, WindowOverlayTappedEventArgs e)
	{
		if (!e.WindowOverlayElements.Contains(_testWindowDrawable))
			return;

		var window = Application.Current.Windows.FirstOrDefault(w => w == Window);

		System.Diagnostics.Debug.WriteLine($"Tapped the test overlay button.");

		var result = await window.Page.DisplayActionSheet(
			"Greetings from Visual Studio Client Experiences!",
			"Do something", "Do something else", "Do something... with feeling.");

		System.Diagnostics.Debug.WriteLine(result);
	}

	class TestOverlayElement : IWindowOverlayElement
	{
		readonly WindowOverlay _overlay;
		Circle _circle = new Circle(0, 0, 0);

		public TestOverlayElement(WindowOverlay overlay)
		{
			_overlay = overlay;
		}

		public void Draw(ICanvas canvas, RectangleF dirtyRect)
		{
			canvas.FillColor = Color.FromRgba(255, 0, 0, 225);
			canvas.StrokeColor = Color.FromRgba(225, 0, 0, 225);
			canvas.FontColor = Colors.Orange;
			canvas.FontSize = 40f;

			var centerX = dirtyRect.Width - 50;
			var centerY = dirtyRect.Height - 50;
			_circle = new Circle(centerX, centerY, 40);

			canvas.FillCircle(centerX, centerY, 40);
			canvas.DrawString("🔥", centerX, centerY + 10, HorizontalAlignment.Center);
		}

		public bool Contains(Point point) =>
			_circle.ContainsPoint(new Point(point.X / _overlay.Density, point.Y / _overlay.Density));

		struct Circle
		{
			public float Radius;
			public PointF Center;

			public Circle(float x, float y, float r)
			{
				Radius = r;
				Center = new PointF(x, y);
			}

			public bool ContainsPoint(Point p) =>
				p.X <= Center.X + Radius &&
				p.X >= Center.X - Radius &&
				p.Y <= Center.Y + Radius &&
				p.Y >= Center.Y - Radius;
		}
	}
}

On my iPad simulator, I was able to see the drawing, though the Tapped handler was not working for me.

Now we know the drawing code works, but not how it is all linked together; the clues might just be IWindowOverlayElement and Draw.

If we then take a look in the MAUI codebase, we can see that the Draw method also exists on WindowOverlay; it loops over the elements and draws them. I assume (hope) there’s some type of visual invalidation hierarchy in this window overlaying.

/// <inheritdoc/>
public void Draw(ICanvas canvas, RectangleF dirtyRect)
{
	if (!IsVisible)
		return;

	foreach (var drawable in _windowElements)
		drawable.Draw(canvas, dirtyRect);
}

Also worth noting: I’ve discovered WindowOverlay is a partial class, with a native implementation for each platform.

Now I need to find out how/when/why the Draw method is called.

After a bit of digging, I found some answers inside the native implementations of WindowOverlay. The WindowOverlay grabs the native layer and then draws itself on top, using a NativeGraphicsView to map the drawing in the overlay to a view that can be seen natively. Below is the iOS version.

// Create a passthrough view for holding the canvas and other diagnostics tools.
_passthroughView = new PassthroughView(this, nativeWindow.RootViewController.View.Frame);

// The native graphics view calls the draw methods.
_graphicsView = new NativeGraphicsView(_passthroughView.Frame, this, new DirectRenderer());
_graphicsView.AutoresizingMask = UIViewAutoresizing.All;
_passthroughView.AddSubview(_graphicsView);

// Any time the frame gets a new value, we need to update and invalidate the canvas.
_frameObserver = nativeLayer.AddObserver("frame", Foundation.NSKeyValueObservingOptions.New, FrameAction);

// Disable the graphics view from user input.
// This will be handled by the passthrough view.
_graphicsView.UserInteractionEnabled = false;

// Make the canvas view transparent.
_graphicsView.BackgroundColor = UIColor.FromWhiteAlpha(1, 0.0f);

// Add the passthrough view to the front of the stack.
nativeWindow.RootViewController.View.AddSubview(_passthroughView);
nativeWindow.RootViewController.View.BringSubviewToFront(_passthroughView);

// Any time the passthrough view is touched, handle it.
_passthroughView.OnTouch += UIViewOnTouch;
IsNativeViewInitialized = true;
return IsNativeViewInitialized;

It looks like the only way to use overlays is through canvas drawing, which means none of the .NET MAUI controls or functionality will work inside overlays. Some initial problems I can see at this point: what about transitions and animations?

As a basic requirement, we need to update the UI based on state changes; I was able to do this by using the Invalidate method on the overlay.

Device.StartTimer(TimeSpan.FromMilliseconds(500), () =>
{
	_size += 10;
	this.Invalidate();
	return true;
});

So we’ve coved the basic questions, what are they and how is it implemented natively. What about the other questions?

How does it maintain state, including during page navigation? I have not tested this specifically, but based on the architecture I would expect it to maintain state.

Do gestures pass through to underlying views? This worked well: when I tapped on the drawing, gestures did not pass through, but when I tapped outside of the drawing, they did.
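That pass-through behaviour comes from the element’s Contains method: the overlay only consumes a touch when Contains returns true, after scaling the raw pixel point down by the display density. A rough sketch of that bounding-box test in shell arithmetic (the centre, radius, density, and tap coordinates are made up for illustration):

```shell
# Hypothetical values: a circle centred at (350, 350) with radius 40,
# on a screen with density 2 (so raw touch coordinates are in pixels).
density=2
cx=350; cy=350; r=40
tap_x=710; tap_y=690            # raw touch point, in pixels

# Mirror Contains(): divide by density first, then do the bounding-box check.
x=$((tap_x / density)); y=$((tap_y / density))
if [ "$x" -le $((cx + r)) ] && [ "$x" -ge $((cx - r)) ] && \
   [ "$y" -le $((cy + r)) ] && [ "$y" -ge $((cy - r)) ]; then
  result="hit: overlay consumes the gesture"
else
  result="miss: gesture passes through to the views below"
fi
echo "$result"
```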

How is it related to the traditional navigation style and Shell? This functionality sits completely outside of navigation and shell.

What can we use it for? In the future I would like to see it used to build outstanding user experiences, e.g. popovers, toast messages, alerts, and slideovers.

How do we animate/transition it? I’m not sure exactly; I don’t think we can use any of the animation libraries available to .NET MAUI (.NET MAUI animations, Lottie, or native APIs).


  • Uses the drawing capability of .NET MAUI
  • We can have multiple overlays and multiple overlay elements
  • We can add and remove overlays
  • Overlays provide a canvas and Draw method, which allow you to draw on the screen
  • No support for controls, animations, or transitions
  • Has the ability to invalidate and redraw

My thoughts

I think that overlays are a great idea and something I could get excited about, but to my current knowledge, overlays provide fairly limited functionality. Considering that the overlay feature needs to be supported and maintained once built, it should ideally be a genuinely useful feature.

I think that with overlays you should be able to create something like SlideOverKit, and popups with animations/transitions; ideally it would support all the functionality to create these easily, or allow the extensibility to create them.

I would suggest it will need to be able to support controls, animations and transitions.

We know that .NET MAUI is still in its early days, so I hope we’ll see this functionality developed further.

Exploring Multi-window Apps in .NET MAUI Preview 11

This is post #2 in a series called ‘.NET MAUI Source of Truth’.

About Source Of Truth – As any developer knows, source code is the purest form of truth in working software. So I’ve decided the best way to get deep into .NET MAUI is to look at the source code.

In my last post we explored the .NET MAUI codebase, learning about the new Window functionality, but we could not get hands-on experience with it until Preview 11. The great news is that Preview 11 has now shipped.

You can learn more about the preview here:

I’m doing all this on the mac. You can also do this on Windows but you’ll need a ipad. If you want to use your Mac you can, you can see some details on installing this on a mac, you can find more detailed installation and upgrade notes at these locations:

On another note, the .NET documentation doesn’t provide instructions for setting your dotnet path permanently. If you want to do this, follow these instructions. I found dotnet located here: /usr/local/share/dotnet/dotnet.

In my case I already had dotnet and MAUI installed, so I just needed to upgrade. At the time I wrote this post, maui-check was not upgrading me to Preview 11, but note that I did first run maui-check and it resolved a few other issues for me, so I would recommend running a check first.

Here’s what I did.

1. Run Maui Check.

cd $HOME/.dotnet/tools
./maui-check

Then I resolved all the issues it found.

2. Manually update to preview11

sudo dotnet workload install maui
dotnet new --install Microsoft.Maui.Templates
dotnet new maui -n MauiPreview11Play
cd MauiPreview11Play
dotnet restore
dotnet build -t:Run -f net6.0-ios

Generally in .NET MAUI development (on the Mac) I’ve been able to use Visual Studio for Mac Preview; in this case I updated to the latest version and could then open the project.

Update: initially I started with VS Mac Preview, but eventually it caused too many issues; I don’t think this was a VS issue so much as .NET MAUI not building and deploying cleanly. Eventually I turned to VS Code and the command line.

It took me a long time to get this working because I kept hitting build and deploy issues, and I could not tell whether they were my issues or .NET MAUI’s. When builds failed, I had to switch between a --no-incremental build and a normal build.

dotnet build --no-incremental -t:Run -f net6.0-maccatalyst   # or -f net6.0-ios

dotnet build -t:Run -f net6.0-maccatalyst   # or -f net6.0-ios


Once we have the new project running we can start our setup for Multi-window.

Step 1. Add a SceneDelegate; you can add this under Platforms/MacCatalyst and Platforms/iOS.

using Foundation;
using Microsoft.Maui;
using ObjCRuntime;
using UIKit;

namespace MauiPreview11Play;

public class SceneDelegate : MauiUISceneDelegate
{
}


Step 2. Update your info.plist to support multiple scenes. You can do this under /Platforms/iOS and /Platforms/MacCatalyst
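The original post showed the plist change as an image. A minimal sketch of the entry to add (the UIApplicationSceneManifest keys are standard UIKit; the scene configuration name shown is an assumption based on the .NET MAUI samples) looks like this:

```xml
<key>UIApplicationSceneManifest</key>
<dict>
    <!-- Tell UIKit this app can open more than one scene (window). -->
    <key>UIApplicationSupportsMultipleScenes</key>
    <true/>
    <key>UISceneConfigurations</key>
    <dict>
        <key>UIWindowSceneSessionRoleApplication</key>
        <array>
            <dict>
                <key>UISceneConfigurationName</key>
                <string>__MAUI_DEFAULT_SCENE_CONFIGURATION__</string>
                <!-- Points at the SceneDelegate added in Step 1. -->
                <key>UISceneDelegateClassName</key>
                <string>SceneDelegate</string>
            </dict>
        </array>
    </dict>
</dict>
```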


Step 3. Set up the multi-window code. In this case I’ve just edited the existing files to add some buttons and methods.

<Button
    Text="Open window"
    SemanticProperties.Hint="open it"
    Clicked="OpenWindow"
    HorizontalOptions="Center" />

<Button
    Text="Close window"
    SemanticProperties.Hint="Close it"
    Clicked="CloseWindow"
    HorizontalOptions="Center" />

private void OpenWindow(object sender, EventArgs e)
{
	Application.Current.OpenWindow(new Window(new MainPage()));
}

In order to open a window we can use the OpenWindow method on the Application, providing a new Window.

private void CloseWindow(object sender, EventArgs e)
{
	var window = this.GetParentWindow();
	if (window is not null)
		Application.Current.CloseWindow(window);
}

In order to close a window we need to provide the window.

Here’s a video of multiple windows in a maccatalst app

And here’s multi-window on an iPad.

There we go, .NET MAUI Multi-Windows Apps. If you want to see the code you can find it here:


.NET MAUI Source Of Truth – Exploring IWindow

This is post #1 in a series called ‘.NET MAUI Source of Truth’.

About Source Of Truth – As any developer knows, source code is the purest form of truth in working software. So I’ve decided the best way to get deep into .NET MAUI is to look at the source code.

Exploring IWindow

Recently I was exploring the .NET MAUI codebase and came across something that seemed interesting: IWindow. My first thought was ‘Windows in MAUI? That doesn’t make sense’, because Xamarin.Forms never had windows. So I set about finding out what IWindow was.

A Xamarin.Forms Recap

Before we take a look at Windows in .NET MAUI, let’s refresh ourselves on view hierarchies in Xamarin.Forms. In Xamarin.Forms we have an Application class with a MainPage property that takes a Page. You’re able to set the MainPage to either a single page or a navigation page.

So our hierarchy looks something like this:

Windows in MAUI

Important note: in this article, whenever I reference Window/Windows, most of the time I mean the Window concept from .NET MAUI, not the Windows platform support in .NET MAUI.

If I look at any sample MAUI applications, they follow the same hierarchy we find in Xamarin.Forms. So where are the Windows?

Initially, looking at the latest Preview 10 of .NET MAUI, it seems the Window functionality is limited. So to see what Windows are going to look like in the future of .NET MAUI, we need to look at the latest source code.

If we remember that we’ve always started Forms applications using MainPage,

e.g. MainPage = new ContentPage();

then let’s start with that MainPage property on the Application class (FYI, this is code from the MAUI GitHub repository). If we dig into that property, we can see that MainPage is still part of the hierarchy, but now Window is a parent of that page.

public Page? MainPage
{
	// (getter omitted)
	set
	{
		if (MainPage == value)
			return;

		if (Windows.Count == 0)
			_pendingMainPage = value;
		else
			Windows[0].Page = value;
	}
}


Now we can see that Window is a parent of Page, and Window has a property called Page.

If we look further into the implementation of the Application class, we can see a new method that creates a new Window. I guess from the variable _pendingMainPage that some type of lazy loading is required.

IWindow IApplication.CreateWindow(IActivationState? activationState)
{
	Window? window = null;

	// try get the window that is pending
	if (activationState?.State?.TryGetValue(MauiWindowIdKey, out var requestedWindowId) ?? false)
	{
		if (requestedWindowId != null && _requestedWindows.TryGetValue(requestedWindowId, out var w))
			window = w;
	}

	// create a new one if there is no pending windows
	if (window == null)
	{
		window = CreateWindow(activationState);

		if (_pendingMainPage != null && window.Page != null && window.Page != _pendingMainPage)
			throw new InvalidOperationException($"Both {nameof(MainPage)} was set and {nameof(Application.CreateWindow)} was overridden to provide a page.");

		// clear out the pending main page as this will never be used again
		_pendingMainPage = null;
	}

	// make sure it is added to the windows list
	if (!_windows.Contains(window))
		_windows.Add(window);

	return window;
}

Even more is revealed if we take a look at IApplication. It looks like we will be able to OpenWindow, CloseWindow, and CreateWindow.

/// <summary>
/// Class that represents a cross-platform .NET MAUI application.
/// </summary>
public interface IApplication : IElement
{
	/// <summary>
	/// Gets the instantiated windows in an application.
	/// </summary>
	IReadOnlyList<IWindow> Windows { get; }

	/// <summary>
	/// Instantiate a new window.
	/// </summary>
	/// <param name="activationState">Argument containing specific information on each platform.</param>
	/// <returns>The created window.</returns>
	IWindow CreateWindow(IActivationState? activationState);

	void OpenWindow(IWindow window);

	/// <summary>
	/// Requests that the application closes the window.
	/// </summary>
	/// <param name="window">The window to close.</param>
	void CloseWindow(IWindow window);

	/// <summary>
	/// Notify a theme change.
	/// </summary>
	void ThemeChanged();
}

Here we can see one of the commits with an initial implementation of Window support:

From within the .NET MAUI codebase, we can see how to use multiple windows in the .NET MAUI sample projects; look for MultiWindowPage.

public partial class MultiWindowPage : BasePage
{
	static int windowCounter = 1;

	public MultiWindowPage()
	{
		InitializeComponent();

		label.Text = "Window Count: " + (windowCounter++).ToString();
	}

	void OnNewWindowClicked(object sender, EventArgs e)
	{
		Application.Current.OpenWindow(new Window(new MultiWindowPage()));
	}

	void OnCloseWindowClicked(object sender, EventArgs e)
	{
		var window = this.GetParentWindow();
		if (window is not null)
			Application.Current.CloseWindow(window);
	}
}

It’s awesome to see that we have multiple windows going on, this is good for multiple reasons including support for Desktop Applications and IPad. This Window functionality will probably become more useful in future iOS/Android OS releases if the native OS build out the Windowing functionality further.

The current preview of .NET MAUI does not have the code changes for IWindow support so you will either need to download the .NET MAUI source or wait until we get our next preview release.

Here are the window events, all explained:


  • Window is a new concept built into .NET MAUI
  • Applications will be able to open and close multiple windows
  • It looks like it will be very useful for desktop applications

This is only a brief look at IWindow in .NET MAUI; there will be a lot more to come as I discover more and the .NET team builds out the functionality further. I’m looking forward to sharing the knowledge. If you have any questions or need some .NET MAUI consulting, XAM is here to help.

A First Look with FreshMvvm.Maui

Is FreshMvvm going to support .NET MAUI? FreshMvvm has over half a million downloads and it’s a much-loved MVVM framework for Xamarin, so I think it’s important we put in the time to ensure FreshMvvm has a .NET MAUI version. FreshMvvm.Maui is a port of the FreshMvvm code base into a new project; moving forward, both FreshMvvm and FreshMvvm.Maui will be supported. I must also thank Vlad Antohi, who helped with the port of the codebase.

So What’s Changed?

FreshMvvm.Maui is still the FreshMvvm that you know and love. Generally, the upgrade process will be smooth and easy because, from a consumer’s perspective, not much has changed. We’ve made some changes internally, the biggest being support for the .NET dependency injection infrastructure. This means that TinyIoC has been removed and, by default, we use the standard Microsoft dependency injection. The biggest advantage of this is that you can bring in any container that supports Microsoft Dependency Injection.

How do I get started?

1. First you’ll need to install the .NET MAUI preview.

2. Create a new .NET MAUI project from the template

dotnet new maui -n HelloFreshMauiPreview

Once you create the project, you can open it in Visual Studio. I’m currently on my Mac, running Visual Studio 2022 Preview.

3. Add FreshMvvm.Maui from NuGet

4. Add your services, pages and view models.

In this case I’ve added a Database service that Mocks a database, a small quote list and quote edit. You can see all the files in the sample repo here.

5. Setup a simple navigation container for the application.

public partial class App : Application
{
    public App()
    {
        InitializeComponent();

        var page = FreshPageModelResolver.ResolvePageModel<QuoteListPageModel>();
        var basicNavContainer = new FreshNavigationContainer(page);
        MainPage = basicNavContainer;
    }
}

6. Configure the app

public static MauiApp CreateMauiApp()
{
    var builder = MauiApp.CreateBuilder();
    builder
        .UseMauiApp<App>()
        .ConfigureFonts(fonts =>
        {
            fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
        });

    builder.Services.Add(ServiceDescriptor.Singleton<IDatabaseService, DatabaseService>());

    builder.Services.Add(ServiceDescriptor.Transient<QuoteListPage, QuoteListPage>());
    builder.Services.Add(ServiceDescriptor.Transient<QuotePage, QuotePage>());

    builder.Services.Add(ServiceDescriptor.Transient<QuoteListPageModel, QuoteListPageModel>());
    builder.Services.Add(ServiceDescriptor.Transient<QuotePageModel, QuotePageModel>());

    MauiApp mauiApp = builder.Build();

    return mauiApp;
}

It's that easy: we have our first FreshMvvm.Maui app running.

You can find this simple FreshMvvm.Maui app here:

AirNZClone – Awesome animation tricks in Xamarin.Forms

For my Xamarin UI July entry I decided on a UI that was not only visually appealing but also had a little more complexity than you normally see in Xamarin.Forms apps. I recently attended the Xamarin Developer Summit in Houston, Texas. I flew Air New Zealand during this trip and found that the Air NZ app had some nice UI built around the pan/scroll gesture. What I found interesting is that the UI had both a long scroll and a horizontal pan. You can take a look at the original app below.


At first glance I was not exactly sure how I would solve this type of horizontal pan with snap combined with the vertical scroll. I had a few different ideas to start on: would I use a CollectionView? A ListView? A ScrollView? Or the new CarouselView? I attempted this in all of those controls and they failed miserably; the way I actually got it working was with the trusty Grid and post-layout translations. The overall hierarchy looks like this:

ScrollView (Vertical Scroll)
–>Grid
—->Trip1 (StackLayout)
—->Trip2 (StackLayout)
—->Trip3 (StackLayout)

The ScrollView does its job for the vertical scrolling, but there's a lot more to the horizontal pan and snap.

As many of you will have seen before, the Grid allows you to layer views directly on top of each other, so the trips are layered on top of each other.

In order to get the horizontal pan and snap I needed to bring a few things together. I used post-layout translations to put views off the screen, e.g.

—->Trip1 – TranslationX = 0
—->Trip2 – TranslationX = Width
—->Trip3 – TranslationX = Width * 2
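The offsets above can be sketched in code like this; this is an illustrative fragment (the view names `trip1`, `trip2`, `trip3` are assumptions, the post mostly uses `view1`-style names):

```csharp
// Illustrative sketch: push trips 2 and 3 one and two screen-widths to the
// right with post-layout translations, leaving trip 1 on screen.
void LayoutTrips(double width)
{
    trip1.TranslationX = 0;          // visible trip
    trip2.TranslationX = width;      // one screen to the right
    trip3.TranslationX = width * 2;  // two screens to the right
}
```

As the pan gesture moves, these `TranslationX` values are shifted together, which is what produces the snap-carousel effect without any list control.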

*A little extra note: I needed to know the width of the view as soon as possible, so I overrode the OnSizeAllocated method, as you can see below:

protected override void OnSizeAllocated(double width, double height)
{
    base.OnSizeAllocated(width, height);

    _trips = new List<TripOffset>()
    {
        new TripOffset { Trip = view1, TripBackgroundImage = view0Image, Offset = 0 },
        new TripOffset { Trip = view2, TripBackgroundImage = view2Image, Offset = width },
        new TripOffset { Trip = view3, TripBackgroundImage = view3Image, Offset = width * 2 }
    };

    _currentTrip = _trips[0];
    view1.CalculateOffsets(0, width, 0, false);
    bgImage0.Opacity = 1;
    view2.CalculateOffsets(width, width, width, false);
    bgImage2.Opacity = 1;
    view3.CalculateOffsets(width * 2, width, width * 2, false);
    bgImage3.Opacity = 1;
}

Now that the trips are laid out, we need to move them with a pan gesture. I spent a lot of time using the Xamarin.Forms PanGestureRecognizer, but in the end I could not get the Forms pan gesture and the ScrollView to work nicely together. I was a little worried that this might not be possible, but then the awesome MR.Gestures pan recognizers came to the rescue. Not only does it work 'correctly' with the ScrollView, it also provides a velocity, something the built-in Forms gesture was missing.

So now my views look like this.

ScrollView (Vertical Scroll)
–>Grid (MR Gestures Pan)
—->Trip1 – TranslationX = 0
—->Trip2 – TranslationX = Width
—->Trip3 – TranslationX = Width * 2

As we move the PanGesture we change the TranslationX on all the views.

It sounds pretty simple, but when you're moving things around on a screen based on pan gestures there are a few things you need to take into account. For example, if a user actually wants to scroll then we need to disable the pan gesture, as you can see below.

if (counter == 3)
{
    var totalDis = panEventArgs.TotalDistance;
    var isVertical = Math.Abs(totalDis.Y) > Math.Abs(totalDis.X);
    if (isVertical)
    {
        disabled = true;
        scrollingContainer.IsEnabled = false;
    }
}

There's much more to this complete view than we've discussed; we've got the pan/snap and scroll sorted, but there are many more elements to this view.

Fading and changing background

As you can see in the video when a user snaps between cities the background image of the city also fades out and is replaced with the new city, this one wasn’t too hard.

For the fade I used a white overlay with transparency; then as the user pans I change the opacity, as you can see below.

//Calculate percentage of pan and then set opacity on the overlay
var percentage = Math.Abs(panEventArgs.TotalDistance.X / this.Width);
whiteoverlay.Opacity = (percentage + .4).Clamp(.5, .9);
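The `Clamp` call used here isn't built into .NET for doubles (`Math.Clamp` only arrived later, and not as an extension method), so the sample presumably carries a small extension method along these lines; this is my sketch of it, not code from the repo:

```csharp
// Assumed helper: constrain a double to the range [min, max], used as
// value.Clamp(min, max) throughout the pan/opacity calculations.
public static class DoubleExtensions
{
    public static double Clamp(this double value, double min, double max)
    {
        if (value < min) return min;
        if (value > max) return max;
        return value;
    }
}
```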

Then for the images I just show and hide them using opacity; I'm using opacity so that I don't cause a layout refresh.

if (trip.Offset == 0)
{
    _currentTrip = trip;
    trip.TripBackgroundImage.Opacity = 1;
}
else
{
    trip.TripBackgroundImage.Opacity = 0;
}

The Circle Images

As you can see in the original view, we have a floating circle image off the side of the screen that shows the user they can pan horizontally; it also changes offset and size as the user moves.


This one was a little time consuming, essentially what I needed to do was calculate the percentage of horizontal pan and then do some maths to translate this into an offset and a scale (change the size).

First we need to find if the view we have is to the left or right of centre.

bool isNeighbourOfCentre = Math.Abs(parentCurrentStartingOffset) == containerWidth;

Then we need to calculate the percentage that the gesture has moved.

var offset = Math.Abs(x) - halfOfImage;
var maxOffset = (inverseParentCurrentStartingOffset - halfOfImage);
finalOffset = offset.Clamp(0, maxOffset);
percentageMoved = (finalOffset / maxOffset);

Once we’ve done this we use an easing function to give the movement some bounce.

normalizedTime = Ease(percentageMoved);
finalOffset = normalizedTime * (maxOffset - offsetOfImage);

Here’s our custom easing function.

double Ease(double normalizedTime)
{
    normalizedTime = 1.0 - normalizedTime;
    normalizedTime = Math.Max(0.0, Math.Min(1.0, normalizedTime));
    normalizedTime = 1.0 - Math.Sqrt(1.0 - normalizedTime * normalizedTime);
    return 1.0 - normalizedTime;
}

Then we also need to calculate the scale. In the case below we take the percentage of movement and turn it into a number between .5 and 1, which allows us to scale the image.

var makeToPointFiveScale = (.5 * (1 - percentageMoved));
var scale = makeToPointFiveScale + .5;

Then we translate the views.

if (animate)
{
    circleImageContainer.TranslateTo(finalOffset, 0, 250, Easing.SinOut);
    circleImageContainer.ScaleTo(scale, 250, Easing.SinOut);
}
else
{
    circleImageContainer.Scale = scale;
    circleImageContainer.TranslationX = finalOffset;
}

Then here's our final view. It's definitely not as polished as the 'real' app, but I'm pretty confident I've achieved my original goal of proving that this app would be 100% possible in Xamarin.


If you want to take a look at the source code then you can find it up on github here.

Thanks for reading – Michael


The Visual Studio debugging trick that EVERY Xamarin/.NET developer should know!

To many of you this might seem like common knowledge, because that's what I thought; I assumed this would be in every Xamarin/.NET developer's bag of tricks. I've been surprised to discover just how many people don't know about this feature in Visual Studio. I notice that many people who post questions on FreshMvvm haven't applied this simple setting, which would lead them directly to their issues. I've been using this trick for years and it's saved me hundreds of hours.

Let's start with a piece of problematic code. In this FreshMvvm sample project I've created an unhandled exception that happens during the Init method; the issue is within the sample project and not within the FreshMvvm library.

Screen Shot 2019-06-22 at 3.41.13 pm

When I run this application and hit the exception, Visual Studio doesn't really tell me where the issue originates from. The exception is surfaced where the app crashes, which is in the Main method.

Screen Shot 2019-06-22 at 3.47.16 pm

Now once I apply this debugging setting, let’s see what happens. As you can see below I’m taken directly to the issues in my code.

Screen Shot 2019-06-22 at 3.49.25 pm

We can also see that the call stack is much more relevant.

Screen Shot 2019-06-22 at 3.49.31 pm

So how do you do this? Easy. 

In Visual Studio for Mac. 

Basically we want to set up a generic exception catchpoint; the easiest way to do this is from the Breakpoints pad.

Screen Shot 2019-06-22 at 3.57.47 pm

Screen Shot 2019-06-22 at 3.57.55 pm

In Visual Studio on the PC.

From the Debug menu, go into Exception Settings.

Step 1

Then we can set the debugger to break on all CLR exceptions.

Step 2


That's it! Add this technique to your toolbox and it can save you hours of debugging time over the years.




Easy real-time, event-driven and serverless with Azure

One of the most exciting parts of modern cloud platforms is serverless and the event-driven architectures you can build with it. The reason it's so exciting is that it's a double win: not only does it give you all the advantages of the cloud, like easy elastic scale, isolation, easy deployment and low management overhead, but it's also extremely easy to get started with. We can also avoid the container and container-management rabbit hole.

Azure Functions

Azure Functions is a serverless offering on the Azure platform. In a nutshell, you create individual functions that you upload into the cloud and run individually. Here's an example of a simple function that you can call via HTTP.

public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string name = req.Query["name"];

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}

This simple function is triggered via Http and returns a string saying Hello {name}. The name is passed in via a query string parameter.

Azure Functions are made up of two primary things: triggers and bindings.


A trigger is how a function is started; the sample above uses an HttpTrigger. Functions has a number of triggers that we can use, including:
HTTP trigger – runs when an HTTP endpoint is hit
Timer trigger – runs on a schedule
Queue trigger – runs when a message is added to queue storage
Blob trigger – runs whenever a blob is added to a specified container
Event Hub trigger – runs whenever an event hub receives a new message
IoT Hub trigger – runs whenever an IoT hub receives a new event on the event hub endpoint
Service Bus queue trigger – runs when a message is added to a specified Service Bus queue
Service Bus topic trigger – runs when a message is added to a specified Service Bus topic
Cosmos DB trigger – runs when documents change in a document collection
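To make the trigger idea concrete before we get to the Cosmos DB one, here's a hedged sketch of a timer-triggered function; the schedule and names are examples I've chosen, not from this post:

```csharp
// Illustrative timer-triggered function: the CRON-style expression
// "0 */5 * * * *" fires it every five minutes.
public static class HeartbeatFunction
{
    [FunctionName("Heartbeat")]
    public static void Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        ILogger log)
    {
        log.LogInformation($"Heartbeat executed at: {DateTime.Now}");
    }
}
```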

In this post we’ll be focusing on the Cosmos DB trigger.


Bindings are a way of declaratively connecting another resource to the function; for example, binding to Cosmos DB gives you a Cosmos DB connection and allows you to insert data into Cosmos DB. Bindings may be connected as input bindings, output bindings, or both. Data from bindings is provided to the function as parameters.
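To make the trigger/binding distinction concrete, here's a hedged sketch of a queue-triggered function with a Cosmos DB output binding; the queue name and document shape are illustrative, not from this post:

```csharp
// Illustrative sketch: the QueueTrigger starts the function, and the CosmosDB
// output binding inserts whatever we assign to the out parameter.
public static class SaveLocationFunction
{
    [FunctionName("SaveLocation")]
    public static void Run(
        [QueueTrigger("locations-queue")] string locationJson,  // trigger (input)
        [CosmosDB("CDALocations", "Location",
            ConnectionStringSetting = "AzureWebJobsCosmosDBConnectionString")]
            out dynamic document,                               // output binding
        ILogger log)
    {
        // Deserialize the queued message and hand it to the output binding;
        // the Functions runtime performs the actual insert.
        document = JsonConvert.DeserializeObject(locationJson);
        log.LogInformation("Location queued for insert into Cosmos DB.");
    }
}
```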

Cosmos DB

Cosmos DB is a multi-model, global-scale database with easy single-touch geo-replication. It's fast and has more SLAs than any other cloud database on the market. One of the advanced features we get in Cosmos DB is the change feed: a log of all the changes that have occurred on a collection in the database, so you'll be able to see inserts and updates (deletes can be surfaced with a soft-delete flag). As mentioned before, Cosmos DB has an Azure Functions trigger that uses the change feed, so in essence we can call a function every time a collection changes and pass in the changed data.

You can see below we've got a Cosmos DB-triggered function; a list of changed documents is provided to the function.

public static async Task CosmosTrigger([CosmosDBTrigger(
    databaseName: "CDALocations",
    collectionName: "Location",
    ConnectionStringSetting = "AzureWebJobsCosmosDBConnectionString",
    LeaseCollectionName = "leases",
    FeedPollDelay = 1000,
    CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents,
    ILogger log)
{
    if (documents != null && documents.Count > 0)
    {
        log.LogInformation($"Documents modified: {documents.Count}");
        log.LogInformation($"First document Id: {documents[0].Id}");
    }
}

To show this in a diagram, see below:

CosmosDB Change Feed

SignalR Service with a Real-World Example

SignalR Service is a fully managed real-time communications service.

Let's put this into a real-world scenario. The legendary James Montemagno built a cloud-enabled Xamarin application called Geo Contacts, which we can find on GitHub here. Geo Contacts is a cross-platform mobile contact application sample for iOS and Android, built with Xamarin.Forms, that leverages several services inside of Azure including Azure AD B2C, Functions, and Cosmos DB. This application allows you to check into locations and see who else has checked into a location near you, which is awesome, but there's a little bit of functionality missing that I would like: if I've already checked into a location, I would like to be notified when another person checks in near me.

The process flow will be as follows:
1) A user checks in at a location, like Sydney, Australia
2) An Azure Function is called to store the data inside Cosmos DB
3) The change is recorded on the Cosmos DB change feed
4) The Azure Function that is subscribed to that change feed is called with the changed documents
5) The function then notifies other users via SignalR.

Set up the SignalR Service

We first have to set up the SignalR Service and integrate it with the Functions and the mobile app.

Go into Azure and create a new SignalR service.


Then take the keys from the SignalR service and add the configuration into the Azure Function.


If we want to get the SignalR configuration into the client, Azure Functions makes this easy: we create a simple function that returns the SignalR connection details.

public static SignalRConnectionInfo GetSignalRInfo(
    [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req,
    [SignalRConnectionInfo(HubName = "locations")]SignalRConnectionInfo connectionInfo)
{
    return connectionInfo;
}

Then we create the SignalR client in the Xamarin application.

var client = new HttpClient();
string result = await client.GetStringAsync("");
SignalRConnectionInfo signalRConnectionInfo = JsonConvert.DeserializeObject<SignalRConnectionInfo>(result);

if (hubConnection == null)
{
    hubConnection = new HubConnectionBuilder()
                        .WithUrl(signalRConnectionInfo.Url, options =>
                        {
                            options.AccessTokenProvider = () => Task.FromResult(signalRConnectionInfo.AccessToken);
                        })
                        .Build();

    await hubConnection.StartAsync();
}
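Once connected, the client also needs to subscribe to messages from the hub. A minimal sketch of handling the "nearMe" target, assuming a `LocationUpdate` type matching the one used server-side (the alert text is illustrative):

```csharp
// Subscribe to the "nearMe" messages the Azure Function publishes through
// the SignalR output binding; the payload is an array of LocationUpdate.
hubConnection.On<LocationUpdate[]>("nearMe", locations =>
{
    foreach (var location in locations)
    {
        // In the real app this would raise a notification to the user.
        Console.WriteLine($"New check-in near you: {location}");
    }
});
```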

Set up the Change Feed

Now that we have SignalR set up, we can set up the change feed. The only thing we need to configure for the change feed is the leases collection.

Now that we've set up the leases collection, building the event-driven function is very simple. We only need to use the CosmosDBTrigger with the correct configuration and then add the SignalR binding.

In the function below, we can see it uses the CosmosDBTrigger: it gets executed when a location is added to the Location collection. We also have the SignalR binding, which means we can easily send messages to the Xamarin clients we set up.

public static async Task CosmosTrigger([CosmosDBTrigger(
    databaseName: "CDALocations",
    collectionName: "Location",
    ConnectionStringSetting = "AzureWebJobsCosmosDBConnectionString",
    LeaseCollectionName = "leases",
    FeedPollDelay = 1000,
    CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents,
    [SignalR(HubName = "locations")]IAsyncCollector<SignalRMessage> signalRMessages,
    ILogger log)
{
    List<LocationUpdate> locations = new List<LocationUpdate>();

    if (documents != null && documents.Count > 0)
    {
        log.LogInformation($"Documents modified: {documents.Count}");
        log.LogInformation($"First document Id: {documents[0].Id}");

        locations.Add(await documents[0].ReadAsAsync<LocationUpdate>());
    }

    await signalRMessages.AddAsync(
        new SignalRMessage
        {
            Target = "nearMe",
            Arguments = new[] { locations.ToArray() }
        });

    log.LogInformation($"Sent SignalR Message");
}

That’s it, now we have a scalable, real-time and event-driven system.

Introduction to Augmented Reality with ARKit

In this post we’re going to dive into ARKit, we’ll find out what it is and get started with building our first augmented reality experience in ARKit.

ARKit is Apple’s toolkit for building augmented reality experiences on iOS devices. It was initially released at WWDC 2017, then ARKit 2.0 was released at WWDC 2018.

Before we jump into any code it's important to understand what ARKit is and why we need it. Augmented reality on mobile devices is hard; it's hard because of the heavy calculations, triangulations and mathematics involved. It's also very hard to do AR without killing the user's battery or reducing frame rates. ARKit takes care of all the hard parts for you, allowing you to work against a clean and simple API.

The Basics of ARKit

In order to augment our reality we need to be able to track reality, i.e. map the world so that we know what it looks like in digital form. Devices like HoloLens have special sensors specifically designed for AR tracking. Mobile devices don't have anything specifically designed for world tracking, but they do have enough sensors that, when combined with great software, we can track the world.


ARKit takes advantage of the sensors (camera, gyroscope, accelerometer, motion) already available on the device. As you can see in the diagram below, ARKit is only responsible for the processing; basically this means sensor reading and advanced mathematical calculations. The rendering can be handled by any 2D/3D rendering engine, which includes SceneKit as you see below, but the majority of apps will use a 3D engine like Unreal or Unity.

arkit diagram


Understanding the World

The primary function of ARKit is to take in sensor data, process that data and build a 3D world. In order to do this ARKit uses a truckload of mathematical calculations; we can simplify and name some of the methods ARKit uses.

The diagram below shows inertial odometry, which takes in motion data for processing. This input data is processed at a high frame rate.


The diagram below shows visual odometry, which takes in video data from the camera for processing. The processing of visual data is done at a lower frame rate, because processing the visual data is CPU intensive.


ARKit then combines the two odometries to make what's called visual inertial odometry. The motion data is processed at a high frame rate, the visual data is processed at a lower frame rate, and the differences between the two are normalised. You can see visual inertial odometry in the diagram below.



Triangulation allows the world mapping

In a very simple explanation, triangulation is what allows ARKit to create a model of the world, in a similar way to human vision: as the phone is moved around, ARKit runs calculations against the differences between frames, essentially allowing it to see in 3D. A digital map of the world is created.


As you can see below a world map is created within ARKit.

Augmenting Reality (with Anchor Points)

As the world is mapped, ARKit will create and update anchor points; these anchor points allow us to add items in reference to them. As you can see in the diagram below, ARKit has added anchor points and we've placed an object (a 3D vase) near one. As the device is moved around these anchor points are updated, so it's important that we track these changes and update our augmentations of the world.


As I mentioned before, ARKit only does the processing and provides the data; it's up to us to render objects in the 3D world. Below shows how you combine the captured video with an overlaid 3D rendering. As we move, both the video capture and the 3D rendering are updated.


Tracking Options and Features

ARKit has a few different tracking options and features that I will go over below.

Orientation Tracking

This is the most basic type of tracking available in ARKit; it tracks your orientation within the world. It will not track your position in physical space; in essence it's like you're standing still and can look around 360 degrees in the world.

orientation tracking
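For reference, selecting this mode from Xamarin.iOS is just a matter of running the session with the orientation-only configuration; a minimal sketch, assuming `sceneView` is an `ARSCNView` like the one used later in this post:

```csharp
// Minimal sketch: orientation-only tracking, no world map is built.
sceneView.Session.Run(
    new AROrientationTrackingConfiguration(),
    ARSessionRunOptions.ResetTracking);
```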

World Tracking

This option in ARKit is the most common; in this scenario ARKit tracks and builds a complete world map and allows you to move freely within the world. It's important to note that world tracking includes the majority of features in ARKit, including plane detection, maps, image tracking and object detection.


Plane Detection

As we're moving around the 3D world we need to know about the different surfaces in the world; this is where plane detection comes in. The first release of ARKit included only horizontal plane detection; in ARKit 2 we now have the option of both vertical and horizontal. In the image below you can see the floor being detected as a plane.


Saving & Loading Maps 

In ARKit 2.0 we can now save, load and share the world map. In ARKit 1.0 the map was internal and only ever kept around for a single user's session, which meant you could not save sessions (maps) to be resumed later or shared. The new capability allows for scenarios such as multiplayer games and persistent world maps.

Below is a video of a multiplayer game that leverages the ARKit 2.0 map-sharing feature.


Image Tracking

Image tracking allows your app to easily detect an image in the real world; this might be, for example, a photo, a business card or a DVD case. Once you've detected the image you can easily augment the reality around it. Normally a task like this would be really difficult, but again ARKit makes it easy for us: the only steps we need to take are providing a reference (the image we need to track) along with its physical size, turning the feature on with a single option, and adding the reference images into ARKit.

Below I've included a reference video that leverages image tracking. While it looks very impressive, the application below could be implemented with ARKit 2.0 without a huge amount of effort, in around 80-100 lines of code.

Object Detection

Object detection (an ARKit 2.0 feature) allows us to both scan and detect 3D objects. I think the best way to understand this is with a simple video.

Building your first ARKit Experience

To give you a feel for how easy it is to build an ARKit experience, I'm going to take you through the simple application you can see in the video below. As you move the phone around, a plane is detected (which ARKit does for you) and we place a node on that surface; then, if a user taps the node, we add a box on top of it where the user tapped.

Let’s jump into some code to see how easy it is to get started with ARKit.

The first thing we do in this app is create a SceneView and add it as a SubView to the visible ViewController, as we see below.


The next step is to call the Run command on the session with the world-tracking configuration, as we see below.


As we move the phone around and surfaces are detected, the DidAddNode method is called by ARKit. As you can see below, if the anchor is an ARPlaneAnchor we then add our PlaneNode, which is the blue we see in the video.


Then if a user touches the PlaneNode we then add a Cube on top of where the user just touched.


That’s it, all we need to do for our first AR Experience. You can see the full code file below or get all the code from

using System;
using System.Collections.Generic;
using System.Linq;
using ARKit;
using ARKitExample.Nodes;
using Foundation;
using SceneKit;
using UIKit;

namespace ARKitExample
{
    public partial class ViewController : UIViewController
    {
        private readonly ARSCNView sceneView;

        protected ViewController(IntPtr handle) : base(handle)
        {
            this.sceneView = new ARSCNView
            {
                AutoenablesDefaultLighting = true,
                //DebugOptions = ARSCNDebugOptions.ShowFeaturePoints,
                Delegate = new SceneViewDelegate()
            };
        }

        public override void ViewDidLoad()
        {
            base.ViewDidLoad();

            this.sceneView.Frame = this.View.Frame;
            this.View.AddSubview(this.sceneView);
        }

        public override void ViewDidAppear(bool animated)
        {
            base.ViewDidAppear(animated);

            this.sceneView.Session.Run(new ARWorldTrackingConfiguration
            {
                AutoFocusEnabled = true,
                PlaneDetection = ARPlaneDetection.Horizontal,
                LightEstimationEnabled = true,
                WorldAlignment = ARWorldAlignment.GravityAndHeading
            }, ARSessionRunOptions.ResetTracking | ARSessionRunOptions.RemoveExistingAnchors);
        }

        public override void ViewDidDisappear(bool animated)
        {
            base.ViewDidDisappear(animated);

            this.sceneView.Session.Pause();
        }

        public override void TouchesEnded(NSSet touches, UIEvent evt)
        {
            base.TouchesEnded(touches, evt);

            if (touches.AnyObject is UITouch touch)
            {
                var point = touch.LocationInView(this.sceneView);
                var hits = this.sceneView.HitTest(point, ARHitTestResultType.ExistingPlaneUsingExtent);
                var hit = hits.FirstOrDefault();

                if (hit == null) return;

                // Place the cube just above the tapped point on the plane.
                var cubeNode = new CubeNode(0.05f, UIColor.White)
                {
                    Position = new SCNVector3(
                        hit.WorldTransform.Column3.X,
                        hit.WorldTransform.Column3.Y + 0.1f,
                        hit.WorldTransform.Column3.Z)
                };

                this.sceneView.Scene.RootNode.AddChildNode(cubeNode);
            }
        }

        class SceneViewDelegate : ARSCNViewDelegate
        {
            private readonly IDictionary<NSUuid, PlaneNode> planeNodes = new Dictionary<NSUuid, PlaneNode>();

            public override void DidAddNode(ISCNSceneRenderer renderer, SCNNode node, ARAnchor anchor)
            {
                if (anchor is ARPlaneAnchor planeAnchor)
                {
                    var planeNode = new PlaneNode(planeAnchor);
                    node.AddChildNode(planeNode);
                    this.planeNodes.Add(anchor.Identifier, planeNode);
                }
            }

            public override void DidRemoveNode(ISCNSceneRenderer renderer, SCNNode node, ARAnchor anchor)
            {
                if (anchor is ARPlaneAnchor planeAnchor)
                {
                    this.planeNodes[anchor.Identifier].RemoveFromParentNode();
                    this.planeNodes.Remove(anchor.Identifier);
                }
            }

            public override void DidUpdateNode(ISCNSceneRenderer renderer, SCNNode node, ARAnchor anchor)
            {
                if (anchor is ARPlaneAnchor planeAnchor)
                {
                    // Update is a helper on the sample's PlaneNode that resizes
                    // the plane to match the updated anchor.
                    this.planeNodes[anchor.Identifier].Update(planeAnchor);
                }
            }
        }
    }
}