Setting up Home Assistant in Docker for Windows with Port Forwarding Enabled

I hope you’ve landed here before spending hours (or days) trying to figure out why you can’t forward the Home Assistant port in Docker. The solution is frustratingly easy.

Problem

The official install Home Assistant in Docker on Windows instructions are great, with one exception. They explain the required prerequisites to make sure Docker has access to a host disk, but the steps for setting up the port-forward rules are outdated, which ultimately makes following them a waste of time.

They share this command (don’t use it):

docker run --init -d --name="home-assistant" -e "TZ=America/Los_Angeles" -v //c/Users/[USER]/homeassistant:/config --net=host homeassistant/home-assistant:stable

It installs fine and spins up the container. The docs then say to use netsh to manually add port-forward rules, but that doesn’t work (and you can seriously mess things up with netsh).

Solution

Instead, you can just tell Docker to forward the port for you when you initially create the container by using the -p switch. Since Home Assistant uses port 8123, you add -p 8123:8123 to the command.

Here’s the one-liner that does both the install and the port forward at the same time:

docker run -p 8123:8123 --name="home-assistant" -e "TZ=America/Los_Angeles" -v //c/Users/lance/homeassistant:/config homeassistant/home-assistant:stable

After that, you’re ready to go! Open a browser on the host PC and navigate to http://localhost:8123.
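
If the page doesn’t load, you can confirm that Docker actually published the port. The container name below assumes you used --name="home-assistant" as in the command above:

docker port home-assistant

It should print a mapping along the lines of 8123/tcp -> 0.0.0.0:8123.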

Important: the -p 8123:8123 parameter must come before the image name. Otherwise, it gets passed to the container instead of Docker, which results in a broken install because the container doesn’t know what -p is. I wasted two days before discovering this, thanks to help from Alex Sorokoletov and Martin Sundhaug. I owe them some 🍻.

Using Windows IoT, SignalR, Azure Custom Vision and Xamarin Forms to Flush a Toilet

My cat uses the human toilet. However, he doesn’t know how to flush when he’s done. So, I thought, “Why not train a custom machine learning model to know when to flush the toilet for him? That way, I can go on vacation without asking friends to stop by.”

That single thought began my trip through some great modern developer tools and tech to build a full solution for the problem. In this post, I’ll walk you through everything; you can explore the code and parts list here: https://github.com/LanceMcCarthy/Flusher.

3D Printed case for Raspberry Pi running Windows IoT and Flusher client application. The gray and white item in the background is the toilet valve.

You might have some initial questions, like: “Why use AI? Why not just use motion detection to flush it?” There are several ways you could use a non-AI approach (like the motion sensors used in public restrooms), but the cat is way too smart and would try to game the system into getting him extra treats. Ultimately, I need a smart/remote way to know there’s a positive hit (ahem, a “number one” or a “number two”) and to flush only then.

The system has several parts, each covered in the sections below.

SignalR

Since it sits in the middle of all the other applications, let’s start with the server project. It is a very simple ASP.NET Core application hosting a SignalR Hub. The hub has six methods:

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Each hub method simply relays the call to every connected client.
public class FlusherHub : Hub
{
    public async Task SendMessage(string message)
    {
        await Clients.All.SendAsync(ActionNames.ReceiveMessageName, message);
    }

    public async Task SendFlushRequest(string requester)
    {
        await Clients.All.SendAsync(ActionNames.ReceiveFlushRequestName, requester);
    }

    public async Task SendPhotoRequest(string requester)
    {
        await Clients.All.SendAsync(ActionNames.ReceivePhotoRequestName, requester);
    }

    public async Task SendPhotoResult(string message, string imageUrl)
    {
        await Clients.All.SendAsync(ActionNames.ReceivePhotoResultName, message, imageUrl);
    }

    public async Task SendAnalyzeRequest(string requester)
    {
        await Clients.All.SendAsync(ActionNames.ReceiveAnalyzeRequestName, requester);
    }

    public async Task SendAnalyzeResult(string message, string imageUrl)
    {
        await Clients.All.SendAsync(ActionNames.ReceiveAnalyzeResultName, message, imageUrl);
    }
}

All clients subscribe to the hub through a reusable SignalR service class, each with different responsibilities. The Windows IoT application is concerned with listening for Flush and Analyze requests, while the Xamarin applications send and listen for those requests.
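
For context, here is roughly what a client-side subscription looks like with the Microsoft.AspNetCore.SignalR.Client package. This is a minimal sketch, not the repo’s actual service class; the hub URL and the string event names (which I’m assuming match the ActionNames constants above) are placeholders.

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

public class FlusherClientSketch
{
    private readonly HubConnection connection;

    // hubUrl is hypothetical, e.g. "https://my-flusher-server/flusherhub"
    public FlusherClientSketch(string hubUrl)
    {
        connection = new HubConnectionBuilder()
            .WithUrl(hubUrl)
            .Build();

        // Listen for broadcasts from the hub (event names assumed from the hub methods above).
        connection.On<string>("ReceiveMessage", message => Console.WriteLine(message));
        connection.On<string, string>("ReceiveAnalyzeResult", (message, imageUrl) => Console.WriteLine($"{message} {imageUrl}"));
    }

    public Task ConnectAsync() => connection.StartAsync();

    // Invoke a hub method, e.g. asking the IoT client to flush.
    public Task RequestFlushAsync(string requester) => connection.InvokeAsync("SendFlushRequest", requester);
}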

The web application also has an MVC view so I can manually communicate with the IoT client from a web page.

Windows IoT

Now, let’s talk about the component that does all the heavy lifting: the UWP app running on Windows IoT. This app connects to the SignalR hub, listens for commands, and sends status updates to all the other projects.

I 3D printed a case for the Raspberry Pi 3 so that it was user friendly and self contained. I decided on an amazing model on Thingiverse; check it out here: https://www.thingiverse.com/make:760269.

Here’s a high-level rundown of the construction:

The mechanical part is just a simple replacement toilet valve (Danco link) that has a convenient cable that the servo can pull:

Digging into each part of the code would make this post too long; you can drill right down to the code here on GitHub. Instead, let me explain with some highlights. When the project starts up, it initializes several services:

// Sets up the Azure Storage connection
await InitializeAzureStorageService();
// Enables the webcam connected to the Raspberry Pi
await InitializeCameraServiceAsync();
// Sets up the GPIO PWM service that allows me to set a specific angle for a servo motor
await InitializeServoServiceAsync();
// Connects to the SignalR hub
await InitializeSignalRServiceAsync();
// Initializes the rest of the GPIO pins (LEDs, button, etc.)
InitializeGpio();

Here’s the general workflow:

  1. When an analyze request comes in or a local trigger occurs (e.g. motion), the app takes a photo.
  2. It uploads the photo to an Azure Storage blob and creates a URL to the image.
  3. That image URL is then sent to a trained Azure Custom Vision service, which returns the analysis results to the IoT client (or the app falls back to Windows ML and an ONNX model on the device for offline inferencing).
  4. If there is a high degree of certainty (85%+) of the presence of a #1 or #2, the servo moves from 0 degrees to 100 degrees and stays there for 5 seconds, which flushes the toilet (a rough PWM sketch follows this list).
  5. The IoT client will send the results, with image, to the SignalR hub.
  6. Extra – In case a human needs to use that guest bathroom, you can press the triangle button in front of the unit to manually flush.
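
As a point of reference, here is roughly how a servo angle can be driven over PWM with the Windows.Devices.Pwm API. This is a hedged sketch, not the repo’s actual ServoService; the pin number is hypothetical and, on a Raspberry Pi, the default PWM controller typically needs the Lightning provider to be configured.

using System;
using System.Threading.Tasks;
using Windows.Devices.Pwm;

public static class ServoSketch
{
    // Maps 0-180 degrees to a roughly 1-2 ms pulse, i.e. 5%-10% duty cycle at 50 Hz.
    private static double DutyCycleForAngle(double angle) => 0.05 + (angle / 180.0) * 0.05;

    public static async Task MoveToAngleAsync(double angle)
    {
        var controller = await PwmController.GetDefaultAsync(); // may be null without the Lightning provider
        if (controller == null) throw new InvalidOperationException("No PWM controller available.");

        controller.SetDesiredFrequency(50); // standard hobby-servo frequency

        using (var pin = controller.OpenPin(5)) // hypothetical GPIO pin
        {
            pin.SetActiveDutyCyclePercentage(DutyCycleForAngle(angle));
            pin.Start();
            await Task.Delay(5000); // hold the position for 5 seconds, per the workflow above
            pin.Stop();
        }
    }
}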

Using the GPIO pins and Windows IoT APIs, the app changes the status lights so any humans nearby can understand the current state of the unit (a rough sketch of the LED logic follows the list below). GPIO is also used for the Flash LED pin and the PWM signal for the servo.

  • Green (ready, awaiting command)
  • Blue (busy, action in progress)
  • Red (exception or other error)
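
For illustration, SetLedColor might look something like the following with the Windows.Devices.Gpio API. This is only a sketch under my own assumptions; the pin numbers are hypothetical and the real implementation in the repo may differ.

using Windows.Devices.Gpio;

public enum LedColor { Green, Blue, Red }

public class StatusLedsSketch
{
    // Hypothetical BCM pin numbers for the three status LEDs.
    private GpioPin greenPin, bluePin, redPin;

    public void InitializeStatusLeds()
    {
        var gpio = GpioController.GetDefault(); // returns null if GPIO isn't available

        greenPin = gpio.OpenPin(17);
        bluePin = gpio.OpenPin(27);
        redPin = gpio.OpenPin(22);

        foreach (var pin in new[] { greenPin, bluePin, redPin })
            pin.SetDriveMode(GpioPinDriveMode.Output);
    }

    public void SetLedColor(LedColor color)
    {
        // Light only the requested LED and turn the others off.
        greenPin.Write(color == LedColor.Green ? GpioPinValue.High : GpioPinValue.Low);
        bluePin.Write(color == LedColor.Blue ? GpioPinValue.High : GpioPinValue.Low);
        redPin.Write(color == LedColor.Red ? GpioPinValue.High : GpioPinValue.Low);
    }
}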

The analyze task logic looks like this:

    private async Task<AnalyzeResult> AnalyzeAsync(bool useOnline = true)
    {
        try
        {
            // Status LED to indicate operation in progress
            SetLedColor(LedColor.Blue); 

            var analyzeResult = new AnalyzeResult();

            // Take a photo
            await flusherService.SendMessageAsync("Generating photo...");
            analyzeResult.PhotoResult = await GeneratePhotoAsync(Requester);

            bool poopDetected;

            if (useOnline)
            {
                Log("[INFO] Analyzing photo using Vision API...");
                await flusherService.SendMessageAsync("Analyzing photo using Vision API...");

                // Option 1 - Online Custom Vision service
                poopDetected = await EvaluateImageAsync(analyzeResult.PhotoResult.BlobStorageUrl);
            }
            else
            {
                Log("[INFO] Analyzing photo offline with Windows ML...");

                await flusherService.SendMessageAsync("Analyzing image with Windows ML...");

                // Option 2 - Use offline Windows ML and ONNX
                poopDetected = await EvaluateImageOfflineAsync(analyzeResult.PhotoResult.LocalFilePath);
            }

            analyzeResult.DidOperationComplete = true;
            analyzeResult.IsPositiveResult = poopDetected;
            analyzeResult.Message = poopDetected ? "Poop detected!" : "No detection, flush skipped.";

            // Update status LED
            SetLedColor(LedColor.Green);

            return analyzeResult;
        }
        catch (Exception ex)
        {
            SetLedColor(LedColor.Red);

            return new AnalyzeResult
            {
                IsPositiveResult = false,
                DidOperationComplete = false,
                Message = $"Error! Analyze operation did not complete: {ex.Message}"
            };
        }
    }

If there was a positive result, flush the toilet and send an email:

private async void FlusherService_AnalyzeRequested(string requester)
{
    Log($"[INFO] Analyze Requested by {requester}.");

    await flusherService.SendMessageAsync("Analyzing...");

    var result = await AnalyzeAsync();

    if (result.DidOperationComplete)
    {
        // Inform subscribers of negative/positive result along with photo used for analyzing.
        await flusherService.SendAnalyzeResultAsync(result.Message, result.PhotoResult.BlobStorageUrl);

        // If there was a positive detection, invoke Flush and send email.
        if (result.IsPositiveResult)
        {
            Log("[DETECTION] Poop detected!");
            FlusherService_FlushRequested(Requester);

            Log("[INFO] Alerting email subscribers");
            await SendEmailAsync(result.PhotoResult.BlobStorageUrl);
        }
        else
        {
            Log("[DETECTION] No objects detected.");
        }
    }
    else
    {
        // Inform subscribers of error
        await flusherService.SendMessageAsync("Analyze operation did not complete, please try again later. If this continues to happen, check server or IoT implementation..");
    }
}

Although the Raspberry Pi isn’t going to be connected to a display in normal use, I did build out a diagnostic dashboard as an admin panel. It uses Telerik UI for UWP charts and gauges to show the current angle and a history of angle changes, a slider to manually move the servo to any angle, and an image to see the last photo taken.

There’s one last piece of the puzzle that I haven’t implemented yet: the actual automation of taking the photo, so that I don’t need the admin app to start the analyze operation. At the beginning of this article I mentioned using a timer or a motion sensor; I will test both approaches in V2. I expect I’ll end up using a sonar sensor like I did for this Netduino project: https://www.youtube.com/watch?v=g0_v_awy52k.

Azure Storage

This is a simple, reusable Azure Storage service class that uses the Azure Storage .NET SDK to connect to a blob container that holds the image files. The images are deleted after a certain period (90 days) so I don’t end up with a huge container and costs.
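
To give an idea of the blob upload step, here is a minimal sketch using the Azure.Storage.Blobs package. It is not the repo’s actual service class (which may use an older SDK version), and the connection string and container name are placeholders.

using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public class AzureStorageSketch
{
    private readonly BlobContainerClient container;

    public AzureStorageSketch(string connectionString)
    {
        // "flusher-images" is a placeholder container name.
        container = new BlobContainerClient(connectionString, "flusher-images");
    }

    public async Task<string> UploadPhotoAsync(string localFilePath)
    {
        await container.CreateIfNotExistsAsync();

        var blobName = $"{DateTime.UtcNow:yyyyMMdd-HHmmss}.jpg";
        var blob = container.GetBlobClient(blobName);

        // Upload the photo and hand back the URL the Custom Vision service will analyze.
        // This assumes the container allows read access (or that you append a SAS token).
        await blob.UploadAsync(localFilePath, overwrite: true);
        return blob.Uri.ToString();
    }
}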

Azure Custom Vision & Machine Learning

If you’ve never seen Azure Custom Vision, I recommend checking it out at https://customvision.ai. Not only can you use the REST API, you can also download a TensorFlow or ONNX model for offline, edge inferencing. As with the storage API, I wrote a reusable Custom Vision service class to do the heavy lifting.
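
As an illustration of what that evaluation call can look like, here is a rough sketch against the Custom Vision prediction REST endpoint. It is not the repo’s CustomVisionService; the prediction URL and key come from your own project in the Custom Vision portal, and the 85% threshold mirrors the workflow described earlier.

using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class CustomVisionSketch
{
    private static readonly HttpClient http = new HttpClient();

    // predictionUrl and predictionKey come from the "Prediction URL" dialog in the Custom Vision portal.
    public static async Task<bool> EvaluateImageAsync(string imageUrl, string predictionUrl, string predictionKey)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, predictionUrl)
        {
            Content = new StringContent(JsonSerializer.Serialize(new { Url = imageUrl }), Encoding.UTF8, "application/json")
        };
        request.Headers.Add("Prediction-Key", predictionKey);

        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();

        // The response contains a "predictions" array with "tagName" and "probability" values.
        // Treat anything at or above 85% probability as a positive detection.
        using (var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync()))
        {
            return doc.RootElement.GetProperty("predictions")
                      .EnumerateArray()
                      .Any(p => p.GetProperty("probability").GetDouble() >= 0.85);
        }
    }
}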

In order to train the model, I had to take a lot of gross pictures. As of writing this post, I’ve done 4 training iterations with about 6 hours of training time. To spare you the gritty details, here’s a safe-for-work screenshot of the successful #2 detection:

A test of the model with 85% probability for the two items in the toilet bowl.

I don’t share the endpoint details of my REST API in the demo code, but you can try out the ONNX model with Windows Machine Learning (aka WinML) because the ONNX file (flusher.onnx) is in the UWP project’s assets folder here.

Xamarin.Forms

Lastly, the admin applications. I decided to use Xamarin.Forms because I could build all three platform apps at the same time. I also prefer to use XAML when I can, so this was a natural choice for me.

In a nutshell, this is similar to the web admin portal. The app connects directly to the SignalR server and listens for messages coming from the IoT client. It can also request a photo, manually flush, or request a complete analyze operation.

Here’s a screenshot at runtime to better explain the operations (to keep it work-safe, the images are only from test runs).

Xamarin.Forms on Android. You can request a photo, a toilet flush, or a full analyze operation.

Cat Tax

Finally, the moment many of you were waiting for… my cat tax.

The guest bathroom that belongs to the cat and the star of the show.

Custom TypingStarted and TypingEnded Events

You know that little indicator in a chat that shows if someone is currently typing? I am working on a new post (coming soon, link will be here) that uses SignalR to communicate who is currently typing in a chat room along with the messages themselves.

To determine who is typing, I use a timer and the TextChanged event. The timer logic itself is straightforward:

  • The first TextChanged event starts a timer.
  • If the TextChanged event fires again before the timer’s Elapsed event, the timer is stopped and restarted.
  • If the Timer’s Elapsed event is fired first, then the user has stopped typing.

This code is a bit tedious to implement over and over again, so why not just build it into the control itself and invoke custom TypingStarted and TypingEnded events? Enjoy!

public class TimedEntry : Entry, IDisposable
{
    private readonly System.Timers.Timer timer;

    public TimedEntry()
    {
        TextChanged += TimedChatEntry_TextChanged;

        timer = new System.Timers.Timer(1000);
        timer.Elapsed += timer_Elapsed;
    }

    public event EventHandler<EventArgs> TypingStarted;

    public event EventHandler<EventArgs> TypingEnded;

    private void timer_Elapsed(object sender, System.Timers.ElapsedEventArgs args)
    {
        if (timer == null)
            return;

        timer?.Stop();
        Device.BeginInvokeOnMainThread(() => TypingEnded?.Invoke(this, new EventArgs()));
    }

    private void TimedChatEntry_TextChanged(object sender, TextChangedEventArgs e)
    {
        if (timer == null)
            return;

        if (!timer.Enabled)
        {
            timer?.Start();
            Device.BeginInvokeOnMainThread(() => TypingStarted?.Invoke(this, new EventArgs()));
        }
        else
        {
            timer.Stop();
            timer.Start();
        }
    }

    public void Dispose()
    {
        if (timer != null)
        {
            timer.Elapsed -= timer_Elapsed;
        }
        timer?.Dispose();
    }
}

Here’s an example that uses a SignalR service:

<TimedEntry TypingStarted="TimedChatEntry_OnTypingStarted"
            TypingEnded="TimedChatEntry_OnTypingEnded"/>

Then, in the page’s code-behind, forward the events to the SignalR service:

private async void TimedChatEntry_OnTypingStarted(object sender, EventArgs e)
{
    if (service != null)
        await service.SendTyperAsync(me.Name, true);
}

private async void TimedChatEntry_OnTypingEnded(object sender, EventArgs e)
{
    if (service != null)
        await service.SendTyperAsync(me.Name, false);
}

You can see the entire thing in action, including the SignalR Hub project, here on GitHub: SignalR Chat Room Demo.

Unblocking .NET DLLs on Mac Catalina

There’s a new headache in town. If you run a self-hosted Azure DevOps agent, or a .NET Core project, on a Mac running Catalina (v10.15+) and downloaded the files with Microsoft Edge, you will have noticed a new behavior where the OS prevents the .NET assemblies from working.

This is because the file was marked with a “quarantine” attribute when it was downloaded with the browser. It’s a security measure that I’m familiar with on Windows, where I would typically remove the attribute by right-clicking the file, selecting “Properties”, then checking “Unblock”:

Unblocking a ZIP file on Windows.

At this point I suspected I had a good idea of what was happening; I just needed to figure out how to check for and remove it on a Mac. I reached out to the dev community on Twitter and asked around. Thanks to Eric Lawrence, who pointed me to the Chromium code change that shows it is indeed quarantining the file(s).

After reviewing how to read and remove file attributes, I did indeed find a com.apple.quarantine attribute on the tarball archive downloaded from Azure DevOps. Since it contains all the .NET assemblies, we can just unblock the tar.gz file and all of its contents will be unblocked as well.

Solution

Let’s go back to Azure DevOps Agent Pool page, where you download your agent package (see here if you’ve never done this before).

Click the download button to download the tarball file

Now that the file is in the Downloads folder, we can use the xattr command to list its attributes. In my case, I’m checking the downloaded compressed file:

xattr vsts-agent-osx-x64-2.159.2.tar.gz
You’ll see the attributes listed with the xattr command

Bingo! Notice the com.apple.quarantine attribute? That’s the one causing this headache. Now that we know the attribute name, we can remove it by calling the xattr command again with -d attributeName fileName parameters.

xattr -d com.apple.quarantine vsts-agent-osx-x64-2.159.2.tar.gz
You will need to give Terminal permission to access the Downloads folder.

Finally, list the attributes again to confirm the quarantine attribute has been removed:

Confirm the attribute was removed

Now you can finish extracting the tarball and setting up the agent (or running your .NET Core application).
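
One extra tip: if you already extracted the archive before removing the attribute, xattr can strip it recursively from the extracted folder instead. The folder name below is just a placeholder for wherever you extracted the agent.

xattr -dr com.apple.quarantine ./myagent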

Preparing apps for Windows X and Surface Duo or Neo Devices.

Preface – This is not Microsoft-provided information, just my guess after digging around the Microsoft.UI.Xaml source code after a conversation on Twitter. This is not coming from any MVP-NDA or other NDA source. I will update this post when official information becomes available.

There is a control suspiciously missing from the XAML Controls Gallery app – TwoPaneView. It’s in the WinUI 2.2 release, but there’s no documentation, guidance, or examples of it being used (yet). The only thing you’ll find is the API reference, which is automatically generated during the build process.
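
Going by the API reference alone, basic usage would presumably look something like the following. This is a hedged sketch based on the WinUI 2.2 API surface, not official guidance; muxc is an xmlns prefix mapped to Microsoft.UI.Xaml.Controls, and the thresholds and content are arbitrary.

<!-- xmlns:muxc="using:Microsoft.UI.Xaml.Controls" -->
<muxc:TwoPaneView MinWideModeWidth="720"
                  MinTallModeHeight="720"
                  WideModeConfiguration="LeftRight"
                  TallModeConfiguration="TopBottom"
                  PanePriority="Pane1">
    <muxc:TwoPaneView.Pane1>
        <Grid Background="LightSteelBlue">
            <TextBlock Text="Pane 1 (e.g. a master list)" Margin="12"/>
        </Grid>
    </muxc:TwoPaneView.Pane1>
    <muxc:TwoPaneView.Pane2>
        <Grid Background="LightGoldenrodYellow">
            <TextBlock Text="Pane 2 (e.g. detail content)" Margin="12"/>
        </Grid>
    </muxc:TwoPaneView.Pane2>
</muxc:TwoPaneView>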

It didn’t go completely unnoticed; another MVP, Fons Sonnemans, did find the control in the preview SDK and wrote this blog post. However, now armed with the knowledge of two-screen devices and Windows X on the horizon, I wanted to dig deeper.

API Support

If you look at Fons’s demo, it might seem like all the control is good for right now is visual state changes that occur within a single window. If this is what we’ll use for multi-window, single-instance apps, there needs to be some sort of OS-level event that bubbles useful information up the API. This is where my conjecture begins…

I reviewed the source code of the control and found some interesting code in DisplayRegionHelper::GetRegionInfo().

It appears to check whether the display region is WindowingEnvironmentKind::Tiled by calling the WinRT API WindowingEnvironment::GetForCurrentView(). Then, the most interesting part, which I think supports multi-screen setups, is regions = winrt::Windows::UI::WindowManagement::DisplayRegion::GetRegionsForCurrentView().

Here’s the snippet containing those lines:

winrt::WindowingEnvironment environment{ nullptr };
try
{
    environment = winrt::WindowingEnvironment::GetForCurrentView();
} catch(...) {}

// Verify that the window is Tiled
if (environment)
{
    if (environment.Kind() == winrt::WindowingEnvironmentKind::Tiled)
    {
        winrt::IVectorView<winrt::Windows::UI::WindowManagement::DisplayRegion> regions = winrt::Windows::UI::WindowManagement::DisplayRegion::GetRegionsForCurrentView();
        info.RegionCount = std::min(regions.Size(), c_maxRegions);

        // More than one region
        if (info.RegionCount == 2)
        {
            winrt::Rect windowRect = WindowRect();

            if (windowRect.Width > windowRect.Height)
            {
                info.Mode = winrt::TwoPaneViewMode::Wide;
                float width = windowRect.Width / 2;
                info.Regions[0] = { 0, 0, width, windowRect.Height };
                info.Regions[1] = { width, 0, width, windowRect.Height };
            }
            else
            {
                info.Mode = winrt::TwoPaneViewMode::Tall;
                float height = windowRect.Height / 2;
                info.Regions[0] = { 0, 0, windowRect.Width, height };
                info.Regions[1] = { 0, height, windowRect.Width, height };
            }
        }
    }

This is the basis of my theory; I could be way off. If I’m wrong, what’s the worst thing that happens? I was forced to think about my application in a multi-window environment? Win-win!

Demo

There are no official demos of this that I could find. However, the same source code also has a UI test! I isolated that UI test in a runnable project; that is what you see a recording of in the tweet embedded above. You can download the project here: DuoNeoTest.zip

Note: I did not set the Insider SDK as the target SDK. If you do have SDK 18990 installed and have a device running an Insider preview, just change the target in the project properties.