C# Nullable Reference Types: IntelliSense Confusion

Nullable reference types were introduced in C# 8.0. With the feature enabled, all reference types are non-nullable by default, and the compiler warns you when a null-value could be assigned to them. This is one of my favorite recent features in C#, but there are scenarios where a mixed nullable environment can cause confusion.

Background

To enable the assignment of the value null to a reference type, you have to explicitly mark that type as nullable. This uses the same syntax as nullable value types, introduced in C# 2.0, where you, for example, make an int nullable by adding a question mark after it: int?.
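
The feature is opt-in per project or per file. As a minimal sketch, it can be turned on with the Nullable-property in the csproj-file, or with a compiler directive at the top of a file:

// Per file, at the top of a .cs-file:
#nullable enable

// Or for the whole project, in the .csproj-file:
// <PropertyGroup>
//   <Nullable>enable</Nullable>
// </PropertyGroup>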

When we look at a typical example of a service-class, we can see the benefits of Nullable reference types:

public class ProductService
{
    // This method accepts a non-null string for 'productId'
    // and always returns a string
    public string FormatProductId(string productId)
    {
        // ...
    }

    // This method accepts a nullable 'formattedProductId'
    // and returns a string or null
    public string? TryGetProductName(string? formattedProductId)
    {
        // ...
    }
}

This makes things clear. We know that the method FormatProductId never returns null and that it doesn't accept null for its parameter. We also know that the method TryGetProductName returns a string which could be null and that its parameter accepts a string which could be null.
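
As a quick, hypothetical consumer-side sketch, the compiler backs these annotations up with warnings when they are violated:

var service = new ProductService();

string? id = null;

// warning CS8604: Possible null reference argument
var formatted = service.FormatProductId(id);

// warning CS8600: Converting null literal or possible null
// value to non-nullable type
string name = service.TryGetProductName(formatted);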

This is great; it means that we don't have to perform a null-check on the productId-parameter of the FormatProductId-method, right? Well, not exactly...

Confusion: Mixed nullable environments

In an environment where all your code has Nullable reference types enabled, you can trust the output of a method and the input to its parameters. In a mixed nullable environment, things are not as straightforward, especially when you look at how IntelliSense in Visual Studio signals what to expect from the code.

Scenario 1: Modern app & legacy library

Imagine that your new modern app has Nullable reference types enabled, but you're using an external library that is legacy and does not have this enabled. This external library can be your own old library or something you've included from NuGet.

The problem now becomes that the external library signals, for example, that a method returns a string and not a string?, so you should be able to trust that the value is not null, right? Unfortunately not. Even when referencing a local project that doesn't have Nullable reference types enabled, IntelliSense tells me that the returned string is not null, even though it very well can be.

Screenshot: IntelliSense showing "Value is not null"
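
In practice, this means a defensive null-check is still warranted when calling into such a library. A minimal sketch, where LegacyFormatter is a hypothetical class from a library without Nullable reference types enabled:

var formatter = new LegacyFormatter();

// Declared as string in the legacy library, but nothing guarantees
// that the value isn't null, despite what IntelliSense claims
string? productId = formatter.GetDefaultProductId();

if (productId == null)
{
    // Handle the null-case here
}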

Scenario 2: Legacy app & modern library

Imagine that you have just put together a nice library that you want others to use, either within your project, your organization, or publicly through NuGet. One of the best parts of Nullable reference types is that the compiler will warn you if you try to pass a null-value as an argument to a method that explicitly states that it doesn't support null.

Nice, now you can clean out all those noisy null-checks at the top of all the methods, right? Unfortunately not. Your code might be used from an assembly which doesn't have Nullable reference types enabled (or from an older version of Visual Studio), where the non-nullability isn't detected.

In a way, this means you have to reverse the way you do null-checks in your code.

public class ProductService
{
    // This method does not accept a null-value;
    // if it receives one, it should throw an exception
    public string FormatProductId(string productId)
    {
        if (productId == null)
            throw new ArgumentNullException(nameof(productId));
        // ...
    }

    // This method accepts null-values
    // and should adjust its logic accordingly
    public string? TryGetProductName(string? formattedProductId)
    {
        return
            formattedProductId != null
            ? GetProductName(formattedProductId)
            : null;
    }
}

Key takeaways

My own take-aways from exploring this aspect of Nullable reference types are:

  • When building a library, always check for null in incoming method-arguments, even when Nullable reference types is enabled
  • When consuming an external legacy library, don't trust the return-type to not be null (even if it says it's not)
  • In a mixed nullable environment, the feature that is meant to guard us against NullReferenceExceptions is likely to mistakenly cause a few more of them
  • When this feature is fully adopted, a lot of the overhead of null-handling code will go away

Thoughts

Hopefully, this feature will be enabled by default in .NET 5, so that these kinds of confusions, and the associated errors described above, can be avoided.

One idea for improving the IntelliSense-behavior around assemblies that are not known to have Nullable reference types enabled could be to show all such types as nullable; both because it makes things super-clear, and because they actually are nullable.

This change would make everything in the whole .NET Core base class library light up as nullable, but as of .NET Core 3.1, it all is nullable, by definition.

PowerShell LINQ with Short Aliases

Most modern applications deal with some kind of filtering or querying. In C# and .NET, we have Language Integrated Query (LINQ), which we also have access to in PowerShell, since it's built on .NET.

To list the top 10 largest files in the Windows temporary folder that are larger than 1 kB and start with the letter W, ordered by size and skipping the first 5, the C#-code with LINQ would look somewhat like this:

new System.IO.DirectoryInfo(@"C:\Windows\Temp")
    .GetFiles()
    .Where(x => x.Length > 1024 && x.Name.StartsWith("W"))
    .OrderByDescending(x => x.Length)
    .Select(x => new { x.Name, x.Length })
    .Skip(5)
    .Take(10)
    .ToList()
    .ForEach(x => Console.WriteLine($"{x.Name} ({x.Length})"));

The equivalent logic in PowerShell has a somewhat more daunting syntax, especially if you're not used to it:

Get-ChildItem "C:\Windows\Temp" `
| Where-Object {$_.Length -gt 1024 -and $_.Name.StartsWith("W")} `
| Sort-Object {$_.Length} -Descending `
| Select-Object -Property Name, Length -First 10 -Skip 5 `
| ForEach-Object {Write-Host "$($_.Name) ($($_.Length))"}

That's a bit explicit and verbose, but if you use the command Get-Alias in PowerShell, you will see a lot of useful aliases, which make the syntax a bit terser and easier to get an overview of:

gci "C:\Windows\Temp" `
| ?{$_.Length -gt 1024 -and $_.Name.StartsWith("W")} `
| sort{$_.Length} -Descending `
| select Name, Length -First 10 -Skip 5 `
| %{write "$($_.Name) ($($_.Length))"}

In a real scenario, you probably wouldn't write each result to the console, but let PowerShell present the result in its default grid format.

HTML Encode TagHelper in ASP.NET Core

For a specific scenario recently, I wanted to display the HTML-encoded output of a TagHelper in ASP.NET Core. That is, I wanted to use the TagHelper, but instead of rendering its actual result, I wanted to see the raw HTML which would have been included in my template.

So I created another TagHelper, which allows me to wrap any HTML, inline ASP.NET Core-code and other TagHelpers, and have all the content inside the TagHelper's tag HTML-encoded, like this:

<html-encode>
    <a href="@Url.Action("Index")">Read More</a>
    @Html.TextBox("No_Longer_Recommended-TagHelpers_Preferred")
    <my-other-tag-helper />
</html-encode>

From this, I will get the raw HTML of the link with its UrlHelper-result, the result of the HTML-helper and the result of my other TagHelper.

The source-code for the html-encode-TagHelper is as follows:

using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Razor.TagHelpers;

[HtmlTargetElement("html-encode", TagStructure = TagStructure.NormalOrSelfClosing)]
public class HtmlEncodeTagHelper : TagHelper
{
    public override async Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        // Use the content if another TagHelper has already modified it,
        // otherwise render and get the child content
        var childContent = output.Content.IsModified
            ? output.Content.GetContent()
            : (await output.GetChildContentAsync()).GetContent();

        string encodedChildContent = WebUtility.HtmlEncode(childContent ?? string.Empty);

        // Remove the wrapping <html-encode>-tag and replace the
        // content with its HTML-encoded equivalent
        output.TagName = null;
        output.Content.SetHtmlContent(encodedChildContent);
    }
}
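
For the views to pick up the TagHelper, it also needs to be registered in _ViewImports.cshtml. A minimal sketch, assuming the TagHelper lives in an assembly named MyWebApp:

@* _ViewImports.cshtml: register all TagHelpers from the (assumed) assembly MyWebApp *@
@addTagHelper *, MyWebApp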

API Rate Limit HTTP Handler with HttpClientFactory

Most APIs have a Rate Limit of some sort. For example, GitHub has a limit of 5000 requests per hour. This can partly be handled by limiting your usage, by timing your requests to the API or by caching the results.

What about when an API limits your requests per second? This is probably something you would want to handle somewhere central in your code, and not spread out everywhere you make an HTTP call to the API.

For me, the solution was to add an outgoing request middleware to the setup of the HttpClientFactory.

With this, I can just configure the startup services to use this RateLimitHttpMessageHandler-class with the HttpClientFactory:

services
    .AddHttpClient<IApi, Api>()
    .AddHttpMessageHandler(() =>
        new RateLimitHttpMessageHandler(
            limitCount: 5,
            limitTime: TimeSpan.FromSeconds(1)))
    .AddDefaultTransientHttpErrorPolicy();

This ensures that wherever I use the class IApi, through dependency injection, it will limit the calls to the API to only 5 calls per second.
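
For reference, IApi/Api here is a typed client registered with the HttpClientFactory. A minimal sketch of what such a pair could look like; the GetUserAsync-member is just a hypothetical example, the important part is the injected HttpClient:

using System.Net.Http;
using System.Threading.Tasks;

public interface IApi
{
    Task<string> GetUserAsync(string username);
}

public class Api : IApi
{
    private readonly HttpClient _httpClient;

    // The HttpClientFactory injects an HttpClient that is already
    // wired up with the rate-limiting handler registered above
    public Api(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    // Assumes a BaseAddress has been configured on the client
    public Task<string> GetUserAsync(string username)
        => _httpClient.GetStringAsync($"/users/{username}");
}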

The simplified version of the code for the RateLimitHttpMessageHandler:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class RateLimitHttpMessageHandler : DelegatingHandler
{
    private readonly List<DateTimeOffset> _callLog =
        new List<DateTimeOffset>();
    private readonly TimeSpan _limitTime;
    private readonly int _limitCount;

    public RateLimitHttpMessageHandler(int limitCount, TimeSpan limitTime)
    {
        _limitCount = limitCount;
        _limitTime = limitTime;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        CancellationToken cancellationToken)
    {
        var now = DateTimeOffset.UtcNow;

        lock (_callLog)
        {
            _callLog.Add(now);

            // Keep only the most recent limitCount calls in the log
            while (_callLog.Count > _limitCount)
                _callLog.RemoveAt(0);
        }

        await LimitDelay(now);

        return await base.SendAsync(request, cancellationToken);
    }

    private async Task LimitDelay(DateTimeOffset now)
    {
        if (_callLog.Count < _limitCount)
            return;

        var limit = now.Add(-_limitTime);

        var lastCall = DateTimeOffset.MinValue;
        var shouldLock = false;

        lock (_callLog)
        {
            lastCall = _callLog.FirstOrDefault();
            shouldLock = _callLog.Count(x => x >= limit) >= _limitCount;
        }

        // Delay until the oldest logged call has fallen outside the limit-window
        var delayTime = shouldLock && (lastCall > DateTimeOffset.MinValue)
            ? (lastCall - limit)
            : TimeSpan.Zero;

        if (delayTime > TimeSpan.Zero)
            await Task.Delay(delayTime);
    }
}

Azure Storage Easy Web File-Hosting

In an ambition to improve my blog a little bit, I wanted to include more images in the posts, but felt the lack of a good solution for web file-hosting. To find the best fit, I put down a check-list of features and criteria. The solution I was looking for had to check the following:

  • Ownership of the files: If the solution used disappears tomorrow, I can still access my files.
  • Predictable URLs: The URL to the resources should never change. You don't want to have to update all your blog-posts or other external links floating around the internet.
  • Good tooling: Avoiding slow web-upload, when uploading multiple and/or large files, but also easily getting an overview of your files.
  • (Bonus) Pretty URLs: "Pretty" looking URLs are easier to check for copy-paste errors, but could also potentially benefit SEO.
  • (Bonus) Low or no cost: Since there are free services out there, paying for file-hosting must be worth it.

Non-fitting Alternatives

When I started evaluating alternatives, a while back, Flickr was still a thing. The problem was that I couldn't predictably link directly to an image I had uploaded. After running into all the obstacles, which indicated that using Flickr for image-hosting was actively being blocked, I understood it was too much of a hack.

Historically, I've been using Google's image hosting through Blogger, which is where this blog started out. The problem was that this also felt like a hack, and I was always worried that the URLs would change and I'd have to go through every single blog-post I've ever made and update all the images.

Services like Dropbox and Google Drive seem to actively block the use of their services for this, even though the files are accessible through the web.

Azure Storage for easy web file-hosting

Enter Azure Storage, with its widespread adoption, familiar interface and extremely affordably priced Blob Storage. It checks off all the points in my checklist, and more. Given its backing, it's fair to assume more functionality will be added over time.

Azure Blob Storage can be used to store data-blobs of almost any size in the Azure-cloud. By providing a path/key, you can read or write the "file" at that "path". The overall performance of Azure Storage is great and an important feature of the service, and the simple mechanisms of Azure Blob Storage make it very fast.

Expose blob-content to the internet

So you could build a web-app which accesses your files on Azure Blob Storage and exposes them through URLs in your API, but you can also let Azure handle that for you, by activating anonymous read access to your blobs.

You can do this on container-level, so you can have a dedicated container for public files, separate from the other containers in the same Storage-account. These files will be read-only when you use the option Blob (anonymous read access for blobs only), found under the Access policy-section of the selected container.
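
This can also be done programmatically; a minimal sketch using the Azure.Storage.Blobs NuGet-package, where connectionString and the container-name public are assumptions for the example:

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// connectionString is assumed to hold the Storage-account's connection string
var container = new BlobContainerClient(connectionString, "public");

// Create the container with anonymous read access for blobs only
await container.CreateIfNotExistsAsync(PublicAccessType.Blob);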

Upload files

Then you can use the Azure Portal, programmatically use the Azure Blob Storage API to upload files, or use the application Azure Storage Explorer, for a friendly GUI-experience to get started.
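
Sticking with the programmatic route, uploading a file could look like this; again a sketch assuming the Azure.Storage.Blobs-package and the public-container from above:

using System.IO;
using Azure.Storage.Blobs;

var container = new BlobContainerClient(connectionString, "public");

// Upload a local image into an "images"-folder within the container
using (var file = File.OpenRead("header.png"))
{
    await container.UploadBlobAsync("images/header.png", file);
}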

Now you can share your files anywhere you want, via the URL provided by Azure or through a prettier URL, using a custom domain.

Add custom domain

To fulfill the criterion of pretty URLs, you can set your own custom domain for an Azure Storage-account. If you don't, the default URL for Azure Blob Storage is https://{storage-account-name}.blob.core.windows.net/{container}/{file-path}.

Activate Azure CDN

This is a great start for a small project, and you can later easily transition into using the full power of Azure Content Delivery Network (Azure CDN) on your existing Azure Storage-account, simply by activating it from the Azure CDN-section of your Storage-account.