using Programming;

A Blog about some of the intricacies of programming, and how one can get the best out of various languages.

F# Advent 2019: Dawn of the F# Domain Types

If you use .NET, build your objects in F#

So towards the end of October, I tweeted (as one does) about building domain objects and models in F#. I mentioned a few nice things about it, so today I intend to do a deeper dive into some of those nice things, and talk about why they are concerns you should keep in the back of your head when you make a decision about how to build your domain models.

Now it's no secret that I like F#. I talk about it a lot (among many other things), largely because when I am doing .NET work, there is almost always some F# involved. (It might be a small portion or a large portion, depending on the project.) As a result, if you've ever spoken with me in person about programming, I've probably talked about it.

Today, I'm going to show you why F# is a very cool tool in the .NET ecosystem, and how F# fixes a lot of class-oriented things that, in my opinion, C# does wrong.


F# does a lot of the heavy-lifting for object-oriented work

One of the most impressive things I feel F# has to offer is the amount of code generation it does for object-oriented work: it gives you, for free, a lot of things you usually spend a good chunk of time writing. Three lines of F# can do something that would take, genuinely, dozens (or even hundreds) of lines of C#.

If you've ever done OOP work, chances are you've had to write a .ToString(), or a .Equals(object), or a .GetHashCode(). Spoiler alert: with F# you don't need to write any of those most of the time. (You can, but it's ill-advised.)

Let's take a common game-development scenario in C#: vectors. In physics, a vector is a magnitude (distance) and a direction: start at the origin (the point (0, 0)), then move from there in the given direction by the given distance to arrive at the new location.

Commonly, we represent them in one of two ways:

  • r, Θ
  • x, y

You are likely familiar with the (x, y) format: this is the standard point representation in the Cartesian coordinate system. The (r, Θ) format is common in polar coordinate systems.

Long-winded math aside, the two representations can be converted between each other. We'll build a sizable Vector2 feature-set into our program, and see how the F# and C# implementations vary.
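
For reference, the translation between the two is simple trigonometry. A quick C# sketch (illustrative only; we won't need it for the rest of the post):

// Polar (r, Θ) to Cartesian (x, y)
static (double X, double Y) ToCartesian(double r, double theta) =>
    (r * Math.Cos(theta), r * Math.Sin(theta));

// Cartesian (x, y) to polar (r, Θ)
static (double R, double Theta) ToPolar(double x, double y) =>
    (Math.Sqrt(x * x + y * y), Math.Atan2(y, x));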

A standard C# starting point might be something like the following:

public class Vector2 
{
    public int X { get; set; }
    public int Y { get; set; }
}

In F#, we would do something like the following:

type Vector2 = {
    X : int
    Y : int
}

Alright, so far F# hasn't really lived up to the promise. It's the same length as the C#, and we don't really see any advantages.

Fair. We haven't done anything to really take advantage of F# yet, but one of the things we already have is an implementation of .ToString(), .Equals(object), and .GetHashCode(). It also implements IEquatable<Vector2>, IStructuralEquatable, IComparable<Vector2>, IStructuralComparable, and IComparable.

I could go into a long digression on the generated intermediate language, how it works, why it's important, etc. Instead, I'm just going to use a quick screenshot comparing the generated F# class (left) with the generated C# class (right).

F# and C# comparison

Wow, are those different. F# implemented 9 additional methods and even set up a constructor for us. Not so bad. The C# version, on the other hand, is just the two properties. So really, our 4 lines of F# are equivalent to a lot, lot more C#. In fact, the F# version is even immutable. Sure, we could omit the set in the C# to achieve the same, but then we'd have to define our own constructor. My point with this first example is to demonstrate how effective F# is at generation. We'll dig deeper into what that means in a few moments.
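
To make that concrete, here is a rough hand-written C# sketch of just the equality-related portion of what the F# record hands us for free (this is not the exact compiler output, and it still omits the comparison and structural-equality interfaces):

public sealed class Vector2 : IEquatable<Vector2>
{
    public Vector2(int x, int y)
    {
        X = x;
        Y = y;
    }

    public int X { get; }
    public int Y { get; }

    public bool Equals(Vector2 other) =>
        other != null && X == other.X && Y == other.Y;

    public override bool Equals(object obj) => Equals(obj as Vector2);

    // Illustrative hash combiner; the real generated code differs
    public override int GetHashCode() => (X * 397) ^ Y;

    // Approximates the F# record output
    public override string ToString() => $"{{X = {X}; Y = {Y};}}";
}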

Now, I could stop here, and I suppose if I were publishing this right now I might, but today I'm going to go a little further.


Why is all this F# generation important?

This is probably the more important part to discuss: why is the generation F# provides so important? Why do I push it as such a major reason to build objects in F#?

It all boils down to a couple basic points:

  1. Structural equality and comparison are important for doing effective matching and evaluation of objects against one another.
  2. These few functions are extremely important if the object itself is to be used in a hash-map scenario (i.e., as a dictionary key, etc.).

Basically, it boils down to performance and developer ease. A lot of the time we want to be able to do something like if a = b then ..., but we find out that there is no = operator between a and b, even when a and b are the same type. So you go ahead and define your own operator, only to find you also need to define a .Equals(object) method, and then a .GetHashCode(). You will also want both of these implemented if you are using LINQ, as some of the LINQ methods / query syntax require them.
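
For context, the C# ceremony being described looks something like this (a sketch, assuming the Equals(Vector2) we'll write in Scenario 1; in F# the = operator on records just works):

public static bool operator ==(Vector2 left, Vector2 right) =>
    left is null ? right is null : left.Equals(right);

public static bool operator !=(Vector2 left, Vector2 right) => !(left == right);

Define those, and the compiler will then warn you to override Equals(object) and GetHashCode() on top of it; that's exactly the spiral described above.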

Unfortunately, in C# we don't get those features. Instead we only have a base implementation (the default) for both Equals(object) and GetHashCode(). The base implementations don't do structural equality at all, so two objects that are exactly the same might not match via .Equals(object). (Spoiler: unless they are the exact same instance, they won't match via the default.)

This also comes into heavy play with struct types, as it's expected that two structs (which are value types) instanced from the same values would be equal. In .NET the default struct Equals does do a field-by-field comparison, but it can fall back to slow reflection, and you get no == operator at all unless you define one yourself.

So, let's look at some situations we would have to write a lot of manual code for in C#, that F# just gives us.

Scenario 1: you want to determine if an element is in an array.

Let's assume you have an array of elements, and you want to determine if a certain element is in the array. With LINQ in C#, it should be rather simple, no?

var find = new Vector2() { X = 1, Y = 2 };
var results = elements.Where(x => x.Equals(find));

Seems reasonable, and it should work fine. Except it won't. The problem here is that we don't override .Equals(object), and we don't have a .Equals(Vector2), so our default C# version won't work. Our F# version will work fine, though, as we have both of those in the F# version. One note: for the F# version we must initialize find via the constructor (let find = { X = 1; Y = 2 }), because it's immutable (which lends well to a later point).

To get the C# version to work, we really just need to pick one of the two to implement, though for fairness we'll actually implement both.

public override bool Equals(object other) => Equals(other as Vector2);
public bool Equals(Vector2 other) => other != null && X == other.X && Y == other.Y;    

So we added two lines (at a minimum) to our C# to make sure this works, but we end up with working code, so we are done here.

First note: your IDE might state that you didn't override GetHashCode(), and it's entirely correct! It's always best practice to override both if you are overriding one of them. Second note: you might find that the two lines printed by the C# version are different from the F# version, and they are. The F# version has a generated .ToString() implementation, which means when we do a print line for it, we get a nicer string (something like { X = 1; Y = 2; }, whereas the default C# ToString() just prints the type name).

Scenario 2: you have two collections, and you want to create the collection of the elements present in both.

This scenario is your basic Intersect scenario: you have some elements1 in one collection, and some more elements2 in another collection. You want to create a single collection that encompasses only those elements present in both collections.

You might write some code like the following:

var results = elementsA.Intersect(elementsB).ToArray();
Console.WriteLine(results.Length);

In the C# version, you'll find that no elements match no matter how you call the Intersect function, while the F# version gives you the expected matches.

Again, we're missing a key piece I alluded to earlier: GetHashCode(). We need it for Intersect to work properly.

So, we'll create a poor implementation:

public override int GetHashCode() => X ^ Y;

Now the C# version works fine.

Note here: the F# GetHashCode() is a good implementation; ours was a poor attempt just to make sure we got something in place for .NET to use. In .NET, it's OK for two different elements to return the same hash code, because Equals(object) is the final decision-maker, but two equal elements may NOT return different hash codes.
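
If you don't want to hand-roll a better one, newer runtimes ship a helper for exactly this (System.HashCode arrived in .NET Core 2.1, and is available to older targets via the Microsoft.Bcl.HashCode package):

public override int GetHashCode() => HashCode.Combine(X, Y);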


At this point, I wanted to go into some of the performance implications of the C# classes vs. the F# generated objects, but because this post is already long, and also because it's Christmas, I'm going to save that discussion for a later date; I'm targeting mid-January. That said, I recommend doing some tests of the C# vs. F# objects yourself, especially when they're used as dictionary keys. You might find some interesting results.
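
If you want a quick sanity check of your own before then, try each version as a dictionary key and look it up with an equal-but-distinct instance (a sketch):

var dict = new Dictionary<Vector2, string>();
dict[new Vector2() { X = 1, Y = 2 }] = "found";

// False with the default Equals/GetHashCode, true once both are overridden
// (and always true for the F# record).
var hit = dict.TryGetValue(new Vector2() { X = 1, Y = 2 }, out _);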

Additionally, this is all up on GitHub, so feel free to play with the exact code I was using to test our setups.

You're Logging Wrong: Stop It

You're taking Dependencies Wrong

Specifically, you're logging wrong.

Here's the deal: we all, at some point, think "Oh, I need to log some information, better write an abstraction!"

Because we programmers are too afraid (or arrogant) to take dependencies on someone else's "stuff", we always write our own abstractions. They're usually subtly different, but the 99% they share consists of:

  1. An ILogger interface of some sort;
  2. A Log(Severity, message) in the ILogger;
  3. Two or three generic logger implementations;

This is basically the bulk of what most implementations call for. Always something like this.

This might look something like the following, in C#:

public enum LoggerLevel : byte
{
    Error = 0,
    Warning = 100,
    Information = 200,
    Verbose = 255
}

public interface ILogger
{
    void Log(LoggerLevel level, string msg);
    void Log<T>(LoggerLevel level, T obj);
}

Right? This probably looks familiar. This is a pretty common pattern, so common I call it the "logger pattern."

The purpose is to make it easy to say "Ok, here's an ILogger, do your thing." Here's a sample logger that writes to something like an in-memory buffer:

public class BaseLogger : ILogger
{
    private List<string> messages = new List<string>(10000);

    public void Log(LoggerLevel level, string msg)
    {
        messages.Add(msg);
    }

    public void Log<T>(LoggerLevel level, T obj)
    {
        messages.Add(obj.ToString());
    }
}

And we say "great, things are done."

Here's the deal: this is wrong.


What's wrong with ILogger?

Well, just about everything.

  1. It requires an implementation for every target logger, which means a lot of boilerplate code, etc.;
  2. We always mess it up when trying to support "common" targets (Autofac, for example);

A great person named David Fowler tweeted about this a few days ago:

This is what the .NET library ecosystem looks like today. Say you develop an interesting library/framework that calls into user code and also does some interesting logging (lets call this library A). A customer comes along and wants to use Autofac and log4net.

They file a feature request on your repository and you do what a good software engineer would do, you make an abstraction, A.ILogger and A.ICanActivateYourCode and you make 2 new libraries. A.Log4Net and A.Autofac. Rinse and repeat this process with libraries A to Z

What you end up with are different abstractions that all look the same but nobody wants to take a dependency because that's a big deal. What if it's the wrong one, what if that package goes away? The benefit needs to be huge in order for your core library to pull more deps.

So a customer that wants to use A, B and C with log4net and autofac install. A, B, C and A.log4Net, A.Autofact, B.log4Net, b.Autofac, C.log4net, C.Autofac.

I call this glue library hell. There are insufficient shared abstractions that exist, therefore everyone makes their own.

Not to mention you have to hope that enough of the API for the underlying libraries are appropriately exposed so that you have a consistent way to configure them.

David's point was that we have a million of the same implementations, but they're ever-so-slightly different.


So what's the 'right' way?

Ah yes, what is the 'right' way?

Well, due to the nature of logging, we are only ever adding a message / thing to the log; we never actually do anything else. As a result, the "right" way to take a logging dependency is not an ILogger, but an Action<string> (or, with levels, an Action<LoggerLevel, string>). That is: a function that you pass a message to.

Wait, what do you mean?

Back to our original example, let's say we have a function DoWork, and DoWork takes an ILogger:

private static void DoWork(ILogger logger)
{
    for (var i = 0; i < COUNT; i++)
        logger.Log(LoggerLevel.Information, "Test");
}

This seems straightforward. "Yeah, you log the whatever with the Log method on the ILogger." Sure, makes sense.

Calling this is straightforward:

ILogger logger = new BaseLogger();
DoWork(logger);

And voilà: we have logging.

But, we want to log with a function instead, so we replace DoWork:

private static void DoWork(Action<LoggerLevel, string> logger)
{
    for (var i = 0; i < COUNT; i++)
        logger(LoggerLevel.Information, "Test");
}

Now, instead of taking the ILogger directly, we'll take the Log function:

Action<LoggerLevel, string> logger = new BaseLogger().Log;
DoWork(logger);

Curiously, this makes life much easier on the consumer, as now they don't need an ILogger implementation (that whole abstraction is just gone); instead, they pass a function.

This means that the consumer can say "here's a function that logs to AutoFac", or "here's a function that logs to log4net", etc.

Even more so: our implementation is now compatible with any other utility using a logger (should they all use function logging). The same function can be slightly reworked for each utility; if they don't share a LogLevel type, you just wrap it with a very quick lambda and life is good, as sketched below.
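
For instance, wiring up a different sink, or bridging between two libraries' level enums, is now just a lambda or two. A quick sketch (OtherLevel here is a hypothetical enum from some other library):

// A console sink: no ILogger implementation required
DoWork((level, msg) => Console.WriteLine($"[{level}] {msg}"));

// Bridging our logger to a utility that expects its own (hypothetical) level enum
Action<LoggerLevel, string> ours = new BaseLogger().Log;
Action<OtherLevel, string> bridged = (level, msg) =>
    ours(level == OtherLevel.Error ? LoggerLevel.Error : LoggerLevel.Information, msg);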


But wait, there's more.

There's one more thing here we should evaluate, which is the obvious: what about performance?

Ah yes, the age-old "is it fast, though?" question.

Everyone wants to make sure their code is fast, they want to make sure they don't have any performance loss. Why? I don't know, it's always something superficial.

So, is it fast? This is a good question, I guess.

Of course, me being me, I benchmarked various types of logging and threw it on GitHub, but the overall consensus was: it is as fast, or faster.

(Sidebar: the nop in the F# version is because of this bug. Looks like Don Syme has thoughts on how we can fix it.)

The Full Results

So I just ran my benchmark and let it do its thing; I've put the full results below:

// * Summary *

BenchmarkDotNet=v0.11.4, OS=Windows 10.0.17134.407 (1803/April2018Update/Redstone4)
Intel Xeon CPU E3-1505M v6 3.00GHz, 1 CPU, 8 logical and 4 physical cores
.NET Core SDK=3.0.100-preview-010184
  [Host]     : .NET Core 2.1.7 (CoreCLR 4.6.27129.04, CoreFX 4.6.27129.04), 64bit RyuJIT
  DefaultJob : .NET Core 2.1.7 (CoreCLR 4.6.27129.04, CoreFX 4.6.27129.04), 64bit RyuJIT


|                  Method |      Mean |      Error |     StdDev |    Median | Ratio | RatioSD | Gen 0/1k Op | Gen 1/1k Op | Gen 2/1k Op | Allocated Memory/Op |
|------------------------ |----------:|-----------:|-----------:|----------:|------:|--------:|------------:|------------:|------------:|--------------------:|
|               LogDirect |  33.42 us |  0.4143 us |  0.3673 us |  33.36 us |  0.62 |    0.01 |     18.8599 |      3.7231 |           - |            78.21 KB |
|             LogDirectFs |  35.46 us |  0.6995 us |  1.4755 us |  35.60 us |  0.68 |    0.04 |     18.8599 |      3.7231 |           - |            78.21 KB |
|         LogViaInjection |  53.94 us |  0.5640 us |  0.4710 us |  54.11 us |  1.00 |    0.00 |     18.8599 |      3.7231 |           - |            78.21 KB |
|       LogViaInjectionFs |  56.81 us |  0.9890 us |  0.8259 us |  56.72 us |  1.05 |    0.01 |     18.8599 |      3.7231 |           - |            78.21 KB |
|          LogViaCallback |  47.99 us |  0.5204 us |  0.4868 us |  47.91 us |  0.89 |    0.01 |     18.8599 |      3.7231 |           - |            78.27 KB |
|        LogViaCallbackFs |  50.81 us |  0.3474 us |  0.3250 us |  50.81 us |  0.94 |    0.01 |     18.8599 |      3.7231 |           - |            78.27 KB |
|            LogObjDirect | 666.71 us | 12.3811 us | 12.7145 us | 664.47 us | 12.38 |    0.32 |    185.5469 |     92.7734 |           - |          1093.84 KB |
|          LogObjDirectFs | 668.30 us |  8.1078 us |  6.7704 us | 664.00 us | 12.39 |    0.15 |    185.5469 |     92.7734 |           - |          1093.84 KB |
|      LogObjViaInjection | 709.30 us | 12.1255 us | 11.3422 us | 706.69 us | 13.14 |    0.26 |    185.5469 |     92.7734 |           - |          1093.84 KB |
|    LogObjViaInjectionFs | 705.13 us |  6.9833 us |  6.5322 us | 705.46 us | 13.07 |    0.18 |    185.5469 |     92.7734 |           - |          1093.84 KB |
|       LogObjViaCallback | 671.84 us |  6.9447 us |  5.4220 us | 673.47 us | 12.44 |    0.11 |    185.5469 |     92.7734 |           - |           1093.9 KB |
|     LogObjViaCallbackFs | 669.24 us |  4.6712 us |  3.9007 us | 670.10 us | 12.41 |    0.09 |    185.5469 |     92.7734 |           - |           1093.9 KB |
|         LogInlineDirect |  33.32 us |  0.3477 us |  0.3253 us |  33.24 us |  0.62 |    0.01 |     18.8599 |      3.7231 |           - |            78.21 KB |
|       LogInlineDirectFs |  33.43 us |  0.4313 us |  0.4035 us |  33.34 us |  0.62 |    0.01 |     18.8599 |      3.7231 |           - |            78.21 KB |
|   LogInlineViaInjection |  33.68 us |  0.6655 us |  0.7121 us |  33.39 us |  0.63 |    0.01 |     18.8599 |      3.7231 |           - |            78.21 KB |
| LogInlineViaInjectionFs |  35.91 us |  0.8274 us |  2.4267 us |  35.15 us |  0.64 |    0.03 |     18.8599 |      3.7231 |           - |            78.21 KB |
|    LogInlineViaCallback |  49.10 us |  0.6939 us |  0.6491 us |  49.02 us |  0.91 |    0.01 |     18.8599 |      3.7231 |           - |            78.27 KB |
|  LogInlineViaCallbackFs |  51.15 us |  0.5555 us |  0.5196 us |  51.01 us |  0.95 |    0.01 |     18.8599 |      3.7231 |           - |            78.27 KB |
|  LogInlineDynamicDirect |  76.38 us |  1.4895 us |  1.9884 us |  75.43 us |  1.43 |    0.05 |     18.7988 |      3.0518 |           - |            78.21 KB |

That's a lot of benchmarking. Let's crop that down:

|                  Method |      Mean |      Error |     StdDev |    Median | Ratio | RatioSD | Gen 0/1k Op | Gen 1/1k Op | Gen 2/1k Op | Allocated Memory/Op |
|------------------------ |----------:|-----------:|-----------:|----------:|------:|--------:|------------:|------------:|------------:|--------------------:|
|               LogDirect |  33.42 us |  0.4143 us |  0.3673 us |  33.36 us |  0.62 |    0.01 |     18.8599 |      3.7231 |           - |            78.21 KB |
|         LogViaInjection |  53.94 us |  0.5640 us |  0.4710 us |  54.11 us |  1.00 |    0.00 |     18.8599 |      3.7231 |           - |            78.21 KB |
|          LogViaCallback |  47.99 us |  0.5204 us |  0.4868 us |  47.91 us |  0.89 |    0.01 |     18.8599 |      3.7231 |           - |            78.27 KB |

Those are the three lines I want to look at.

First and foremost: the fastest way is to pass the BaseLogger class in directly (i.e.: no ILogger interface usage). That was pretty obvious.

But the next, interesting note is that the interface version is actually about 6μs slower than passing the function plucked off that same interface. This is curious: not only is passing the function the more proper way to do it, it's also ever-so-slightly faster. (Granted, the margins are tiny, but they do show that there is no discernible performance disadvantage, so that argument is now completely irrelevant.)

The only "bad" difference is that there was an extra 0.06KB allocated, but I assume that's overhead for Action<>. It's also such a tiny amount that if you're using that (or the time, to be completely frank) for justification, you are not making the right decisions.

The only time the interface is a better option is if you are not passing it, but are doing things inline:

|                  Method |      Mean |      Error |     StdDev |    Median | Ratio | RatioSD | Gen 0/1k Op | Gen 1/1k Op | Gen 2/1k Op | Allocated Memory/Op |
|------------------------ |----------:|-----------:|-----------:|----------:|------:|--------:|------------:|------------:|------------:|--------------------:|
|         LogInlineDirect |  33.32 us |  0.3477 us |  0.3253 us |  33.24 us |  0.62 |    0.01 |     18.8599 |      3.7231 |           - |            78.21 KB |
|   LogInlineViaInjection |  33.68 us |  0.6655 us |  0.7121 us |  33.39 us |  0.63 |    0.01 |     18.8599 |      3.7231 |           - |            78.21 KB |
|    LogInlineViaCallback |  49.10 us |  0.6939 us |  0.6491 us |  49.02 us |  0.91 |    0.01 |     18.8599 |      3.7231 |           - |            78.27 KB |

Here, you'll note that the overhead of Action<> actually puts the callback version at a significant, and this time discernible, disadvantage.


In Summation

To summarize our discussion:

  1. Don't make an ILogger, it's unbecoming.
  2. Use an Action<> or Func<>; it's much more dynamic and reusable.
  3. The only time this is not the case is when the logger is constructed inline with the thing being logged. Then use a regular function or class.

F# / .NET: "Gotchas"

Recently I've been digging more and more into F#, so I want to start putting together a list of things I occasionally (or regularly) run into that new users of the language (or even those who are substantially experienced) might not have a great time with.

I'll note whether each is a general .NET "gotcha" or an F#-specific one.

F#:

.NET:

If you have something you want to see in the list, please let me know either via Twitter or as a comment. I'll try to check back here regularly to keep this list as up-to-date as possible.

Importing F# to the SQLCLR (T-SQL)

Bringing F# into the SQLCLR

It's been some time since my last post, and don't worry, we're still going to continue the IMAP server series. I've been swamped at work and, as a result, haven't had the time to properly dedicate to writing these posts (especially that series, which is a complex topic).

Excuses aside, today we're going to talk about something I recently did for work, which is integrating F# into the SQLCLR (part of Microsoft SQL Server).

For those who don't know, the SQLCLR is a feature of SQL Server that allows one to import .NET assemblies as user-defined functions or stored procedures. On its own that doesn't sound impressive, but the SQLCLR allows us to significantly improve performance in some cases, and moderately improve it in others.

I won't go into detail explaining the SQLCLR; a gentleman by the name of Solomon Rutzky does that quite well. I'll let his "Stairway to SQLCLR" series give you the introduction.

No, what I'll do today is show you how to import F# into the SQLCLR, instead of just C# or VB.NET. The process is about as straightforward as Solomon describes, but there are a few "gotchas", so I'm going to include those in our discussion here today.

Without further ado, let's get started.

First: create the project and add the System.Data reference

The first step is obviously to create a project to hold our SQL code. The project should be an F# Class Library targeting .NET Framework (I'm using 4.7.1 and FSharp.Core 4.4.3.0). You'll want a module for the functions, and in that module you'll want to open System (for Math), Microsoft.SqlServer.Server, and System.Data.SqlTypes.

Once we've done that, we'll build a function. There are a few rules to creating a function in .NET that can be seen by SQL Server:

  1. The function must have the SqlFunction attribute;
  2. All inputs must be tupled;
  3. All input and output types must be a Sql[Something] type (SqlDouble, SqlInt32, etc.);

So, for our example we're going to use a real-world example from my work: distance calculation from two geo-coded points.

To do this, we'll build a function that takes 4 double values: two Latitude/Longitude value sets.

let calculateDistance (fromLat : SqlDouble, fromLon : SqlDouble, toLat : SqlDouble, toLon : SqlDouble) : SqlDouble

That's the signature we'll use. Next, we want to define how SQL should treat the function:

[<SqlFunction(
    IsDeterministic = true,
    IsPrecise = false,
    SystemDataAccess = SystemDataAccessKind.None,
    DataAccess = DataAccessKind.None)>]

This is where life gets special, so let me explain them piece-by-piece:

  • SqlFunction: this is just the attribute we use, there is also SqlProcedure for stored procedures;
  • IsDeterministic = true: this value should ONLY be set to true if the function is deterministic, that is, given any input value, it returns one and exactly one output, and that two calls to the function with the same input will result in the same output;
  • IsPrecise = false: this value should ONLY be set to true if the function uses the DECIMAL or NUMERIC types, and does precise mathematical calculations;
  • SystemDataAccess = SystemDataAccessKind.None: I'll be completely honest with you, I don't know what the difference between this and DataAccess is, but if you do any reading/writing to/from SQL, you should set it to Read; otherwise, probably use None (there's a small performance cost to setting this to Read, and I leave it to you to decide whether or not to do so);
  • DataAccess = DataAccessKind.None: see above;

So basically, what we did here is define a function and tell SQL what it should expect the function to do. One of the most important parts is the IsDeterministic flag: it tells SQL that if it has called the function for a set of values, it can reuse that result for any subsequent calls with the same set of values. This means it can memoize the results. If your function has side-effects, do not set this flag to true, or you will get weird results. Basically, if your function is truly "pure" (no side-effects), mark it with IsDeterministic = true.

Next: write the code

Alright, so we've covered the hard parts, next, we write the function.

My version of this function used some logic that was specific to my workplace, so I'm going to remove it and we'll write a vanilla function:

let constMod = 1.852 / 1.61 * 60.
let divPi180 = Math.PI / 180.
let div180Pi = 180. / Math.PI

[<SqlFunction(
    IsDeterministic = true,
    IsPrecise = false,
    SystemDataAccess = SystemDataAccessKind.None,
    DataAccess = DataAccessKind.None)>]
let calculateDistance (fromLat : SqlDouble, fromLon : SqlDouble, toLat : SqlDouble, toLon : SqlDouble) : SqlDouble =
    let fromLat = fromLat.Value
    let fromLon = fromLon.Value
    let toLat = toLat.Value
    let toLon = toLon.Value

    let fromLat = fromLat * divPi180
    let toLat = toLat * divPi180
    let fromLon = fromLon * divPi180
    let toLon = toLon * divPi180

    constMod *
    (Math.Acos
        ((Math.Sin toLon) * (Math.Sin fromLon) +
         (Math.Cos toLon) * (Math.Cos fromLon) * (Math.Cos (toLat - fromLat))))
    |> SqlDouble

This should be self-explanatory: we basically convert the data and do some simple math on it.

Third: enable SQLCLR

Alright, so that's the entirety of our .NET code.

Now, we need to enable the SQLCLR, because it's disabled by default.

The SQLCLR can be enabled through GUI or T-SQL, I prefer to do it through GUI because I typo a lot.

To enable it:

  1. Right click your server in SSMS;
  2. Click "Facets";
  3. In the "Facet" dropdown select "Surface Area Configuration";
  4. Change "ClrIntegrationEnabled" to "True";
  5. Click "OK";

Easy enough.

Fourth: trust the assembly, and import it

This is one spot where things aren't completely awesome: the FSharp.Core library isn't built to natively support a "SAFE" import to SQLCLR, so we have to trust it first.

To trust the assemblies, we'll want to get a SHA2_512 hash of them, and optionally, a description.

I, personally, don't care so much about the description at the moment, so I'll leave that out and let you locate it if you like. Instead, I'm just going to demonstrate how to hash it and trust it.

We need to trust FSharp.Core, and then our assembly:

DECLARE @hash AS BINARY(64) = (SELECT HASHBYTES('SHA2_512', (SELECT * FROM OPENROWSET (BULK 'C:\path\to\bin\dir\FSharp.Core.dll', SINGLE_BLOB) AS [Data])))
EXEC sp_add_trusted_assembly @hash

Then, our assembly:

DECLARE @hash AS BINARY(64) = (SELECT HASHBYTES('SHA2_512', (SELECT * FROM OPENROWSET (BULK 'C:\path\to\bin\dir\MyAssembly.dll', SINGLE_BLOB) AS [Data])))
EXEC sp_add_trusted_assembly @hash

Easy enough.

Because FSharp.Core isn't built for native SQL Server support (which, if anyone wants to fix it, I've included the error at the end of this article), FSharp.Core itself has to be added with PERMISSION_SET = UNSAFE, which is, well...unsafe. (The statement is the same CREATE ASSEMBLY shown below, pointed at FSharp.Core.dll, with PERMISSION_SET = UNSAFE.)

So, to load our assembly, we need a name, and the path:

CREATE ASSEMBLY [MyAssembly]
AUTHORIZATION dbo
FROM 'C:\path\to\bin\dir\MyAssembly.dll'
WITH PERMISSION_SET = SAFE

Not particularly hard. The name ([MyAssembly]) is not restricted to anything other than the regular NVARCHAR(128) for sysname; it does not need to match anything from the DLL, but it's probably easier if it does.

Finally: create the function

Alright, so our assembly is imported, we have it available, the last part is creating the function.

To create the function, we start it off like a normal T-SQL UDF:

CREATE FUNCTION CalculateDistance
(
    @fromLat FLOAT,
    @fromLon FLOAT,
    @toLat FLOAT,
    @toLon FLOAT
)
RETURNS FLOAT

If you've ever written a T-SQL Scalar-Valued UDF, this should look familiar. We build the signature exactly as we defined it in F#, and that part is super important: the signature cannot vary at all.

Next, we write the UDF:

AS EXTERNAL NAME [MyAssembly].[MyAssembly.Namespace.ModuleName].calculateDistance

The EXTERNAL NAME is a three part name:

  1. The assembly name as specified in CREATE ASSEMBLY;
  2. The assembly namespace and module name, the fully-qualified name of the first outer-container of the function we need;
  3. The function name itself;

Once you've created the function, we're literally all done. You can now call directly into your CLR code:

SELECT dbo.CalculateDistance(@fromLat, @fromLon, @toLat, @toLon)

Demonstrations!

For those who want to see the performance difference, the original T-SQL function is:

CREATE FUNCTION CalculateDistanceUdf
(
    @fromLat FLOAT,
    @fromLon FLOAT,
    @toLat FLOAT,
    @toLon FLOAT
)
RETURNS FLOAT
WITH SCHEMABINDING
AS 
BEGIN
    RETURN (1.852 / 1.61) *
        60 *
        DEGREES(
            ACOS(
                SIN(RADIANS(@toLon)) *
                SIN(RADIANS(@fromLon)) +
                COS(RADIANS(@toLon)) *
                COS(RADIANS(@fromLon)) *
                COS(RADIANS(@toLat) - RADIANS(@fromLat))))
END

The WITH SCHEMABINDING is a hint that helps SQL Server mark the function deterministic, and it is indeed marked as such, as verified with SELECT OBJECTPROPERTY(OBJECT_ID('[dbo].[CalculateDistanceUdf]'), 'IsDeterministic'), but it still performs significantly slower than the SQLCLR alternative.

I borrowed the test from this article to run mine, and wrote them as follows:

CREATE TABLE Numbers (
    Num INT NOT NULL,
    CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Num)
)
GO

WITH N1(C) AS (SELECT 0 UNION ALL SELECT 0),
N2(C) AS (SELECT 0 FROM N1 AS T1 CROSS JOIN N1 AS T2),
N3(C) AS (SELECT 0 FROM N2 AS T1 CROSS JOIN N2 AS T2),
N4(C) AS (SELECT 0 FROM N3 AS T1 CROSS JOIN N3 AS T2),
N5(C) AS (SELECT 0 FROM N4 AS T1 CROSS JOIN N4 AS T2),
N6(C) AS (SELECT 0 FROM N4 AS T1 CROSS JOIN N4 AS T2 CROSS JOIN N3 AS T3),
Nums(Num) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM N6)
INSERT INTO Numbers(Num) SELECT Num FROM Nums
GO

This inserts 1048576 rows to the Numbers table, so it's a good-sized test.

Then we can run each of the following three tests:

DECLARE @fromLat AS FLOAT = 100
DECLARE @fromLon AS FLOAT = 100
DECLARE @toLat AS FLOAT = 120
DECLARE @toLon AS FLOAT = 120

SELECT MAX(dbo.CalculateDistance(Num / @fromLat, Num / @fromLon, Num / @toLat, Num / @toLon)) FROM Numbers
GO

DECLARE @fromLat AS FLOAT = 100
DECLARE @fromLon AS FLOAT = 100
DECLARE @toLat AS FLOAT = 120
DECLARE @toLon AS FLOAT = 120

SELECT MAX(dbo.CalculateDistanceUdf(Num / @fromLat, Num / @fromLon, Num / @toLat, Num / @toLon)) FROM Numbers
GO

DECLARE @fromLat AS FLOAT = 100
DECLARE @fromLon AS FLOAT = 100
DECLARE @toLat AS FLOAT = 120
DECLARE @toLon AS FLOAT = 120

SELECT MAX
    (
        (1.852 / 1.61) *
        60 *
        DEGREES(
            ACOS(
                SIN(RADIANS(Num / @toLon)) *
                SIN(RADIANS(Num / @fromLon)) +
                COS(RADIANS(Num / @toLon)) *
                COS(RADIANS(Num / @fromLon)) *
                COS(RADIANS(Num / @toLat) - RADIANS(Num / @fromLat)))))
FROM Numbers
GO

You can run each of these individually to time them. My times were roughly 645ms for the SQLCLR, 3369ms for the T-SQL UDF, and 703ms for the inline T-SQL. As you can see, the SQLCLR function is faster than even the inline T-SQL, and lets us encapsulate the logic in a single function. (This actually came about as an issue because we had the calculation copied-and-pasted over several dozen queries, often 3-8 times per query.)

So, that said, in this type of situation (raw math) there's no reason to use T-SQL for the task, and for something reasonably complex like this, no reason not to abstract it. Dump the code in .NET, write your unit tests, and then deploy the assembly to the SQL server.

Now, that said, there are times I wouldn't use a SQLCLR function, such as when the math is ultra-simple (e.g. * 3), and there are times when a table-valued UDF would be far superior. So I don't want to suggest that this will always help, just that it's another thing you can try, and it might actually surprise you.


For anyone curious, attempting to CREATE ASSEMBLY on FSharp.Core throws the following warning:

Warning: The Microsoft .NET Framework assembly 'fsharp.core, version=4.4.3.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.

And using a PERMISSION_SET of EXTERNAL_ACCESS or SAFE throws the following error:

CREATE ASSEMBLY failed because type 'Microsoft.FSharp.Collections.FSharpMap`2' in safe assembly 'FSharp.Core' has a static field 'empty'. Attributes of static fields in safe assemblies must be marked readonly in Visual C#, ReadOnly in Visual Basic, or initonly in Visual C++ and intermediate language.

Doing Bad Cookie Authentication, but for the Right Reasons

Poor Cookie-Based Authentication with ASP.NET

Greetings everyone. Once again, it's been a while since I've posted anything. I've been swamped with work, personal issues, and then some. I got a new dog (hi Max!), and so on. Fortunately, I have a topic I want to talk about (you can probably guess it from the title), and thanks to a friend of mine, who we'll call "Jim" because, well, that's his name, and who asked about this on Twitter, I figured we could go all-in.

Disclaimer: I'm a rambler, and this was hastily written.

Jim was asking about cookie-based authentication in ASP.NET, which is a great topic because it's something you should absolutely never do, but which we are going to do anyway to help him demonstrate how such a thing can be used for test automation. (Jim is a VERY smart person, and is working up a demo on how we can use cookie authentication hand-in-hand with test automation to do a wide variety of things. The idea is to allow a test client to authenticate itself to quickly and easily get into the application.) We're going to spend this whole blog post going over a worst-practice vs. a best-practice. We're going to do all the things I always say never to do, and learn why. (There are a lot of reasons we should not do any of these things; I'll try to cover a few with some examples.)

Then, at the end, I'm going to show you how we could do this type of authentication easily and via some sort of API call, so that we could stick with the core authentication principles that we value, and also satisfy our testers. This will demonstrate where the testers and the developers should be able to work together, to develop a flow that works for both.

We're going to build this out in Visual Studio, as usual, and we'll go over how we can take a blank ASP.NET website and add cookie-based authentication. Typically, we would use Forms-based authentication, or Active Directory / Windows Authentication.

What is a cookie?

A cookie is a (usually) small piece of information that a user's browser stores, which allows it to carry and pass information to and from a website. The cookie's data is sent client-to-server and server-to-client. We should use cookies only to transfer non-confidential data, because cookies are incredibly insecure: anyone can spoof a cookie, and we'll look at doing exactly that later in this post.
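
Concretely, the exchange looks something like this on the wire (illustrative values):

Set-Cookie: UserId=1; expires=Wed, 18-Apr-2018 20:00:00 GMT    (server to client, once)
Cookie: UserId=1                                               (client to server, on every subsequent request)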

Cookie-Based Authentication in ASP.NET

Alright, so let's go ahead and create some cookie-based authentication in our ASP.NET application. For this, we'll have three pages:

  • Default.aspx: the main landing page, visible to authenticated and unauthenticated users;
  • Login.aspx: the login page, this will test the user login and create an appropriate cookie;
  • Authenticated.aspx: a page only available to authenticated users;

Step 1: Create the Authenticated.aspx

This page is pretty basic markup-wise (even code-wise):

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Authenticated.aspx.cs" Inherits="Poor_Cookie_Authentication__17_4_2018_.Authenticated" %>

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
            Hello <asp:Literal runat="server" ID="litUserName"></asp:Literal>!
        </div>
        <div>
            This page is only available to authenticated users.
        </div>
    </form>
</body>
</html>

Basically, we throw a single literal which will be the name of the authenticated user. We'll populate this from the code-behind file, which is also really simple. But, before we do the code-behind, let's build a user object.

I'm going to do extremely basic authentication: our User class will have a Username (email) and Password. It will also contain a static array of all eligible users, so as to allow us to "pretend" to log someone in. We'll create two sample users, with different passwords:

public class User
{
    public static User[] Users { get; } =
        new[]
        {
            new User() { Username = "ebrown@example.com", Password = "1234" },
            new User() { Username = "johndoe@example.com", Password = "5678" }
        };

    public string Username { get; private set; }
    public string Password { get; private set; }
}

This can be replaced with any type of user loading, but I kept it simple to allow easy demonstration.

Now, let's load the user:

public partial class Authenticated : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (Request.Cookies["UserId"]?.Value == null)
        {
            Response.Redirect("Login.aspx");
        }

        var userId = int.Parse(Request.Cookies["UserId"].Value);
        var user = Models.User.Users[userId];
        litUserName.Text = user.Username;
    }
}

So, if it's not apparent, cookie reading is super simple in ASP.NET: simply call Request.Cookies[name].Value. I use ?.Value to avoid the need for an additional null-check. (If the cookie doesn't exist, Request.Cookies[name] returns null instead of throwing a KeyNotFoundException like a normal dictionary would.)

Step 2: Login.aspx

Alright, so the next step is to allow the user to login. This is really simple, and we'll do it with a small HTML page and code-behind. The login will take a username and password, and match that to a user in the User.Users array. If we find one, set a cookie with the ID.

Markup:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Login.aspx.cs" Inherits="Poor_Cookie_Authentication__17_4_2018_.Login" %>

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
            <asp:Literal runat="server" ID="litError"></asp:Literal><br />
            Username: <asp:TextBox runat="server" ID="txtUsername" TextMode="Email"></asp:TextBox><br />
            Password: <asp:TextBox runat="server" ID="txtPassword" TextMode="Password"></asp:TextBox><br />
            <asp:Button runat="server" ID="btnSubmit" OnClick="btnSubmit_Click" Text="Login" />
        </div>
    </form>
</body>
</html>

Code:

public partial class Login : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (Request.Cookies["UserId"]?.Value != null)
        {
            Response.Redirect("Authenticated.aspx");
        }
    }

    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        var username = txtUsername.Text;
        var password = txtPassword.Text;

        var user = Models.User.Users.FirstOrDefault(x => x.Username == username && x.Password == password);
        if (user != null)
        {
            Response.Cookies.Add(new HttpCookie("UserId", Models.User.Users.ToList().IndexOf(user).ToString()) { Expires = DateTime.Now.AddHours(8) });
            Response.Redirect("Authenticated.aspx");
        }

        litError.Text = "The username/password combination you entered does not exist.";
    }
}

There are two ways to set a cookie:

  • Use Response.Cookies.Add;
  • Directly call to Response.Cookies[name], which will create the cookie if it does not exist;

Here, I chose the former as it's clearer. We add a new response cookie, set the expiration for 8 hours from now, and then redirect the user to the Authenticated.aspx page.
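
For completeness, the indexer-based approach would look something like this (a sketch; same behavior):

Response.Cookies["UserId"].Value = Models.User.Users.ToList().IndexOf(user).ToString();
Response.Cookies["UserId"].Expires = DateTime.Now.AddHours(8);
Response.Redirect("Authenticated.aspx");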

Step 3: Our Default.aspx with logout

The last step here is to make our Default.aspx page, which is again simple, and perform our logout on this page.

Markup:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="Poor_Cookie_Authentication__17_4_2018_.Default" %>

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
            <asp:HyperLink runat="server" ID="hlLogin" Text="Login" NavigateUrl="~/Login.aspx"></asp:HyperLink>
            <asp:LinkButton runat="server" ID="lbLogout" Text="Logout" OnClick="lbLogout_Click"></asp:LinkButton>
        </div>
    </form>
</body>
</html>

Code:

public partial class Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (Request.Cookies["UserId"]?.Value == null)
        {
            lbLogout.Visible = false;
        }
        else
        {
            hlLogin.Visible = false;
        }
    }

    protected void lbLogout_Click(object sender, EventArgs e)
    {
        Response.Cookies["UserId"].Expires = DateTime.Now.AddDays(-1);
        Response.Redirect("Default.aspx");
    }
}

We literally have a "Login" and "Logout", which one is shown depends on whether or not the cookie is set. On logout, we set the cookie expiration to the past so that the browser will delete it, and redirect the user.

Authentication complete!

Alright, so all-in-all we're basically done designing the cookie-based authentication for the application; now let's move on to its insecurities.

Cookie-Based Authentication Insecurities

There are literally hundreds of things we could list that are reasons to not use cookie-based authentication, but let's just go over some of the basics:

  • Cookies are handled entirely client-side. That means the client must be trusted to track the entire lifetime of the cookie: value, expiration, etc.; all of it is in the client's hands.
  • Cookies are plain data. That is to say, their values are not secured in any manner. In a non-HTTPS environment, cookies are clearly visible in transmission from client-to-server and server-to-client; a man-in-the-middle attack with cookies is unbelievably easy. (Facebook used to be vulnerable to session-hijacking using this attack; I'll pull references on that later.)
  • Cookies carry small amounts of data. There are limits to the size of a cookie, due to the nature of it: because cookies are passed in the headers of a web request or response, they are limited to the maximum size of a header.

Alright, so let's exploit some cookie insecurity. We're going to do four things:

  1. Change the expiration to keep ourselves logged in longer;
  2. Change the user ID to log in as someone else;
  3. Change the user ID to cause the server to error (if you place cookies into SQL this can do really bad stuff);
  4. Create a cookie with a user ID to login without credentials;

So the first task on our to-do list is to change a cookie expiration. We set it for 8 hours, but any sufficiently skilled user (and you really don't have to be all that skilled) can alter it. In fact, if you download the EditThisCookie plugin for Opera, you can do so with two clicks. I'm going to use it for the rest of the exploits, and I'll just present the pictures with basic captions.

Step 1: We'll Login via Opera. That "Cookie" icon is our editor.

Opera Login

Step 2: Open the cookie. We click the "Cookie" Icon and expand the cookie we want ("UserId").

Opening the Cookie

Step 3: Edit the Expiration and click the "Checkmark" to save.

Edit Expiration

As you can see, pretty easy.

Next, we'll edit the value to be someone else:

Edit Value

New Page

Note that I did not re-login; I took the original session and edited who I was. That's important to remember, because it demonstrates why this is insecure.

We can error the server:

Bad Cookie

And even make a new cookie without logging in:

New Cookie

Automating our Tests

Ok, on to the fun part: let's automate a login, but without ever hitting the login page or function.

This is almost too easy with .NET: we'll use the WebClient and manually send the Cookie header. To do this, I created a TestAutomation.aspx page:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="TestAutomation.aspx.cs" Inherits="Poor_Cookie_Authentication__17_4_2018_.TestAutomation" %>

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
            <asp:Literal runat="server" ID="litResponseData"></asp:Literal>
        </div>
    </form>
</body>
</html>

The code is trivial:

public partial class TestAutomation : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        using (var wc = new WebClient())
        {
            wc.Headers.Add("Cookie", "UserId=1");
            litResponseData.Text = wc.DownloadString("http://localhost:60078/Authenticated.aspx");
        }
    }
}

Yes, we logged in and downloaded the Authenticated.aspx text with 5 lines of code, 2 of which were braces. By manually sending the header, we made this almost too easy.

If you comment out the wc.Headers.Add line, you'll see that it returns the login form. This is because the WebClient follows the redirects. With that line in, we get the Hello ...! message.

An Ideal World

Alright, so all of this was done to demonstrate how easy it is to use cookie-based authentication, how easily we can exploit it, but also the ease with which it does what we want. One of the things Jim had mentioned to me was that he wanted the ability to either turn authentication off, or have some other way to let the user log in, without needing to do too much complex work.

Typically, in this scenario, this is where the developer and tester would work together on a solution, one of which might be a very simple API call to do login: pass a username and password, then it logs that session in. We can simulate this by accepting Username and Password query-string parameters in our Login.aspx:

public partial class Login : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (Request.QueryString["Username"] != null && Request.QueryString["Password"] != null)
        {
            var username = Request.QueryString["Username"];
            var password = Request.QueryString["Password"];
            doLogin(username, password);
        }

        if (Request.Cookies["UserId"]?.Value != null)
        {
            Response.Redirect("Authenticated.aspx");
        }
    }

    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        var username = txtUsername.Text;
        var password = txtPassword.Text;
        doLogin(username, password);
    }

    private void doLogin(string username, string password)
    {
        var user = Models.User.Users.FirstOrDefault(x => x.Username == username && x.Password == password);
        if (user != null)
        {
            Response.Cookies.Add(new HttpCookie("UserId", Models.User.Users.ToList().IndexOf(user).ToString()) { Expires = DateTime.Now.AddHours(8) });
            Response.Redirect("Authenticated.aspx");
        }

        litError.Text = "The username/password combination you entered does not exist.";
    }
}

So Login.aspx didn't change much; we just handle both cases now. Pushing login into a function made it easy to deal with both the query-string-based login and the form-based login. We need to modify our TestAutomation.aspx page a little to accommodate:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="TestAutomation.aspx.cs" Inherits="Poor_Cookie_Authentication__17_4_2018_.TestAutomation" %>

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
        <div>
            <asp:Literal runat="server" ID="litResponseData1"></asp:Literal><br /><br />
            <asp:Literal runat="server" ID="litResponseData2"></asp:Literal>
        </div>
    </form>
</body>
</html>

The code is the big change:

public partial class TestAutomation : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Method 1
        using (var wc = new WebClient())
        {
            wc.Headers.Add("Cookie", "UserId=1");
            litResponseData1.Text = wc.DownloadString("http://localhost:60078/Authenticated.aspx");
        }

        // Method 2
        var cookieContainer = new CookieContainer();
        var req = WebRequest.CreateHttp("http://localhost:60078/Login.aspx?Username=ebrown@example.com&Password=1234");
        req.CookieContainer = cookieContainer;
        req.GetResponse(); // We don't need to do anything with the response

        req = WebRequest.CreateHttp("http://localhost:60078/Authenticated.aspx");
        req.CookieContainer = cookieContainer;
        var response = (HttpWebResponse)req.GetResponse();
        using (var sr = new StreamReader(response.GetResponseStream()))
        {
            litResponseData2.Text = sr.ReadToEnd();
        }
    }
}

You see the // Method 2 comment? That is the part that uses the query-string to log in. We could also log in by POSTing the form itself, if we wanted, though that is far more complex: due to ASP.NET event validation, it's easiest in that case to recreate the real UI form (view state and all) and submit it back. Instead, what we did is query-string based, to support the tester's use-case and make the testing easy, while also retaining most of our security.


This project is available on GitHub, and I'm allowing anyone to use it for any purpose; I simply ask that if you use the project directly, you include some sort of nice message noting where you found it.