using Programming;

A blog about some of the intricacies of programming, and how one can get the best out of various languages.

Importing F# to the SQLCLR (T-SQL)

Bringing F# into the SQLCLR

It's been some time since my last post, and don't worry, we're still going to continue the IMAP server series. I've been swamped at work and, as a result, haven't had the time to properly dedicate to writing these posts (especially that series, which is a complex topic).

Excuses aside, today we're going to talk about something I recently did for work, which is integrating F# into the SQLCLR (part of Microsoft SQL Server).

For those who don't know, the SQLCLR is a feature of SQL Server that allows one to import .NET assemblies as user-defined functions or stored procedures. On its own that doesn't sound impressive, but the SQLCLR allows us to significantly improve performance in some cases, and moderately improve it in others.

I won't go into detail explaining the SQLCLR; a gentleman by the name of Solomon Rutzky does that quite well. I'll let his "Stairway to SQLCLR" series give you the introduction.

No, what I'll do today is show you how to import F# into the SQLCLR, instead of just C# or VB.NET. The process is about as straightforward as Solomon describes, but there are a few "gotchas", so I'm going to include those in our discussion here today.

Without further ado, let's get started.

First: create the project and add the System.Data reference

The first step is obviously to create a project to hold our SQL code. The project should be an F# Class Library targeting the .NET Framework (I'm using 4.7.1 and FSharp.Core 4.4.3.0). You'll want a module for the functions, and in that module you'll want to open System, Microsoft.SqlServer.Server, and System.Data.SqlTypes.
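As a quick sketch, the skeleton might look like this (the namespace and module names here are placeholders, but take note of them: they have to match the EXTERNAL NAME we give SQL Server later):

namespace MyAssembly.Namespace

module ModuleName =

    open System
    open System.Data.SqlTypes
    open Microsoft.SqlServer.Server

    // The SQL-visible functions will live here.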

Once we've done that, we'll build a function. There are a few rules to creating a function in .NET that can be seen by SQL Server:

  1. The function must have the SqlFunction attribute;
  2. All inputs must be tupled;
  3. All input and output types must be a Sql[Something] type (SqlDouble, SqlInt32, etc.);

So, we're going to use a real-world example from my work: calculating the distance between two geo-coded points.

To do this, we'll build a function that takes four double values: two latitude/longitude pairs.

let calculateDistance (fromLat : SqlDouble, fromLon : SqlDouble, toLat : SqlDouble, toLon : SqlDouble) : SqlDouble

That's the signature we'll use. Next, we want to define how SQL Server should treat the function:

[<SqlFunction(
    IsDeterministic = true,
    IsPrecise = false,
    SystemDataAccess = SystemDataAccessKind.None,
    DataAccess = DataAccessKind.None)>]

This is where life gets special, so let me explain the properties piece by piece:

  • SqlFunction: this is just the attribute we use, there is also SqlProcedure for stored procedures;
  • IsDeterministic = true: this value should ONLY be set to true if the function is deterministic, that is, given any input value it returns exactly one output, and two calls to the function with the same input always produce the same output;
  • IsPrecise = false: this value should ONLY be set to true if the function uses the DECIMAL or NUMERIC types, and does precise mathematical calculations;
  • SystemDataAccess = SystemDataAccessKind.None: I'll be completely honest with you, I don't know the exact difference between this and DataAccess (the documentation describes DataAccess as covering user data, and SystemDataAccess as covering system catalogs), but if you do any reading/writing to/from SQL, you should set the appropriate one to Read; otherwise, probably use None (there's a small performance cost to setting this to Read, I leave it to you to decide whether or not to do so);
  • DataAccess = DataAccessKind.None: see above;

So basically, what we did here is define a function and tell SQL Server what it should expect the function to do. One of the most important parts is the IsDeterministic flag: this tells SQL Server that if it has called the function for a set of values, it can reuse that result for any subsequent calls with the same set of values. This means it can memoize the results. If your function has side-effects, do not set this flag to true, or you will get weird results. Basically, if your function is truly "pure" (no side-effects), mark it with IsDeterministic = true.

Next: write the code

Alright, so we've covered the hard parts; next, we write the function.

My version of this function used some logic that was specific to my workplace, so I'm going to remove it and we'll write a vanilla function:

// (1.852 km per nautical mile / 1.61 km per statute mile) * 60 nautical miles
// per degree of arc: roughly 69 miles per degree.
let constMod = 1.852 / 1.61 * 60.
// Degrees to radians, and radians to degrees.
let divPi180 = Math.PI / 180.
let div180Pi = 180. / Math.PI

[<SqlFunction(
    IsDeterministic = true,
    IsPrecise = false,
    SystemDataAccess = SystemDataAccessKind.None,
    DataAccess = DataAccessKind.None)>]
let calculateDistance (fromLat : SqlDouble, fromLon : SqlDouble, toLat : SqlDouble, toLon : SqlDouble) : SqlDouble =
    // Unwrap the Sql* wrappers into plain floats.
    let fromLat = fromLat.Value
    let fromLon = fromLon.Value
    let toLat = toLat.Value
    let toLon = toLon.Value

    // Convert degrees to radians.
    let fromLat = fromLat * divPi180
    let toLat = toLat * divPi180
    let fromLon = fromLon * divPi180
    let toLon = toLon * divPi180

    // Spherical law of cosines; div180Pi converts the result back to degrees
    // and constMod converts degrees of arc to miles, mirroring the
    // (1.852 / 1.61) * 60 * DEGREES(ACOS(...)) in the T-SQL version below.
    constMod * div180Pi *
    (Math.Acos
        ((Math.Sin toLon) * (Math.Sin fromLon) +
         (Math.Cos toLon) * (Math.Cos fromLon) * (Math.Cos (toLat - fromLat))))
    |> SqlDouble

This should be self-explanatory: we unwrap the values, convert degrees to radians, and do some simple math to get the distance.

Third: enable SQLCLR

Alright, so that's the entirety of our .NET code.

Now, we need to enable the SQLCLR, because it's disabled by default.

The SQLCLR can be enabled through the GUI or through T-SQL; I prefer to do it through the GUI because I typo a lot.

To enable it:

  1. Right click your server in SSMS;
  2. Click "Facets";
  3. In the "Facet" dropdown select "Surface Area Configuration";
  4. Change "ClrIntegrationEnabled" to "True";
  5. Click "OK";

Easy enough.
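If you'd rather script it, the same switch can be flipped with the standard sp_configure option:

EXEC sp_configure 'clr enabled', 1
RECONFIGURE
GO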

Fourth: trust the assembly, and import it

This is one spot where things aren't completely awesome: FSharp.Core isn't built to natively support a "SAFE" import into the SQLCLR, so we have to trust it first.

To trust the assemblies, we'll want to get a SHA2_512 hash of them, and optionally, a description.

I, personally, don't care so much about the description at the moment, so I'll leave that out and let you add one if you like. Instead, I'm just going to demonstrate how to hash each assembly and trust it.

We need to trust FSharp.Core, and then our assembly:

DECLARE @hash AS BINARY(64) = (SELECT HASHBYTES('SHA2_512', (SELECT * FROM OPENROWSET (BULK 'C:\path\to\bin\dir\FSharp.Core.dll', SINGLE_BLOB) AS [Data])))
EXEC sp_add_trusted_assembly @hash

Then, our assembly:

DECLARE @hash AS BINARY(64) = (SELECT HASHBYTES('SHA2_512', (SELECT * FROM OPENROWSET (BULK 'C:\path\to\bin\dir\MyAssembly.dll', SINGLE_BLOB) AS [Data])))
EXEC sp_add_trusted_assembly @hash
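You can confirm both hashes were registered by querying the catalog view behind this feature (the view, like sp_add_trusted_assembly itself, exists as of SQL Server 2017):

SELECT hash, description FROM sys.trusted_assemblies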

Easy enough.

Because FSharp.Core isn't built for native SQL Server support (which, if anyone wants to fix it, I've included the error at the end of this article), we have to add it with PERMISSION_SET = UNSAFE, which is, well...unsafe.
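So FSharp.Core has to go in first, with the UNSAFE permission set (same placeholder path convention as the hashing step above):

CREATE ASSEMBLY [FSharp.Core]
AUTHORIZATION dbo
FROM 'C:\path\to\bin\dir\FSharp.Core.dll'
WITH PERMISSION_SET = UNSAFE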

So, to load our assembly, we need a name, and the path:

CREATE ASSEMBLY [MyAssembly]
AUTHORIZATION dbo
FROM 'C:\path\to\bin\dir\MyAssembly.dll'
WITH PERMISSION_SET = SAFE

Not particularly hard. The name ([MyAssembly]) isn't restricted to anything beyond the usual NVARCHAR(128) rules for sysname; it doesn't need to match anything from the DLL, but it's probably easier if it does.

Finally: create the function

Alright, so our assembly is imported, we have it available, the last part is creating the function.

To create the function, we start it off like a normal T-SQL UDF:

CREATE FUNCTION CalculateDistance
(
    @fromLat FLOAT,
    @fromLon FLOAT,
    @toLat FLOAT,
    @toLon FLOAT
)
RETURNS FLOAT

If you've ever written a T-SQL Scalar-Valued UDF, this should look familiar. We build the signature exactly as we defined it in F#, and that part is super important: the signature cannot vary at all.

Next, instead of a T-SQL body, we point the function at our .NET code:

AS EXTERNAL NAME [MyAssembly].[MyAssembly.Namespace.ModuleName].calculateDistance

The EXTERNAL NAME is a three part name:

  1. The assembly name as specified in CREATE ASSEMBLY;
  2. The class name: for an F# module, this is the fully-qualified namespace plus module name of the container that holds the function;
  3. The function name itself;
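Putting the pieces together, the whole statement reads as follows (using the placeholder names from earlier):

CREATE FUNCTION CalculateDistance
(
    @fromLat FLOAT,
    @fromLon FLOAT,
    @toLat FLOAT,
    @toLon FLOAT
)
RETURNS FLOAT
AS EXTERNAL NAME [MyAssembly].[MyAssembly.Namespace.ModuleName].calculateDistance
GO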

Once you've created the function, we're all done. You can now call directly into your CLR code:

SELECT dbo.CalculateDistance(@fromLat, @fromLon, @toLat, @toLon)

Demonstrations!

For those who want to see the performance difference, the original T-SQL function is:

CREATE FUNCTION CalculateDistanceUdf
(
    @fromLat FLOAT,
    @fromLon FLOAT,
    @toLat FLOAT,
    @toLon FLOAT
)
RETURNS FLOAT
WITH SCHEMABINDING
AS 
BEGIN
    RETURN (1.852 / 1.61) *
        60 *
        DEGREES(
            ACOS(
                SIN(RADIANS(@toLon)) *
                SIN(RADIANS(@fromLon)) +
                COS(RADIANS(@toLon)) *
                COS(RADIANS(@fromLon)) *
                COS(RADIANS(@toLat) - RADIANS(@fromLat))))
END

The WITH SCHEMABINDING is a hint that allows SQL Server to mark the function deterministic, and it is, as can be verified with SELECT OBJECTPROPERTY(OBJECT_ID('[dbo].[CalculateDistanceUdf]'), 'IsDeterministic'), but it still performs significantly slower than the SQLCLR alternative.

I borrowed the test setup from this article, and wrote my tests as follows:

CREATE TABLE Numbers (
    Num INT NOT NULL,
    CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Num)
)
GO

WITH N1(C) AS (SELECT 0 UNION ALL SELECT 0),
N2(C) AS (SELECT 0 FROM N1 AS T1 CROSS JOIN N1 AS T2),
N3(C) AS (SELECT 0 FROM N2 AS T1 CROSS JOIN N2 AS T2),
N4(C) AS (SELECT 0 FROM N3 AS T1 CROSS JOIN N3 AS T2),
N5(C) AS (SELECT 0 FROM N4 AS T1 CROSS JOIN N4 AS T2),
N6(C) AS (SELECT 0 FROM N4 AS T1 CROSS JOIN N4 AS T2 CROSS JOIN N3 AS T3),
Nums(Num) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM N6)
INSERT INTO Numbers(Num) SELECT Num FROM Nums
GO

This inserts 1,048,576 (2^20) rows into the Numbers table, so it's a good-sized test.

Then we can run each of the following three tests:

DECLARE @fromLat AS FLOAT = 100
DECLARE @fromLon AS FLOAT = 100
DECLARE @toLat AS FLOAT = 120
DECLARE @toLon AS FLOAT = 120

SELECT MAX(dbo.CalculateDistance(Num / @fromLat, Num / @fromLon, Num / @toLat, Num / @toLon)) FROM Numbers
GO

DECLARE @fromLat AS FLOAT = 100
DECLARE @fromLon AS FLOAT = 100
DECLARE @toLat AS FLOAT = 120
DECLARE @toLon AS FLOAT = 120

SELECT MAX(dbo.CalculateDistanceUdf(Num / @fromLat, Num / @fromLon, Num / @toLat, Num / @toLon)) FROM Numbers
GO

DECLARE @fromLat AS FLOAT = 100
DECLARE @fromLon AS FLOAT = 100
DECLARE @toLat AS FLOAT = 120
DECLARE @toLon AS FLOAT = 120

SELECT MAX
    (
        (1.852 / 1.61) *
        60 *
        DEGREES(
            ACOS(
                SIN(RADIANS(Num / @toLon)) *
                SIN(RADIANS(Num / @fromLon)) +
                COS(RADIANS(Num / @toLon)) *
                COS(RADIANS(Num / @fromLon)) *
                COS(RADIANS(Num / @toLat) - RADIANS(Num / @fromLat)))))
FROM Numbers
GO

You can run these each individually to time them. My times were roughly 645ms for the SQLCLR function, 3369ms for the T-SQL UDF, and 703ms for the inline T-SQL. As you can see, the SQLCLR function is even faster than the inline T-SQL, and lets us encapsulate the logic in a single function. (This actually came about as an issue because we had the calculation copied-and-pasted over several dozen queries, often 3-8 times per query.)

So, that said, in this type of situation (raw math) there's no reason to use T-SQL for the task, and for something reasonably complex like this, no reason not to abstract it. Dump the code in .NET, write your unit tests, and then deploy the assembly to the SQL server.

Now, that said, there are times I wouldn't use a SQLCLR function, such as when the math is ultra simple (e.g. multiplying by 3), and there are times when a table-valued UDF would be far superior, so I don't want to suggest that this will always help; it's just another thing you can try, and it might actually surprise you.


For anyone curious, attempting to register FSharp.Core throws the following warning:

Warning: The Microsoft .NET Framework assembly 'fsharp.core, version=4.4.3.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a, processorarchitecture=msil.' you are registering is not fully tested in the SQL Server hosted environment and is not supported. In the future, if you upgrade or service this assembly or the .NET Framework, your CLR integration routine may stop working. Please refer SQL Server Books Online for more details.

And using a PERMISSION_SET of EXTERNAL_ACCESS or SAFE throws the following error:

CREATE ASSEMBLY failed because type 'Microsoft.FSharp.Collections.FSharpMap`2' in safe assembly 'FSharp.Core' has a static field 'empty'. Attributes of static fields in safe assemblies must be marked readonly in Visual C#, ReadOnly in Visual Basic, or initonly in Visual C++ and intermediate language.

SQL Server Datatypes: How to avoid VarChar

I've seen, time and time again, programmers make many of the same mistakes with their SQL datatypes, and one of those mistakes is using VarChar for almost everything. I've seen it so many times that if I had a nickel for each time I saw it, well, let's just say my McLaren P1 would be yellow.

Why do people use VarChar so much?

Well, to be honest, it's easy. We, as people, are generally lazy, and it's easy to store anything in a VarChar(50), or worse, a VarChar(MAX)! Why is this a bad thing? Well, for some data it's not, but for other data it's just not the best option. As developers and programmers, we almost always have a choice as to how we store our data, and sometimes it's easy to make an inefficient one.

Let's take a solid example. I was over on Stack Overflow one day, and I noticed a developer doing something odd: the developer was storing an IP address (we'll assume IPv4, 192.168.0.1, which is a pretty common IP for default gateways in small home and office networks) in a VarChar or Char field. I'm not sure of the precision, or which type it was (the developer left out the DDL), but for the sake of argument let's assume it was the smallest precision required to store any IPv4 address, and as such a VarChar(15).

The developer, much like the rest of us, was trying to find a way to shrink the amount of data being stored. So, the developer proposed that, instead of storing 1.1.1.1, we just omit all the characters except the last two (in this example: .1), keeping only the fourth octet in the database. The downfall of this is quite obvious: we now have no way of distinguishing whether our value was 1.1.1.1, 2.2.2.1, 3.3.3.1, or any other address ending in .1. But there's a better way.

Let's take a peek at what we know at this point:
  1. The data being stored is binary data;
  2. It's being stored in a string field;
  3. The maximum length on the string field is 15 characters;
Now this doesn't just apply to IP Addresses, it also applies to hashes, encrypted data and other binary objects.

At first glance this might not seem so bad. The IP address as a string is 192.168.0.1. The maximum data size is going to be 17 bytes, as the VarChar type takes one byte per character, plus two bytes of overhead. The size for our specific address is 13 bytes, by the same math. The developer took the time to fit the data within the seemingly smallest datatype possible. But what did the developer forget?

First, we're trying to store binary data. The smallest way to store this (at least in string format) is either hexadecimal or Base64 encoding. Let's assume we use hexadecimal (it really doesn't matter either way). We're storing data that is four bytes, which means we need eight characters. Our example leaves us with 0xC0A80001 or, for short: C0A80001. So, this alone allows us to reduce our maximum storage space to almost half its original size, and our utilized space (for this example) to 10 bytes from 13. With just one quick optimization we converted our 15-character string to an 8-character hexadecimal string. Now that we know that, we can make another optimization and change the type to Char(8). This shaves off the two bytes of variable-length overhead, and leaves our example at a cool 8 bytes of storage space.

But, we're forgetting one small thing: SQL Server (at least, Microsoft SQL Server) has a Binary type. Much like the Char type, the Binary type has a fixed size. The difference is that the Binary type can store raw byte data. It takes a length, just like the Char does, so in our case, it would be Binary(4) (to store four bytes for one IPv4 address). The binary type will only store the raw data for the address, so we're left with:
  1. Byte 1: 0xC0
  2. Byte 2: 0xA8
  3. Byte 3: 0x00
  4. Byte 4: 0x01
Microsoft SQL Server also has a VarBinary type which works just like the VarChar type. It supports the same size limits: 1-8000 or MAX. It also requires two bytes of overhead for each row, just like a VarChar type.
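As a minimal sketch of what this looks like in practice (the table and column names are hypothetical), PARSENAME happens to split dotted strings nicely:

-- Hypothetical table: one IPv4 address per row, four raw bytes.
CREATE TABLE Connections (
    IpAddress BINARY(4) NOT NULL
)
GO

DECLARE @ip VARCHAR(15) = '192.168.0.1'

-- PARSENAME splits on dots (part 4 is the leftmost octet); each octet
-- fits a TINYINT (0-255), and binary concatenation glues the bytes together.
INSERT INTO Connections (IpAddress)
SELECT CAST(CAST(PARSENAME(@ip, 4) AS TINYINT) AS BINARY(1)) +
       CAST(CAST(PARSENAME(@ip, 3) AS TINYINT) AS BINARY(1)) +
       CAST(CAST(PARSENAME(@ip, 2) AS TINYINT) AS BINARY(1)) +
       CAST(CAST(PARSENAME(@ip, 1) AS TINYINT) AS BINARY(1))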

The nice thing about using a Binary type for this field is that it allows us to save a significant amount of space. By optimizing this field, we've saved 11 bytes of storage per row (a Char(15) at 15 bytes versus a Binary(4) at 4). How significant is that? If we had 500,000,000 rows, we've saved 5.5GB of data. (And for big-data applications, 500,000,000 rows is insignificant.)

You might say, "well my application is small data, 500,000,000 rows is a pretty significant number, and 5.5GB for that many records is small." While that may be true, this is just one field we've optimized.

The DateTime example

Let's take another example: I've seen a lot of people use the VarChar type for date/time data as well, when it's completely unnecessary. SQL Server has several types for date/time data, the most useful being DateTime, DateTime2, and DateTimeOffset. Microsoft recommends that you no longer use DateTime for new work, as the DateTime2 and DateTimeOffset types align with the SQL standard and are more portable. The DateTime2 and DateTimeOffset types also have better precision and a larger range.

Why is this so important? You can just as easily store a date as a string in a VarChar field, and then parse it later. The problem with that is that you can't filter quite so easily on certain criteria. It's easy (at least with a DateTime2 field) to filter for dates within a certain range, on a certain date, etc. It's far less intuitive with any string type.
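A quick sketch of the difference, using a hypothetical Events table:

CREATE TABLE Events (
    OccurredAt DATETIME2(3) NOT NULL
)
GO

-- Range filters are straightforward, correct, and index-friendly:
SELECT COUNT(*)
FROM Events
WHERE OccurredAt >= '2018-01-01' AND OccurredAt < '2018-02-01'

-- With a VarChar column you'd be comparing strings (or converting every
-- row on the fly), and an index on the column is far less useful.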

The other problem is less obvious: with a VarChar type, there is no validation guaranteeing the input string is actually a DateTime string. That means it's up to whatever logic manipulates the database to make that guarantee.

What about the NVarChar and NChar types?

I've not discussed these so far because we were talking about binary data, which is generally stored in ASCII or raw form. These types (NVarChar and NChar) are Unicode (UTF-16, specifically) variants of the VarChar and Char types, respectively. They take two bytes per character, with the variable-length type taking an extra two bytes of overhead. In our example, had the first field been an NVarChar(15), it would have taken up to 32 bytes (30 bytes for the 15 characters, plus two bytes of overhead). The specifiable sizes for these two types are integers in the range 1-4000, or MAX for NVarChar.
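You can see the per-character cost directly with DATALENGTH (which reports the data bytes only; the two bytes of variable-length overhead are storage-level and don't show up here):

SELECT DATALENGTH(CAST('192.168.0.1' AS VARCHAR(15)))  AS VarCharBytes,  -- 11
       DATALENGTH(CAST('192.168.0.1' AS NVARCHAR(15))) AS NVarCharBytes  -- 22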

What do the numbers in parenthesis represent?

Many types take an optional size, precision, or other parameter describing how much data, and in what form, can be stored within them. For all the types we're discussing in this article, the number in parentheses represents how many characters (for the Char, VarChar, NChar and NVarChar types), or how many bytes (for the Binary and VarBinary types), the field can store.

What are the VarChar, NVarChar and VarBinary types doing internally?

All three of these types work in a very specific way internally. You can see that the largest explicit size any of the three can take is 8000 bytes; beyond that, you must use MAX. But what does that mean?

Internally, in Microsoft SQL Server, the variable length fields (which have the optional MAX specification) store data in one of two ways:
  1. For data that fits within 8000 bytes, the data is stored in-row;
  2. For data greater than 8000 bytes, the data is stored out-of-row and a pointer to the data is stored in-row;
This should help clarify what the server is doing, what the specifications mean, and why I always cringe when I see VarChar(MAX) or NVarChar(MAX) in a situation that doesn't call for it.
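If you want to see this with your own eyes, here's a rough sketch (the table name is hypothetical); the LOB_DATA allocation unit only accumulates pages once a value no longer fits in-row:

CREATE TABLE BlobTest (
    Id INT IDENTITY PRIMARY KEY,
    Payload VARCHAR(MAX) NOT NULL
)
GO

INSERT INTO BlobTest (Payload) VALUES (REPLICATE('x', 100))                           -- fits in-row
INSERT INTO BlobTest (Payload) VALUES (REPLICATE(CAST('x' AS VARCHAR(MAX)), 100000))  -- pushed off-row
GO

-- Compare IN_ROW_DATA vs LOB_DATA page counts:
SELECT alloc_unit_type_desc, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('BlobTest'), NULL, NULL, 'DETAILED')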

In summation:

As always: know your data, know your users, and most of all, know your environment.