I recently learned about this surprising behavior in JavaScript:

null > 0
=> false
null < 0
=> false
null == 0
=> false
null >= 0
=> true

There is, of course, a technical reason behind this. But still… it’s pretty weird.
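
The short version, as I understand it: the relational operators (<, >, <=, >=) convert null to a number, so it behaves like 0 in those comparisons, while == never does a numeric conversion for null; null is only loosely equal to undefined. A quick sketch you can paste into a console:

Number(null)       // => 0
null < 0           // like 0 < 0    => false
null > 0           // like 0 > 0    => false
null >= 0          // like !(0 < 0) => true

null == undefined  // => true
null == 0          // => false, == never coerces null to a number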

It got me curious about how the other languages I use would behave when comparing null and 0.

Ruby

I ran this code in the Rails console:

nil > 0
=> NoMethodError: undefined method `>' for nil:NilClass

0 < nil
=> ArgumentError: comparison of Integer with nil failed

Reasonable, though of course they are undesirable runtime errors.

C#

System.Console.WriteLine(0 < null); // False
System.Console.WriteLine(null > 0); // False

This one actually surprised me by compiling at all. It did compile with warnings, though:

Comparing with null of type 'int?' always produces 'false'

I’ve been less active in C# lately, but I honestly assumed this would be a compiler error. TIL!
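
To see why, here is a minimal sketch: the null literal forces the comparison to be lifted to int?, and the lifted <, >, <=, >= operators return false whenever either operand is null (equality is the exception):

int? x = null;

System.Console.WriteLine(x > 0);     // False
System.Console.WriteLine(x < 0);     // False
System.Console.WriteLine(x >= 0);    // False
System.Console.WriteLine(x == null); // True: lifted equality treats two nulls as equal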

F#

printfn "%b" (0 < null)
printfn "%b" (null > 0)

Only F# treated this (properly, as far as I’m concerned) as a compile-time error:

The type 'int' does not have 'null' as a proper value.

Looks like F# wins this micro-competition. 🏆

