Engineering

FullStack 2017: bits and pieces

FullStack is the self-styled “go-to JavaScript, Node, Angular and IoT conference in London”. I did go to it, so I guess that is fair enough. As the whirls and eddies of three busy days of JavaScript talks subside, some bits and pieces come slowly floating to the surface. I present a collection of them here.

Curiously, several of the most interesting talks had to do with binary encoding and programming directly with ones and zeroes in JavaScript: these are the bits. Other memorable talks had to do with IoT boards: these are the pieces…of course!

The drowned Hippasus

At a time when machines seem to be gaining the upper hand, it is refreshing to be reminded that most of them cannot do basic arithmetic. As Douglas “The 🐊” Crockford explained in the opening talk of the conference, Numbers, computerised arithmetic is beset by two major problems: overflow/underflow and roundoff errors. Overflow or underflow occurs when the result of a calculation is too large or too small to be contained within the number of bits allocated for the number type being used. Roundoff errors occur when a fractional number cannot be represented to its full degree of precision within that same number of bits.

Failure to handle these errors has caused at least two major tragedies. In 1996 the Ariane 5 rocket (more than $7 billion in development) launched off the coast of French Guiana, flew for about 37 seconds, and then exploded. Its horizontal velocity had been greater than expected, and when the guidance system converted that value to a 16-bit integer it overflowed, sending the rocket off course. In 1991, during the Gulf War, an American Patriot missile battery failed to intercept an incoming Iraqi Scud, which killed 28 people. The battery’s internal clock counted time in tenths of a second, and the count was multiplied by 0.1 to obtain the time in seconds; but 0.1 cannot be represented exactly in binary, and after roughly a hundred hours of continuous operation the accumulated rounding error (about a third of a second) was enough to make the tracking system lose the Scud.
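
Both failure modes are easy to reproduce in JavaScript. The sketch below is only an illustration (the Patriot used 24-bit fixed-point arithmetic and Ariane 5 a float-to-16-bit-integer conversion in Ada), but the underlying effects are the same:

```javascript
// Roundoff: 0.1 has no exact binary representation, so repeatedly adding it drifts.
let seconds = 0;
for (let i = 0; i < 36000; i++) {
  seconds += 0.1;              // one "hour" of tenth-of-a-second ticks
}
console.log(seconds);          // close to, but not exactly, 3600
console.log(seconds - 3600);   // the accumulated error

// Overflow: a 16-bit signed integer silently wraps around.
const velocity = new Int16Array(1);
velocity[0] = 40000;           // too large for 16 bits
console.log(velocity[0]);      // -25536
```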

Crockford’s talk (amusingly described by the chair, Gerard Sans, as “a good mix of facts and history”) included a nice historical overview of numerical representation and its limits, and touching acknowledgements of Fibonacci and Leibniz at the end. I was reminded of Hippasus, drowned by the gods for divulging the existence of irrational numbers. We are still dealing with his legacy.

JS Math woes

Funnily enough, Crockford was not the only one delving deep into numerical representation at FullStack. A series of talks by Athan Reines introducing stdlib, a new numerical sciences library for Node and the browser, touched on similar issues. Math, Machine Learning, and JavaScript made the case for JavaScript as the machine learning language of the future. Reines sees JavaScript’s speed and C/WebAssembly interop capabilities as more than adequate to meet the demands of machine learning for performance, and puts JavaScript’s current lack of traction in this area down to an absence of good libraries and community interest. I doubt I am the only one hoping stdlib can help fix that.

In the meantime, it was interesting to hear more about the technical problems associated with doing accurate maths in JavaScript. It turns out that many built-in maths functions, such as Math.cos and Math.sin, are optimised for speed rather than accuracy and produce different results in different browsers, with no standard to govern acceptable precision. This makes the Math library virtually useless for reliable mathematical computation (a full list of JavaScript Math woes can be found in the stdlib repo). In addition, many external libraries are badly written. Reines singled out implementations of squared-difference functions as commonly falling prey to something called catastrophic cancellation. Consider two ways of calculating x² − y²:
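
Something like the following sketch captures the contrast (hypothetical function names, not stdlib’s actual code):

```javascript
// Naive version: square first, then subtract. The two squares are rounded
// before the subtraction, so the subtraction cancels real digits and keeps the noise.
function squaredDifferenceBad(x, y) {
  return x * x - y * y;
}

// Better version: factor as (x - y)(x + y) and subtract first.
function squaredDifferenceGood(x, y) {
  return (x - y) * (x + y);
}

const x = 1e8 + 1;
const y = 1e8;
console.log(squaredDifferenceBad(x, y));  // 200000000 -- the trailing 1 is lost
console.log(squaredDifferenceGood(x, y)); // 200000001 -- exact
```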

 

In the bad function, if x and y are very close but not identical, most of the significant digits will be destroyed by the subtraction, and the rounding errors introduced by the multiplications will dominate the final result (this is catastrophic cancellation). A well-written library will limit the impact of rounding errors by subtracting y from x before performing the multiplication. In follow-up talks Node.js Add-ons for High Performance Numeric Computing and WebAssembly and the Future of the Web, Reines detailed some of the practical steps necessary for incorporating C and Fortran code into stdlib. Finally, in The JavaScript Data Science Survival Kit, Philipp Burckhardt, another contributor to stdlib, showed how to use stdlib to plot data and build a simple spam classifier.

Lowest common denominator

Crockford ended his talk by suggesting that the next generation of programming languages should adopt a new number type he calls DEC64, a 64-bit decimal. The central aim of this proposal seems to be to make dealing with numbers more intuitive for humans, at the expense of requiring more work to be done by computers. Its key features are these:

  • It is intended to be the only number type.
  • It has only a single special value, NaN, which is equal to itself (unlike IEEE 754 NaN).
  • It is encoded in base ten rather than binary.
  • It is denormalized (i.e. there is more than one possible representation for the same number).
  • Half is rounded away from zero (specified as a comment in Crockford’s reference implementation rather than in the proposal itself).

It is obviously less complicated to have a single number type, as long as that type is sufficient for all tasks. The special numbers Infinity and -Infinity mandated by the IEEE 754 floating point standard are, for the same reason, both replaced with a single NaN value. Base ten encodings are more intuitive for humans because roundings occur where we expect them. We might not guess, for example, that 1/10 requires an infinite expansion in binary representation, and so is actually stored as something close to 0.10000000000000000555. Conversely, no one will be surprised that a decimal type stores 2/3 as a long but finite run of sixes, rounded at the last digit. Finally, denormalized representation means that significant digits, including trailing zeroes, can be preserved (so 0.25 × 2 will evaluate to 0.50), which again is helpful when the number needs to be interpreted by humans. Rounding half away from zero is what most of us are taught to do in school.
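
For the curious, the proposal (documented at dec64.com) packs a 56-bit signed coefficient into the high bits of a 64-bit word and an 8-bit signed exponent into the low byte, the value being coefficient × 10^exponent. A rough sketch of that layout, ignoring NaN and coefficients that do not fit:

```javascript
// Toy DEC64 encoder/decoder: value = coefficient * 10^exponent.
// This ignores NaN, overflow and normalisation -- see dec64.com for the real thing.
function dec64(coefficient, exponent) {
  return (BigInt(coefficient) << 8n) | (BigInt(exponent) & 0xffn);
}

function dec64ToNumber(d) {
  const exponent = Number(BigInt.asIntN(8, d & 0xffn));
  const coefficient = Number(BigInt.asIntN(56, d >> 8n));
  return coefficient * 10 ** exponent;
}

const pi = dec64(314, -2);                    // 3.14 held exactly as 314 * 10^-2
console.log(dec64ToNumber(pi));               // 3.14
console.log(dec64(50, -2) === dec64(5, -1));  // false: 0.50 and 0.5 have distinct encodings
console.log(dec64ToNumber(dec64(50, -2)));    // 0.5 all the same
```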

Reines had not seen the talk, but when I described the proposal he mentioned incompatibility with existing libraries as a concern. Crockford’s proposal has also been torn to shreds on Hacker News by people who find integer types useful or are concerned about performance. The main performance issue is that scaling by the base, which is needed to align exponents and normalise results, means multiplying or dividing by powers of ten, whereas a binary encoding can do the equivalent scaling with a cheap bit shift. In addition, denormalised representation means that extra calculations have to be performed before numbers can be compared. Infinity values serve useful purposes in allowing certain calculations to proceed that would otherwise fail. Goldberg gives the example of 1/(x + x⁻¹), which correctly evaluates to 0 when x is 0, thanks to the rules about Infinity. Finally, rounding half away from zero is known to bias results away from zero on average, so IEEE 754 employs rounding half to even (“banker’s rounding”) instead. This may be the least defensible point in Crockford’s proposal, but it can be abandoned without damage to the rest.
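
The Infinity point is easy to see in JavaScript itself:

```javascript
// Goldberg's example: Infinity lets 1/(x + 1/x) behave at x = 0.
const f = (x) => 1 / (x + 1 / x);
console.log(f(0)); // 0 -- because 1/0 is Infinity, 0 + Infinity is Infinity, and 1/Infinity is 0
console.log(f(2)); // 0.4
```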

In light of these criticisms, it was a happy coincidence that support for at least one aspect of Crockford’s proposal should have materialised by accident in the form of an anecdote delivered by Myles Borins in Node.js Releases, how do they work?. By this time, of course, my ears were primed for number-related trivia. Borins described how he accidentally introduced a regression in querystring.parse by passing its maxKeys argument directly down to String.prototype.split as its second argument (a limit on the number of splits to be returned). The patch broke the handling of the case where maxKeys is Infinity, because split coerces its limit to an unsigned 32-bit integer, and that coercion turns Infinity (like NaN) into 0, so the call returned an empty array and no keys were parsed at all. Of course this error would have been impossible if Infinity did not exist. Crockford is surely right that untold errors are caused by complications surrounding number types, even in JavaScript’s highly restricted environment. It seems unlikely his proposal will take off in the near future, however.
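
The coercion at fault is easy to demonstrate (this is just the standard behaviour of split, not the original Node patch):

```javascript
// String.prototype.split coerces its limit with ToUint32, so Infinity becomes 0.
console.log('a=1&b=2&c=3'.split('&'));           // [ 'a=1', 'b=2', 'c=3' ]
console.log('a=1&b=2&c=3'.split('&', Infinity)); // [] -- a limit of 0 yields nothing at all
```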

Bits and pixels

If Crockford and Reines are trying to abstract away some of the bit-level problems that complicate our arithmetic, Ben Foxall’s Javascript Browser Bits encouraged us to tarry in the deeps, demoing a number of impressive effects achieved by manipulating binary data in the browser. To prove to yourself that the Node repl lies about the value of 0.1, try this:
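
Something along these lines does the trick (my reconstruction, not necessarily Ben’s exact snippet):

```javascript
// The repl prints 0.1, but the double actually stored is not exactly 0.1.
console.log(0.1);                   // 0.1 -- the repl rounds for display
console.log((0.1).toPrecision(21)); // 0.100000000000000005551

// Or, in the spirit of the talk, inspect the raw bits:
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, 0.1);
console.log(view.getBigUint64(0).toString(2).padStart(64, '0'));
// 0011111110111001100110011001100110011001100110011001100110011010
```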

 

Who can explain why this works? Please comment! Use OR and NOT operators to annoy your colleagues:
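
A few classics (not necessarily the ones from the talk):

```javascript
// Bitwise operators coerce their operands to signed 32-bit integers,
// which makes for terse but baffling code.
console.log(3.7 | 0);        // 3  -- OR with 0 truncates towards zero
console.log(~~-3.7);         // -3 -- double NOT does the same
console.log(~5);             // -6 -- ~x is -(x + 1)
console.log(2147483648 | 0); // -2147483648 -- anything outside 32 bits wraps around
```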

 

More productively, use browser APIs to pull the raw data of images and sound into typed arrays:
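
Roughly like this, assuming an img and a canvas element are already on the page (a sketch of the standard canvas and Web Audio APIs, not Ben’s exact code):

```javascript
// Pixels: draw an image to a canvas and read back its RGBA bytes.
const img = document.querySelector('img');        // assumed to be loaded already
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);
const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
// pixels is a Uint8ClampedArray: [r, g, b, a, r, g, b, a, ...]

// Sound: route the microphone through an AnalyserNode and sample the waveform.
const audioCtx = new AudioContext();
navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const analyser = audioCtx.createAnalyser();
  audioCtx.createMediaStreamSource(stream).connect(analyser);
  const samples = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteTimeDomainData(samples);        // raw 8-bit waveform data
});
```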

 

Since you can pass the same underlying buffer to different typed array views, you can easily convert an array of 8-bit r, g, b, a values into an array of 32-bit rgba values with Uint32Array. As Reines showed later, it can be helpful to exploit this memory sharing to avoid reallocating memory when processing large volumes of data. Ben’s talk had many more tricks for producing effects with sound and pixels, which you can check out here.
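
Continuing the canvas sketch above, the conversion costs nothing because no bytes are copied:

```javascript
// Both views share the buffer returned by getImageData.
const rgba32 = new Uint32Array(pixels.buffer, pixels.byteOffset, pixels.length / 4);

// Writes through one view are visible through the other: make the first pixel
// opaque red (bytes are [r, g, b, a], i.e. 0xAABBGGRR on little-endian machines).
rgba32[0] = 0xff0000ff;
console.log(pixels.slice(0, 4)); // Uint8ClampedArray [ 255, 0, 0, 255 ]
```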

Pieces of action!

Finally, a quick review of some of the interesting talks relating to IoT. Amie Dansby’s keynote Back to the Future: IoT Maker Revolution was a rallying cry for experimentation. “I just don’t have the time”, I heard someone mutter. They obviously missed JavaScript and Bluetooth LE, in which Gordon Williams showed how to hook up an Espruino to Chrome’s new Bluetooth API using Puck.js in two minutes. Nick O'Leary's demo of how to use Node-RED to create drag-and-drop pipelines linking IoT devices in seconds might have served the same end. Nick Hehr's Microcontrollers as Microservices was a plug for Tessel and a plea to keep it local: serve your demos from a microcontroller, not the cloud! Finally, the call to experiment was amply met by George Mandis, whose Tiny Computers, JavaScript and MIDI demonstrated various applications built on the browser's API for interacting with MIDI devices. Stay tuned: I will be piping the text of my next blogpost with a MIDI flute.
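
If you want to try the MIDI side yourself, the Web MIDI API (supported in Chrome) boils down to something like this minimal sketch:

```javascript
// Request MIDI access and log every message from every connected input device.
navigator.requestMIDIAccess().then((midi) => {
  for (const input of midi.inputs.values()) {
    input.onmidimessage = (message) => {
      const [status, note, velocity] = message.data; // e.g. 144 is "note on", channel 1
      console.log(status, note, velocity);
    };
  }
});
```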

Further Reading

Most of the talks from FullStack are available on the website. Check them out!

The best introduction to the problems surrounding computer arithmetic is Goldberg’s What Every Computer Scientist Should Know About Floating-Point Arithmetic. As the title implies, you must read this if you have not. For a deeper dive into the options for decimal encoding, poke around Mike Cowlishaw’s Speleotrove.

FullStack 2017: bits and pieces was originally published in YLD Blog on Medium.