TC39 Pipeline Operator - Hack vs F#

JavaScript, Functional Programming, TC39

I want to take some time to share everything I know about the pipeline operator proposal that is currently in stage 2, to the best of my ability. This will, of course, be a somewhat biased account — it's my article, haha — but I'll do my best to present both sides while making my case. Also, keep in mind that, as thorough as I'll try to be, I'm sure I'm missing things. If you spot something or think of something, feel free to DM me on Twitter. I try to answer all of my DMs.

IMPORTANT: Before we get into this, there are folks, myself included, who have VERY strong feelings about these matters. Please keep those feelings in check; the folks working on these standards are good people trying to do right by the community, even if we disagree with them.

What Is A "Pipeline Operator"? #

Simply put, the pipeline operator, |> in this case, allows a software developer to "pipe" the evaluated expression from the left-hand side (LHS) to some function or expression on the right-hand side (RHS). There are several implementations and examples of this throughout the programming world in various languages, and we're in the amazing position that the TC39 — the governing body of the ECMAScript standard on which JavaScript is based — is considering adding such an operator to the ECMAScript standard (and therefore JavaScript). There were many competing proposals, but two stood out the most: the Hack pipeline, currently in stage 2, and the F# pipeline, which was skipped despite being very popular.
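
To make the idea concrete before we dig into the competing designs, here's a notional sketch of the same computation written as nested calls versus a hypothetical piped form. The piped version is shown only as a comment, because the exact RHS syntax is precisely where the two proposals differ:

ts
const squared = (x: number) => x ** 2;
const subtractOne = (x: number) => x - 1;

// Today: read "inside out", right to left
const result = subtractOne(squared(2)); // 3

// With a pipeline operator, you read top to bottom instead.
// (Illustrative only; the RHS syntax is exactly where the two proposals differ.)
// const piped = 2 |> /* squared, in whichever syntax */ |> /* subtractOne */;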

Piping Functions (now) #

The first thing to understand is what it really means to pipe functions. In today's JavaScript, there's really only one way to "pipe" functions: with a "functional pipe". The functional pipe is a common functional programming utility that exists in libraries like Ramda and RxJS, as well as many others.

The idea behind a functional pipe is to apply a series of functions, in the order they were specified, passing the return value of each function to the next function in the chain. The utility functions that do this are relatively simple:

js
function pipe(initialArg, ...fns) {
  return fns.reduce((prevValue, fn) => fn(prevValue), initialArg);
}

// Some basic use:
const result = pipe(
  2,
  (x) => x ** 2,
  (x) => x - 1,
);

// result: 3

One powerful feature of functional pipes is the fact that functions are portable and composable. So we can change the above example to be more readable — and have more reusable parts — like so:

js
function squared(x) {
  return x ** 2;
}

function subtractOne(x) {
  return x - 1;
}

// Refined use
const result = pipe(2, squared, subtractOne);

// result: 3

Piping Functions: Use Cases #

Of course, use cases for piping functions are generally much more elaborate than simple math. The most common use cases are usually about allowing functions to be applied repeatedly over various sets of things. A standout example — that of course I'm going to talk about — is found in RxJS.

RxJS uses piped functions in order to transform observables. Years ago, RxJS only had class methods on its Observable type for transforming observables. This worked well enough, but with so many possible operations and methods for observables (and the fact that methods can't really be "tree-shaken" by modern bundlers), having so many methods wasn't good for the community that used RxJS. We tried what is called "prototype patching", where modules would add methods to Observable "à la carte", but that came with a host of other issues (I'll try to address all of that in another post). Ultimately, we settled on using piped functions. The benefit of this is that you only "pay for what you use" in bundle size. In other words, you import and use just the operators you need, and the rest can be "tree-shaken" away.

ts
// RxJS 5.5 and lower (methods and no piping):
source$
  .filter((x) => x % 2 === 0)
  .map((x) => x * x)
  .concatMap((x) => of(x + 1, x + 2, x + 2, x + 4))
  .subscribe(console.log);

// RxJS 5.5 and higher (with piped functions):
source$
  .pipe(
    filter((x) => x % 2 === 0),
    map((x) => x * x),
    concatMap((x) => of(x + 1, x + 2, x + 2, x + 4)),
  )
  .subscribe(console.log);
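
To make the "pay for what you use" point concrete, here's roughly what the imports look like in the pipeable style. The exact module paths have shifted between RxJS versions, so treat these as representative rather than canonical:

ts
// Only the operators you actually use are pulled in as named imports,
// so bundlers can tree-shake the rest. (Paths are representative; they
// have varied across RxJS versions, e.g. 'rxjs/operators' in v6.)
import { filter, map, concatMap } from 'rxjs/operators';
import { of } from 'rxjs';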

In RxJS's case, a simple pipe function — as shown in the first example in the article — is used inside of the pipe method on Observable, passing this as the initial argument.
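
As a rough sketch (and definitely not RxJS's actual source), that pipe method could look something like the following, reusing the same reduce trick as the standalone pipe helper and passing this as the starting value:

ts
// A rough sketch, not RxJS's actual implementation: the pipe method
// simply feeds `this` through the provided functions in order, just
// like the standalone pipe() helper from the first example.
type OperatorFunction<In, Out> = (source: Observable<In>) => Observable<Out>;

class Observable<T> {
  // ...subscribe and everything else omitted for brevity...

  pipe(...fns: Array<OperatorFunction<any, any>>): Observable<any> {
    return fns.reduce(
      (prevValue, fn) => fn(prevValue),
      this as Observable<any>,
    );
  }
}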

All of the operators (filter, map, and concatMap) are then created via higher-order functions: functions that take the arguments necessary to set up their behavior and return, in this case, unary pipeable functions of the shape (source: Observable<In>) => Observable<Out>. (Unary functions are functions that take a single argument and return a value.)

This means the overall implementation of the RxJS operators is necessarily complex in terms of functional composition. They're all basically (arg) => (source) => result, such that (source) => result can be piped with the functional pipe.
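
To get a feel for that shape without all of the Observable machinery, here's a deliberately tiny, hypothetical version of the pattern. These are not RxJS's real map and filter; they just follow the same (arg) => (source) => result structure, applied to plain arrays so the example stays self-contained:

ts
// Toy "operators" in the (arg) => (source) => result shape.
// NOT RxJS's operators, just the same higher-order pattern over arrays.
const map =
  <In, Out>(project: (value: In) => Out) =>
  (source: In[]): Out[] =>
    source.map(project);

const filter =
  <In>(predicate: (value: In) => boolean) =>
  (source: In[]): In[] =>
    source.filter(predicate);

// Works with the functional pipe from the start of the article:
// pipe([1, 2, 3, 4], filter((x) => x % 2 === 0), map((x) => x * x)); // [4, 16]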

The Hack Pipeline (The TC39's current proposal) #

The current pipeline operator the TC39 is proceeding with is called the Hack pipeline. It is so named because it is modeled after the pipeline operator in the Hack language, a PHP dialect created at Facebook. It's worth noting that this proposal is only in stage 2, and what I'm writing here is the state of the proposal at this time, to the best of my knowledge.

With the Hack pipeline, the value from the LHS is referenced within the expression on the RHS using a special placeholder character, currently the ^ character, although past variations used %.

Some examples:

ts
// Similar to our first example.
// Short and sweet! (If a little hard on the eyes)
const result = 2 |> ^ ** 2 |> ^ - 1;

// The same thing only using functions:
const result = 2
  |> squared(^)
  |> subtractOne(^);

// RxJS using the Hack pipeline:
// Note the (^) after each Rx operator.
source$
  |> filter((x) => x % 2 === 0)(^) // <-- here
  |> map((x) => x * x)(^)
  |> concatMap((x) => of(x + 1, x + 2, x + 2, x + 4))(^)
  |> ^.subscribe(console.log);

// An example of using existing non-unary function APIs:
const randomNumberBetween20And50 = Math.random() * 100
  |> Math.pow(^, 2)
  |> Math.min(^, 50)
  |> Math.max(20, ^);

// An example of use within an async function
async function demo() {
  return await fetch(url)
    |> await ^.json()
    |> await delayValue(1000, ^);
}

// Throwing within a step of the pipe (see "Cons" below):
const validNumberBetween20and50 = maybeGetNumber()
  |> ((num) => {
    if (Number.isNaN(+num)) {
      throw new TypeError('Did not get a number');
    }
    return num;
  })(^)
  |> Math.pow(^, 2)
  |> Math.min(^, 50)
  |> Math.max(20, ^);

Pros #

Cons #

ts
// Hack
const a = 2
  |> squared(^)
  |> subtractOne(^);

// The same with let and equals
// Amazingly, this works with TypeScript quite well.
// (if x is `any` it infers in each step)
let x;
x = 2;
x = squared(x);
x = subtractOne(x);

// Another pattern with separate declarations in a chain.
// This has the added advantage that you can use each part
// throughout the chain
const a = 2,
  b = squared(a),
  c = subtractOne(b),
  d = a + b + c; // Can't do this with the Hack pipeline

The F# Pipeline #

The F# pipeline is another variant of the pipeline proposal that seemed to have a lot of traction among those outside of the TC39, but was skipped in favor of the Hack pipeline, mostly for the reasons cited in the "pros" for the Hack pipeline above. It comes, as you may have guessed, from a few languages, but the most notable implementation is in F#. Thus the name.

The idea with the F# pipeline is that it passes the value from the LHS as the sole argument to the function on the RHS. In this way, it works best with unary functions, exactly like those that work with functional piping above.

Some examples:

ts
// Similar to our first example.
const result = 2
  |> (n) => n ** 2
  |> (n) => n - 1;

// The same thing only using functions:
const result = 2
  |> squared
  |> subtractOne;

// RxJS using the F# pipeline:
source$
  |> filter((x) => x % 2 === 0)
  |> map((x) => x * x)
  |> concatMap((x) => of(x + 1, x + 2, x + 2, x + 4))
  |> result$ => result$.subscribe(console.log);

// An example of using existing non-unary function APIs:
const randomNumberBetween20And50 = Math.random() * 100
  |> randomNum => Math.pow(randomNum, 2)
  |> squared => Math.min(squared, 50)
  |> atLeast50 => Math.max(20, atLeast50);

// An example of use within an async function
// NO EQUIVALENT, TMK. You'd just do what you've always done.
async function demo() {
  const response = await fetch(url);
  const data = await response.json();
  return await delayValue(1000, data);
}

// Throwing within a step of the pipe
const validNumberBetween20and50 = maybeGetNumber()
  |> (num) => {
    if (Number.isNaN(+num)) {
      throw new TypeError('Did not get a number');
    }
    return num;
  }
  |> validNum => Math.pow(validNum, 2)
  |> squared => Math.min(squared, 50)
  |> atLeast50 => Math.max(20, atLeast50);

Pros #

Cons #

Summary #

I know this was flavored a lot with opinion, but hopefully it's enough to get a few more people thinking about this problem, and to get the very important and smart people working on this proposal to stop and reconsider some of the decisions that have been made thus far. I think it's a shame that the proposal has proceeded to stage 2 in its current state.

It's definitely possible to get the best of both worlds, either by 1) allowing the Hack pipeline to implicitly act like the F# pipeline when the magic character is not present, or 2) switching to the F# pipeline and also landing the partial application proposal. On paper, option 2 would, by far, provide the most powerful set of tools to JavaScript developers.
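
To make those two options a little more concrete, here is a speculative sketch of what each could look like. None of this is spec text, and the ? placeholder shown for partial application is just how I understand that separate proposal at the time of writing:

ts
// Option 1 (speculative): Hack pipeline, but a bare RHS with no topic
// token is treated as a unary function and simply called with the value.
const result = 2
  |> squared   // no ^, so it acts like F#: squared(2)
  |> ^ - 1;    // ^ present, so normal Hack behavior

// Option 2 (speculative): F# pipeline plus the separate partial
// application proposal, whose ? placeholder yields a unary function.
const clamped = Math.random() * 100
  |> Math.pow(?, 2)
  |> Math.min(?, 50)
  |> Math.max(20, ?);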

I've ranted about this particular thing for a while, for sure. I'm trying to do what I think is best for the JavaScript community with whatever little influence I have. Honestly, if the Hack proposal would "just work" with pipeable unary functions, I'd have absolutely no beef with it, and this entire article would not be written.

I hope that people on both sides of the debate find the information here useful. I also hope we can resolve things such that there is no "both sides" and instead we're all moving forward together.