# Unsigned right shift bitwise operator versus OR logical operator

Occasionally, I see JavaScript code defining default values using the unsigned right shift bitwise operator (`>>>`) rather than the plain old OR (`||`) logical operator, as in the following examples:

```
var a = x >>> 0;
var b = y || 0;
```
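For context, the two operators aren't interchangeable: `>>> 0` coerces its operand to an unsigned 32-bit integer, while `||` only substitutes the right-hand side when the left is falsy. A quick sketch of where they diverge:

```javascript
// Both yield 0 for undefined, which is why either works as a
// "default to 0" idiom...
var x;                  // undefined
console.log(x >>> 0);   // 0
console.log(x || 0);    // 0

// ...but for other inputs the results differ.
console.log(-1 >>> 0);  // 4294967295 (reinterpreted as unsigned 32-bit)
console.log(-1 || 0);   // -1 (truthy, so it passes through)

console.log(3.7 >>> 0); // 3 (fractional part truncated)
console.log(3.7 || 0);  // 3.7
```

So the bitwise version only makes sense when you specifically want a non-negative integer out the other end, as with a length.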

I was curious why that was, so I asked ECMAScript theorist Dmitry Soshnikov about it on Twitter, and he replied:

> @segdeha in some (but not in all) impl-s, (unsigned) right shift operator (ES3, 11.7.3) can work faster; http://gist.github.com/362218

Followed soon thereafter by:

> @segdeha but in real code I prefer to use || as more clear ;)

That raised my suspicion that this is a micro-optimization that usually isn't worth it. Still, there could be cases where it's warranted. The trick is knowing when the extra speed is worth the loss of readability and maintainability.

There may well be situations where this operation is being done in a tight loop and could noticeably slow down the client. Do you do the operation once? Not worth it! 10 times? Probably still not worth it. 10,000 times in quick succession? Maybe worth it. But, we need data to know for sure.

## When in doubt: test!

So, using the gist Dmitry linked to as a starting point, I put together the following test. Choose the number of iterations and the initial value of the "length" variable, then hit go to see the difference in speed between the two methods.

```
function doBitwise(iters, length) {
    var start, len;
    start = new Date;
    do {
        len = length >>> 0;
    } while (iters--);
    return ['bitwise', new Date - start, len];
}
```


```
function doLogical(iters, length) {
    var start, len;
    start = new Date;
    do {
        len = length || 0;
    } while (iters--);
    return ['logical', new Date - start, len];
}
```

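If you'd rather run the comparison from the console than the page, here's a minimal self-contained driver (the two functions are repeated so the snippet runs on its own). Note that `Date` arithmetic only has millisecond resolution, so small iteration counts will just report 0ms for both.

```javascript
function doBitwise(iters, length) {
    var start = new Date, len;
    do {
        len = length >>> 0;
    } while (iters--);
    return ['bitwise', new Date - start, len];
}

function doLogical(iters, length) {
    var start = new Date, len;
    do {
        len = length || 0;
    } while (iters--);
    return ['logical', new Date - start, len];
}

// Run both with the same inputs and print [name, elapsed ms, value].
var results = [doBitwise(1000000, 42), doLogical(1000000, 42)];
results.forEach(function (r) {
    console.log(r[0] + ': ' + r[1] + 'ms (value: ' + r[2] + ')');
});
```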

## Conclusions

Did you see what I saw? In every case, even when looping 1 million times, the difference between the two methods is very small. To complicate matters, different JavaScript engines can give you widely varying results for the same test.

My conclusion: this micro-optimization is not worth it. Better to have clear code than to save a couple of milliseconds. Sure, over 1,000,000 iterations, it might save 1/10th of a second (or, it might not), but I would say that if you have a tight loop doing that many iterations…you probably have bigger problems.