Minimum JavaScript to run code inside the GPU
Brian Cannard

Disclaimer: there is no hidden code behind the scenes. The entire GPGPU computation runs on raw, low-level browser APIs, called from the code you see on this page, and on this page only.

We describe the minimum JavaScript required to make the WebGL API calls that get our GPU hardware to run code in the browser. The code doesn't use any frameworks: right-click in your browser, choose Inspect, navigate to the Console tab, and copy-paste the JavaScript presented in this article into the console.

What the code does

The minimum GPU code runs in a single GPU hardware thread and outputs 32 bits as the result of the computation. That's the easiest way to see how it runs.

To make that happen, we're going to execute the following sequence of commands:

  1. Create an invisible <canvas> HTML element.
  2. Call the element's getContext() method to obtain a WebGL context.
  3. Compile and link a shader program.
  4. Render two triangles so the fragment shader runs for every output pixel.
  5. Read the result back from the framebuffer, or reuse it as a texture.

WebGL context

It starts with an HTML element. We're in a browser, after all:
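A minimal sketch of that step (the element is never attached to the DOM, so nothing appears on screen):

```js
// An invisible <canvas>: it lives only in JavaScript and is never
// appended to document.body, so the browser renders nothing visible.
const canvas = document.createElement('canvas');
```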

The default size of an HTML canvas element is 300 by 150 pixels:
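You can check that right in the console:

```js
console.log(canvas.width, canvas.height); // 300 150
```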

So we reduce it to 2x2:
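Four pixels are plenty for a first experiment:

```js
// Shrink the render target: 2x2 = 4 fragment shader invocations per draw.
canvas.width = 2;
canvas.height = 2;
```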

Default WebGL init parameters are suboptimal for GPGPU: they enable a depth buffer and anti-aliasing, and they introduce additional latency, so we disable them:
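A sketch of the call; all of these are standard WebGLContextAttributes, though exactly which ones to disable is our judgment, and we assume a WebGL2 context here:

```js
const gl = canvas.getContext('webgl2', {
  alpha: false,                 // no alpha channel needed in the default framebuffer
  depth: false,                 // no depth buffer: our two triangles never overlap
  stencil: false,               // no stencil buffer either
  antialias: false,             // MSAA costs time and we never look at the image
  preserveDrawingBuffer: false, // let the browser discard the buffer after compositing
  powerPreference: 'high-performance', // hint to pick the discrete GPU if present
});
if (!gl) throw new Error('WebGL2 is not available in this browser');
```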

When OpenGL doesn't work, we need to know why. Let's turn gl.getError() codes into text:
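One way to sketch this: WebGL exposes its enums as enumerable numeric properties, so we can harvest them into a reverse lookup table (the names glConstantNames and glErrorString are our own):

```js
// Map numeric GL constants back to their names, e.g. 1282 -> "INVALID_OPERATION".
const glConstantNames = {};
for (const name in gl) {
  if (typeof gl[name] === 'number') glConstantNames[gl[name]] = name;
}

function glErrorString() {
  const code = gl.getError();
  if (code === gl.NO_ERROR) return 'NO_ERROR'; // 0 is shared by NONE, POINTS, ...
  return glConstantNames[code] || ('unknown error 0x' + code.toString(16));
}
```

Several constants share a numeric value, which is why 0 is special-cased; the actual error codes (0x0500 to 0x0506) are unambiguous.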

To run GPU code, we need to compile it, and then render two triangles:
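Here is one way that whole sequence might look; the helper names compile, link and draw are ours:

```js
// Compile one shader stage, failing loudly with the driver's log.
function compile(type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
}

// Link a vertex + fragment shader pair into a program.
function link(vertexSource, fragmentSource) {
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program));
  }
  return program;
}

// Two triangles covering all of clip space: the rasterizer then runs
// the fragment shader exactly once for every pixel of the target.
const quad = new Float32Array([
  -1, -1,   1, -1,  -1,  1,  // triangle 1
  -1,  1,   1, -1,   1,  1,  // triangle 2
]);

function draw(program) {
  gl.useProgram(program);
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ARRAY_BUFFER, quad, gl.STATIC_DRAW);
  const loc = gl.getAttribLocation(program, 'position');
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
}
```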

Now that we have that missing API in place, we can run it:
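For example, with a trivial GLSL 1.00 shader pair (the sources below are our own illustration):

```js
const program = link(
  // Vertex shader: pass the quad corners straight through.
  `attribute vec2 position;
   void main() { gl_Position = vec4(position, 0.0, 1.0); }`,
  // Fragment shader: one "GPU thread" emitting 32 bits as 4 RGBA bytes.
  `precision highp float;
   void main() { gl_FragColor = vec4(0.25, 0.5, 0.75, 1.0); }`
);
draw(program);
console.log(glErrorString()); // expect NO_ERROR

// Read the 2x2 result back: 4 pixels, 4 bytes each.
const pixels = new Uint8Array(2 * 2 * 4);
gl.readPixels(0, 0, 2, 2, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
console.log(pixels); // roughly [64, 128, 191, 255, ...] repeated 4 times
```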

We don't have to read back the results the shader wrote into the framebuffer's color attachment(s): after swapping, those attachments can simply be used as input textures for subsequent runs.
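A sketch of that ping-pong pattern, assuming two framebuffers fbA and fbB whose color attachments are the textures texA and texB (all four are hypothetical names, set up elsewhere), and a shader that samples its input from texture unit 0:

```js
let src = { fb: fbA, tex: texA };
let dst = { fb: fbB, tex: texB };
for (let pass = 0; pass < 8; pass++) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, dst.fb); // write into dst...
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, src.tex);     // ...while sampling from src
  draw(program);
  [src, dst] = [dst, src];                    // swap roles: no data is copied
}
```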

Float precision and GLSL 1.00

The GLSL 1.00 preprocessor cuts off all floating-point literals below 1.0000001e-37 = pow(2, -123) * (1 + 0.0633825) at a silly point: it simply shrinks every decimal literal below ~1e-37 to zero, and in a buggy way, because the sign is ignored. It's the preprocessor, not your GPU. There's a lot of "garbage" buggy code in there.
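If the claim above holds, a probe like this should render 1.0, because the literal never survives to the GPU (the shader below is our own test, not a canonical one):

```js
const probe = `
  precision highp float;
  void main() {
    // -1.0e-38 is below the ~1e-37 cutoff: the toolchain rewrites it
    // to 0.0 (dropping the sign) before the GPU ever sees it.
    float tiny = -1.0e-38;
    gl_FragColor = vec4(tiny == 0.0 ? 1.0 : 0.0);
  }`;
```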

We can write a function that does all of this:
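A sketch of such a wrapper, built from the helpers above; the name runShader and the fixed pass-through vertex shader are our own choices:

```js
const VERTEX_PASSTHROUGH = `attribute vec2 position;
void main() { gl_Position = vec4(position, 0.0, 1.0); }`;

// Compile a fragment shader, run it over the whole canvas,
// check for errors, and return the raw RGBA bytes it produced.
function runShader(fragmentSource) {
  const program = link(VERTEX_PASSTHROUGH, fragmentSource);
  draw(program);
  const err = glErrorString();
  if (err !== 'NO_ERROR') throw new Error(err);
  const pixels = new Uint8Array(canvas.width * canvas.height * 4);
  gl.readPixels(0, 0, canvas.width, canvas.height,
                gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return pixels;
}
```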

So we can run our dynamic shader code:
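For instance, injecting a JavaScript number into the source at build time (and, per the previous paragraph, keeping generated literals above the ~1e-37 cutoff):

```js
const x = 0.75; // any value we computed on the CPU
const result = runShader(`
  precision highp float;
  void main() { gl_FragColor = vec4(${x.toExponential()}); }
`);
console.log(result); // each pixel byte is roughly 0.75 * 255 = 191
```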

Let's enable shaders that produce floating-point values rather than 4 bytes of RGBA in the range 0 to 255. Also, let's render to 8 color buffers simultaneously (this doesn't work on smartphones). The visible framebuffer has RGBA8888 attachments, but we can't read those attachments as data inside shader code, and RGBA8888 per shader run isn't much output anyway (just 32 bits per GPU thread). On smartphones this is the only option, but let's let our desktop machines and laptops run at full throttle:
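A sketch of that setup for WebGL2: EXT_color_buffer_float makes RGBA32F attachments renderable, and gl.drawBuffers() fans the shader's outputs across 8 attachments (variable names are ours; real code should also honor gl.MAX_COLOR_ATTACHMENTS, which is where smartphones fall short):

```js
// Float render targets are gated behind an extension even in WebGL2.
if (!gl.getExtension('EXT_color_buffer_float')) {
  throw new Error('EXT_color_buffer_float unsupported: stuck with RGBA8888');
}

const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);

// 8 RGBA32F textures = 8 * 128 = 1024 bits of output per GPU thread,
// versus the 32 bits of the visible RGBA8888 framebuffer.
const attachments = [];
for (let i = 0; i < 8; i++) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA32F, canvas.width, canvas.height);
  // Float textures are not filterable by default: sample with NEAREST.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i,
                          gl.TEXTURE_2D, tex, 0);
  attachments.push(gl.COLOR_ATTACHMENT0 + i);
}

// Route fragment shader outputs 0..7 to color attachments 0..7.
gl.drawBuffers(attachments);

if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
  throw new Error('float framebuffer is incomplete on this GPU');
}
```

Note that writing to several attachments needs ESSL 3.00 shaders with multiple out variables (or, in WebGL1, the WEBGL_draw_buffers extension and gl_FragData), so the simple gl_FragColor shaders above would change accordingly.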