Novel fast JIT

I hereby publicly document a novel implementation detail for just-in-time (JIT) compilers. Since I am not motivated to spend the considerable money a patent process requires, I would rather document prior art here in case others eventually try to patent this:

As far as I know, state-of-the-art JIT compilers such as Google's V8, Mozilla's SpiderMonkey, and Mike Pall's LuaJIT first interpret an internal bytecode representation, using tracing or profiling to collect the data needed to later JIT-compile hot paths.

This has two major drawbacks: interpreted code starts out slow, and an interpreter engine has to be implemented in addition to the actual JIT backend.

I hereby add my research on my own JIT implementation (for a JavaScript- and Lua-like language) that does away with the interpreter entirely and instead always JITs. The initial code generation uses a non-optimizing, high-speed fast path that also emits code to gather run-time statistics; those statistics are later used to re-JIT optimized hot paths.

This brings two major advantages: even non-optimized code runs faster than a classic interpreter, and no separate interpreter is required, which keeps the codebase lean and clean.

Additionally, lazy, delayed JIT'ing per "procedure / function", "closure", "scope" or "module" (whichever applies to the source language) can be used as an optimization to cut down the initial code-execution delay, both for classic tracing interpreters and for this novel fast-path-JIT-only design.

Update: To disclose another novel detail to fight patents ;-) Since we have specific technical needs for this new JIT, in our products we go as far as detecting pure data-shuffling, algorithmic functions and generating equivalent OpenCL code to off-load them to the massively parallel GPU/CPU cores of the respective platform, …
