Acton vs Wasm benchmarks

Current benchmark data was generated on Thu Feb 01 2024; the full log can be found HERE

Contributions are welcome!

[x86_64][4 cores] AMD EPYC 7763 64-Core Processor (Model 1)

* -m in a file name stands for multi-threading or multi-processing

* -i in a file name stands for direct intrinsics usage. (Usage of SIMD intrinsics via libraries is not counted.)

* -ffi in a file name stands for non-stdlib FFI usage

* (You may find time < time(user) + time(sys) for some non-parallelized programs. The overhead comes from the GC or JIT compiler, which is allowed to take advantage of multiple cores, as that is closer to real-world scenarios.)
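To illustrate why user+sys CPU time can exceed wall-clock time when a runtime's helper threads or processes run concurrently with the main program, here is a minimal Python sketch (not part of the benchmark harness; the worker count and loop size are arbitrary):

```python
import multiprocessing as mp
import os
import time

def burn(n):
    # CPU-bound busy loop executed in worker processes.
    s = 0
    for i in range(n):
        s += i * i
    return s

if __name__ == "__main__":
    wall_start = time.perf_counter()
    # os.times() -> (user, system, children_user, children_system, elapsed);
    # summing the first four fields gives total user+sys including children.
    cpu_start = sum(os.times()[:4])

    with mp.Pool(4) as pool:
        pool.map(burn, [2_000_000] * 4)

    wall = time.perf_counter() - wall_start
    cpu = sum(os.times()[:4]) - cpu_start
    # On a multi-core machine the four workers run concurrently,
    # so cpu (user+sys) can exceed wall-clock time.
    print(f"wall={wall:.2f}s cpu(user+sys)={cpu:.2f}s")
```

The same effect applies in reverse: a parallel GC or JIT burns CPU time on other cores, so a single-threaded program can show user+sys greater than its elapsed time.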

edigits

Input: 250001

| lang  | code  | time  | stddev | peak-mem | time(user) | time(sys) | compiler/runtime |
| ----- | ----- | ----- | ------ | -------- | ---------- | --------- | ---------------- |
| wasm  | 1.rs  | 469ms | 42ms   | 20.3MB   | 453ms      | 0ms       | wasmtime 17.0.0  |
| wasm  | 1.rs  | 481ms | 2.2ms  | 58.4MB   | 543ms      | 17ms      | node 18.19.0     |
| acton | 1.act | 518ms | 8.0ms  | 12.4MB   | 543ms      | 47ms      | actonc 0.19.2    |

Input: 100000

| lang  | code  | time  | stddev | peak-mem | time(user) | time(sys) | compiler/runtime |
| ----- | ----- | ----- | ------ | -------- | ---------- | --------- | ---------------- |
| wasm  | 1.rs  | 120ms | 0.3ms  | 19.6MB   | 103ms      | 0ms       | wasmtime 17.0.0  |
| acton | 1.act | 167ms | 8.8ms  | 10.8MB   | 153ms      | 30ms      | actonc 0.19.2    |
| wasm  | 1.rs  | 176ms | 0.9ms  | 58.2MB   | 223ms      | 23ms      | node 18.19.0     |
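For context, the edigits benchmark computes the first N digits of e. A minimal sketch of one common approach (the series e = Σ 1/k!, evaluated in scaled integer arithmetic) is shown below in Python for brevity; it is not the benchmarked implementation:

```python
def e_digits(n):
    # First n digits of e from the series e = sum(1/k!),
    # scaled by a power of ten so everything stays in integer arithmetic.
    scale = 10 ** (n + 10)  # 10 guard digits against error in the truncated tail
    total, term, k = 0, scale, 0
    while term:
        total += term  # term == scale // k! at this point
        k += 1
        term //= k
    return str(total)[:n]

print(e_digits(10))  # -> 2718281828
```

The benchmarked programs solve the same task, but their cost is dominated by the big-integer arithmetic of the language runtime, which is what the timings above reflect.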

helloworld

Input: QwQ

| lang  | code  | time  | stddev | peak-mem | time(user) | time(sys) | compiler/runtime |
| ----- | ----- | ----- | ------ | -------- | ---------- | --------- | ---------------- |
| acton | 1.act | 4.2ms | 0.4ms  | 6.4MB    | 0ms        | 0ms       | actonc 0.19.2    |
| wasm  | 1.rs  | 6.8ms | 0.2ms  | 19.0MB   | 0ms        | 0ms       | wasmtime 17.0.0  |
| wasm  | 1.rs  | 37ms  | 0.7ms  | 48.6MB   | 24ms       | 4ms       | node 18.19.0     |

pidigits

Input: 8000

| lang  | code  | time    | stddev | peak-mem | time(user) | time(sys) | compiler/runtime |
| ----- | ----- | ------- | ------ | -------- | ---------- | --------- | ---------------- |
| wasm  | 2.rs  | 2223ms  | 3.0ms  | 19.8MB   | 2210ms     | 0ms       | wasmtime 17.0.0  |
| wasm  | 2.rs  | 2256ms  | 1.9ms  | 56.4MB   | 2277ms     | 30ms      | node 18.19.0     |
| acton | 1.act | timeout | 0.0ms  | 10.8MB   | 7663ms     | 1610ms    | actonc 0.19.2    |

Input: 4000

| lang  | code    | time   | stddev | peak-mem | time(user) | time(sys) | compiler/runtime |
| ----- | ------- | ------ | ------ | -------- | ---------- | --------- | ---------------- |
| wasm  | 2.rs    | 526ms  | 0.4ms  | 18.9MB   | 507ms      | 0ms       | wasmtime 17.0.0  |
| wasm  | 2.rs    | 567ms  | 0.1ms  | 56.3MB   | 590ms      | 23ms      | node 18.19.0     |
| acton | 1-m.act | 1681ms | 170ms  | 10.7MB   | 2257ms     | 673ms     | actonc 0.19.2    |
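For context, the pidigits benchmark streams the first N decimal digits of π. A well-known way to do this is Gibbons' unbounded spigot algorithm, sketched below in Python for brevity; the benchmarked programs may use a different formulation, and like edigits the cost is dominated by big-integer arithmetic:

```python
def pi_digits(n):
    # Gibbons' unbounded spigot algorithm: streams decimal digits of pi
    # using only big-integer arithmetic, no floating point.
    digits = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            digits.append(m)  # next digit is confirmed
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # consume another term of the series to narrow the interval
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return "".join(map(str, digits))

print(pi_digits(10))  # -> 3141592653
```

At the benchmark's input sizes (4000 and 8000 digits), the intermediate integers grow to thousands of digits, so the timings above largely measure each runtime's arbitrary-precision arithmetic.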