This all began with a post by Antonio Cangiano on how Ruby 1.9 was optimized.
Don Stewart followed up with a pretty cool Haskell version at
His Blog. Then Michael Speer took the liberty of adding memoization to the Python version:
Memoized Python.
While all this is fun, we are now entering a game where everyone optimizes like mad until the comparison is no longer fair. I will add my five cents with a Standard ML program:
fun fib n =
    case n of
        0 => 0
      | 1 => 1
      | k => fib (k - 1) + fib (k - 2)

val _ =
    let fun loop n =
            if n = 36
            then ()
            else (print (String.concat
                           ["n=", Int.toString n, " => ",
                            Int.toString (fib n), "\n"]);
                  loop (n + 1))
    in
        loop 0
    end
A couple of notes: no tail recursion means no loops. Standard ML does not use arbitrary-precision integers in the above code, so putting in a large enough
n will fail (though with an Overflow exception, not by wrap-around). There is
no memoization in the generated code at all, so we are doing all the heavy work directly. There is no threading either. The fib function itself is reasonably sized, though my counter loop could be made more functional via calls to List.app and List.tabulate.
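For the curious, here is a sketch of what that more functional counter loop might look like, using List.tabulate to build the list [0, 1, ..., 35] and List.app to print each entry; the fib function is repeated so the snippet stands alone, and the output should match the recursive loop above:

```sml
fun fib n =
    case n of
        0 => 0
      | 1 => 1
      | k => fib (k - 1) + fib (k - 2)

(* List.tabulate (36, fn i => i) yields [0, 1, ..., 35];
   List.app applies the printing function to each element. *)
val _ =
    List.app
      (fn n => print (String.concat
                        ["n=", Int.toString n, " => ",
                         Int.toString (fib n), "\n"]))
      (List.tabulate (36, fn i => i))
```

Whether MLton generates the same code for both variants is another question; the list version allocates an intermediate list, so I would not expect it to be faster.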
This yields:
(Python 2.5.1) python z.py 34.26s user 0.01s system 97% cpu 35.030 total
(ruby 1.8.6) ruby z.rb 77.28s user 0.00s system 97% cpu 1:19.02 total
(MLton SVN-HEAD) ./z 0.88s user 0.00s system 89% cpu 0.984 total
The benchmarks were carried out on the following CPU:
CPU: AMD Athlon(tm) 64 Processor 3000+ (2002.57-MHz K8-class CPU)
Origin = "AuthenticAMD" Id = 0xfc0 Stepping = 0 Features=0x78bfbff
AMD Features=0xe0500800
With the following operating system:
ogre% uname -a
FreeBSD ogre 7.0-BETA2 FreeBSD 7.0-BETA2 #5:
Mon Nov 19 22:29:30 CET 2007
root@ogre:/usr/obj/usr/src/sys/OGRE amd64
While I can't run the Haskell version, my guess is that it is a bit faster than the MLton version due to runtime startup cost. MLton has a pretty grim startup cost to pay.