Getting Some Numbers

Posted on Jan 21 2010

People often ask me about the advantages of Haxe in terms of speed.

Of course, there are many runtime optimizations in Haxe that make your program run faster, but one very important thing to me is also the speed of the compiler itself. Compiling quickly helps you test minor changes more often, find and fix errors faster, and keeps you from losing your concentration and your time while waiting for the compiler to finish its work.

So what about comparing the speed of the Haxe and AS3 compilers?

First, Haxe starts with an obvious disadvantage: because it has type inference, structural subtyping and generics, the compiler needs to perform many more type checks than in a more classical type system such as AS3's.

But let's see how it rates anyway.

I tried the following: compiling the whole hx-format library, which is 64 files / 10,000 lines / 300KB of Haxe, to Flash9. Here's the command line for that:

time haxe -swf9 format.swf -cp hxformat/tests/all All.hx

On my QuadCore (the Haxe compiler does not yet use multiple cores; that's something we need to fix, btw) the command runs in 0.31 seconds, which is decent enough.

Now, I'm generating the corresponding AS3 output using:

haxe -as3 format_as3 -cp hxformat/tests/all All.hx

(a small fix was needed in format.amf.Reader, since one expression is not accepted by the AS3 generator, but well...)

Then here we go with the AS3 compilation. This was actually a bit hard: I found that the format.swf.Namespace class clashes with a reserved name, so I had to do a bit of renaming here and there; then the mxmlc compiler crashed when compiling format.mp3.MPEG (but maybe I'm not up to date); and I had to create a file ensuring that all files would actually get compiled (since import in AS3 is lazy while in Haxe it is not), plus some other minor fixes. It took me 15 minutes, but I was able to compile everything except the format.mp3 package.

So at last, here we go:

mxmlc --output all.swf -compiler.optimize

It took... 3.3 seconds on my QuadCore.

So, even if we subtract something like 1 second for JVM startup (mxmlc is Java), we still have Haxe being... 7x faster than mxmlc?
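For the record, here is that back-of-the-envelope arithmetic spelled out (a quick sketch; the 1-second JVM startup allowance is only a rough estimate):

```python
# Speedup estimate from the timings above.
haxe_time = 0.31    # seconds: Haxe compiling hx-format to Flash9
mxmlc_time = 3.3    # seconds: mxmlc on the converted AS3 sources
jvm_startup = 1.0   # rough allowance for JVM startup (mxmlc runs on Java)

# Discount the JVM startup so we compare compile work, not process launch.
speedup = (mxmlc_time - jvm_startup) / haxe_time
print(f"{speedup:.1f}x")  # roughly 7x
```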

Now, that's just one benchmark; I would be happy if other people could try the same with their own projects to see whether I'm accurate enough or not, but this seems to confirm that Haxe is indeed fast!

(anybody remember coffee breaks while compiling? No more!)

  • Jan 21, 2010 at 01:34

    Well, thanks for this nice reply!

    Hey, what about an AS3 to Haxe converter? This would be a great way of getting more people to use Haxe with their existing applications!
    (I would of course convert some BIG parts of my frameworks with that!)

    Let me know if this is in the roadmap!

  • yelbota
    Jan 21, 2010 at 07:36

    Will there ever be Haxe support for incremental compilation?

  • Jan 21, 2010 at 09:21

    I'm sure the Haxe compiler is faster by all means, but have you heard of FCSH?

    It allows you to start the JVM only once and keep it open during your compiling "session". It also uses incremental compilation, I think.

    I use it whenever I have to use MXMLC, it saves quite some time.

    Cheers : )

  • Jan 21, 2010 at 09:39

    yelbota: Why do you need incremental compilation if the compiler is that fast?

    Nicolas: Don't you think that the compiler could become slower when you start making use of threads? 0.3 sec is already a really good number. I think the overhead of thread management would not pay off.

    I have done some tests with the latest Apparat, and I see that for small things running in less than half a second, any attempt to spawn multiple threads makes the whole process slower. However, the Reducer, which compresses images in a SWF, of course gains a lot, because I can compress all images on multiple cores.

    What is your strategy, or what do you want to run concurrently? The typer, the parser?



  • yelbota
    Jan 21, 2010 at 09:57

    Joa Ebert: And what will happen with really big projects? The Haxe library for gtk-server contains over 3000 static methods in the Gtk class. It compiles in ~3 seconds on my computer. Why compile it again if it has already been compiled once?

  • Jan 21, 2010 at 12:57

    To answer the many questions:

    Haxe does not currently have incremental compilation. This is a bit hard to get right (remember the AS2 ASO cache issues?) and less effective when you have type inference, since a single change might propagate to quite a big number of classes.

    We are instead focusing on having a faster compiler, but for our projects at Motion-Twin we are very happy with the current speed.

    To Joa: the compiler is already perfectly lazy with its typing (the typecheck of a method is deferred until it's actually used or until everything else is compiled), so it could be a good test case to see whether multicore improves or degrades performance.

    Compile times are well distributed (you can run Haxe with --times to see which task the time is spent on): 26% lexing+parsing, 35% typing, 22% swf compilation, 13% swf writing with the hxformat example, so I guess we can improve everywhere :)
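    As an illustration, those percentages map back to absolute per-phase times for the 0.31-second hxformat build (a quick sketch; the remaining ~4% is presumably misc overhead):

```python
# Per-phase breakdown of the 0.31s hxformat build, using the --times shares above.
total = 0.31  # seconds
phases = {
    "lexing+parsing":  0.26,
    "typing":          0.35,
    "swf compilation": 0.22,
    "swf writing":     0.13,
}

for name, share in phases.items():
    print(f"{name}: {share * total * 1000:.0f} ms")

accounted = sum(phases.values())
print(f"accounted for: {accounted:.0%} of total")
```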

  • Ogla Sungutay
    Jan 21, 2010 at 15:22

    With fcsh, mxmlc will be very fast, close to Haxe.

  • Jan 21, 2010 at 20:10

    I did some tests with concurrent parsing and lexing. The results are okay, and of course you get even better results the more data you have to parse. The new Apparat ABC frontend also parses the bytecode using multiple threads.

    A funny side note: I have set up a build server for the Audiotool. That poor guy does nothing but compile the whole Audiotool again and again. It takes about 4 minutes per build at the moment, but not all modules exist yet. I think we will end up at around 5 or 6 minutes. Incremental compilation is something you can completely forget, since fcsh/mxmlc/compc are all together so buggy that I always have to clean everything.

    When we were still using Eclipse/IntelliJ, a build took us about 15 minutes. IntelliJ especially was fun because it uses FCSH correctly, and after the first 10 projects FCSH consumes about 1GB of RAM and is slow as hell. Then the beauty crashes, takes up 1GB again, crashes, and the whole game continues. So our only option at the moment is continuous builds.

    So much for productivity...

  • Jan 22, 2010 at 23:42

    Joa -
    If you want a less buggy approach, what you really should do is work on a stable AS3 to Haxe converter for retargeting hobnox, then work with Nicolas to implement Apparat-style optimizations directly into the Haxe compiler ;P
    They would go well with all of the existing optimizations Haxe performs.

  • Jan 23, 2010 at 16:15

    Tony: That is of course a valuable approach. It is possible without much stress, since one may write a tool based on ASC, like AS3V, but instead of analyzing the AST you could generate Haxe source from it.

    Actually, I think this is not a bad idea. The Haxe compiler already produces very nice bytecode. Let's see...

  • Jan 23, 2010 at 21:07

    Well, it works. The rest is just boring stuff now. The Vector import and the return statements are added by ASC.


    package a.b.c {
        import flash.display.Sprite;

        public class Test extends Sprite implements SomeA, SomeB {
            public var a: int;
            private var b: Number = 0.0;

            public function Test() {
                trace("Hello World!");
            }
        }
    }

    becomes:

    package a.b.c;

    import flash9.Vector;
    import flash9.display.Sprite;

    class Test extends Sprite, implements SomeA, implements SomeB {
        public var a: Int;
        private var b: Float = 0.0;

        public function new() {
            trace("Hello World!");
        }
    }

  • Jan 27, 2010 at 06:10

    Joa- Excellent!
    So we'll be seeing a fully working frontend soon then, I hope ;P

  • Alex
    Feb 05, 2010 at 12:47

    I'm curious, though, about incremental compiling. I am especially thinking about the code-completion feature Haxe already has. If, on every keystroke, you have to recompile all files again and again, it might be a huge speed loss overall. Since 26% already goes into lexing/parsing, shouldn't it at least be possible to cache the generated AST, so that you can save that 26% until the files get modified?

  • Feb 07, 2010 at 23:23

    We could save much more, since there is no SWF generation going on when using compiler-based completion.

    But actually, you can usually get completion information in something like under half a second, because you don't need to type whole classes for that, so I'm not sure that optimizing things is really needed there.

  • Alex
    Feb 09, 2010 at 08:31

    Yeah, I know that completion is pretty fast, but doesn't the compiler type all imports first? So if I am importing huge classes, it may slow down quite a bit, no?
