Is NodeJS Wrong?

Posted on Mar 06 2011

NodeJS is pretty popular these days, so I took some time to have an in-depth look at it and analyze the pros and cons of using it versus another server-side technology.

The first issue I can see is not technical: you have to rewrite all your programs and libraries to pass callback methods. This is a bit annoying, since if you want to make three async requests you have to write the following:

async1(function(result1) {
    async2(function(result2) {
        async3(function(result3) {
            // do something with results
        });
    });
});

You also have to adopt the same style for any regular function that will itself make an async call.

There is one possibility, however: convert this code into the following:

var result1 = async1();
var result2 = async2();
var result3 = async3();
// do something with results

This involves using a Continuation Passing Style (CPS) transformation. I am still considering adding CPS to Haxe, but if you're using JavaScript there are already some CPS tools, such as NarrativeJS.
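Short of a full CPS transform, the nesting can also be flattened by hand with a small sequencing helper. The following is a minimal sketch, not an existing library: the `waterfall` helper and the `async1`..`async3` stubs are made up for illustration.

```javascript
// Hypothetical sequencing helper: runs callback-style steps one after
// another, passing each result to the next step, without nesting.
function waterfall(steps, done) {
    function next(i, prev) {
        if (i === steps.length) return done(prev);
        steps[i](prev, function (result) {
            next(i + 1, result);
        });
    }
    next(0, null);
}

// Stand-ins for the async1/async2/async3 calls from the text above.
function async1(_, cb) { setTimeout(function () { cb(1); }, 10); }
function async2(x, cb) { setTimeout(function () { cb(x + 1); }, 10); }
function async3(x, cb) { setTimeout(function () { cb(x * 10); }, 10); }

waterfall([async1, async2, async3], function (result3) {
    // do something with the final result
    console.log(result3); // prints 20
});
```

Each step receives the previous result plus a continuation, so the pyramid of callbacks becomes a flat list.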

Once the syntax issues are solved, let's see how NodeJS is implemented:

NodeJS has a single event-loop thread that waits for I/O on sockets and files. Once some data is ready, it triggers the corresponding event method and waits until it returns before listening for more I/O events. Since all I/O operations are non-blocking, everything runs as soon as its input is available, without any locks and without the developer having to deal with multithreading.

I think that the last part is the most important: it's very hard to get a multithreaded application to work nicely, and much harder to ensure that the exclusive locks it has to use will not cause any starvation.

Starvation in servers can be defined as waiting on a lock while everything else needed to run the program is available, "everything" usually being CPU time and input data.

While it's a nice goal to provide a threadless server technology so developers can focus on application logic and still get good performance, I think the way NodeJS does it is wrong.

In fact, having a single thread handle the callbacks is fine as long as you only have short-running events that spend most of their time waiting for I/O (networking, filesystem or database). While this remains true for most web service calls, some requests will actually need CPU time. In that case, NodeJS's single thread will lock your entire server for as long as that request runs, until it terminates or waits for more I/O.

So while you can run several NodeJS instances (one per CPU core) to get the best CPU usage out of your hardware, this is not true parallelism as long as one request can lock out the others while it's being processed.
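The effect is easy to reproduce in a few lines; in this sketch, a ~200ms busy loop stands in for any CPU-heavy request handler:

```javascript
// A timer due immediately cannot fire while a synchronous handler
// holds the single event-loop thread.
var start = Date.now();
var firedAfter = null;

setTimeout(function () {
    firedAfter = Date.now() - start; // how late the timer actually ran
}, 0);

// Simulate one CPU-heavy request: busy-loop for ~200ms.
var busyUntil = Date.now() + 200;
while (Date.now() < busyUntil) { /* simulated CPU work */ }

// Only now, when the handler returns, can the timer callback run,
// so firedAfter ends up around 200ms instead of ~0.
```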

Since criticizing is always easy, let's see how things could be done differently:

When implementing the Tora application server for Neko, I had the same requirements as NodeJS: parallelism without having to write multithreaded applications. The solution I used was to have one entry thread that waits for incoming connections and dispatches the sockets to a waiting queue. One of the allocated worker threads then processes the request.

To make sure there are no multithreading issues, each time a worker thread needs to call a "module" (a NekoVM bytecode .n file), it first fetches one from the common Tora cache, and if none is found it allocates a new one. Since each module has its own memory space, several instances of the same code can run in parallel, just as they would in separate processes.

Once the event is processed, the module is put back into the cache, ensuring that, in the worst case, you allocate only as many module instances as you have worker threads.

And since a long CPU-consuming request blocks only one thread, further requests simply wait until another thread is free. If you have more threads than CPU cores, you can actually have a server with quite a good level of fairness, thus reducing starvation.
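The dispatch logic described above can be sketched as follows (in JavaScript for consistency with the rest of this post; the `WorkerPool` type is hypothetical, and real Tora workers are OS threads rather than callbacks):

```javascript
// Sketch of Tora-style dispatch: requests go into a queue, and a
// fixed set of workers pull from it. A slow job ties up one worker
// while the others keep serving the queue.
function WorkerPool(size, handler) {
    this.queue = [];
    this.idle = size;      // number of free workers
    this.handler = handler;
}

WorkerPool.prototype.dispatch = function (request) {
    this.queue.push(request);
    this.pump();
};

WorkerPool.prototype.pump = function () {
    var self = this;
    while (this.idle > 0 && this.queue.length > 0) {
        this.idle--;
        this.handler(this.queue.shift(), function () {
            self.idle++;   // worker is free again
            self.pump();   // pick up any queued request
        });
    }
};
```

With more workers than cores, one CPU-bound `handler` call delays only its own worker; queued requests are picked up by the others.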

Additionally, running in the same process allows for easy memory sharing and other goodies.

Currently, Tora can be used in two ways:

  • behind an HTTP server, in request/answer mode. Most of the real time is spent waiting for MySQL answers; here we could use an alternate MySQL driver that performs async I/O à la NodeJS
  • as a realtime socket server: in that case, after the first request is processed, the socket is handed to a single thread waiting for I/O à la NodeJS, but the callbacks are handled by one of the worker threads

I wish NodeJS and other single-thread server technologies would take into account important things such as fairness and starvation.

  • Mar 06, 2011 at 16:22

    Hello Nicolas!

    Correct me if I'm wrong, but doesn't the Web Workers API solve the issue for Node.js? The way I understand your argument, workers avoid the problem by enabling the server to fork new processes for the CPU-intensive stuff to run in parallel, which then hit the event loop back when ready.

    Am I wrong?

  • Mar 06, 2011 at 17:54

    It seems that Web Workers are not part of current NodeJS, and you shouldn't have to use a specific API to tell which parts of your system might use CPU ;)

  • Bert
    Mar 06, 2011 at 22:00

    If you want this setup with node, you can get it. Just create a pool of "worker" child processes and, once a client connects, send its file descriptor to one of them, and let the worker process do the heavy lifting.

  • Anonymous
    Mar 06, 2011 at 23:12

    Uhh, if you're doing a CPU-bound operation it's likely that you'll want to do it in a more performant language (e.g., C/C++). In the general case, it would make sense to me to offload these CPU-consuming operations into a Node.JS extension which exposes an asynchronous API to the calling code.

  • Mar 07, 2011 at 00:58

    agree with anonymous + Bert ... if you want to do heavy lifting - spawn a child process or use C++ with its own internal thread. not a big deal.

    as for async calling - there's a ! operator in Kaffeine that works quite well : http:///

  • anon
    Mar 07, 2011 at 01:02

    The async pattern used by Node.js is very similar to (and is inspired by) that used by the Erlang language.

    The type of programming it is intended for is huge numbers of simple communicating processes, though node.js lacks Erlang's explicit process creation, discovery and communication. That is, it's not really clear how you would create a non-tcp/udp based communication channel between two processes.

  • Brenden Grace
    Mar 07, 2011 at 03:03

    Slow code is slow code. Period. You can increase your requests-per-second count by spawning threads or spawning processes (node.js).

  • Mandamus
    Mar 07, 2011 at 05:50

    node.js is a tool. Don't like what it does or how it does it, or it's just not for you? Use a different tool. It's not rocket science.

  • Mar 07, 2011 at 08:29

    What do you think of Jscex? I also wrote a simple demo for NodeJS.

  • Mar 07, 2011 at 08:46

    There are times when node's callback structure comes in handy, and when you don't want lots of nested callbacks, just use Seq.

  • Mar 07, 2011 at 09:39

    Sorry, but I can't accept the "it's-not-an-issue" answer ;)

    Of course if you have some very specific high-CPU task such as image manipulation algorithm, calling an external command is the way to go.

    However, you might often end-up writing code that might - in some cases - use CPU.

    Let's say for instance that you're sorting some item list before printing some HTML.

    While this will be quite fast for most of your users, for a few who have many items it might take a bit of CPU. Let's say two seconds, for instance. During that time, all your other users with a pending request on this server will be locked.

    Fairness (like multithreading) is not something you should have to care about when writing your application; it should be handled directly by the server software itself.

  • Mar 07, 2011 at 10:04

    2 seconds is an inordinate amount of time of straight-ahead program running, especially on V8. You're right in the sense that if such a problem came about in Node, one would need to find a solution for it. When I've had this problem I've simply created a child worker process that jobs are piped to.

  • Mar 07, 2011 at 10:07

    I agree with Nicolas. I am using Node.js in 3DTin to run some highly CPU-intensive javascript code on the server and I can tell it blocks the server (even when the Amazon EC2 instance is dual core). In my case, it's not a matter of writing the code in Javascript or C++. The algorithm itself is long-running (Catmull-Clark subdivision). I've got good reason to run it in javascript (I run the same code in the browser too).

    I'm hopeful that the Node.js community will come up with different strategies (like the worker child processes mentioned above) that will help solve this problem. For now the traffic is not so heavy for me, and I will try to separate the logic of the web server and the CPU-intensive workers into two separate node.js processes. That will probably help.

    Thanks Nicolas for the article, it'll increase the awareness of node.js community on this matter.

  • Romario
    Mar 07, 2011 at 11:12

    I guess the basic problem with Node.js is that it's built on top of V8 and V8 is notorious for its processor dependence, it doesn't work on processors other than Intel and ARM. It's tied to the Intel and ARM processors by the machine code.

    The programming details and coding paradigms you've talked about can be solved programmatically. But, the dependence on a particular hardware cannot.

  • Brenden Grace
    Mar 07, 2011 at 13:02

    Nicolas, it isn't an issue. You handle slow code with threads and node suggests you handle it with processes. I'm baffled why you think your N threads will be locked up any less than N processes ...

  • Mar 07, 2011 at 13:42

    I've done the asynchronous server thing as well for a webapp I'm building. However, I found the limitation of centralized I/O handling bothersome. For instance, the processing of a request might want to shoot off another request (say to a webservice), and then you're screwed if you centered all your I/O in front of the work callback.

    The solution I'm using is Python's greenlet module (a C-level co-routine implementation) to implement my own I/O-bound scheduler, so that I can write synchronous-style code in Python that processes and makes requests, while in the background the scheduler switches the "tasks" around every time they enter an I/O waiting state.

  • MBinSTL
    Mar 07, 2011 at 17:01

    A few points.

    There already exists a powerful cross-environment JavaScript CPS implementation called JooseX.CPS:


    If you're not familiar with the Joose object system (works great in browsers and node.js), you should give it a look:

    Also, CPS is not the only option for dumping callbacks in browsers / node.js. Another would be the functional reactive style. See Flapjax:

    I'm working on a reimplementation of Flapjax right now:

    It's got Joose under the hood and I'm generalizing all the library functions for n-ary EventStreams and Behaviors (Reactive concepts). It's very much a work in progress and the test coverage is non-existent atm, but that's owing to the fact that I'm working from an existing, working code base. As soon as I have all the core estream and behavior facilities in place, I'm planning to write some exhaustive tests that use JooseX.CPS together with the Joose3 author's Test.Run library:

  • Phu Nguyen
    Mar 07, 2011 at 18:12

    I/O and CPU are scarce resources that should be controlled globally, per system. Spawning threads/forking is not the way to go. Just throw I/O and CPU operations onto a message queue and have thread pools pull from it. Doing so adds complexity but really decouples your system.

  • Nicolas
    Mar 07, 2011 at 18:29

    Basically Node.js is going back to cooperative multitasking like what we had in Windows 3.1. One buggy or slow task could block the whole system (here the whole process).

    If you spawn multiple node.js processes, you are in fact reintroducing the overhead of threads, and the concurrency problems that come with them. (And one slow event will still delay all the queued events in the same process.)

    Also, regarding your remark about rewriting code to adapt to Node.js: chances are the code doesn't exist yet anyway, even with blocking I/O in javascript.

    Maybe Node.js is great, but it seems more like a nice and fun experiment than an industrial-strength stack. I don't see the point of sticking with only a few threads when servers typically come with dozens of cores.

  • MBinSTL
    Mar 07, 2011 at 18:55

    It's *very* important to note that not every function call in node.js magically becomes asynchronous. The async methods are generally those involving I/O (http, tcp, file system..). Please review the node.js API:

    Most functions, in fact, will *not* be asynchronous and you *won't* need to rewrite them.

    So when an I/O method is called, because those are async ("non-blocking") the node.js process will immediately make another trip around the event loop and later execute the async I/O method's callback when the return value is signaled to the process.

    If your node.js process is going to be extremely busy, i.e. servicing thousands upon thousands of requests per second, then it will be wise to consider the composition of your synchronous routines. Yes, if a synchronous routine is invoked and it hogs the CPU for a long time then nothing else can happen in the meantime -- events will just get queued and clients waiting for a response will be left hanging.

    In those types of scenarios it would be important to devise or use a work queue which takes the intensive synchronous calculations and moves them out of the way of the front-end process which is servicing I/O (e.g. answering http requests).

  • Joost
    Mar 07, 2011 at 19:54

    Make your CPU-intensive task asynchronous: split the task into multiple smaller tasks and connect them with process.nextTick. This will ensure responsiveness for other requests, without all the overhead involved in creating many worker threads or processes.
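    A sketch of this chunking idea (using `setImmediate`, which in current Node versions yields to pending I/O between slices; a recursive `process.nextTick` would drain its whole queue before the event loop continues, so it no longer achieves this):

```javascript
// Split a long computation into slices and yield to the event loop
// between slices, so other queued requests stay responsive.
function sumChunked(n, chunkSize, done) {
    var total = 0, i = 0;
    function slice() {
        var end = Math.min(i + chunkSize, n);
        for (; i < end; i++) total += i;
        if (i < n) setImmediate(slice); // let other events run first
        else done(total);
    }
    slice();
}

// Sum 0..999999 in slices of 10000 iterations each.
sumChunked(1000000, 10000, function (total) {
    console.log(total); // 499999500000
});
```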

  • Mar 07, 2011 at 20:54

    @Joost: the whole point is that you shouldn't have to do that by hand; a good server architecture would just prevent locking from happening without you having to worry about it.

  • Joost
    Mar 07, 2011 at 23:32

    Well, I guess that's just the trade-off. Of course you can rent extra servers and use plain old threading if you don't want to have to worry. But with careful design it's probably possible to do it much more efficiently in node.

  • Mar 08, 2011 at 05:11

    25k concurrent connections to #nodejs and #socketio.

  • Mar 08, 2011 at 05:43

    Weepy: One thing in Kaffeine actually looks quite useful (sorry the rest just made things harder to read a lot of the time). That is the .= operator.

    Would be nice to save time from doing "str = str.upperCase();" etc all the time and just do "str .= upperCase();"

    Now if only I could pass via reference in haxe (I want to do what now?!) I guess we could just do this with "using". Seems there was one thing I miss from PHP :(

    (then again the piping wasn't too bad either)

  • Mar 08, 2011 at 09:46


    Node.js satisfies a large problem domain for many people. It may not be perfect, but neither are blocking threads. Certainly, trying to argue whether it's right or wrong is like arguing whether Pepsi is better than Coke, an exercise in futility.

    Haxe has a great opportunity to transcend the argument by simply shipping the Node.js api in the standard distro and let the developer decide if they want to use neko's blocking api or node's async api.

    Given runtime transparency is Haxe's meme, ship it already! :)

  • Anon
    Mar 08, 2011 at 23:45

    To answer the question posed by the title, No.

  • Veds
    Mar 09, 2011 at 01:07

    You dont understand Node.js, before writing up an article please L2Research

  • DavidGOrtega
    Mar 09, 2011 at 03:07

    Hi Nicholas,

    I agree with you... Node.js is not the panacea many others consider it to be... Actually I'm writing a data mining server which uses JavascriptCore (pretty much the same as V8), giving access to files, DB, Hadoop and Mongo, webkit, a scraping framework and a tool pretty similar to Apache Tika... Since many of the operations take a lot of time, it is not desirable to block other requests until some pretty huge CPU-consuming javascript is finished...

    What surprises me also is that many developers agree to use web workers before using a thread model... Web workers are pretty much more harmful...

    Have you taken a look at Narwhal?

  • Mar 11, 2011 at 10:15
    .. typedef signatures for Haxe/nodejs


  • Mar 16, 2011 at 10:57

    Very fun read through this thread and excited to see where you're taking Haxe in the future Nicolas!

    Since I have my own CPS style framework with a few twists for AS3, C#, and game development, I thought I'd throw it out there.

    The main difference (as far as I know) from similar frameworks, is the ability to control flow (and everything really) via cascading tags. Would love to see this type of functionality built in to a real language.

  • Mar 21, 2011 at 04:20

    Typical case of jealousy. Node.js and Ryan were greatly recognized. Neko and Nicholas, were not. Sorry, pal.

    QQ less, keep trying.

  • Mar 21, 2011 at 21:33

    @Idiocracy: typical case of stupidity. Try first to understand what I wrote and how I suggest NodeJS could be improved before making this kind of comment next time.

  • Kyle
    Mar 21, 2011 at 23:25

    I think it comes down to a fundamental design decision. When you go for an event driven model you should be trying to think of ways to make end user responsiveness the key driving factor. For that, *nothing* should take a long time from the node.js server point of view. If something will take a long time, off load it to something else and have that something else trigger an event when it is done. Or if you *must* have node.js doing something CPU intensive break the job into extremely small chunks that can be handled by the event loop recursively.
    Also if you were to do that I would suggest running a load balancer on your server that points users to as many node.js processes as you have processor power for. Blocking is the enemy and virtual antithesis of an event loop design. Node.js is really just enforcing the rules that you should be following anyway (for that architecture).
    Also want to reduce how much you are using your CPU? Offload it onto the browsers... You did write your code in JavaScript right?

  • Mar 22, 2011 at 01:41

    Kyle: brilliant explanation. Nicholas: learn. Or keep writing Y2K web servers. You choose.

  • Julian Kennedy
    Mar 24, 2011 at 09:31

    Kyle: i TOTALLY second that. Nicholas: Nice effort, but I agree with Idiocracy. Brush up on some fundamental theory, mate. :-)

    No amount of clever technology can substitute for fundamentally bad design. Our entire industry is filled with 'gadgets' that try to excuse sloppy design and coding.

    In the end, boring technologies like good ol' Apache, relational databases and multi-threaded webservers survive because they work. And they work reliably.
    And that means more to clients than any amount of buzzwords and shiny toys.

  • Mar 26, 2011 at 11:26

    @Kyle: sorry, but I don't agree here; you're making an it's-good-as-it-is conservatism argument.

    The current NodeJS implementation is blocking for every request you make, and this could be avoided by using the proper architecture without dropping the event-driven model or having the end-user change a single line of his code.

    You're calling it not necessary, for me this is a requirement for a good scalable server-side architecture.

    Splitting the CPU usage into several requests just to accommodate parallelism seems like reinventing the wheel, and offloading to the client seems like a bad idea if you manipulate data that shouldn't be visible to the user (as we do for our games at Motion-Twin).

    Also, you're completely neglecting the fact that most NodeJS users are not even aware of this issue, and will not understand the consequences of using "some" CPU (I'm not even talking about 10s of blocking here; just 0.3s is enough to slow down your website).

    Using several NodeJS servers and a load balancer will only reduce the probability of having one of your requests blocked; it will not remove the blocking at all.

    Again, I have nothing against NodeJS's principles. I just wish it could be improved - I'm even suggesting a way to do it - so it can become more usable than it is right now.

  • Cauê Waneck
    Mar 30, 2011 at 05:51

    I don't really understand why people see single-threaded server applications as a good thing, while the whole world is heading down a completely different path, with distributed networks and processing.

    I think the best way to implement a server would be with actors/coroutines, a scheduler and message passing. With a little work we could make it happen in Haxe for all targets, and then no Google Go/Erlang/JS/anything would be as awesome for client/server connections.

    I intend to implement that later on. This could really change the way we program in the web, by sharing/sending data between client/server as if they were coroutines of the same process.
    And even later, working on that, we could also implement supervisor processes and hot code swapping on the server-side.

    I can even dream about how good it will feel to write a client/server-based game under this paradigm. : )

  • Cauê Waneck
    Mar 30, 2011 at 06:03

    By the way, I already use Tora servers in production, and I think it gets threading right! It already has support for message passing, and if we write a common way to deal with scheduling and coroutines in Haxe, we would be very close to implementing a system like the one I mentioned.

    I don't really know its internals very well, but from what I've read it would also benefit from non-blocking I/O and libs like libevent, since we could limit the spawning of "blocked" threads, which can become an issue if there are many open (waiting-for-data) connections.

  • fred
    Apr 01, 2011 at 18:59

    wow you guys are way smarter than me

  • sledorze
    Apr 02, 2011 at 21:51

    Could you elaborate a bit more on continuation integration in Haxe?

  • mike
    Jun 21, 2011 at 08:24

    While many NodeJS applications are running with great performance, you should start to question yourself. NodeJS is only blocking when you don't understand it.

  • Serch
    Jul 08, 2011 at 10:14

    From most of these comments, I'm just impressed by how sensitive people get when someone criticizes Node.js :D

  • Serch
    Jul 25, 2011 at 14:29

    @Nicolas: I found an article that you'll probably like to read.

    PS: I'm not taking any sides nor trying to change your (very valid) opinion. I just think this might answer your question.

  • mario
    Aug 21, 2011 at 05:02

    First, if your CPU is pegged at 100%, having more threads is not going to help matters any. Thread completion will take longer as the OS timeslices between them, not to mention the resources needed for context switching. Threads come with a price.

    Node.js is not really comparable to Windows 3.1. Most of the WIN16 API was blocking. The dreaded floppy format was a blocking operation. The same operation in node would be performed in chunks, ceding to the event loop. The problems with WIN16 can easily be avoided with proper async patterns.

    In the worst case, a node process could be hung by a function that spins in an infinite loop. It is much easier to fix those than deadlocks, race conditions, ... Think out of the box and scale with processes.

  • Charles Pritchard
    Aug 29, 2011 at 23:26

    Node.js does have forking options, I'm hoping to see them pick up more w3c standards, such as the Workers family. It's not particularly difficult to implement workers, nor websockets APIs in js.

    In those cases, the authors are - and have to be - aware of the low-level semantics. Yes, they're in a low-level API, which is like re-inventing the wheel when other high-level APIs are available.

    node.js is better looked at as an app server, and is best used in conjunction with a tuned nginx install. And when programmers create their own app from the ground up, they're likely going to care about those low-level details.

  • Jan
    Sep 04, 2011 at 12:19

    "The async pattern used by Node.js is very similar to (and is inspired by) that used by the Erlang language."

    Just to get that right: NodeJS is in no way based on the Erlang concurrency model.
    Erlang uses the Actor model: independent lightweight processes communicating with each other via message passing, and a scheduler that time-slices them. So you can't get a loop that stalls your entire program. There are other Actor languages that differ a bit, like Io, where you have async message sends and cooperative multitasking.
    But concurrency is generally implicit.

    NodeJS, on the other hand, uses Continuation Passing Style, which is pretty old (1975) and originates from LISP. Concurrency is modeled by providing callbacks to functions that are actually concurrent, without caring when they will get invoked. So you see there is actually no real concurrency model here; there is a model of information flow that allows one to make certain parts of the system concurrent without the programmer noticing, but there is no general underlying concept. So if you want to introduce a concurrent function the framework designers didn't think of, you have to create it in another model.


  • Mark Ma
    Oct 04, 2011 at 12:59

    Fairness might be a false premise; I believe that multithreaded applications also work UNFAIRLY in the long term. You may read more about multithreading and how it works on a single-core CPU.

  • stef13013
    Nov 06, 2011 at 00:07

    Great article.

    If I need threads to resolve latency problems (and I guess I will), Apache will be a more logical/mature solution.

    In my opinion, Node.js also comes with two problems:
    1. Too young
    2. Too hyped (see Idiocracy's posts and other stuff like "already used by great companies all around the world", "will kill Apache", etc.)

  • nonsense
    Nov 16, 2011 at 12:52

    so , i don't see anything that JS can do and C/C++ (or java) can't.
    So javascript is script (and uggly for large dev) and a way slower than C/C++. You need performance ? realy ? Huh 'o_O

    Ok Node handle 25k request in your project... with C/C++ you will handle 2.5 millions...

    Node aproach is naive and gadget because JS is slow and has zero coding comfort. Now with our gigaMhz processing unit people are not considering optimisation and optimal harware use anymore. coding and software design are true jobs and need true knowleges about hardware and software architecture. It's like graphism and typography... since adobe product are hackable every one claim to be typograph and designer because the tool offers great shortcut... but design is not just point and click on filters, and coding is not a syntax diarrhea.
    Node and js has the only benefit to democratize the coding experience but.... really server side JS for pro ?
    every one that knows C/C++ java or Haxe have already banished JS. or look a it with blood eyes. (google did it with gwt, Mozzilla-Adobe... with ECMAScript4 (AS3) , Nicolas with Haxe...)

    take a look at this post for in deph analysis of node system.

    sorry for my bad english ;-)

  • Ken
    Nov 16, 2011 at 20:28

    Can C/C++ do closures? No.
    Can C/C++ dynamically modify objects? No.
    Can C/C++ handle event-based programming with simple syntax? No.
    Does C/C++ do garbage collection? No.

    JavaScript has dozens of benefits over C/C++, and of course some downsides, such as performance.

    However, the things that JavaScript does well, it does them so well that it's worthwhile giving up a bit of performance.

    Saying that JavaScript provides "zero coding comfort" just means that you have never used it. Plus, since it is based on a Java/C/C++ syntax style, your comment makes even less sense.

    Other than getting used to the syntax of anonymous functions, almost everything I am used to typing for expressions in C/C++ for the past 20 years still applies in JavaScript. It is EXTREMELY comfortable and took me only a few days to get proficient with it.

    Plus when adding an object system such as Joose, it just gets better.

    Also, your 25K vs. 2.5 million requests per second comparison makes no sense. A huge portion of the time to process a request in a node.js program (or a C/C++ one) is spent waiting for IO to files or databases. The code for the database or file system is already native, so the overhead of node.js only applies to a part of the total request processing time. I'd be surprised if you would see even 50K requests by just replacing JavaScript with C++.

    While it may be true that JavaScript and node.js open up the field for less experienced developers, in the hands of a pro it is amazing what you can accomplish.

  • nonsense
    Nov 17, 2011 at 21:14

    Do you really think js VM is written in JS ?
    Whatever you do in JS is actually done by the VM (i.e. V8) which is a piece of C/C++ code.
    And yes C and C++ are not natively dynamic but C/C++ provides all the tools to build dynamic objects, closures... Dynamic access is creepy because it can fail. the interpreter/C-layer have to costly check its validity at runtime. (you can see an example of how this can be done on Haxe's cpp target). In fact : you can consider JS as a top layer of a C/C++ core. sometimes abstractions allow to do things with less words. But each layer of abstraction has a cost. There is no magic :(.

    godwin law seems to apply to node.js as well... :D
    this is my modest contribution :-p

  • omg_cargo_cult_attack
    Dec 09, 2011 at 08:02

    All of you Cargo Cult people (especially those with 25k requests per sec) go back to your professor and tell him that you urgently need to repeat your computer science (especially concurrency part) course Now!

    thanks, bye!

  • Sure, it is.
    Jan 08, 2012 at 23:54

    Node is a religion; you believe in it or not. While one shared server-side/client-side language *might* have some interest, the other arguments are just crap from a technical point of view.

    How do you think synchronous stuff is made asynchronous? (e.g. node-sqlite) ... simply by spawning threads through libuv.

    So each software component (node-sqlite, node-whatever-cpu-intensive, node-whatever) creates its own thread/thread pool.

    In a non-hello-world application, you can reach a state where node.js spawns more processes/threads than a nice SEDA-based server. But sure, you still benefit from the pain of the async-callback mechanism (how many frameworks try to address this issue?) ...

    Hello-world benchmarks aside, can you remind me what node.js is actually good for?

  • Jonny
    Feb 23, 2012 at 20:48

    Having used JavaScript for over ten years, I'd say Ken makes some points, but they are rather moot. Java(ECMA)Script doesn't offer anything unique compared to other well-designed and well-thought-out languages. The syntax is nice in some ways but absolutely horrid in others; two major reasons ECMAScript needs to shape up or ship out:

    - Completely hacky OO paradigm
    - Sorry state of developer tools that never get it right

    But I don't think this will happen because, like PHP, the language is one massive hack. PythonScript would actually make sense, but ECMAScript is a ball of phlegm held together with duct tape and chewing gum that makes the random occurrence of <blink /> and <marquee /> in the HTML standard seem less awkward than a parade of vegans crashing my summer barbecue:

    function MyClass () {}
    SomeHackyMethodToEmulateInheritance(MyClass, MySuperClass);
    MyClass.prototype.whut = function (strange, syntax) {
      MySuperClass.prototype.whut.apply(this, [/* completely */ strange, syntax /* yet again */]);
      var dis = this;
      (function (be, fun, but) {
      })(1, 2, 3, "oh", "no", "you", "didn't!");
      var semantically = good || broken ? true : false;
    };

    Why don't I just stab a fork in my face and save myself the time? C++ can't do closures but C++0x can, and with template metaprogramming C++ can do a form of reflection. And it all compiles down into machine code that laps JavaScript around the block so many times that mothers would cry over poor lil JavaScript's crushed self-esteem and the reality that he'll end up flipping burgers while living in his mother's basement with a shrine of photos on the wall of pretty girls he never talks to. People who try to convince me that more JavaScript is better are tipping their hand and revealing two things to me:

    - Laziness
    - Inexperience

    Larry Wall would roll over in his... rocket. Laziness is meant to benefit the end user, but instead we're penalizing the end user by insisting on using advancements in hardware performance like a crutch instead of the gift they were meant to be. You're spending all your time giggling with glee about how cool your codebase is, and meanwhile your customers are twiddling their thumbs waiting for your page to load, wishing they were out golfing instead.

    Rather than constantly re-inventing the wheel and making it more square, it makes more sense to drive the W3C nuts to improve the scripting language in browsers, whether that means improving ECMAScript or replacing it with something else. Finally things are beginning to take shape with the advent of HTML5, but it's a misnomer: very few actual improvements were made to the HTML standard. We've got canvas and video, big whoop; where's my datagrid? You mean I have to kludge together another wad of phlegm for another ten years for something as blatantly obvious as an HTML datagrid? Well, thanks for the websockets anyway.

  • Jonny
    Feb 23, 2012 at 22:23

    I should clarify my point.

    Node.js is the sword often used by inexperienced gen-Y JavaScript fanboys in an attempt to spread their ilk across the Internet and across my face.

    Some people say, "Well, if you don't like it, don't use it," but you're not following. The idea of relying solely on JavaScript to develop applications is a contagious cancer; it is a disease and it must be wiped out. A proactive all-out assault on these ideas is mandatory.

  • Pat
    May 10, 2012 at 17:55

    So... one of your main reasons for not liking JavaScript is that you can write ugly code? Like that isn't possible with C++?

    ''People who try to convince me that more JavaScript is better are tipping their hand and revealing two things to me:

    - Laziness
    - Inexperience''

    People said the same thing about Assembly, and then C, and C++, and Java, etc, etc.

    It's never been about one being better than the other. It's about expressiveness, time to write code, and using different tools for different jobs. Memory is cheap.

    Maybe you are lazy for writing your code in C++ and not assembly, you know, because it would be faster.

    I agree about the hacky OO paradigm, so don't use it then. Use prototypal inheritance the way it was designed.
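    For what it's worth, the prototypal style being pointed at looks something like this; all the names here are invented for the example:

    ```javascript
    // Prototypal inheritance used directly: objects delegate to objects,
    // with no constructor-function or class-emulation ceremony.
    var animal = {
      describe: function () { return this.name + ' says ' + this.sound; }
    };

    var dog = Object.create(animal); // dog delegates to animal
    dog.name = 'Rex';
    dog.sound = 'woof';

    console.log(dog.describe()); // "Rex says woof"
    ```

    No `SomeHackyMethodToEmulateInheritance` required: the delegation is the language's native mechanism.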

    And as for datagrids... WebForms tried that, and it was pretty awful. Just use one of the billion jQuery plugins to do it, or make your own; it's not that hard.

  • Raoul Duke
    Jul 13, 2012 at 16:51

    random thoughts just because this seems to be an interesting dumping ground:

    * Nicolas isn't wrong, imho, especially since, while he says eventing is not great, threading is also terribly fraught with peril.

    * but given enough time, there will be enough sugar to make node.js feel less like the step backwards it did at first.

    * the erlang runtime (ditto haskell et al.) does time slicing of actors automatically; node.js doesn't, afaik.

    * threading vs. eventing do not have to be enemies.

    * the best is the enemy of the good; there is no accounting for taste; most programmers are "pop", not "classical".

    * everybody should go read more.

    * regarding the JS flame war, see how Google has hired a lot of the Object Capability / E-lang people to go in and effing fix it.

  • Schmidty
    Feb 10, 2014 at 01:52

    Node.js is just not designed to attack problems where shared resources are the first thing you think of. It does have a few advantages here: it's even more atomic than Python, so you tend to need few if any locks. Starving on a lock is almost unheard of in Node; starving the event loop, however, can happen. You're absolutely correct that the only way to obtain true concurrency is to fork, using `child_process` or `cluster` and passing data through pipes. I'm doing work right now where this fact makes my job somewhat harder and forces a regrettable pass-by-pipe on nontrivial data.
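    A minimal sketch of the pass-by-pipe pattern described above, using `child_process`. The child program is inlined with `node -e` only to keep the example self-contained; in a real application it would be a separate module:

    ```javascript
    // Sketch of pass-by-pipe: offload work to a separate node process
    // and read the result back, serialized, over the child's stdout pipe.
    var execSync = require('child_process').execSync;

    var childSrc = 'console.log(JSON.stringify([1, 2, 3].map(function (x) { return x * x; })))';
    var output = execSync('node -e ' + JSON.stringify(childSrc));
    var squares = JSON.parse(output.toString()); // nontrivial data crosses the pipe as JSON

    console.log(squares); // [ 1, 4, 9 ]
    ```

    The serialize/deserialize round trip on every message is exactly the overhead the comment calls regrettable for nontrivial data.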

    Eventing is great for certain types of web services, and for GUI work as well. Go try node-webkit sometime; it's magic. But eventing *will* hold you back in a variety of traditional tasks.
