25/04/2012 | Blog
Some time ago adclick.lt, an ad delivery network I support, started struggling with available server resources. We had two choices: either get an additional server or try to optimize our code. The second choice worked for a while, until there was almost no room left for improvement. As it turns out, Apache + PHP (php-fpm didn't help either) can't handle complex real-time calculations at more than 400 requests per second.
So I rewrote the engine in Node.js, hoping it would solve the problem. And it did (see graphs, I think they speak for themselves). Initial benchmarks showed that the new engine handled 4 to 5 times more requests per second. I was really satisfied with the result. One may ask, however, how that is possible when the engine relies on complex calculations. Asynchronous I/O handling shouldn't improve things that much, should it? Well, it does. On top of that, keeping data persistently in memory completely removes the need for memcached, which, according to Xdebug profiles, ate up 30-40% of each request in the old, PHP-based engine.
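To give a feel for what "persistent in-memory data holding" means in a long-running Node.js process, here is a minimal sketch of a process-local cache with a TTL, standing in for a memcached round-trip. The names (`cacheSet`, `cacheGet`) and the TTL scheme are illustrative assumptions, not the actual adclick.lt implementation:

```javascript
// A long-running Node.js process can keep hot data in its own heap,
// so a lookup is a Map access instead of a network call to memcached.
const cache = new Map();

// Store a value with a time-to-live in milliseconds.
function cacheSet(key, value, ttlMs) {
  cache.set(key, { value, expires: Date.now() + ttlMs });
}

// Return the cached value, or undefined if missing or expired.
function cacheGet(key) {
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expires) {
    cache.delete(key); // lazily evict stale entries
    return undefined;
  }
  return entry.value;
}
```

Because the data lives in the same process that serves the request, there is no serialization or socket round-trip on the hot path, which is where the PHP engine lost those 30-40% of request time.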
There are some gotchas, though. On more than one occasion I found myself solving race conditions. That's what happens when you come from the synchronous development camp. There's also the stability question: Node.js is still relatively new. But so far so good, no problems related to its core. The same can't be said about community-created packages, however: while there's a huge number of them, most are either abandoned or carry long lists of open issues. Navigating them and finding one that suits your needs is a separate, and by no means small, task.
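The race conditions mentioned above are the kind that bite developers coming from synchronous code: even in single-threaded Node.js, any `await` (or callback) is a point where other requests interleave, so a read-modify-write is not atomic. A hypothetical counter makes the lost-update pattern concrete (the names here are illustrative, not from the actual engine):

```javascript
let hits = 0;

// Naive read-modify-write: read the counter, await some I/O,
// then write back. Concurrent calls interleave at the await,
// so updates are silently lost.
async function recordHitRacy() {
  const current = hits;                        // read
  await new Promise(resolve => setImmediate(resolve)); // simulated async I/O
  hits = current + 1;                          // write back a stale value
}

async function demo() {
  hits = 0;
  await Promise.all([recordHitRacy(), recordHitRacy(), recordHitRacy()]);
  return hits; // 1, not 3: all three calls read hits === 0 before any wrote
}
```

The fix is to make the update atomic (increment in one synchronous step, or serialize the critical section), exactly the kind of discipline synchronous PHP code never had to think about within a single request.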
Overall, I can only say that I just love Node.js. The philosophy behind it hits my way of thinking spot on. It's just like Ryan Dahl, the author of this creation, once said: "The amount of complexity I'm willing to tolerate is proportional to the size of the problem being solved." I think this is the main reason why coding for Node.js was such a satisfying experience. Add to that the adclick.lt performance issues being solved, and what more could one want?