Thursday, 23 October 2008

A Gazillion-user Comet Server With libevent, Part 0

Abstract
After reading an inspiring saga about building a Comet server with Erlang and Mochiweb, I inadvertently snowballed into making my own full-blown challenger using C and libevent. The results hint at an order-of-magnitude performance increase compared to state-of-the-art open source servers. Comet is very important for Web 2.0 services: it reduces the number of requests clients make to the backend and brings real-time updates. This is a description of the many frustrations and achievements of this project. It will be posted in 4 installments, from 0 to 3.

Updates: Typos, spell check, thanks Stef/Nico.

Introduction

A recent post by Richard Jones (of last.fm fame) inspired me to revive some old-school skills and start a Comet server that scales well. In his post A Million-user Comet Application With Mochiweb Part 1 he presents a (mockup) prototype Erlang HTTP server. The goal of that project is to make a functional Comet server.
Gazillion (n.): Informal An indefinitely large number.
There is no working prototype in this first introductory installment, hence the name "Part 0." But plenty of code, don't despair.

The Comet Problem

Since Comet is a push technology and the server can't connect back to the clients, most possible solutions rely on keeping an HTTP connection open. It's a type of subscription model with some hacks on top. Current open source Comet servers can handle 10,000 to 20,000 simultaneous connections on a stock server. Most are written in Java, Python, or Erlang. In the same article the developers of Liberator, a closed source commercial server (C or C-something, I guess), claim to be able to sustain up to a million client updates per second for 10,000 clients. Their site expands on this, hinting that it runs one daemon per core with the client side (browser Javascript) doing load balancing. All these figures were reported by those projects' own developers; I couldn't find any independent benchmark. But they really sound like a good crowd, so I can take their word for it and you should too.

The scalability of AJAX, and now Comet, is a major issue for the adoption of web technologies. Imagine the dialog between a Javascript mail application and a server, with the client polling every X seconds:
Client: "Is there anything new?"
Server: "Not yet..."
Client: "Now?"
Server: "No..."
Client: "Are we there yet"
Server: "@#%@$^!" (HTTP 503 Service Unavailable)
Comet fixes that but pays the price of open connections. Word on the street is around 50KB per open connection for Java/Python with careful programming, and don't even think about allocating that many objects to write to the wire. Garbage collection optimization can become your own private horror story.

So after all that introduction, this is my own multipart presentation of a (mockup) prototype. It should be an (ANSI/POSIX) C library and server using the fantastic libevent library (hi Niels!) and the popular libpcre regular expression engine (more on that in later posts.) The goal is to crash the cool crowd's party and show some old-school moves.

Among the many observations RJ makes in his first installment, he mentions:
The resident size of the mochiweb beam process with 10,000 active connections was 450MB - that’s 45KB per connection. CPU utilization on the machine was practically nothing, as expected.
(Edit: But in his second post he takes those numbers down to 8KB per user by tuning memory management. That is still about 8GB for 1M users, and without counting system resources!)

Scalability: Some Ballpark Math

To have an idea of what to expect we need some ballpark calculations. These crude numbers affect any kind of approach because they come from the operating system side. A starting point is finding out what happens to any given program when there are many sockets connected. With this little program we can see:
/*
Copyright (C) 2008 Alejo Sanchez
(Inspired on bench.c by Niels Provos)

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <sys/types.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
    struct rlimit rl;   /* to bump up system limits for this process */
    int *pipes;         /* pipe (pairs) memory block */
    int *cp;            /* traverse pipes */
    int npipes;
    int i, c;

    npipes = 100000;    /* default */
    while ((c = getopt(argc, argv, "n:")) != -1) {
        switch (c) {
        case 'n':
            npipes = atoi(optarg);
            break;
        default:
            fprintf(stderr, "Illegal argument \"%c\"\n", c);
            exit(1);
        }
    }

    /* raise the open file limit to hold all the socket pairs (plus slack) */
    rl.rlim_cur = rl.rlim_max = npipes * 2 + 20;
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("setrlimit");
        exit(1);
    }

    if ((pipes = (int *) malloc(npipes * 2 * sizeof(int))) == NULL) {
        perror("malloc");
        exit(1);
    }

    /* each socketpair() gives two connected AF_UNIX sockets */
    for (cp = pipes, i = 0; i < npipes; i++, cp += 2) {
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, cp) == -1) {
            perror("socketpair");
            exit(1);
        }
    }

    printf("%i socket pairs created, check memory. Sleeping 1 sec.\n", i);
    sleep(1);

    exit(0);
}
A test with 200,000 sockets (note it's 100,000 pairs) showed a process size of 2MB, so far so good. But the command free showed about 210MB less free memory. You might think it's buffers and cache, but those numbers didn't move. Repeated tests gave very similar numbers, correlated with the amount of sockets created. The output of free wasn't useful, same with top. A bit of investigation showed these changes in /proc/meminfo:
While the 200,000 test sockets are connected:
MemTotal:       2041864 kB
MemFree:        1007248 kB
Buffers:          57744 kB
Cached:          400772 kB
[13 uninteresting lines]
Slab:            257196 kB
SReclaimable:    136784 kB
SUnreclaim:      120412 kB

After the test program exits:
MemTotal:       2041864 kB
MemFree:        1225060 kB
Buffers:          57744 kB
Cached:          400772 kB
[13 uninteresting lines]
Slab:             40612 kB
SReclaimable:     34020 kB
SUnreclaim:        6592 kB
The difference is about 217MB, that is around 1KB per connected socket. The Linux kernel reserves a large amount of memory for connected sockets, it seems. This memory is initialized and ready for those sockets, but not yet in use. There is a good writeup about the slab allocator.
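If you want to capture those slab numbers from inside the test program instead of eyeballing /proc/meminfo by hand, something along these lines does the trick (a tiny sketch of my own, not part of the benchmark above):

#include <stdio.h>
#include <string.h>

/* Print the Slab line from /proc/meminfo; call it before and after
 * creating the socket pairs to see the difference. */
static void
print_slab_usage(void)
{
    char line[128];
    FILE *f = fopen("/proc/meminfo", "r");

    if (f == NULL)
        return;
    while (fgets(line, sizeof(line), f) != NULL)
        if (strncmp(line, "Slab:", 5) == 0)
            fputs(line, stdout);
    fclose(f);
}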

OS X Crashing, and Linux too

The operating system imposes some limits on the number of open files. These and other limits can be modified by editing the file /etc/sysctl.conf on both Linux and OS X. The most important for our tests is fs.file-max (kern.maxfiles in OS X) as it controls the global maximum of open files (sockets included.) In Linux there is also a per-user limit, set in /etc/security/limits.conf:
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#domain type item value
alecco hard nofile 1001000

# End of file
To reload the sysctl configuration run sysctl -p; for the limits.conf changes, log in again. Then either the program has to raise its soft limit with setrlimit, or run ulimit -n unlimited in the shell before invoking the program.
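For completeness, raising the soft limit from inside the program up to whatever the hard limit allows looks roughly like this (a minimal sketch; the test program above takes the cruder route of setting both values directly):

#include <sys/resource.h>
#include <stdio.h>

/* Raise this process's soft open-file limit as far as the hard limit allows. */
static int
raise_nofile_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("getrlimit");
        return -1;
    }
    rl.rlim_cur = rl.rlim_max;  /* soft limit up to the hard limit */
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("setrlimit");
        return -1;
    }
    return 0;
}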

Trying to make a million connected pipes, both OS X and Linux froze and crashed. I'm probably missing something here, as RJ claims he got his prototype to do it (according to the post title.) The maximum my Linux could handle was 400,000, and around those numbers some programs start to get killed.

I couldn't find a way to configure the behaviour of the slab allocator. There should be a way to prevent it from eating so much memory, IMHO. But there still isn't clear evidence it is related to the crashes. Anyway, this is a fixable environment limit; the code clearly can scale, as it never gets over 25MB. When RJ explains a bit more, perhaps this will just be a non-issue.

With libevent and Linux the scalability of the building blocks should be O(log n), as their benchmarks show. To get a more realistic number, a test with libevent's HTTP support was needed. In about an hour I wrote a simple 137-line server. To attack it, what better than Apache's ab? Resources now jumped to a maximum of 21MB resident (25MB virtual) for 200,000 working connections, but once again the OS was showing ~450MB extra memory used (400,000 connected sockets, as ab was running locally.) But, lo and behold, the thingie was starting to take shape.
  • With 10,000 parallel clients hammering it, the server could answer 44,000 requests per second (12,000 on OS X.)
  • With 10,000 parallel clients reconnecting on every request it was still high, at 18,000 requests per second!
Not bad for a notebook, huh? On a single CPU core! The numbers were practically the same across repeated tests. Furthermore, libevent's HTTP code does memory allocation all over the place and still behaved much better than I expected (my hopes were for something around 10,000 req./sec.) Here's the code:

/*
Comet-c, a high performance Comet server.
Copyright (C) 2008 Alejo Sanchez

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

#include <sys/types.h>

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#include <event.h>
#include <evhttp.h>

int debug = 0;

void
generic_request_handler(struct evhttp_request *req, void *arg)
{
    struct evbuffer *evb = evbuffer_new();

    /*
     * XXX add here code for managing non-subscription requests
     */
    if (debug)
        fprintf(stderr, "Request for %s from %s\n", req->uri, req->remote_host);

    evbuffer_add_printf(evb, "blah blah");
    evhttp_send_reply(req, HTTP_OK, "Hello", evb);
    evbuffer_free(evb);
    return;
}

/*
 * Bayeux /meta handler
 */
void
bayeux_meta_handler_cb(struct evhttp_request *req, void *arg)
{
    struct evbuffer *evb = evbuffer_new();

    if (debug)
        fprintf(stderr, "Request for %s from %s\n", req->uri, req->remote_host);
    /*
     * XXX add here code for managing Bayeux /meta (subscription) requests
     */
    evbuffer_add_printf(evb, "blah blah");
    evhttp_send_reply(req, HTTP_OK, "Hello", evb);
    evbuffer_free(evb);
    return;
}

void
usage(const char *progname)
{
    fprintf(stderr,
        "%s: [-B] [-d] [-p port] [-l addr]\n"
        "\t -B enable Bayeux support (on)\n"
        "\t -d enable debug (off)\n"
        "\t -l local address to bind comet server on (127.0.0.1)\n"
        "\t -p port port number to create comet server on (8080)\n"
        "\t (C) Alejo Sanchez - AGPL\n",
        progname);
}

int
main(int argc, char **argv)
{
    extern char *optarg;
    extern int optind;
    short http_port = 8080;
    char *http_addr = "127.0.0.1";
    struct evhttp *http_server = NULL;
    int c;
    int bayeux = 1;

    /* -d is a flag, so it takes no argument in the option string */
    while ((c = getopt(argc, argv, "Bdp:l:")) != -1)
        switch (c) {
        case 'B':
            bayeux++;
            break;
        case 'd':
            debug++;
            break;
        case 'p':
            http_port = atoi(optarg);
            if (http_port == 0) {
                usage(argv[0]);
                exit(1);
            }
            break;
        case 'l':
            http_addr = optarg;
            break;
        default:
            usage(argv[0]);
            exit(1);
        }
    argc -= optind;
    argv += optind;

    /* init libevent */
    event_init();

    http_server = evhttp_start(http_addr, http_port);
    if (http_server == NULL) {
        fprintf(stderr, "Error starting comet server on port %d\n",
            http_port);
        exit(1);
    }

    /* XXX bayeux /meta handler */
    if (bayeux)
        evhttp_set_cb(http_server, "/meta", bayeux_meta_handler_cb, NULL);

    /* XXX default handler */
    evhttp_set_gencb(http_server, generic_request_handler, NULL);

    fprintf(stderr, "Comet server started on port %d\n", http_port);
    event_dispatch();   /* Brooom, brooom */

    exit(0);    /* UNREACHED ? */
}

A Comet version of the server would surely improve on those numbers, as clients don't need to poll the server; each request turns into mostly server-side writes.
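To give an idea of the shape such a handler could take, here is a rough sketch using libevent's chunked-reply calls; the handler names and the idea of stashing the request in a subscriber list are my own illustration, not code from the prototype:

#include <event.h>
#include <evhttp.h>

/* A subscription request: start a chunked reply and keep the connection
 * open instead of finishing it (the request would be stored in some
 * per-channel subscriber list, omitted here). */
void
comet_subscribe_handler(struct evhttp_request *req, void *arg)
{
    evhttp_send_reply_start(req, HTTP_OK, "OK");
    /* ... remember req in the channel's subscriber list ... */
}

/* When an update arrives for a channel, push it to one subscriber:
 * a single server-side write per client, no client poll involved. */
void
comet_push_update(struct evhttp_request *req, const char *msg)
{
    struct evbuffer *evb = evbuffer_new();

    evbuffer_add_printf(evb, "%s\n", msg);
    evhttp_send_reply_chunk(req, evb);
    evbuffer_free(evb);
}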

So that was a nice mockup, but it's not a working prototype yet. A prototype would manage registrations of clients to channels, perhaps using a standard transport protocol, and do a little bit of this and that.

A report on the state of the art of Comet servers shows the most popular transport is Bayeux. So this prototype can't skip that.
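For reference, the kind of message the parser will have to deal with looks roughly like this (a simplified /meta/handshake request, paraphrased from the Bayeux spec rather than taken from this prototype):

/* Roughly what a Bayeux /meta/handshake request body looks like
 * (simplified; paraphrased from the Bayeux spec). */
static const char *bayeux_handshake_example =
    "[{"
    "\"channel\": \"/meta/handshake\", "
    "\"version\": \"1.0\", "
    "\"supportedConnectionTypes\": [\"long-polling\", \"callback-polling\"]"
    "}]";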

So I should just plug in one of the JSON C parsers and it should be OK, right? Wrong again. Just like Richard Dawkins described this situation:
[So, programming was] a classic addiction: prolonged frustration, occasionally rewarded by a briefly glowing fix of achievement. It was that pernicious "just one more push to see what's over the next mountain and then I'll call it a day" syndrome. It was a lonely vice, interfering with sleeping, eating, useful work and healthy human intercourse. I'm glad it's over and I won't start up again. Except ... perhaps one day, just a little ...
Let's just say those JSON implementations didn't live up to my expectations. But, what a time waster...

So, if we got this far, let's make a little Bayeux parser! How bad could it be?

Coming up: first prototype, trying to do decent parsing in C without killing performance (right), more analysis of the original saga by Richard Jones, and, I hope, please, some sleep... But, well, I'm typing and it's just a matter of alt-tab, so perhaps, let's see just a little bit more... Just the one...

Alecco

PS: Sorry about posting the licenses; it's mostly for the lack-of-warranty part.

13 comments:

Unknown said...

Impressive work. More please! =)

I couldn't find your email anywhere -- I don't suppose you could drop me a line sometime? My email is CarterMichael@gmail.com

Collin said...

I'd consider the strategy employed by the orbited.org project.

They are emulating a feature of HTML5, Web Socket, over HTTP in current browsers.

This lets them be forward compatible and lets us leverage existing protocols without having to figure out how to implement XMPP over Bayeux over HTTP or build some convoluted Bayeux-to-IMAP agent.

Give me a socket and a protocol and I'll move the earth, or something like that.

Anonymous said...

Nice!

How would you scale it to more than one machine though?

I think that's one of the reasons that many implementations use erlang. Scaling there is "free".

Stephan.Schmidt said...

"44,000 requests per second"

Real requests or simple-doing-nothing-requests?

http://mailinator.blogspot.com/2008/08/benchmarking-talkinator.html

Because Talkinator "on my quad-core desktop the talkinator server can push about 39000 messages per second."

And

"Keep in mind this is processed messages as in decoded, packaged, and queued for particular recipients. (I often see claims of people sending millions of messages per second on various non-chat systems - this is quite easy if you don't have an incoming request to parse and are sending one-to-one endpoints - simply fill a 10G buffer with pre-formatted messages, start the timer, and hit "send" - basically your CPU is doing nothing and all you're measuring is the speed of your network)."

Peace
-stephan

Alecco Locco said...

@Michael Carter

Thanks :)

@Collin

I'll definitely have a look.

@madssj

Liberator and others scale by client-side load balancing, as in picking a server out of a pool. I didn't get to horizontal or vertical scaling yet, but a simple idea could be to have the server could know if it is slave (configuration or runtime parameter) to subscribe up the channels it has clients in... Using libevent and Comet :)

About Erlang, I have the utmost respect for them, though I don't agree with their scaling arguments. For example having garbage collection often means picking to waste a ridiculous amount of memory on discarded objects (and wait for the best moment to run GC) or kill the CPU cache every time the GC is called (like that hibernate thing seems to do.) I don't know Erlang internals so this is just an educated guess, take it with a grain of salt.

@Stephan.Schmidt

This is "Part 0" with a mock up, so it is a "hello world" example. The idea of this first post was to get an idea of what we should expect later on.

The code is right there; the purpose is to see how fast plain libevent would be and how much memory it'll take. But I have to bite my tongue and spare the spoilers :)

paul.querna said...

For a JSON Parse, consider using libjsox:

http://code.google.com/p/libjsox/

It tends to work quite well, and doesn't force an object model on to you.

(Though its much simpler, like SAX vs DOM XML Parsers, but using it you can build your own objects).

-Paul

Unknown said...

@madssj you have to consider that the Erlang garbage collector does not work in the same way the Java GC used to work. In Erlang you never have a need to "stop the world" during garbage collection as the heaps are per process.

Martin Tyler said...

Hi,

Nice work. Just to clarify some things on Liberator.

We have benchmarked up to 30,000 concurrent clients on a single server. Client-side load balancing is generally for redundancy or if you want even more clients. The 30,000 top figure and the '1 million updates to 10,000 clients' sweet spot are both on single servers (multi-threaded, not multi-process).

Liberator uses our own event manager which i believe works similarly to libevent, and makes use of epoll, /dev/poll, poll, select etc as appropriate.

I have never tried to push it further than 30,000 clients; I assumed 16-bit port numbers would limit you to 32k or 64k - and since our business case never really needed higher numbers I never investigated whether that was the case or if there were ways around it.

Looking forward to seeing any further developments you have, sadly i dont have time to try out these things as much as when I first implemented Liberator.

StreamHub Team said...

Not bad for so little code. Event-driven network IO is definitely the only way to get the scalability. We use Java NIO for our Comet server. On a single machine we found we could get as many clients as free ports, so circa 64000. To get more clients than that we had to implement a cluster. We found it was best to have 10000-30000 clients per cluster-node or the message latency would go up too much.

It's interesting how a lot of people are now talking about using Erlang and other languages built for concurrency. We will definitely be watching how that develops. However, we've found one of the limiting factors in getting big Comet scalability is the Operating System's TCP/IP implementation. You may be able to get better results by increasing the TCP buffer sizes on Linux. Different languages often end up making the same OS system calls anyway, whether it be BSD, WinSock or Posix...

Unknown said...

Hi,

I'm doing as what you set out to do, write a comet server using libevent. The one thing I'm banging my head on is how to send back responses from the server, using libevent calls. I can use the chunk calls, evhttp_send_reply_start(), evhttp_send_reply_chunk() & evhttp_send_reply_end(), but they don't appear to flush the write buffer. What calls were you going to use to send back responses to the browser? Thanks!

Alecco Locco said...

@Mark

I'm sorry, that code is lost, but from a quick glance at test/regress_http.c it seems it was something along the lines of:

event_once(-1, EV_TIMEOUT, http_chunked_trickle_cb, state, &when);

Note EV_TIMEOUT. From the man page event(3):

event_once(int fd, short event, void (*fn)(int, short, void *), void *arg, struct timeval *tv);

This call flushes. The code on test/* is very good to get a starting skeleton for your code! Niels did a great job, as usual :)

[Note: code looks horrible because Blogger doesn't allow <pre> in comments.]

Please let me know how it goes. Don't hesitate to send me an email!

Cheers.

Alecco

Anonymous said...

@StreamHub Team - that makes no sense. You don't need ephemeral ports for inbound connections. You need them for outbound connections.

Unknown said...

Hi,

Take a look at http://migratory.ro where we've published recently the new benchmark of Migratory Push Server.

We achieved with Migratory Push Server data streaming up to 1 million users and almost reached 1Gbps b/w with under 100 milliseconds end-to-end data latency on a small server (Dell SC1435 2 x dual-core @2GHz + 16 GB RAM). Benchmark document available at:

http://migratory.ro/data/MigratoryPushServerBenchmarks.pdf

Mihai

 
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.