Monday, 20 December 2010

The Inevitable Revolution

A couple of years ago, Donald Knuth said in an interview[1]:


Let me put it this way: During the past 50 years, I’ve written well over a thousand programs, many of which have substantial size. I can’t think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading.


Most algorithms were designed with the classic approach of executing one instruction at a time, known as scalar processing. An algorithm is then implemented in a programming language and finally compiled or interpreted. There are important issues in these transformations of the code, but they are out of the scope of this post.


The number of transistors inside processors has been roughly doubling every two years[2]. By the nature of transistors, electricity gets transformed into heat: a lot of heat in a very small space packed with millions of transistors. CPU manufacturers have thus hit a hard limit on CPU power dissipation[3]. The only way out of this dead end was to increase the number of CPU cores, ending the GHz race.


Another big issue with current computers is latency. While processors can run multiple instructions per nanosecond (ns), accessing main memory takes about 100 ns. To work around this problem manufacturers add memory caches running at speeds much closer to CPU execution times. But these caches are very expensive, and only processors aimed at the high-end server market come with large configurations.


If programs were executed strictly one instruction at a time, most of the CPU would sit idle waiting for a particular unit to finish its current task. Very often instructions in a given block of code can run in parallel because they don't depend on each other (e.g. initializing unrelated variables.) The CPU exploits this by keeping a deep pipeline of instructions currently executing (resembling assembly lines in factories.) To manage this pipeline the circuitry has to determine instruction dependencies and sometimes reorder instructions to improve throughput. Good compilers and VMs optimize compiled programs by ordering instructions to match the characteristics of different CPU models.
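
To make this concrete, here is a minimal sketch (mine, not from any textbook) of how dependency chains limit the pipeline; the function names are hypothetical and the actual gain depends on the CPU model:

long sum_serial(const int *a, size_t n) {
    /* Every addition depends on the previous one: the pipeline
     * stalls waiting for each result before starting the next. */
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

long sum_pipelined(const int *a, size_t n) {
    /* Four independent accumulators break the dependency chain,
     * letting the CPU keep several additions in flight at once. */
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i;
    for (i = 0; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)   /* leftover elements */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}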


When the flow of control of a program reaches a conditional branch (if-then-else) the processor evaluates the condition (e.g. variable a is zero) and makes a jump (or not) to the following instruction. This evaluation takes a long time and disrupts superscalar pipelining. To overcome this the processor has a dedicated unit called the branch predictor[4] that guesses conditional branches by remembering which branch was taken before. If the prediction is right the flow keeps running fast and uninterrupted. But on a branch misprediction the wrongly picked instructions in the pipeline must be undone. This causes a pipeline flush, often costing around 5 ns. For algorithms with many conditional branches taken with a probability close to .5 this can compound into a major bottleneck (e.g. walking binary trees or classic compression.)
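
One way to cooperate with the predictor is to remove the branch altogether. A minimal sketch, assuming a compiler that turns the mask trick (or even the plain ternary) into a conditional move, as gcc on x86 typically does:

int min_branchy(int a, int b) {
    /* A mispredicted jump here flushes the pipeline. */
    if (a < b)
        return a;
    return b;
}

int min_branchless(int a, int b) {
    /* (a < b) is 0 or 1, so -(a < b) is 0 or all ones. No jump is
     * needed; compilers emit a conditional move (cmov on x86),
     * which never mispredicts. */
    int mask = -(a < b);
    return (a & mask) | (b & ~mask);
}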


The premises for designing algorithms in the literature are completely out of touch with all these issues. One of the old maxims was to avoid full scans at all costs, but a sequential memory scan nowadays runs at 6 to 14 GiB per second, limited mostly by data bus bandwidth. Random access traversal in memory is often several orders of magnitude slower, due to the latency issues compounded with branch mispredictions. The data structures commonly used don't scale due to fragmentation. In many cases they are sequential in nature, so parallel execution is either impossible or requires very slow locking mechanisms for critical sections, making the code error-prone and filled with hard to predict bottlenecks.
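
A minimal sketch of the two access patterns (mine, not a rigorous benchmark): both functions touch the same elements, but the second visits them in a shuffled order, defeating the caches and the prefetcher:

#include <stddef.h>

long sum_sequential(const int *a, size_t n) {
    /* Sequential scan: the hardware prefetcher streams memory at
     * close to bus bandwidth. */
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

long sum_random(const int *a, const size_t *idx, size_t n) {
    /* idx holds a shuffled permutation of 0..n-1, so nearly every
     * access is a cache miss paying ~100 ns of latency. */
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[idx[i]];
    return s;
}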


To use larger amounts of main memory, processors switched to wider addresses. Every memory reference in a 64-bit address space requires 8 bytes. In the very common scenario where linked structures are used for in-memory data storage, these huge pointers become a significant overhead. For example, each node of a binary tree requires at least 16 bytes for the child links, and often 8 more to link to the parent (as in popular red-black tree implementations.) If the node only stores a 32-bit integer the overhead is 400% to 600%!
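
A minimal sketch of the overhead, and of one common remedy: replacing 64-bit pointers with 32-bit indices into a preallocated node array (the layout is an illustration of the idea, not from any particular library):

#include <stdint.h>
#include <stdio.h>

/* Classic pointer-based node: 4 payload bytes against 24 bytes of
 * links plus padding on a 64-bit machine. */
struct node_ptr {
    int32_t key;
    struct node_ptr *left, *right, *parent;
};

/* Index-based node: links are 32-bit offsets into a node array,
 * halving the link overhead and improving cache density. */
struct node_idx {
    int32_t key;
    uint32_t left, right, parent;   /* indices, UINT32_MAX = null */
};

int main(void) {
    printf("pointer node: %zu bytes\n", sizeof(struct node_ptr)); /* 32 */
    printf("index node:   %zu bytes\n", sizeof(struct node_idx)); /* 16 */
    return 0;
}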


In many cases the same code is applied iteratively to many elements of a data structure. Current processors can run the same instruction on packs of data, which is called SIMD (Single Instruction, Multiple Data.) This is widely used for media processing but is becoming common for general purpose data processing. Most SIMD implementations are evolving to give better support for non-multimedia uses (e.g. Intel's SIMD string instructions.) Some of the most interesting new algorithms of the last decade were redesigns to exploit SIMD processing. Programming directly against one of these SIMD implementations can be quite tricky, but it can be simplified with compiler intrinsics or re-targeting compiler tools (like MIT's Cilk.)
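
A minimal sketch using SSE2 compiler intrinsics to sum an array four 32-bit integers at a time (assumes an x86 CPU and gcc or a compatible compiler; build with -msse2):

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>

long sum_simd(const int *a, size_t n) {
    __m128i acc = _mm_setzero_si128();
    size_t i;

    /* Add 4 ints per instruction into 4 parallel lanes. */
    for (i = 0; i + 4 <= n; i += 4) {
        __m128i v = _mm_loadu_si128((const __m128i *)(a + i));
        acc = _mm_add_epi32(acc, v);
    }

    /* Horizontal sum of the 4 lanes, plus leftover elements. */
    int lanes[4];
    _mm_storeu_si128((__m128i *)lanes, acc);
    long s = (long)lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; i++)
        s += a[i];
    return s;
}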


There are many interesting instructions available on processors that perform very useful tasks (e.g. bit scan) and are completely neglected by programming textbooks. The savings in time and complexity for algorithms can be very significant.
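
For example, GCC exposes the x86 bit scan instructions (BSF/BSR) through builtins; a small sketch (the builtins are real GCC functions, the use cases are illustrative):

/* Index of the lowest set bit: one bit scan forward instruction
 * instead of a loop over 32 bits. Undefined for x == 0. */
unsigned lowest_set_bit(unsigned x) {
    return __builtin_ctz(x);
}

/* Integer log2 via bit scan reverse. Undefined for x == 0. */
unsigned ilog2(unsigned x) {
    return 31 - __builtin_clz(x);
}

/* Number of set bits: one instruction on CPUs with POPCNT, a short
 * branch-free sequence otherwise. */
int bits_set(unsigned x) {
    return __builtin_popcount(x);
}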


I differ with Professor D. Knuth and think this is a great opportunity: all these limitations actually make these times very interesting for computer scientists. It's time for a revolution! Almost everything has to be revised to match the new hardware paradigm, and for that there's a need for an army of Knuths to find the way. We need to shake the CS establishment out of their comfortable leather chairs.

[Note: by that I do not mean Prof. D. Knuth; in fact he is one of the very few CS masters who tie high level theory to low level realistic implementations.]

The New Algorithm Design Maxims


  • Find alternatives to sequential processing
  • Minimize memory use and avoid bloat (Latency)
  • Store related data as close as possible (Caching)
  • Minimize branch mispredictions or remove branches altogether
  • Favor vector/matrix data structures over linked nodes (Pointers, Caching)
  • Exploit vector processing if possible (SIMD)
  • Embrace specialized instructions widely available

A very common approach to speeding up processing is the space-time trade-off, for example replacing programmatic code with big table lookups. While this works in small micro-benchmarks, it is a short-sighted trick that usually makes scaling very hard. CPU cores can perform billions of instructions per second if used wisely, and the new trend for big data processing is the opposite, the compute-space trade-off, in particular with fast compression algorithms.
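
A minimal sketch of the contrast, using population count as the example (the 64 KiB table size is illustrative):

#include <stdint.h>

/* Space-time trade-off: a 64 KiB table of per-halfword popcounts.
 * Fast in a tight loop, but the table competes with real data for
 * cache space. */
static uint8_t popcount_table[1 << 16];

void init_popcount_table(void) {
    for (uint32_t i = 0; i < (1u << 16); i++)
        popcount_table[i] = (uint8_t)__builtin_popcount(i);
}

int popcount32_table(uint32_t x) {
    return popcount_table[x & 0xffff] + popcount_table[x >> 16];
}

/* Compute-space trade-off: a handful of always-predictable
 * arithmetic instructions and zero bytes of cache footprint. */
int popcount32_compute(uint32_t x) {
    x = x - ((x >> 1) & 0x55555555);
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
    x = (x + (x >> 4)) & 0x0f0f0f0f;
    return (int)((x * 0x01010101) >> 24);
}

In a micro-benchmark with a hot cache the table version looks great; once the table competes with the program's working set, the computed version usually wins.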


Good times.

[1] InformIT: Interview with Donald Knuth (2008)
[2] Wikipedia: Moore's Law
[3] Wikipedia: CPU power dissipation
[4] Wikipedia: Branch Predictor

Tuesday, 25 August 2009

SQLite: A Lesson In Low-Defect Software

[EDIT: Minor edits to fix HTML formatting, style, corrected footnote credit, and code indentation the editor kills on every edit.]

While looking for some old papers and presentations on SQLite (remember my disk died a few months ago without a proper backup), I found that the old ones seem to be offline or lost, but there's a new presentation on their experience producing excellent software.


It seems this presentation didn't get the attention it deserves, so here's a shortened review with more detail on statement and branch testing with GCC. This is mind-blowing stuff, at least for me. Almost everything in this post comes from the paper. I'm trying to stay as close as possible to the original, with only a few things added to make it a bit easier to understand at first sight and to allow straight copy-paste to play with. I also skipped the less extraordinary SCM recommendations at the end.
  • Use comments on your source code to engage the other half of your brain
    • One side does "math" and the other "language".
  • Safety != Reliability. Safety: no harm; reliability: no failures.
  • Safe Software = Extremely Reliable Software.
  • What Programming Languages Does The World's Most Reliable Software Use? Avionics: Ada, C; Space Shuttle: HAL/S. Not Python, TCL/TK, .NET, Lua, Java (safe but not reliable.)
  • DO-178B and ED-12B "Software Considerations in Airborne Systems and Equipment Certification": Development process matters, not the programming language. Captures best practices.
  • [Remember from An Introduction to SQLite: What makes SQLite great is the extensive testing (60%).]
  • Use your whole brain.
  • Full coverage testing.
  • Good configuration management.
  • Don't just fix bugs, fix your process.
  • Comment: each function or procedure and major code blocks.
  • Comment: all variables and constant declarations.
  • SQLite has a code to comment ratio of 2:1.
  • Full coverage testing:
    • Automated tests that exercise all features of the program: all entry points, subroutines, all branches and conditions, all cases, all boundary values.
    • DO-178B places special emphasis on testing.
    • Statement Coverage: Tests cause every line of code to run at least once.
    • Branch Coverage: Tests cause every machine-language branch operation to evaluate to both TRUE and FALSE at least once.
    • (See section below on how to do test coverage with GCC.)
  • Fly what you test! (Measuring coverage validates your tests, not your product.)
  • Defensive Programming (See section below.)

Testing in SQLite

  • 99% Statement Coverage
  • 95% Branch Coverage
  • Goal: 100% branch coverage by Dec 2009
  • Striving for 100% test coverage has been [their] most effective method for finding bugs.
  • Testing in C/TCL (1M), C (2.3M), SQL logic (5.8M)
  • Crash testing.
  • I/O error and out-of-memory testing.
  • Fuzz testing.
  • Valgrind (memory: usage, profiling, leak tracking.)
  • Most bugs are found internally – before release.
  • External bugs are mostly build problems.
  • [SQLite team] doesn't ship “alpha” or “beta” releases, all SQLite releases are production-ready.
  • It is rare for users to find "wrong answers."

Test Coverage with GCC



Consider this snippet:
 1 int exampleFunction(int a, int b){
 2 
 3    int ans = 0;
 4 
 5    if ( a>b && a<2*b ){
 6       ans = a;
 7    } else {
 8       ans = b;
 9    }
10 
11    return ans;
12 }
There are 6 statements in lines 1, 3, 5, 6, 8, 11. To test this function you could run this test:
exampleFunction(1,1);

That would cover statements in lines 1, 3, 5, 8, 11, but not line 6 (because !(a>b)). The following test addresses this statement:
exampleFunction(3,2);

Now that would get all statements covered (line 6, ans = a.) But what about all possible branches? There are 4 possible branches:
  1. a>b: Taken on test 2.
  2. !(a>b): Taken on test 1.
  3. a<2*b: Taken on test 2.
  4. !(a<2*b): Never taken.
So to match branch 4 (a>b && !(a<2*b)):
exampleFunction(4,2);
Voilà! All statements and branches tested.

It seems gcc has some great features to help. With -fprofile-arcs and -ftest-coverage it will generate branch coverage data. From the gcc Debugging Options manual:
-fprofile-arcs
Add code so that program flow arcs are instrumented. During execution the program records how many times each branch and call is executed and how many times it is taken or returns. When the compiled program exits it saves this data to a file called aux-name.gcda for each source file. The data may be used for profile-directed optimizations (-fbranch-probabilities), or for test coverage analysis (-ftest-coverage).
[...]
-ftest-coverage
Produce a notes file that the gcov code-coverage utility can use to show program coverage. Each source file's note file is called aux-name.gcno. Refer to the -fprofile-arcs option above for a description of auxname and instructions on how to generate test coverage data. Coverage data will match the source files more closely, if you do not optimize.
So let's put this in testme.c, with only the first test:
int exampleFunction(int a, int b){

    int ans = 0;

    if ( a>b && a<2*b ){
        ans = a;
    } else {
        ans = b;
    }

    return ans;

}

int main() {
    exampleFunction(1,1);
    return 0;
}
Compiling and running:
$ gcc -g -fprofile-arcs -ftest-coverage testme.c -o testme && ./testme \
&& gcov -c testme.c && cat testme.c.gcov
File 'testme.c'
Lines executed:88.89% of 9
testme.c:creating 'testme.c.gcov'

-:    0:Source:testme.c
-:    0:Graph:testme.gcno
-:    0:Data:testme.gcda
-:    0:Runs:1
-:    0:Programs:1
1:    1:int exampleFunction(int a, int b){
-:    2:
1:    3:   int ans = 0;
-:    4:
1:    5:   if ( a>b && a<2*b ){
#####:    6:      ans = a;
-:    7:   } else {
1:    8:      ans = b;
-:    9:   }
-:   10:
1:   11:   return ans;
-:   12:}
-:   13:
1:   14:int main() {
1:   15:    exampleFunction(1,1);
1:   16:    return 0;
-:   17:}
The first column in the report shows how many times each source line was executed. The second column is the source line number. Note the ##### mark on the missed statement. Now if we add the second test:
$ gcc -g -fprofile-arcs -ftest-coverage testme.c -o testme && ./testme \
&& gcov -c testme.c && cat testme.c.gcov
File 'testme.c'
Lines executed:100.00% of 10
testme.c:creating 'testme.c.gcov'

-:    0:Source:testme.c
-:    0:Graph:testme.gcno
-:    0:Data:testme.gcda
-:    0:Runs:1
-:    0:Programs:1
2:    1:int exampleFunction(int a, int b){
-:    2:
2:    3:   int ans = 0;
-:    4:
3:    5:   if ( a>b && a<2*b ){
1:    6:      ans = a;
-:    7:   } else {
1:    8:      ans = b;
-:    9:   }
-:   10:
2:   11:   return ans;
-:   12:}
-:   13:
1:   14:int main() {
1:   15:    exampleFunction(1,1);
1:   16:    exampleFunction(3,2);
1:   17:    return 0;
-:   18:}
It now shows the statement in line 6 was executed.

Let's do the branch test with only the first test. Note the -b flag to gcov is the only command line change:

$ gcc -g -fprofile-arcs -ftest-coverage testme.c -o testme && ./testme \
&& gcov -b -c testme.c && cat testme.c.gcov
File 'testme.c'
Lines executed:88.89% of 9
Branches executed:50.00% of 4
Taken at least once:25.00% of 4
Calls executed:100.00% of 1
testme.c:creating 'testme.c.gcov'

-:    0:Source:testme.c
-:    0:Graph:testme.gcno
-:    0:Data:testme.gcda
-:    0:Runs:1
-:    0:Programs:1
function exampleFunction called 1 returned 100% blocks executed 60%
1:    1:int exampleFunction(int a, int b){
-:    2:
1:    3:   int ans = 0;
-:    4:
1:    5:   if ( a>b && a<2*b ){
branch  0 taken 0 (fallthrough)
branch  1 taken 1
branch  2 never executed
branch  3 never executed
#####:    6:      ans = a;
-:    7:   } else {
1:    8:      ans = b;
-:    9:   }
-:   10:
1:   11:   return ans;
-:   12:}
-:   13:
function main called 1 returned 100% blocks executed 100%
1:   14:int main() {
1:   15:    exampleFunction(1,1);
call    0 returned 1
1:   16:    return 0;
-:   17:}
The report says 25% of the branches were taken at least once. Further down, after line 5 it lists the 4 branches of that line. Branch 0 (a>b) was evaluated but not taken (because a=b.) Branch 1 (!(a>b)) was taken. Since branches 2 and 3 depend on branch 0, those weren't even executed. Now let's run with the second test:
$ gcc -g -fprofile-arcs -ftest-coverage testme.c -o testme && ./testme \
&& gcov -b -c testme.c && cat testme.c.gcov
File 'testme.c'
Lines executed:100.00% of 10
Branches executed:100.00% of 4
Taken at least once:75.00% of 4
Calls executed:100.00% of 2
testme.c:creating 'testme.c.gcov'

-:    0:Source:testme.c
-:    0:Graph:testme.gcno
-:    0:Data:testme.gcda
-:    0:Runs:1
-:    0:Programs:1
function exampleFunction called 2 returned 100% blocks executed 100%
2:    1:int exampleFunction(int a, int b){
-:    2:
2:    3:   int ans = 0;
-:    4:
3:    5:   if ( a>b && a<2*b ){
branch  0 taken 1 (fallthrough)
branch  1 taken 1
branch  2 taken 1 (fallthrough)
branch  3 taken 0
1:    6:      ans = a;
-:    7:   } else {
1:    8:      ans = b;
-:    9:   }
-:   10:
2:   11:   return ans;
-:   12:}
-:   13:
function main called 1 returned 100% blocks executed 100%
1:   14:int main() {
1:   15:    exampleFunction(1,1);
call    0 returned 1
1:   16:    exampleFunction(3,2);
call    0 returned 1
1:   17:    return 0;
Here we see the only remaining branch not taken is branch 3 (though it was executed). Let's run with the third test:

$ gcc -g -fprofile-arcs -ftest-coverage testme.c -o testme && ./testme \
&& gcov -b -c testme.c && cat testme.c.gcov
File 'testme.c'
Lines executed:100.00% of 11
Branches executed:100.00% of 4
Taken at least once:100.00% of 4
Calls executed:100.00% of 3
testme.c:creating 'testme.c.gcov'

-:    0:Source:testme.c
-:    0:Graph:testme.gcno
-:    0:Data:testme.gcda
-:    0:Runs:1
-:    0:Programs:1
function exampleFunction called 3 returned 100% blocks executed 100%
3:    1:int exampleFunction(int a, int b){
-:    2:
3:    3:   int ans = 0;
-:    4:
4:    5:   if ( a>b && a<2*b ){
branch  0 taken 2 (fallthrough)
branch  1 taken 1
branch  2 taken 1 (fallthrough)
branch  3 taken 1
1:    6:      ans = a;
-:    7:   } else {
2:    8:      ans = b;
-:    9:   }
-:   10:
3:   11:   return ans;
-:   12:}
-:   13:
function main called 1 returned 100% blocks executed 100%
1:   14:int main() {
1:   15:    exampleFunction(1,1);
call    0 returned 1
1:   16:    exampleFunction(3,2);
call    0 returned 1
1:   17:    exampleFunction(4,2);
call    0 returned 1
1:   18:    return 0;
Great, 100% coverage now!


Defensive Programming


Input variable boundary checks create many branches that are never taken in tests. Consider for example checking that the input nBytes doesn't go over the limit 0x7fffff00:
void *sqlite3InternalMalloc(int nBytes){
    if( nBytes<=0 || nBytes>=0x7fffff00 ){
        return 0;
    }else{
        return sqlite3LowLevelMalloc(nBytes);
    }
}
That branch can't be exercised by tests. SQLite has some interesting macros for this, NEVER() and ALWAYS():
#if defined(SQLITE_COVERAGE_TEST) 
#  define ALWAYS(X)   1 
#  define NEVER(X)    0 

#elif defined(SQLITE_DEBUG) 
#  define ALWAYS(X)   ((X)?1:sqlite3Panic()) 
#  define NEVER(X)    ((X)?sqlite3Panic():0) 

#else
#  define ALWAYS(X)   (X)   //  PASS THROUGH (What you fly)
#  define NEVER(X)    (X)   //  PASS THROUGH (What you fly)
#endif
Applied to the bounds check above:
void *sqlite3InternalMalloc(int nBytes){
    if( nBytes<=0 || NEVER(nBytes>=0x7fffff00) ){
        return 0;
    }else{
        return sqlite3LowLevelMalloc(nBytes);
    }
}

Footnote



AFAIK, testing can't get any better than this. Somebody could do a machine-checked proof, like the NICTA seL4 team (UNSW/Sydney) did (also mind-blowing by itself, but probably overkill for now.)

Tuesday, 10 February 2009

Making a free hostel reservation system - pt 1

Leveraging my inclination to postpone the comet server and other things, I got myself into a 2-day sprint to make a proof of concept of a hostel/B&B reservation system that wouldn't suck and could be implemented for free. Current services are all closed-source quasi-scams: they hold the customer's payment as ransom, take the first day's payment as commission (~15 bucks per transaction!), and still make the establishments update availability manually (wtf!) by themselves. All that for something that can be coded in a couple of days, and there are at least dozens of these middlemen. Customers lose the most, as the process today takes very long across multiple sites, with the risk of using their credit cards with these dodgy intermediaries. These customers usually have to book on shared computers, and they might have to pay for the time to use the computer. Most commercial reservation systems are so bad their code needs your browsing session to jump to a second and even a third domain. On the other hand, major hostel chains ask over USD 1,500 per year and demand around a 20% discount for their members, and each member has to cough up USD 15 per year.

I know many will tell me how I should make money out of this, but I don't think anybody can have a decent working business model in the present system of chaos, abuse, and misinformation. I'll be happy just to show how disposable they really are to both hostels and travelers.

Some goals for hostel owners:
  • To own the data and be able to pull out as they please, no lock-in.
  • Simple way to have their own data layout to play with, thus avoiding multiple reservation tracking systems.
  • Zero cost.
  • Minimum possible technical knowledge requirements.
Some goals for travelers:
  • Have a fast availability system either global or on the hostel's page.
  • No credit card required.
  • Easy reservation request form with confirmation by email/phone (most hostels have this already in place.)
Technical:
  • Google Docs (it shouldn't be that hard to switch to alternatives, and it's a sort of neutral brand for everybody): a Spreadsheet to manage and export the reservations, a Form to receive the reservation requests, a Gadget with the core logic that is easy to place even inside the static pages hostels have, and Sites for the central page.
The idea so far is to give each hostel a Google Spreadsheet (or a template for one and instructions) where one worksheet holds all the rooms and can be filled with the names of reservations or current occupiers. A separate worksheet counts and publishes the number of available beds/rooms for a specific date, sharing that with the world. The gadget feeds from that data by creating a script tag hack that requests a specific range of cells with JSON formatting and a callback. The customer searches by date and number of people; if there is availability a small form is displayed. This form is adjusted to match a Google Docs form requesting the basic data (name, email, phone, date of arrival, number of nights.)

So it's one spreadsheet with two worksheets and a form, per hostel.


On the legal side this site could be set with these:
  • Use only a Creative Commons non-commercial share-alike license for all the published data (the shared bit of the spreadsheets, not all the data!)
  • Use GPLv3 for the JavaScript code (a bit redundant, but just to make it clear; there is still heavy server-side JavaScript in the corporate world, you wouldn't believe what I've seen.)
That should be enough to have people feel like joining the project as developers or users. I hope.

Missing but possibly easy to implement features:
  • Internationalization and localization (existing booking systems suck at this, also.)
  • A sane guarantee mechanism to replace deposits (e.g. ask for a $1 paypal/amazon donation to a range of charities like EFF, CC, or WaterAid.)
  • Hostel media hosting or display (i.e. pictures, videos, descriptions.)
  • A fair review system (well, not that easy due to the abundance of trolls.)
And here is the working gadget (source here):



It can be placed on any static page with just this iframe:

Monday, 29 December 2008

And The Dog Ate My Homework

After relocating from London to Buenos Aires my HD broke. Then the pile of DVDs and CDs got lost, including the OS X rescue disc (finding Mac install DVDs to borrow at Christmas was quite a challenge), and my pre-flight backup DVDs were gone there too. Oh, and my hidden TrueCrypt volume got trashed by stupidly adding things to the outer volume (paranoia doesn't pay.)

Luckily the outlines of upcoming posts were stored on Blogger, and I remember most of the code's ideas. I'll try to move on from this.

Besides that hiccup things are going very well, and I even got a suntan.

Back up now: to a pendrive, to some DVDs, to an external HD, to Gmail, everywhere. Use TrueCrypt, but wisely; set yourself some subtle warning if you do the hidden volume trick! (And at least write-protect the drive.)

Sunday, 9 November 2008

The Ephemeral Ports Problem (and solution)

Richard Jones ran into a problem doing load tests with many open sockets. It was quite an interesting thing to investigate. For reasons still unknown to me after many hours of reading kernel code, documentation, and mailing lists, Linux shares the ephemeral port range across all local IPs for unconnected sockets. I hope this post will save someone some time, and there is a workaround suggestion for libevent. (By the way RJ, you were right on this problem! Though I differ now on the solution... Sorry, can't help it, it seems :)

Ephemeral Ports

Ephemeral ports are a range of IP ports assigned for use by unprivileged processes, usually a few thousand ports starting above 1024. The administrator can change the range through a sysctl variable (net.ipv4.ip_local_port_range on Linux.) As you probably know, transport protocols in the TCP/IP suite use ports, and when a program wants to initiate communication with these protocols it needs a local address; if one isn't assigned explicitly, the OS assigns one automatically. The problem lies in the lookup of available ports.

The operating system tracks the port numbers in use and also the ones used recently (to know how to handle leftover incoming network packets.) In Linux this is done with a hash table. The TCP/IP networking code in the Linux kernel is quite complicated, lacks documentation or comments, and makes it hard to track what is defined where. After many days banging my head against the crude code, I finally got it. Random posts on the internet and a side comment in a standards draft said the ephemeral range is shared across all local addresses on most operating systems. I wanted to know where, how, and if possible, why. So far I only got the first two, and only hints of the last.

Ephemeral Port Assignment

The pattern to create a TCP socket for client software is to call:
int sock_fd = socket(AF_INET, SOCK_STREAM, 0);
This will make a socket of the TCP/IP family, of stream type (connected), with the default protocol ("0" selects TCP here.) After this you can manually assign it a local address by calling:
bind(sock_fd, local_addr, local_addr_length);
That address should contain both the IP address and the port. If the port specified is 0, the kernel looks for an available port in the ephemeral range. After this you make the actual connection to the server with:
connect(sock_fd, destination_addr, destination_addr_length);
If the bind step was omitted, the kernel's connect code does a similar, but slightly different, lookup of available ports. Let's compare both lookups.
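
As a side sketch (mine, not from the kernel), here is the whole pattern in a runnable program that prints the ephemeral port the kernel picked; getsockname is the standard call to read back the assigned address:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int sock_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (sock_fd == -1) {
        perror("socket");
        return 1;
    }

    /* Bind to port 0: the kernel picks an ephemeral port now. */
    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(0);
    if (bind(sock_fd, (struct sockaddr *)&local, sizeof(local)) == -1) {
        perror("bind");
        return 1;
    }

    /* Read back the address the kernel assigned. */
    socklen_t len = sizeof(local);
    getsockname(sock_fd, (struct sockaddr *)&local, &len);
    printf("ephemeral port assigned: %d\n", ntohs(local.sin_port));

    close(sock_fd);
    return 0;
}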

The bind lookup algorithm resides in net/ipv4/inet_connection_sock.c's function inet_csk_get_port():
/* Obtain a reference to a local port for the given sock,
 * if snum is zero it means select any available local port.
 */
int inet_csk_get_port(struct sock *sk, unsigned short snum)
{
    /* ... */
    if (!snum) {
        int remaining, rover, low, high;

        inet_get_local_port_range(&low, &high);
        remaining = (high - low) + 1;
        rover = net_random() % remaining + low;

        do {
            head = &hashinfo->bhash[inet_bhashfn(net, rover,
                    hashinfo->bhash_size)];
            spin_lock(&head->lock);
            inet_bind_bucket_for_each(tb, node, &head->chain)
                if (tb->ib_net == net && tb->port == rover)
                    goto next;
            break;
        next:
            spin_unlock(&head->lock);
            if (++rover > high)
                rover = low;
        } while (--remaining > 0);

        /* Exhausted local port range during search? It is not
         * possible for us to be holding one of the bind hash
         * locks if this test triggers, because if 'remaining'
         * drops to zero, we broke out of the do/while loop at
         * the top level, not from the 'break;' statement.
         */
        ret = 1;
        if (remaining <= 0)
            goto fail;

        /* OK, here is the one we will use. HEAD is
         * non-NULL and we hold it's mutex.
         */
        snum = rover;
    } else {
When snum is 0, it looks for an available bucket in the hash table, but if there is anything in it (any socket using that port, or one recently closed) it keeps looking. If the search hits the end, the function fails. Note there is no use of the local IP address in the hash table! The net thing passed isn't for that; the hash table only cares about port numbers. In contrast, the port lookup on connect in net/ipv4/inet_hashtables.c does:
int __inet_hash_connect(struct inet_timewait_death_row *death_row,
        struct sock *sk, u32 port_offset,
        int (*check_established)(struct inet_timewait_death_row *,
            struct sock *, __u16, struct inet_timewait_sock **),
        void (*hash)(struct sock *sk))
{
    /* ... */
    if (!snum) {
        int i, remaining, low, high, port;
        static u32 hint;
        u32 offset = hint + port_offset;
        struct hlist_node *node;
        struct inet_timewait_sock *tw = NULL;

        inet_get_local_port_range(&low, &high);
        remaining = (high - low) + 1;

        local_bh_disable();
        for (i = 1; i <= remaining; i++) {
            port = low + (i + offset) % remaining;
            head = &hinfo->bhash[inet_bhashfn(net, port,
                    hinfo->bhash_size)];
            spin_lock(&head->lock);

            /* Does not bother with rcv_saddr checks,
             * because the established check is already
             * unique enough.
             */
            inet_bind_bucket_for_each(tb, node, &head->chain) {
                if (tb->ib_net == net && tb->port == port) {
                    WARN_ON(hlist_empty(&tb->owners));
                    if (tb->fastreuse >= 0)
                        goto next_port;
                    if (!check_established(death_row, sk,
                            port, &tw))
                        goto ok;
                    goto next_port;
                }
            }

            tb = inet_bind_bucket_create(hinfo->bind_bucket_cachep,
                    net, head, port);
            if (!tb) {
                spin_unlock(&head->lock);
                break;
            }
            tb->fastreuse = -1;
            goto ok;

        next_port:
            spin_unlock(&head->lock);
        }
        local_bh_enable();

        return -EADDRNOTAVAIL;

ok:
    /* ... */
The algorithm is quite similar, but if the hash table bucket for the port is in use it calls check_established() to perform further checks:

/* called with local bh disabled */
static int __inet_check_established(struct inet_timewait_death_row *death_row,
        struct sock *sk, __u16 lport,
        struct inet_timewait_sock **twp)
{
    /* ... */
    /* Check TIME-WAIT sockets first. */
    sk_for_each(sk2, node, &head->twchain) {
        tw = inet_twsk(sk2);

        if (INET_TW_MATCH(sk2, net, hash, acookie,
                saddr, daddr, ports, dif)) {
            if (twsk_unique(sk, sk2, twp))
                goto unique;
            else
                goto not_unique;
        }
    }
    tw = NULL;

    /* And established part... */
    sk_for_each(sk2, node, &head->chain) {
        if (INET_MATCH(sk2, net, hash, acookie,
                saddr, daddr, ports, dif))
            goto not_unique;
    }

unique:
    /* Must record num and sport now. Otherwise we will see
     * in hash table socket with a funny identity. */
    inet->num = lport;
    inet->sport = htons(lport);
    sk->sk_hash = hash;
    WARN_ON(!sk_unhashed(sk));
    __sk_add_node(sk, &head->chain);
    sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
    write_unlock(lock);

    if (twp) {
        *twp = tw;
        NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED);
    } else if (tw) {
        /* Silly. Should hash-dance instead... */
        inet_twsk_deschedule(tw, death_row);
        NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED);

        inet_twsk_put(tw);
    }

    return 0;

not_unique:
    write_unlock(lock);
    return -EADDRNOTAVAIL;
}
This allows reusing the same local port as long as the 5-tuple (protocol, source address, source port, destination address, destination port) doesn't already exist (the INET_MATCH call.)

Catch 22

So there is a dilemma in how to create more client TCP sockets than the number of available ephemeral ports (call them n_sockets and n_ephemeral.)
  • Increasing n_sockets by using multiple source IP addresses (the RJ approach) won't work with bind, because the lookup of available ephemeral ports fails regardless of the source address.
  • If you make just a connect call you are still limited to n_ephemeral, because the lookup isn't for the (IP, port) pair; it's just a lookup of the port (as noted in the kernel comment above.)
[Note: there is no way to do an incomplete bind of only the IP address part, leaving the port to be assigned later.]

Faced with this, RJ offered a patch to libevent to do it the way httperf does: binding both local address and port. This means the client code has to do the port allocation lookup itself, and if not carefully managed it turns into an incredible number of retried bind() calls. In my opinion this is hackish and ugly. It's not their fault; they were cornered by poor implementations and poor interfaces. In RJ's case libevent always calls bind before connect, so as it is there isn't even a chance to do it right.

Also, I didn't like the idea of having to bother the user to add more local addresses and having to pass them to the client program.

My $.02

As a programmer, one way to allow this many connections to a server from a single host is to instead increase the number of ports the server listens on. This is very common, trivial to do, and scales very well (n_ephemeral times the number of server ports.) The only limitation is if there is a firewall or some other kind of filter in the way, which is quite unlikely. In this particular case it requires a modification to libevent, to prevent it from calling bind() before connect when no local address is specified (for client code.) This is in effect a four line patch with no change to the libevent API (RJ's diff adds another API function and is about 16 lines):
--- http.c      2008-09-08 01:11:13.000000000 +0100
+++ http.c.new  2008-11-13 02:09:12.000000000 +0000
@@ -1731,7 +1731,10 @@
 	assert(!(evcon->flags & EVHTTP_CON_INCOMING));
 	evcon->flags |= EVHTTP_CON_OUTGOING;
 
-	evcon->fd = bind_socket(evcon->bind_address, 0 /*port*/, 0 /*reuse*/);
+	if (evcon->bind_address)
+		evcon->fd = bind_socket(evcon->bind_address, 0 /*port*/, 0 /*reuse*/);
+	else
+		evcon->fd = socket(AF_INET, SOCK_STREAM, 0); /* generic socket */
 	if (evcon->fd == -1) {
 		event_debug(("%s: failed to bind to \"%s\"",
 			__func__, evcon->bind_address));
(Yes, I already mailed Niels a few days ago about it. But as usual, he'll probably have a better way to do it. Hi Niels ;)

Trying to Make Sense of the Kernel Algorithm

Why are ephemeral ports searched this way? Why is bind() so strict? Well, at that point:
  • The kernel only knows it is a TCP socket.
  • It doesn't know if it is going to be a client or server (listen) socket.
  • And even if it knew it is a client, it wouldn't know yet the destination address and port.
Some peripheral comments on the subject on the Linux kernel mailing list mention issues with strange things like double connects (valid in TCP.) I am still not convinced this isn't just an archaic lookup that doesn't consider the local address. The issue is discussed by Fernando Gont (a fellow UTN-er, what a coincidence) in his IETF draft, written earlier this year (February 2008), I guess to work out the issues with port prediction (like Dan Kaminsky's DNS bug.) Very interesting read.

Extra


Here is some code to play with:
/*
Copyright (C) 2008 Alejo Sanchez

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/resource.h>
#include <errno.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

const char *addrs[] = { "127.0.0.1", "127.0.0.2" };

int
main(int argc, char **argv)
{
    struct rlimit rl;       /* to bump up system limits for this process */
    int *sockets;           /* ptr to array of sockets */
    int nsockets = 120000;
    int c, i;

    while ((c = getopt(argc, argv, "n:")) != -1) {
        switch (c) {
        case 'n':
            nsockets = atoi(optarg);
            break;
        default:
            fprintf(stderr, "Illegal argument \"%c\"\n", c);
            exit(1);
        }
    }

    rl.rlim_cur = rl.rlim_max = nsockets + 10;
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("setrlimit");
        exit(1);
    }

    if ((sockets = (int *) malloc(nsockets * sizeof(int))) == NULL) {
        perror("malloc");
        exit(1);
    }

    for (i = 0; i < nsockets; i++) {
#ifdef BIND_ONLY
        struct addrinfo *aitop, ai_hints = { .ai_family = AF_INET,
            .ai_socktype = SOCK_STREAM, .ai_flags = AI_PASSIVE };
        const char *addr = addrs[i % (sizeof(addrs) / sizeof(addrs[0]))];
        const char *portstr = "0";  /* port 0: kernel picks ephemeral */

        getaddrinfo(addr, portstr, &ai_hints, &aitop);

        sockets[i] = socket(AF_INET, SOCK_STREAM, 0);

        if (bind(sockets[i], aitop->ai_addr, aitop->ai_addrlen) == -1) {
            fprintf(stderr, "Error binding %s, for %s : %s\n",
                strerror(errno), addr, portstr);
        } else
            fprintf(stderr, "ok addr: %s, i: %d\n", addr, i);
#else
        struct addrinfo *aitop, ai_hints = { .ai_family = AF_INET,
            .ai_socktype = SOCK_STREAM, .ai_flags = AI_PASSIVE };
        char portstr[20];

        snprintf(portstr, sizeof(portstr), "%d", 8080 + (i % 4));
        getaddrinfo("127.0.0.1", portstr, &ai_hints, &aitop); /* dst */
        sockets[i] = socket(AF_INET, SOCK_STREAM, 0);

        if (connect(sockets[i], aitop->ai_addr, aitop->ai_addrlen) == -1) {
            fprintf(stderr, "Error connecting %s, for port %s\n",
                strerror(errno), portstr);
        }
#endif

        freeaddrinfo(aitop);    /* was free(); addrinfo needs freeaddrinfo() */
    }

    printf("%i sockets created, check memory. Sleeping 10 sec.\n", i);
    sleep(10);

    exit(0);
}

Thursday, 23 October 2008

A Gazillion-user Comet Server With libevent, Part 0

Abstract
After reading an inspiring saga about building a Comet server with Erlang and Mochiweb, I inadvertently snowballed into making my own full-blown challenger using C and libevent. The results hint at an order-of-magnitude performance increase compared to state-of-the-art open source servers. Comet is very important for Web 2.0 services: it reduces the number of requests from clients to the backend and brings real-time updates. This is a description of the many frustrations and achievements of this project. It will be posted in 4 installments, from 0 to 3.

Updates: Typos, spell check, thanks Stef/Nico.

Introduction

A recent post by Richard Jones (of last.fm fame) inspired me to start a Comet server that scales well, reviving old-school skills. In his post A Million-user Comet Application With Mochiweb, Part 1, he presents a (mockup) prototype Erlang HTTP server. The goal of that project is to build a functional Comet server.
Gazillion (n.), informal: an indefinitely large number.
There is no working prototype in this first introductory installment, hence the name "Part 0." But there's plenty of code, don't despair.

The Comet Problem

Since Comet is a push technology, most solutions rely on keeping an HTTP connection open, because the server can't connect back to clients. It's a type of subscription model with some hacks on top. Current open source Comet servers can handle 10,000 to 20,000 simultaneous connections on a stock server. Most are written in Java, Python, and Erlang. In the same article the developers of Liberator, a closed source commercial server (C or C-something, I guess), claim to sustain up to a million client updates per second for 10,000 clients. Their site expands on this, hinting it runs a daemon per core with client-side (browser/JavaScript) load balancing. All these figures were reported by the projects' own developers; I couldn't find any independent benchmark. But they really sound like a good crowd, so I can take their word for it and you should too.

The scalability problem of AJAX, and now Comet, is a major obstacle for the adoption of web technologies. Imagine the dialog between a JavaScript mail application and a server, with the client polling every X seconds:
Client: "Is there anything new?"
Server: "Not yet..."
Client: "Now?"
Server: "No..."
Client: "Are we there yet"
Server: "@#%@$^!" (HTTP 503 Service Unavailable)
Comet fixes that but pays the price of open connections. Word on the street is around 50KB per open connection for Java/Python with careful programming, and don't even think of allocating that many objects to write to the wire. Garbage collection optimization can become your own private horror story.

So after all that introduction, this is my own multipart presentation of a (mockup) prototype. It will be an (ANSI/POSIX) C library and server using the fantastic libevent library (hi Niels!) and the popular libpcre regular expression library (more in later posts.) The goal is to crash the cool crowd's party and show some old-school moves.

Among the many observations RJ makes in his first installment, he mentions:
The resident size of the mochiweb beam process with 10,000 active connections was 450MB - that’s 45KB per connection. CPU utilization on the machine was practically nothing, as expected.
(Edit: In his second post he takes those numbers down to 8KB per user by tuning memory management. That is still about 8 GB for 1M users, without counting system resources!)

Scalability: Some Ballpark Math

To have an idea what to expect we need some ballpark calculations. These crude numbers affect any kind of approach, because they come from the operating system side. A starting point is finding out what happens to any given program when many sockets are connected. With this little program we can see:
/*
Copyright (C) 2008 Alejo Sanchez
(Inspired on bench.c by Niels Provos)

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <sys/types.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
    struct rlimit rl;   /* to bump up system limits for this process */
    int *pipes;         /* pipe (pairs) memory block */
    int *cp;            /* traverse pipes */
    int npipes;
    int i, c;

    npipes = 100000;    /* default */
    while ((c = getopt(argc, argv, "n:")) != -1) {
        switch (c) {
        case 'n':
            npipes = atoi(optarg);
            break;
        default:
            fprintf(stderr, "Illegal argument \"%c\"\n", c);
            exit(1);
        }
    }

    /* set the limit after option parsing, so the default count
     * gets a limit too (originally rl was only set under -n) */
    rl.rlim_cur = rl.rlim_max = npipes * 2 + 20;
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("setrlimit");
        exit(1);
    }

    if ((pipes = (int *) malloc(npipes * 2 * sizeof(int))) == NULL) {
        perror("malloc");
        exit(1);
    }

    for (cp = pipes, i = 0; i < npipes; i++, cp += 2) {
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, cp) == -1) {
            perror("pipe");
            exit(1);
        }
    }

    printf("%i socket pairs created, check memory. Sleeping 1 sec.\n", i);
    sleep(1);

    exit(0);
}
A test with 200,000 sockets (note: that's 100,000 pairs) showed a process size of 2MB, so far so good. But the command free showed about 210MB less free memory. You might think it's buffers and cache, but those numbers didn't move. Repeated tests gave very similar numbers, correlated with the number of sockets created. The output of free wasn't useful, same with top. A bit of investigation showed these changes in /proc/meminfo (first dump taken with the sockets connected, second after the program exited):
MemTotal:      2041864 kB
MemFree:       1007248 kB
Buffers:         57744 kB
Cached:         400772 kB
[13 uninteresting lines]
Slab:           257196 kB
SReclaimable:   136784 kB
SUnreclaim:     120412 kB

MemTotal:      2041864 kB
MemFree:       1225060 kB
Buffers:         57744 kB
Cached:         400772 kB
[13 uninteresting lines]
Slab:            40612 kB
SReclaimable:    34020 kB
SUnreclaim:       6592 kB
The difference is about 217MB, that is around 1KB per connected socket. The Linux kernel takes a large amount of memory for connected sockets, it seems: memory initialized and ready for those sockets, but not yet in use. There is a good writeup about the slab allocator.

OS X Crashing, and Linux too

The operating system imposes limits on the number of open files. These and other limits can be modified by editing /etc/sysctl.conf on both Linux and OS X. The most important for our tests is fs.file-max (kern.maxfiles on OS X), as it controls the global maximum of open files (sockets included.) On Linux there is also a per-user limit, set in /etc/security/limits.conf:
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#domain type item value
alecco hard nofile 1001000

# End of file
To reload the configuration run sysctl -p, and log in again for the limits.conf changes to take effect. Then either the program has to raise its soft limits with setrlimit, or run ulimit -n unlimited in the shell before invoking the program.

Trying to make a million connected pipes, both OS X and Linux froze hard. I'm probably missing something here, as RJ claims he got his prototype to do it (according to the post title.) The maximum my Linux could handle was 400,000, and at those numbers some programs start to get killed.

I couldn't find any configuration for the behaviour of the slab allocator. There should be a way to prevent it from eating so much memory, IMHO. But there still isn't clear evidence it is related to the crashes. Anyway, this is a fixable environment limit; the code clearly can scale, as it never gets over 25MB. When RJ explains a bit more, perhaps this will turn out to be a non-issue.

With libevent and Linux the scalability of the building blocks should be O(log n), as their benchmarks show. To get a more realistic number, a test with libevent's HTTP support was needed. In about an hour I wrote a simple 137-line server. To attack it, what better than Apache's ab? Resources now jumped to a maximum of 21MB resident (25MB virtual) for 200,000 working connections, but once again the OS was showing ~450MB of extra memory used (400,000 connected sockets, as ab was running locally.) But, lo and behold, the thingie was starting to take shape.
  • With 10,000 parallel clients hammering it, the server could answer 44,000 requests per second (12,000 on OS X.)
  • For 10,000 parallel clients with a reconnect per request it was still high at 18,000 requests per second!
Not bad for a notebook, huh? On a single CPU core! The numbers were practically the same across repeated tests. Furthermore, libevent's HTTP code does memory allocation all over the place and still behaved much better than I expected (my hopes were for something around 10,000 req./sec.) Here's the code:

/*
Comet-c, a high performance Comet server.
Copyright (C) 2008 Alejo Sanchez

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
*/

#include <sys/types.h>

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#include <event.h>
#include <evhttp.h>

int debug = 0;

void
generic_request_handler(struct evhttp_request *req, void *arg)
{
    struct evbuffer *evb = evbuffer_new();

    /*
     * XXX add here code for managing non-subscription requests
     */
    if (debug)
        fprintf(stderr, "Request for %s from %s\n", req->uri, req->remote_host);

    evbuffer_add_printf(evb, "blah blah");
    evhttp_send_reply(req, HTTP_OK, "Hello", evb);
    evbuffer_free(evb);
    return;
}

/*
 * Bayeux /meta handler
 */

void
bayeux_meta_handler_cb(struct evhttp_request *req, void *arg)
{
    struct evbuffer *evb = evbuffer_new();

    if (debug)
        fprintf(stderr, "Request for %s from %s\n", req->uri, req->remote_host);
    /*
     * XXX add here code for managing non-subscription requests
     */
    evbuffer_add_printf(evb, "blah blah");
    evhttp_send_reply(req, HTTP_OK, "Hello", evb);
    evbuffer_free(evb);
    return;
}

void
usage(const char *progname)
{
    fprintf(stderr,
        "%s: [-B] [-d] [-p port] [-l addr]\n"
        "\t -B      enable Bayeux support (on)\n"
        "\t -d      enable debug (off)\n"
        "\t -l addr local address to bind comet server on (127.0.0.1)\n"
        "\t -p port port number to create comet server on (8080)\n"
        "\t (C) Alejo Sanchez - AGPL\n",
        progname);
}

int
main(int argc, char **argv)
{
    extern char *optarg;
    extern int optind;
    short http_port = 8080;
    char *http_addr = "127.0.0.1";
    struct evhttp *http_server = NULL;
    int c;
    int bayeux = 1;

    /* -d takes no argument, so it must not be "d:" in the option string */
    while ((c = getopt(argc, argv, "Bdp:l:")) != -1)
        switch (c) {
        case 'B':
            bayeux++;
            break;
        case 'd':
            debug++;
            break;
        case 'p':
            http_port = atoi(optarg);
            if (http_port == 0) {
                usage(argv[0]);
                exit(1);
            }
            break;
        case 'l':
            http_addr = optarg;
            break;
        default:
            usage(argv[0]);
            exit(1);
        }
    argc -= optind;
    argv += optind;

    /* init libevent */
    event_init();

    http_server = evhttp_start(http_addr, http_port);
    if (http_server == NULL) {
        fprintf(stderr, "Error starting comet server on port %d\n",
            http_port);
        exit(1);
    }

    /* XXX bayeux /meta handler */
    if (bayeux)
        evhttp_set_cb(http_server, "/meta", bayeux_meta_handler_cb, NULL);

    /* XXX default handler */
    evhttp_set_gencb(http_server, generic_request_handler, NULL);

    fprintf(stderr, "Comet server started on port %d\n", http_port);
    event_dispatch(); /* Brooom, brooom */

    exit(0); /* UNREACHED ? */
}

A Comet version of the server would surely improve on those numbers, as the client doesn't need to poll the server; each request is mostly server-side writes.

So that was a nice mockup, but it's not a working prototype yet. A prototype would manage registrations of clients to channels, perhaps using a standard transport protocol, and do a little bit of this and that.

A report on the state of the art of Comet servers shows the most popular transport is Bayeux. So this prototype can't skip that.

So I should just plug in one of the JSON C parsers and it should be OK, right? Wrong again. Richard Dawkins described this kind of situation:
[So, programming was] a classic addiction: prolonged frustration, occasionally rewarded by a briefly glowing fix of achievement. It was that pernicious "just one more push to see what's over the next mountain and then I'll call it a day" syndrome. It was a lonely vice, interfering with sleeping, eating, useful work and healthy human intercourse. I'm glad it's over and I won't start up again. Except ... perhaps one day, just a little ...
Let's just say those JSON implementations didn't live up to my expectations. But what a time waster...

So, if we got this far, let's make a little Bayeux parser! How bad could it be?

Coming up: first prototype, trying to do decent parsing in C without killing performance (right), more analysis of the original saga by Richard Jones, and, I hope, please, some sleep... But, well, I'm typing and it's just a matter of alt-tab, so perhaps, let's see just a little bit more... Just the one...

Alecco

PS: Sorry about posting the full licenses; it's mostly for the lack-of-warranty part.

Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.