Why does setting SO_SNDBUF and SO_RCVBUF destroy performance?

  c++, networking, performance, sockets

Running in Docker on macOS, I have a simple server/client setup that measures how fast the client can allocate data and send it to the server. The tests run over loopback (both ends in the same Docker container), with a message size of 1,000,000 bytes.

When I explicitly set SO_RCVBUF and SO_SNDBUF to their respective default values, performance roughly halves.

SO_RCVBUF defaults to 65536 and SO_SNDBUF defaults to 1313280 (values retrieved by calling getsockopt and dividing by 2, since the kernel reports double the configured size).

Tests:

  • When I set neither buffer size, I get about 7 Gb/s throughput.
  • When I set one buffer or the other to the default (or higher), I get 3.5 Gb/s.
  • When I set both buffer sizes to the default, I get 2.5 Gb/s.

Server code: (cs is an accepted stream socket)

void tcp_rr(int cs, uint64_t& processed) {
    /* Removing this entire block improves performance */
    if (setsockopt(cs, SOL_SOCKET, SO_RCVBUF, &ENV.recv_buf, sizeof(ENV.recv_buf)) == -1) {
        perror("RCVBUF failure");
        return;
    }
    char *buf = (char *)malloc(ENV.msg_size);
    while (true) {
        int recved = 0;
        while (recved < ENV.msg_size) {
            int recvret = recv(cs, buf + recved, ENV.msg_size - recved, 0);
            if (recvret <= 0) {
                if (recvret < 0) {
                    perror("Recv error");
                }
                free(buf); /* was leaked on this early return */
                return;
            }
            processed += recvret;
            recved += recvret;
        }
    }
    /* unreachable: the loop only exits via the return above */
}

Client code: (s is a connected stream socket)

void tcp_rr(int s, uint64_t& processed, BenchStats& stats) {
    /* Removing this entire block improves performance */
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &ENV.send_buf, sizeof(ENV.send_buf)) == -1) {
        perror("SNDBUF failure");
        return;
    }
    char *buf = (char *)malloc(ENV.msg_size);
    while (stats.elapsed_millis() < TEST_TIME_MILLIS) {
        int sent = 0;
        while (sent < ENV.msg_size) {
            int sendret = send(s, buf + sent, ENV.msg_size - sent, 0);
            if (sendret <= 0) {
                if (sendret < 0) {
                    perror("Send error");
                }
                free(buf); /* was leaked on this early return */
                return;
            }
            processed += sendret;
            sent += sendret;
        }
    }
    free(buf);
}

Source: Windows Questions C++
