Java vs C Network Programming. Binary Protocols and Vectorized IO
2021-06-01 | by David "DeMO" Martínez Oliveira

So far we have been using text strings as the basis for our protocols, but sometimes you may be interested in using binary protocols. Overall, a binary protocol is more efficient and depending on your application may be the right choice. However, binary protocols are not that straightforward and have their own peculiarities.

You may be wondering what this guy means by "binary protocol". Let's quickly clarify this before we go straight into the details.

Imagine that you need to design a protocol to live-stream the samples of an ADC (some data acquisition system). Imagine that the ADC has a 16-bit resolution and the HW has been configured such that the whole dynamic range of the ADC will be used. That is, the ADC nodes will be sending values from 0 to 65535. Let's suppose that the physical process being monitored actually uses the whole dynamic range, or, if you prefer, that we will get values uniformly distributed in that range... Otherwise we could use some kind of incremental encoding or compression technique and reduce the number of bits we need to transfer per sample.

So, in this case, if we use a text-based protocol, our values will range from 0 (that is just one character) to 65535 (that is 5 characters)... On average, the length of our message will be:

l = 1 * (numbers with 1 digit)/total + 2 * (numbers with 2 digits)/total + ... + 5 * (numbers with 5 digits)/total

Let's work this out. We have 10 numbers (0 to 9) that require a single character. Then we have 90 numbers (10-99) that require 2 characters, and the numbers from 100 to 999 use 3 characters (that is 900 numbers). Following this:

1 digit -> 10 (0-9)
2 digits -> 90 (10-99)
3 digits -> 900 (100-999)
4 digits -> 9000 (1000-9999)
5 digits -> 55536 (10000-65535)

Now we can calculate the average length of the message (assuming the values are uniformly distributed):

1.0*(10.0/65536.0) + 2.0*(90.0/65536.0) + 3.0*(900.0/65536.0) + 4.0*(9000.0/65536.0) + 5.0*(55536.0/65536.0)

The average length of the message comes out to roughly 4.83 bytes. This makes sense, as we have way more 5-digit numbers than any of the others.
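
If you prefer to let the machine do the arithmetic, a throwaway snippet like this one (just a sanity check, not part of the protocol code) reproduces the figure:

#include <stdio.h>

int main () {
  // Numbers with 1, 2, 3, 4 and 5 digits in the range 0..65535
  double counts[5] = { 10, 90, 900, 9000, 55536 };
  double total = 65536.0, avg = 0.0;
  int    i;

  for (i = 0; i < 5; i++)
    avg += (i + 1) * (counts[i] / total);

  printf ("Average length: %.2f characters\n", avg); // Prints 4.83
  return 0;
}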

On the other hand, if we encode the numbers in binary, we just need two bytes to represent any of the values above. It is true that we will also use two bytes for the 10 numbers that need just 1 character in our text protocol... but all the other 65526 values will be encoded with just 2 bytes.

Note: we could have used a text-based protocol but encoded our numbers in hexadecimal, for instance. In that case, our values would run from 0x00 to 0xFFFF and we would just need 4 characters in the worst case (we can drop the 0x if all our values are hex). The average message length would be lower (feel free to repeat the calculation above; it comes out at about 3.9 characters), but still larger than the binary case.

So this is the main advantage of using a binary protocol: compactness. Another advantage is that data type conversions will be easier and, therefore, our program will run faster...

Before jumping into the code, let's quickly go through the points that make binary protocols tricky. If you are thinking that binary protocols are damn simple, then it is very likely that you are doing them wrong... I know because I used to think that :).

Data type sizes

At first, you may consider using C structures and read and write to move data from the network into memory and vice versa. That usually works fine when all your network nodes are the same kind of machine and all your code is compiled with the same compiler (in the most general case)... But if not... it will just not work. Let's see why.

One of the problems with binary protocols is data type sizes. Different processors, even from the same family, may have different word sizes and, therefore, the same data type may need more or less space in memory. For instance, a long on a typical 32-bit machine is 4 bytes while on a 64-bit GNU/Linux machine it is 8 bytes. Independently of the actual processor architecture, languages like Java also set their own data type sizes, which may not match a native application written in C.

To solve this problem, especially when programming in C (where the processor type may have an impact), we can use the stdint.h header that defines different integer types based on their actual physical size. Using this header we can declare a uint16_t value that is guaranteed to be an unsigned 16-bit integer on any platform where we compile our program.
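
As a quick illustration, compare the generic types with the fixed-width ones (the int and long figures depend on your platform; the stdint.h ones do not):

#include <stdio.h>
#include <stdint.h>

int main () {
  // Generic types: sizes depend on the platform and compiler
  printf ("int     : %zu bytes\n", sizeof (int));
  printf ("long    : %zu bytes\n", sizeof (long));

  // Fixed-width types from stdint.h: the same size everywhere
  printf ("uint16_t: %zu bytes\n", sizeof (uint16_t)); // Always 2
  printf ("uint32_t: %zu bytes\n", sizeof (uint32_t)); // Always 4
  return 0;
}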

Endianness

Another problem is endianness. When working with the same family of processors, and once you have taken care of the size of your data, this is not a problem, and sometimes people overlook it because things just work. However, different processor families may have different endianness, and that may lead to reading completely wrong data... Imagine the number 0x1234 sent over the network as a 0x12 followed by a 0x34... If we just read those bytes directly into an integer value (using read, for instance), a big-endian machine will read the number 0x1234 but a little-endian machine will read 0x3412 (or is it the other way around?... it doesn't really matter: one of the machines will get the wrong value).

To solve this problem, we have had helper functions available since the beginning of the Internet. The htonXX functions allow us to convert the host representation of type XX to the so-called network representation. On the receiving end we can use the ntohXX functions to do the reverse conversion. These functions take care of doing the right thing on each machine.
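
A minimal example of these functions in action:

#include <stdio.h>
#include <arpa/inet.h>

int main () {
  uint16_t host = 0x1234;
  uint16_t net  = htons (host);  // Host order to network (big-endian) order

  // After htons, memory holds 12 34 on any machine, big or little endian
  unsigned char *p = (unsigned char *)&net;
  printf ("On the wire     : %02x %02x\n", p[0], p[1]);

  printf ("Back on the host: 0x%04x\n", ntohs (net)); // 0x1234 again
  return 0;
}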

Structure Padding

The last problem is (actually, may be, depending on what you are doing) padding. When you declare a struct in C, the data does not lie in memory exactly as you've defined it in your program. The compiler will add padding bytes here and there in order to properly align the different fields of the structure in memory and, therefore, optimise the way the processor accesses that data.

The main issue with this is that the way this padding is done may change with the support of new processors or even with future compiler versions. Adding a few bytes between two fields of your structure will end up in completely corrupted data read from the network. As said, when all your machines are the same and all your programs are compiled with the same compiler, you are fine (your program will just be sending some extra zeros on the wire)... but whenever any of that changes... the system will just break badly.

This is maybe a bit harder to understand. Let's write a small program to illustrate what we are talking about:

#include <stdio.h>
#include <stddef.h>

struct type1_t {
  int  val1;
  char val2;
  long val3;
  char val4;
};

int main () {
  printf ("Size of type1 : %zu\n", sizeof (struct type1_t));
  printf ("Offset to val1: %zu\n", offsetof(struct type1_t, val1));
  printf ("Offset to val2: %zu\n", offsetof(struct type1_t, val2));
  printf ("Offset to val3: %zu\n", offsetof(struct type1_t, val3));
  printf ("Offset to val4: %zu\n", offsetof(struct type1_t, val4));
  return 0;
}

Now let's compile this into a 64-bit and a 32-bit binary:

$ gcc -o structs.64 structs.c
$ gcc -m32 -o structs.32 structs.c
$ ./structs.64
Size of type1 : 24
Offset to val1: 0
Offset to val2: 4
Offset to val3: 8
Offset to val4: 16
$ ./structs.32
Size of type1 : 16
Offset to val1: 0
Offset to val2: 4
Offset to val3: 8
Offset to val4: 12

You can clearly see how the compiler adds different padding bytes in the middle of the structure, which results in different sizes. This is also due to the fact that we are using generic types that are different for 32 bits and 64 bits. Using the types provided by stdint.h will circumvent most of these issues, except when we use pointers.

To further explore the consequences of padding (you may have similar issues when reading file formats), you can use clang with the -Wpadded flag. This will inform you about the padding inserted by the compiler:

$ clang -Wpadded -o structs.clang structs.c
structs.c:8:8: warning: padding struct 'struct type1_t' with 3 bytes to align 'val3' [-Wpadded]
  long val3;
       ^
structs.c:5:8: warning: padding size of 'struct type1_t' with 7 bytes to alignment boundary [-Wpadded]
struct type1_t {
       ^
2 warnings generated.

To solve this problem you will need to manually craft your binary structures to make sure that, on the wire, all bytes and bits fit in the right position, so the other end will know where to find them and how to interpret their values.

Alternatively, you can also pad your structures manually, or use some of the compiler directives to ask the compiler to pack the structures in memory. However, going this way will make your code depend on the compiler and it may break in the future when new versions of the compiler are released... Also note that, independently of being able to read your data directly into a memory structure, you will need to deal with the endianness issue anyway and transform some of the data after reading.
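
For reference, this is what the compiler-specific route looks like with GCC or Clang (a sketch only; MSVC, for instance, uses #pragma pack instead):

#include <stdio.h>

// GCC/Clang-specific: the packed attribute removes all padding
struct type1_packed_t {
  int  val1;
  char val2;
  long val3;
  char val4;
} __attribute__((packed));

int main () {
  // 4 + 1 + 8 + 1 = 14 bytes on a 64-bit machine: no padding at all
  printf ("Size of packed type1: %zu\n", sizeof (struct type1_packed_t));
  return 0;
}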

TLV protocols

To illustrate all these issues we are going to implement a very simple TLV protocol. TLV stands for Type, Length and Value. It is a very simple and popular way to send data types in independent packets. Each packet contains a header with a type identifier (the T in TLV) that gives us the hint we need to properly read the value field, and also a length field (the L in TLV) that indicates how much data to expect in the value field (that is the V in TLV)... Think, for instance, of a string type.

Physically our TLV packet will look like this:

| HEADER                            | PAYLOAD         |
+----------------+------------------+-----------------+
| TYPE (2 bytes) | LENGTH (4 bytes) | Value (N Bytes) |
+----------------+------------------+-----------------+
| 00 01          | 02 03 04 05      | 06 ...          |
+----------------+------------------+-----------------+

We will use 2 bytes for the type (1 would likely be enough, but we do not have endianness issues with 1 byte), and 4 bytes for the length. So we can encode 65536 types of data and each value can be as big as 2^32 bytes long!!!

For our test programs we will define a structure to contain the information to transmit; however, when actually sending and receiving the data on the net, we will be manually packing the data into a raw byte buffer.

TLV implementation

For this example, I will write the functions to deal with the TLV packets in a separate module. This is the header file tlv.h:

#ifndef TLV_H
#define TLV_H

#include <stdint.h>
#include "channel.h"

#define TLV_HDR_SIZE  6

#define TLV_STRING    1
#define TLV_I8        2
#define TLV_I16       3
#define TLV_I32       4

// TLV Packet
typedef struct pkt_hdr_t {
  uint16_t      type;
  uint32_t      len;
  unsigned char val[0];
} PKT_HDR, PKT;

#ifdef __cplusplus
extern "C" {
#endif

  int pack_i16 (unsigned char *p, uint16_t v);
  int pack_i32 (unsigned char *p, uint32_t v);

  int unpack_i16 (unsigned char *p, uint16_t *v);
  int unpack_i32 (unsigned char *p, uint32_t *v);

  PKT *tlv_read (CHANNEL *c);
  int tlv_write(CHANNEL *c, PKT_HDR *pkt);

  // Helper functions
  PKT *tlv_str (char *str);

#ifdef __cplusplus
}
#endif

#endif

It is a very simple module. It defines functions to pack and unpack 16-bit and 32-bit integers (the only two sizes we use in the header) and two functions to read a TLV packet into our PKT structure and to send a PKT structure over the network using the right format. I have also added a helper function to create a PKT directly out of a string. This is convenient, as I will once again be implementing an echo server.
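
Before diving into the implementation, it helps to see a concrete packet on the wire. With the least-significant-byte-first packing we are about to write, a TLV_STRING packet carrying the two-character string "Hi" becomes these 8 bytes (a worked example, matching the code that follows):

01 00           <- TYPE  : TLV_STRING (1), low byte first
02 00 00 00     <- LENGTH: 2, low byte first
48 69           <- VALUE : 'H' 'i'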

Packing and Unpacking

The packing and unpacking functions are pretty simple. We just put each byte in order in a raw buffer.

int
pack_i16 (unsigned char *p, uint16_t v)
{
  if (!p) return -1;
  
  p[0] = v & 0xff;
  p[1] = (v >> 8) &0xff;
  
  return 2;
}

int
pack_i32 (unsigned char *p, uint32_t v)
{
  if (!p) return -1;
  
  p[0] = v         & 0xff;
  p[1] = (v >> 8)  & 0xff;
  p[2] = (v >> 16) & 0xff;
  p[3] = (v >> 24) & 0xff;
  
  return 4;
}

As you can see, I'm returning the number of bytes written into the buffer. This is convenient for writing multiple fields one after the other. The unpacking functions do exactly the opposite:

int unpack_i16 (unsigned char *p, uint16_t *v) {
  if (!p || !v) return -1;

  *v = p[0] + ( p[1] << 8 );
  return 2;
}

int unpack_i32 (unsigned char *p, uint32_t *v) {
  if (!p || !v) return -1;

  // Cast the top byte so the shift happens on an unsigned 32-bit value
  *v = p[0] + ( p[1] << 8 ) + (p[2] << 16) + ((uint32_t)p[3] << 24);
  return 4;
}

We take the bytes from the raw buffer read from the network and put each one in the right position. Note that regardless of whether the machine is little endian or big endian, the code will work the same, as the compiler will generate the right code to store the value in memory in the appropriate order.
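
As a quick standalone check of that claim, a round trip through these functions always recovers the original value, whatever the endianness of the machine running it:

#include <stdio.h>
#include <stdint.h>
#include "tlv.h"   // pack_i16 / unpack_i16 from the module above

int main () {
  unsigned char buf[2];
  uint16_t      in = 0x1234, out;

  pack_i16 (buf, in);      // buf holds 34 12 on every machine
  unpack_i16 (buf, &out);

  printf ("in: 0x%04x out: 0x%04x\n", in, out); // Always the same value
  return 0;
}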

Vectorised IO

Now we will see how to send a PKT structure using this simple protocol we are working with. I'll also take this opportunity to introduce the so-called vectorised IO, or scatter/gather IO, whichever you prefer. These primitives allow us to store the data we want to send in non-contiguous areas of memory, which is especially convenient with binary protocols where we usually have a header that contains the total length of the packet we want to send.

On GNU/Linux we can do this using the readv and writev system calls. They expect an array of struct iovec that basically defines multiple buffers in memory. The system calls will read data off the network and place it in the different buffers (after filling one, they move to the next one), or write data stored in different memory regions to the network interface.

The struct iovec has two fields: one indicates the size of the buffer and the other is a pointer to the memory containing the data.
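
On GNU/Linux it is declared in <sys/uio.h> and is essentially just this:

struct iovec {
  void   *iov_base;  // Pointer to the buffer holding (or receiving) the data
  size_t  iov_len;   // Size of that buffer in bytes
};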

Using this information, the tlv_write function will look like this:

int
tlv_write (CHANNEL *c, PKT_HDR *data)
{
  struct iovec  pkt[2];    // Vectorised array for writev
  unsigned char hdr[TLV_HDR_SIZE], *phdr=hdr; // On-The-Wire header buffer

  // Pack Header
  phdr += pack_i16 (phdr, data->type);
  phdr += pack_i32 (phdr, data->len);

  pkt[0].iov_len = TLV_HDR_SIZE;
  pkt[0].iov_base = hdr;

  pkt[1].iov_len = data->len;
  pkt[1].iov_base = data->val;

  return writev (c->s, pkt, 2);
}

The function is pretty simple. We just pack the TL data in the header and store it in one buffer, and then we put the data (the V) in a second buffer. writev will take care of sending all the data in a single call for us.

I wrote a similar tlv_read function using readv. Note that for reading it may be better to issue two reads: the first one reads the header and obtains the actual data length, so a properly sized buffer can be allocated to get the data on the second read. In this case I used readv just to illustrate how to use that system call. As you will see, the data buffer is 8 KB to accommodate big messages and it will just be filled with whatever information read currently has available. This is how it looks:

PKT* tlv_read (CHANNEL *c)
{
  PKT           *pkt;              // PKT object for easy returning of the info
  struct iovec  _pkt[2];
  unsigned char hdr[TLV_HDR_SIZE], *phdr=hdr; // On-The-Wire Header buffer
  unsigned char buf[8*1024];       // 8 KB maximal size

  _pkt[0].iov_len  = TLV_HDR_SIZE;
  _pkt[0].iov_base = hdr;   // Set buffer of 6 bytes to read header

  _pkt[1].iov_len = 8*1024;
  _pkt[1].iov_base = buf;   // Set buffer of 8 KB to read data
  int n = readv (c->s, _pkt, 2);

  if (n < TLV_HDR_SIZE) return NULL;  // Error, EOF or not even a full header

  // Allocate space to store the data in a struct (+1 for a NUL terminator)
  if ((pkt = malloc (sizeof(PKT_HDR) + (n - TLV_HDR_SIZE) + 1)) == NULL) return NULL;

  phdr += unpack_i16 (phdr, &pkt->type);  // Unpack header into struct
  phdr += unpack_i32 (phdr, &pkt->len);

  if (pkt->len > (uint32_t)(n - TLV_HDR_SIZE))  // Never copy more than was read
    pkt->len = n - TLV_HDR_SIZE;

  memcpy (pkt->val, buf, pkt->len); // Copy data right after the header fields
  pkt->val[pkt->len] = 0;           // NUL-terminate so strings can be printed as-is

  return pkt;
}

The tlv_read function allocates memory for the PKT, which needs to be freed by the higher-level code that makes use of the function.
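
For completeness, this is a sketch of the two-read alternative mentioned above (tlv_read2 is just a name I made up for the illustration; a production version would also loop until all the expected bytes have arrived):

PKT* tlv_read2 (CHANNEL *c)
{
  unsigned char hdr[TLV_HDR_SIZE], *phdr = hdr;
  uint16_t      type;
  uint32_t      len;
  PKT           *pkt;

  // First read: just the fixed-size header
  if (read (c->s, hdr, TLV_HDR_SIZE) != TLV_HDR_SIZE) return NULL;

  phdr += unpack_i16 (phdr, &type);
  phdr += unpack_i32 (phdr, &len);

  // Now we know the payload size and can allocate exactly what we need
  if ((pkt = malloc (sizeof(PKT_HDR) + len + 1)) == NULL) return NULL;
  pkt->type = type;
  pkt->len  = len;

  // Second read: the payload itself
  if (read (c->s, pkt->val, len) != (ssize_t)len) { free (pkt); return NULL; }
  pkt->val[len] = 0;   // NUL-terminate for convenience, as in tlv_read

  return pkt;
}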

A server

With all the code we have prepared, a basic echo server using a binary TLV protocol will look like this:

int main () {
  fd_set               rfds;
  struct timeval       tv;
  CHANNEL              *c, *c1;
  int                  i, max, n;

  // con, MAX_CONNECTIONS, add_connection and FAIL are defined elsewhere in the example sources
  for (i = 0; i < MAX_CONNECTIONS; con[i++] = NULL); // Initialise connections
  if ((c = channel_new (CHANNEL_SERVER, NULL, 1234)) == NULL) FAIL("channel_new:");

  while (1)
    {
      FD_ZERO(&rfds);
      FD_SET(c->s, &rfds); // Add accept socket
      max = c->s;          // The accept socket also counts for select's first parameter

      // Add all other sockets
      for (i = 0; i < MAX_CONNECTIONS; i++)
        if (con[i]) {
          FD_SET(con[i]->s, &rfds);
          if (con[i]->s > max) max = con[i]->s;
        }

      /* Set timeout */
      tv.tv_sec = 0;
      tv.tv_usec = 100000;

      if ((n = select (max + 1, &rfds, NULL, NULL, &tv)) < 0)
        perror ("select:");
      else
        {
          if (FD_ISSET(c->s, &rfds)) // Accept connection
            {
              if ((c1 = channel_accept (c)) == NULL) FAIL ("channel_accept:");
              add_connection (c1); // Add new socket to connection list
            }
          // Check if there is anything to read from the connections
          for (i = 0; i < MAX_CONNECTIONS; i++)
            if (con[i] && FD_ISSET(con[i]->s, &rfds))
              {
                PKT *pkt;
                if ((pkt = tlv_read (con[i])) != NULL)
                  {
                    char buf[1024];
                    printf (">> Read (%d): %s\n", pkt->type, pkt->val);
                    snprintf (buf, 1024, "ECHO:%s", pkt->val);
                    PKT *rpkt = tlv_str (buf);
                    tlv_write (con[i], rpkt);
                    free (rpkt);
                    free (pkt);
                  }
                // Close the connection once served (one request per connection)
                channel_free (con[i]);
                con[i] = NULL;
              }
        }
    }
  channel_free (c);
  return 0;
}

No big surprises here. We just use the new functions we have already described, all except tlv_str. You can figure out what it does just by looking at the code, but in case you are feeling lazy (I am, very often :)), this is what it does:

PKT*
tlv_str (char *str)
{
  PKT *pkt;

  if (!str) return NULL;

  // +1 so the NUL terminator copied by strcpy fits in the allocated block
  if ((pkt = malloc (sizeof(PKT) + strlen(str) + 1)) == NULL) return NULL;

  pkt->type = TLV_STRING;
  pkt->len = strlen(str);
  strcpy ((char *)pkt->val, str);

  return pkt;
}

A client

The client code is even simpler:

int main (int argc, char *argv[]) {
  CHANNEL              *c;

  if ((c = channel_new (CHANNEL_CLIENT, "127.0.0.1", 1234)) == NULL) FAIL("channel_new:");

  PKT   *pkt = tlv_str (argv[1]);
  PKT   *rpkt;

  tlv_write (c, pkt);
  rpkt = tlv_read (c);

  printf ("RECEIVED:\n%s\n", rpkt->val);

  free (pkt);
  free (rpkt);
  channel_free (c);

  return 0;
}
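
If you want to try the pair out, a session would look something like this (assuming you split the sources into server.c, client.c and tlv.c, plus the channel module from previous instalments):

$ gcc -o server server.c tlv.c channel.c
$ gcc -o client client.c tlv.c channel.c
$ ./server &
$ ./client "Hello TLV"
>> Read (1): Hello TLV
RECEIVED:
ECHO:Hello TLV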

So far so good.... Let's see how to do this in Java....

A Java TLV class

As we did with the C version, let's start by writing some code to deal with the TLV encoding of our data. In Java, we will implement it as a class that encapsulates a TLV packet. That is conceptually the same thing we did with our C version. The packing and unpacking functions will be static methods, as they do not really belong to the objects. This is how the class looks:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class TLVPacket {
  // Some constants to make the code more readable
  public static final short TLV_STRING = 1;
  public static final short TLV_I8     = 2;
  public static final short TLV_I16    = 3;
  public static final short TLV_I32    = 4;
  public static final short TLV_HDR_SIZE = 6;
  
  // On the wire buffers to store the header and the data
  ByteBuffer hdr;
  ByteBuffer data;
  
  // The actual packet information
  short   type;
  int     len;
  
  // Packing functions exploiting method overloading
  public static void TLVpack (ByteBuffer b, short v) {
    b.put ((byte) (v & 0xff));
    b.put ((byte)((v >> 8) & 0x0ff));
  }
  public static void TLVpack (ByteBuffer b, int v) {
    b.put ((byte) (v & 0xff));
    b.put ((byte)((v >> 8) & 0x0ff));
    b.put ((byte)((v >> 16) & 0x0ff));
    b.put ((byte)((v >> 24) & 0x0ff));
  }
  
  // Return type cannot be used for method overloading so we use different names
  public static short TLVunpacki16 (ByteBuffer b) {
    // Mask with 0xff: Java bytes are signed and would otherwise sign-extend
    short v = (short)((b.get() & 0xff) | ((b.get() & 0xff) << 8));
    return v;
  }
  public static int TLVunpacki32 (ByteBuffer b) {
    int v =  (b.get() & 0xff)
          | ((b.get() & 0xff) << 8)
          | ((b.get() & 0xff) << 16)
          | ((b.get() & 0xff) << 24);

    return v;
  }
  
  // Default constructor
  public TLVPacket () {
    len = 0;
    type = 0;
  }
  // String constructor
  public TLVPacket (String s) {
    len = s.length();
    type = TLV_STRING;
    hdr = ByteBuffer.allocate (6);
    data = ByteBuffer.allocate (len);
    data.put (s.getBytes());
  }
  
  public long TLVwrite (SocketChannel c) throws IOException {
    TLVpack (hdr, type);
    TLVpack (hdr, len);
    ByteBuffer[] all = {hdr, data};
    hdr.flip ();
    data.flip();
    return c.write (all);
  }
  
  public String TLVread (SocketChannel c) throws IOException {
    ByteBuffer hdr = ByteBuffer.allocate (6);
    ByteBuffer data = ByteBuffer.allocate (1024);
    ByteBuffer[] all = { hdr, data};
    c.read(all);
    hdr.flip ();
    type = TLVunpacki16 (hdr);
    len = TLVunpacki32 (hdr);

    data.flip();
    byte [] msg= new byte[len];
    data.get (msg);
    String response = new String (msg);

    return response;
  }
}

This time I have added all the code at once, as it is almost the same as the one we used in our C version, but there are a few remarks:

  • For the packing functions we overloaded the method TLVpack instead of providing two different names. This way, the right method will be called based on the parameters we pass.
  • Method overloading doesn't consider the return type, so we decided to use two different names for the unpacking functions... just to make this point. Other approaches could be used.
  • For the sake of brevity I just made the read function return a String. Note that all the data is actually read into the associated object, so we could just remove the last part of the function (the buffer-to-string conversion) and let the main program do that conversion if need be.
  • Java NIO supports vectorised IO. To take advantage of this mode we just need to use ByteBuffer objects and create an array of them (equivalent to C's array of struct iovec). Then we just write or read on the channel object. This is actually a pretty natural way of implementing it.

Conclusion

In this instalment we have gone through some of the most common issues that pop up when implementing binary protocols. Then we implemented a simple TLV protocol, in C and in Java, that will work independently of the endianness or the processor word length. As part of this implementation we introduced the use of vectorised IO in C and also in the Java NIO package.

This concludes the exploration of the Java NIO package, at least for the network related classes.
