The first puzzling thing to show up on the transmission graphs was the amount of time one of the machines appeared to be sleeping. Sleeping occurs in the test application as part of the main loop: when it's not yet time to send (every 20ms in the example) or check for incoming packets (every 2ms), the application simply goes to sleep. Instead of sleeping once for a "long" time (for example, 19ms if sending the packet took 1ms), it sleeps repeatedly in small increments (2ms).
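The loop described above has roughly the following shape (a sketch, not the actual test application's code; the function and constant names are assumptions). The timing decision is factored into a small helper so it can be checked in isolation:

```c
#include <assert.h>
#include <stdbool.h>

#define SEND_INTERVAL_MS 20  /* send a packet every 20 ms */
#define POLL_INTERVAL_MS 2   /* check for incoming packets every 2 ms */

/* Decide whether it is time to send, given the current tick and the
 * tick of the last send. */
static bool time_to_send(unsigned now_ms, unsigned last_send_ms) {
    return now_ms - last_send_ms >= SEND_INTERVAL_MS;
}

/* The main loop then looks something like this (pseudocode):
 *
 *   for (;;) {
 *       unsigned now = get_tick_ms();
 *       if (time_to_send(now, last_send)) {
 *           send_packet();
 *           last_send = now;
 *       }
 *       check_incoming_packets();
 *       sleep_ms(POLL_INTERVAL_MS);  // many short 2 ms naps, not one 19 ms sleep
 *   }
 */
```

The key point is that the loop relies on `sleep_ms(2)` actually waking up roughly every 2ms, which is exactly the assumption that fails on the netbook.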
All this looks fine on the bottom line of the graph (the Raspberry Pi, playing the "client" role), but on the top line (the Windows 7 netbook), the blue blocks representing the Sleep calls are far too big! They should be the same size as the bottom ones.
As a result, in this particular test, the receive-packet operation on the server side runs at much longer intervals (16ms) than specified (2ms).
Weirdly enough, the problem didn't show up when running the same test on a Windows 7 desktop PC instead of the netbook.
It turns out that on some Windows machines, the granularity of the Sleep() function can be quite coarse: effectively no less than 16ms (it's unclear whether this is related to 32-bit vs 64-bit Windows).
Anyway, this granularity can be reduced using the multimedia timer function timeBeginPeriod (which does require the app to link against Winmm.lib):
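A minimal sketch of the fix might look like the following. The 1ms value is illustrative (it is the finest resolution timeBeginPeriod accepts on most systems), and the non-Windows branch is only there so the sketch compiles anywhere:

```c
#include <stdio.h>

#ifdef _WIN32
#include <windows.h>   /* Sleep */
#include <timeapi.h>   /* timeBeginPeriod / timeEndPeriod; link with Winmm.lib */
#endif

int main(void) {
#ifdef _WIN32
    /* Request 1 ms timer resolution; this also tightens Sleep() granularity. */
    if (timeBeginPeriod(1) != TIMERR_NOERROR)
        fprintf(stderr, "timeBeginPeriod failed\n");

    Sleep(2);  /* now wakes up close to 2 ms instead of ~16 ms */

    /* Always pair each timeBeginPeriod with a matching timeEndPeriod. */
    timeEndPeriod(1);
#else
    puts("timeBeginPeriod is Windows-only; nothing to do on this platform");
#endif
    return 0;
}
```

Note that the requested resolution is a system-wide setting for the duration of the request, so it can affect power consumption; calling timeEndPeriod when the fine resolution is no longer needed is good practice.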
This is more like it!