Subject: bandwidth measurement
This text describes a change to the Npv7. It will be included in the final documentation, so feel free to correct it. But if you want to change the system described here, please contact us first.
Link measurement issues
In the current version of the Npv7, the Radar measures the link quality using only the rtt (round-trip time) and packet losses. This isn't optimal, because the bandwidth capacity is ignored: a link with a poor bandwidth, e.g. 20Kb/s, but a good latency will be preferred over a link with a large bandwidth but a medium latency.
A node must include in the Tracer Packets not only the rtt of the traversed link, but also its current bandwidth capacity. In this way it will be possible to shape the traffic based also on the real bandwidth of the links.
There are two phases of measurement: in the first, the total bandwidth capacity of the new links is measured by the hooking node and the destination nodes; in the second, the bandwidth of the links is constantly monitored.
The utilised bandwidth will be monitored with the libpcap library.
Total available bandwidth
A <--> B <--> C
The node B is hooking to A and C. At the end of the hook, B measures the total available bandwidth of the links B-->C and B-->A.
There are different methods to measure the bandwidth of a link.
The first and simplest one is to send an indefinite amount of random packets, for some seconds, to the destination of the link. The transmission rate is then monitored with libpcap and the maximum rate of bytes per second is registered as the maximum available upload bandwidth for that specific link. This method is very effective, since the measured bw is the real one and not an approximation. Its side effect is that it requires the complete saturation of the link for some seconds.
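As a rough illustration, this is a minimal sketch of the saturation method in C, assuming the destination runs a sink that discards the packets; the port number and the packet size are placeholders, and a real implementation would read the achieved rate from libpcap as described above.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

/* Flood `dst_ip' with fixed-content UDP packets for `secs' seconds
 * and return the achieved transmission rate in bytes per second. */
double measure_upload_bw(const char *dst_ip, int secs)
{
    char buf[1400];                 /* stay under a typical MTU */
    struct sockaddr_in dst = {0};
    long sent = 0;
    time_t start;

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);     /* placeholder sink port */
    inet_pton(AF_INET, dst_ip, &dst.sin_addr);

    memset(buf, 0xAA, sizeof(buf));
    start = time(NULL);
    while (time(NULL) - start < secs)
        if (sendto(fd, buf, sizeof(buf), 0,
                   (struct sockaddr *)&dst, sizeof(dst)) > 0)
            sent += sizeof(buf);
    close(fd);
    return (double)sent / secs;
}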
The second technique is more refined. It consists of the Packet Pair and Packet Train techniques; see http://perform.wpi.edu/downloads/wbest/README and http://www-static.cc.gatech.edu/fac/Constantinos.Dovrolis/bw.html
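The core idea of the Packet Pair technique, for reference: two packets of the same size sent back to back are spread apart by the bottleneck link, and the dispersion measured at the receiver reveals its capacity. A minimal sketch, assuming the receiver timestamps both arrivals:

/* Packet Pair estimate: two back-to-back packets of `pkt_size' bytes
 * arrive `t2 - t1' seconds apart; the bottleneck capacity is the
 * packet size divided by that dispersion. */
double packet_pair_bw(double pkt_size, double t1, double t2)
{
    return pkt_size / (t2 - t1);
}

For example, two 1500-byte packets arriving 0.375ms apart give 1500/0.000375 = 4MB/s, i.e. a 32Mb/s bottleneck.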
Since the links might be asymmetric, the measurement is also repeated for A-->B and C-->B.
In the end, we have the measurement of: A->B, B->A, C->B, B->C.
Realtime bandwidth monitoring
With the use of the libpcap library, B can monitor the bandwidth usage of its links.
Max_link_bw = total available bandwidth of the link
nflows      = number of current flows on the link. A flow is the stream of packets associated to a specific connection.
The current available bandwidth of the link is approximated as:
link_avail_bw = Max_link_bw/(nflows+1)
We choose this approximation because the TCP traffic allocation method is similar to a max-min allocation algorithm, thus when there are many flows `link_avail_bw' becomes a good approximation.
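In code the approximation is a one-liner; the max-min intuition is that a prospective new flow would obtain roughly an equal share of the capacity:

/* Available bandwidth seen by a new flow: with `nflows' TCP-like
 * flows already on the link, a max-min allocation gives each of the
 * nflows+1 flows an equal share of the total capacity. */
double link_avail_bw(double max_link_bw, int nflows)
{
    return max_link_bw / (nflows + 1);
}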
If the radar notices a big variation of the link's current available bandwidth, it will send a new QSPN.
Each node of the network will delay the forwarding of a received Tracer Packet by a time inversely proportional to `link_avail_bw'. In this way, the TPs will continue to be received in order of efficiency. The side effect of this rule is that the extreme cases will be ignored, i.e. a route with a very low rtt but a very poor bw, or a route with an optimal bw but a very high rtt. However, in the real world these extreme cases are rare, because the rtt and the bw are often related.
For more information about the order of efficiency of the Tracer Packets you can read the document on the QSPN v2: http://netsukuku.freaknet.org/doc/main_doc/qspn.pdf
The bandwidth capacity of the route S -> D is denoted with bw(S -> D) and is equal to the bandwidth of the worst link in the route. For example, given:

S --64Mb/s--> R --64Mb/s--> T --32Mb/s--> O --100Mb/s--> I --100Mb/s--> D

the route bandwidth of this segment is bw(S -> D)=32Mb/s.
A TP needs to memorise only _one_ bw capacity value, and that is the bw() of the current traversed route. For example, if S sends a TP, when it arrives on T the bw capacity memorised in the packet will be bw(S->R->T)=64, but when it reaches O it will change to bw(S->R->T->O)=32.
Note that each bw capacity value corresponds to the approximated available bandwidth of the relative link (`link_avail_bw'). For example, by writing S --64Mb/s--> R we indicate that the current approximated bw capacity of the S-->R link is 64Mb/s.
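Both rules are easy to sketch in C; the bandwidths are assumed to be in Mb/s, and the delay constant K is a hypothetical tuning parameter, since the document doesn't fix one:

/* Hop-by-hop update of the bw capacity carried by a Tracer Packet:
 * the route capacity is the minimum over its links, so each node
 * only combines the stored value with its incoming link. */
double tp_update_bw(double tp_bw, double link_avail_bw)
{
    return link_avail_bw < tp_bw ? link_avail_bw : tp_bw;
}

/* Forwarding delay inversely proportional to `link_avail_bw', so
 * that TPs travelling on good links keep arriving first. */
double tp_forward_delay(double link_avail_bw, double K)
{
    return K / link_avail_bw;
}

With the example above, the TP leaving S starts with bw=64; tp_update_bw(64, 32) at O lowers it to 32, and tp_update_bw(32, 100) at I leaves it unchanged.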
Asymmetric routes

The QSPN v1 gives to each node the best download route to reach all the other nodes.
Let's consider the nodes S and D:
1:   ,--<-<-<---.
    /            \
   S              D
    \            /
2:   `-->->->---'
The routes of type 1 are the best upload routes from D to S, while the routes of type 2 are the best upload routes from S to D.
The QSPN gives to S only the routes of type 1, while D knows only the routes of type 2.
If the routes aren't symmetric, when S uploads something to D it will use a route of type 1, which will give poor performance.
The solution is very simple: when uploading a large chunk of data, S will request from D its D->S routes, which are the routes of type 2, i.e. the best upload routes from S to D.
The Netsukuku daemon, using libpcap, will monitor the packets outgoing from localhost. If packets have been sent for more than 16 seconds, it will request from the destination node the best upload routes and add them to the routing table. In this way, the upload routes will be requested only when necessary.
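A condensed sketch of such a monitor, assuming libpcap; the device name, the filter address and the route request below are placeholders for what ntkd would actually use:

#include <pcap.h>
#include <stdio.h>
#include <time.h>

static time_t first_pkt;    /* start of the current upload burst */

/* Called by libpcap for each outgoing packet matching the filter. */
static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes)
{
    (void)user; (void)bytes;
    if (!first_pkt)
        first_pkt = h->ts.tv_sec;
    /* Sending for more than 16 seconds: ask the destination for its
     * best upload routes (hypothetical hook, just printed here). */
    if (h->ts.tv_sec - first_pkt > 16)
        printf("requesting the best upload routes...\n");
}

int main(void)
{
    char err[PCAP_ERRBUF_SIZE];
    struct bpf_program fp;
    pcap_t *p = pcap_open_live("eth0", 64, 0, 1000, err);
    if (!p) { fprintf(stderr, "%s\n", err); return 1; }

    /* Capture only the packets outgoing from localhost; the address
     * below is a placeholder for the node's own IP. */
    pcap_compile(p, &fp, "src host 10.0.0.1", 1, PCAP_NETMASK_UNKNOWN);
    pcap_setfilter(p, &fp);
    pcap_loop(p, -1, on_packet, NULL);
    pcap_close(p);
    return 0;
}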
Note that in the QSPN v2 there's a builtin support for the asymmetric routing. See http://netsukuku.freaknet.org/doc/main_doc/qspn.pdf
Latency VS bandwidth
It may also happen that a link has a good bandwidth but a high latency. A low latency is needed by semi-realtime applications: for a ssh connection we don't care about having a 100Mb/s link, but we want to use the shell in realtime, so that when we type a command we get a fast response.
The Netsukuku daemon should create three different routing tables:
- the first shall include the routes formed by the links with the best bandwidth value;
- the second shall include the "best latency" routes;
- the routes in the third table shall be an average of the first and the second tables.
If the protocol of the application sets the TOS value in its IP packets, it is possible to add a routing rule and choose the right table for that protocol: e.g. the ssh protocol will be routed by the second table.
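For reference, this is how an application marks its packets so that such a rule can match them: ssh sets IPTOS_LOWDELAY on interactive sessions. How ntkd would install the kernel-side rule mapping this TOS to the second table is not specified here.

#include <netinet/ip.h>
#include <sys/socket.h>

/* Mark a socket's traffic as latency-sensitive: every packet sent on
 * `sockfd' will carry TOS = IPTOS_LOWDELAY (0x10), which a routing
 * rule can then match to select the "best latency" table. */
int set_low_delay(int sockfd)
{
    int tos = IPTOS_LOWDELAY;
    return setsockopt(sockfd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
}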
Alternatively, it is possible to keep in the kernel only the average table, because it is the most used. The specific "best latency" and "best bandwidth" routes can be added to the kernel only at the moment of their utilisation. For example, when bittorrent starts a download from the nodes A, B, C, the ntkd monitor will add to the kernel the "best bandwidth" routes for A, B, C.
The inet-gws which share their Internet connection also measure the utilised bandwidth of the Internet connection. The maximum download and upload bandwidth is known, since it must be specified by the user in the netsukuku.conf config file. In this way, by monitoring the device used by the Internet default route, it's trivial to know the available bandwidth.
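The resulting computation is then a simple difference; a sketch, assuming both values are expressed in the same unit:

/* Available Internet bandwidth on an inet-gw: the maximum comes from
 * netsukuku.conf, the utilised part from the libpcap monitor running
 * on the device of the Internet default route. */
double inet_avail_bw(double max_bw, double utilised_bw)
{
    return max_bw - utilised_bw;
}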
The load balancing will always be used, because the routes are added to the kernel using the multipath option: the packets sent to a node will use different routes.
The load balancing inside Netsukuku doesn't suffer from the problems of the IGS load balancing. Indeed, the ntk nodes are consistent among themselves, i.e. a ntk node doesn't use NAT to reach another ntk node.