xen-users

Re: [Xen-users] Limiting domU network?

> I'll do the traffic shaping in a separate email.




And here it is, though it stands a great chance of getting mangled by
line wrapping :-( I've done my best to manually break long lines - so everywhere there is a line of the form :
blah blah blah \
  blah
that should all be on one line as :
blah blah blah blah

The downside though is that a lot of it is harder to read.


tcstart :

INCLUDE tcstart-class
INCLUDE tcstart-rule


tcstart-class :

# clean existing down- and uplink qdiscs, hide errors
tc qdisc del dev ethint root    2> /dev/null > /dev/null
tc qdisc del dev ethint ingress 2> /dev/null > /dev/null
tc qdisc del dev ethext root    2> /dev/null > /dev/null
tc qdisc del dev ethext ingress 2> /dev/null > /dev/null

# External I/F

# install root HTB, point default traffic to 1:12:
run_tc qdisc add dev ethext root handle 1: htb default 12
# shape everything at uplink speed
run_tc class add dev ethext parent 1: classid 1:1 htb rate \
  $OutSpeed burst 20k cburst 20k

# Internal I/F
# First, an overall queue/classes to split firewall and net traffic
# install root HTB, point default traffic to 100:112:
run_tc qdisc add dev ethint root handle 100: htb default 112
run_tc class add dev ethint parent 100: classid 100:100 htb \
  rate 95000kbit
# Class for firewall traffic - effectively unlimited
run_tc class add dev ethint parent 100:100 classid 100:101 htb \
  rate 75000kbit prio 1
run_tc qdisc add dev ethint parent 100:101 handle 102: sfq \
  perturb 10
# Class for net traffic - limit to line speed
run_tc class add dev ethint parent 100:100 classid 100:102 \
  htb rate $InSpeed burst 20k cburst 20k prio 1

# Need to filter FW generated traffic to 100:101
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 \
  u32 match ip src x.y.z.154/32 flowid 100:101
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 \
  u32 match ip src a.b.c.254/32 flowid 100:101



# Main traffic
# Out
run_tc class add dev ethext parent 1:1 classid 1:10 htb rate \
  1400kbit ceil $OutCeilDef burst 16k cburst 16k prio 1

run_tc class add dev ethext parent 1:10 classid 1:11 htb rate \
  500kbit ceil $OutCeilDef burst 12k cburst 12k prio 1
run_tc class add dev ethext parent 1:10 classid 1:12 htb rate \
  600kbit ceil $OutCeilDef burst 12k cburst 12k prio 2
run_tc class add dev ethext parent 1:10 classid 1:13 htb rate \
  200kbit ceil $OutCeilDef burst 12k cburst 12k prio 3
run_tc class add dev ethext parent 1:10 classid 1:14 htb rate \
  100kbit ceil 3072kbit burst 12k cburst 12k prio 4

run_tc qdisc add dev ethext parent 1:11 handle 11: sfq perturb 10
run_tc qdisc add dev ethext parent 1:12 handle 12: sfq perturb 10
run_tc qdisc add dev ethext parent 1:13 handle 13: sfq perturb 10
run_tc qdisc add dev ethext parent 1:14 handle 14: sfq perturb 10

# In
run_tc class add dev ethint parent 100:102 classid 100:110 htb \
  rate 1400kbit ceil $InCeilDef burst 16k cburst 16k prio 1

run_tc class add dev ethint parent 100:110 classid 100:111 htb \
  rate 500kbit ceil $InCeilDef burst 12k cburst 12k prio 1
run_tc class add dev ethint parent 100:110 classid 100:112 htb \
  rate 600kbit ceil $InCeilDef burst 12k cburst 12k prio 2
run_tc class add dev ethint parent 100:110 classid 100:113 htb \
  rate 200kbit ceil $InCeilDef burst 12k cburst 12k prio 3
run_tc class add dev ethint parent 100:110 classid 100:114 htb \
  rate 100kbit ceil 3072kbit burst 12k cburst 12k prio 4

run_tc qdisc add dev ethint parent 100:111 handle 111: sfq perturb 10
run_tc qdisc add dev ethint parent 100:112 handle 112: sfq perturb 10
run_tc qdisc add dev ethint parent 100:113 handle 113: sfq perturb 10
run_tc qdisc add dev ethint parent 100:114 handle 114: sfq perturb 10



# Misc customers
# Out
run_tc class add dev ethext parent 1:1 classid 1:15 htb \
  rate 500kbit ceil 4096kbit burst 16k cburst 16k prio 1

run_tc class add dev ethext parent 1:15 classid 1:16 htb \
  rate 250kbit ceil 4096kbit burst 12k cburst 12k prio 1
run_tc class add dev ethext parent 1:15 classid 1:17 htb \
  rate 240kbit ceil 4096kbit burst 12k cburst 12k prio 2
run_tc class add dev ethext parent 1:15 classid 1:18 htb \
  rate 5kbit ceil 4096kbit burst 12k cburst 12k prio 3
run_tc class add dev ethext parent 1:15 classid 1:19 htb \
  rate 5kbit ceil 4096kbit burst 12k cburst 12k prio 4

run_tc qdisc add dev ethext parent 1:16 handle 16: sfq perturb 10
run_tc qdisc add dev ethext parent 1:17 handle 17: sfq perturb 10
run_tc qdisc add dev ethext parent 1:18 handle 18: sfq perturb 10
run_tc qdisc add dev ethext parent 1:19 handle 19: sfq perturb 10

# In
run_tc class add dev ethint parent 100:102 classid 100:115 \
  htb rate 500kbit ceil 4096kbit burst 16k cburst 16k prio 1

run_tc class add dev ethint parent 100:115 classid 100:116 \
  htb rate 250kbit ceil 4096kbit burst 12k cburst 12k prio 1
run_tc class add dev ethint parent 100:115 classid 100:117 \
  htb rate 240kbit ceil 4096kbit burst 12k cburst 12k prio 2
run_tc class add dev ethint parent 100:115 classid 100:118 \
  htb rate 5kbit ceil 4096kbit burst 12k cburst 12k prio 3
run_tc class add dev ethint parent 100:115 classid 100:119 \
  htb rate 5kbit ceil 4096kbit burst 12k cburst 12k prio 4

run_tc qdisc add dev ethint parent 100:116 handle 116: sfq perturb 10
run_tc qdisc add dev ethint parent 100:117 handle 117: sfq perturb 10
run_tc qdisc add dev ethint parent 100:118 handle 118: sfq perturb 10
run_tc qdisc add dev ethint parent 100:119 handle 119: sfq perturb 10



# Customer 1 (128kbps)
# Out
run_tc class add dev ethext parent 1:1 classid 1:20 htb \
  rate 128kbit ceil 1024kbit burst 16k cburst 16k prio 1

run_tc class add dev ethext parent 1:20 classid 1:21 htb \
  rate 120kbit ceil 1024kbit burst 12k cburst 12k prio 1
run_tc class add dev ethext parent 1:20 classid 1:22 htb \
  rate 2kbit ceil 1024kbit burst 12k cburst 12k prio 2
run_tc class add dev ethext parent 1:20 classid 1:23 htb \
  rate 2kbit ceil 1024kbit burst 12k cburst 12k prio 3
run_tc class add dev ethext parent 1:20 classid 1:24 htb \
  rate 2kbit ceil 1024kbit burst 12k cburst 12k prio 4

run_tc qdisc add dev ethext parent 1:21 handle 21: sfq perturb 10
run_tc qdisc add dev ethext parent 1:22 handle 22: sfq perturb 10
run_tc qdisc add dev ethext parent 1:23 handle 23: sfq perturb 10
run_tc qdisc add dev ethext parent 1:24 handle 24: sfq perturb 10

# In
run_tc class add dev ethint parent 100:102 classid 100:120 \
  htb rate 1024kbit ceil 1024kbit burst 16k cburst 16k prio 1

run_tc class add dev ethint parent 100:120 classid 100:121 \
  htb rate 120kbit ceil 1024kbit burst 12k cburst 12k prio 1
run_tc class add dev ethint parent 100:120 classid 100:122 \
  htb rate 2kbit ceil 1024kbit burst 12k cburst 12k prio 2
run_tc class add dev ethint parent 100:120 classid 100:123 \
  htb rate 2kbit ceil 1024kbit burst 12k cburst 12k prio 3
run_tc class add dev ethint parent 100:120 classid 100:124 \
  htb rate 2kbit ceil 1024kbit burst 12k cburst 12k prio 4

run_tc qdisc add dev ethint parent 100:121 handle 121: sfq perturb 10
run_tc qdisc add dev ethint parent 100:122 handle 122: sfq perturb 10
run_tc qdisc add dev ethint parent 100:123 handle 123: sfq perturb 10
run_tc qdisc add dev ethint parent 100:124 handle 124: sfq perturb 10

...

That *should* be enough to work out what's going on, but I'll try and break it down a bit as it does look a bit daunting at first. I'll just deal with outbound traffic - the inbound is much the same but with the added complication of not throttling traffic from the router itself. When this was set up there wasn't a stock facility for an intermediate, internal, virtual interface (the Intermediate Queueing Device, IMQ, needed out-of-tree patches), so I'm shaping egress on the internal interface instead. If you have more than one internal interface, then you'd need such an intermediate device to shape traffic.
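
As an aside, current kernels ship the IFB device, which does much the same job as IMQ without patching. A minimal sketch, assuming a single ingress interface eth0 and one ifb device - the names here are illustrative, not from my setup :

# redirect all ingress arriving on eth0 to ifb0, then shape ifb0
# egress with HTB exactly as for a real interface
modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
  match u32 0 0 \
  action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root handle 1: htb default 12

From there you hang classes off ifb0 just like the ethext classes above.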

# install root HTB, point default traffic to 1:12:
run_tc qdisc add dev ethext root handle 1: htb default 12
# shape everything at uplink speed
run_tc class add dev ethext parent 1: classid 1:1 htb \
  rate $OutSpeed burst 20k cburst 20k
Self explanatory - set up the root of the class hierarchy.

# Main traffic
# Out
run_tc class add dev ethext parent 1:1 classid 1:10 htb \
  rate 1400kbit ceil $OutCeilDef burst 16k cburst 16k prio 1

Here we add a class for our general traffic - ie everything that doesn't belong to a specific customer's allocation. $OutCeilDef is defined in the params file, as is $OutSpeed. rate sets the bandwidth the class is guaranteed, while ceil sets an upper limit on what it may use by borrowing from other classes that aren't using all of their bandwidth.
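
For illustration, the relevant params entries look something like this - the figures are made up, substitute your own line rates (the params file is sourced as shell, so these are just variable assignments) :

OutSpeed=2048kbit      # uplink sync rate - used for root class 1:1 on ethext
OutCeilDef=2000kbit    # default ceiling for the outbound sub-classes
InSpeed=8192kbit       # downlink rate - used for class 100:102 on ethint
InCeilDef=8000kbit     # default ceiling for the inbound sub-classes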

run_tc class add dev ethext parent 1:10 classid 1:11 htb \
  rate 500kbit ceil $OutCeilDef burst 12k cburst 12k prio 1
run_tc class add dev ethext parent 1:10 classid 1:12 htb \
  rate 600kbit ceil $OutCeilDef burst 12k cburst 12k prio 2
run_tc class add dev ethext parent 1:10 classid 1:13 htb \
  rate 200kbit ceil $OutCeilDef burst 12k cburst 12k prio 3
run_tc class add dev ethext parent 1:10 classid 1:14 htb \
  rate 100kbit ceil 3072kbit burst 12k cburst 12k prio 4

And within that, we add four further classes - just like the "Wondershaper" setup.

run_tc qdisc add dev ethext parent 1:11 handle 11: sfq perturb 10
run_tc qdisc add dev ethext parent 1:12 handle 12: sfq perturb 10
run_tc qdisc add dev ethext parent 1:13 handle 13: sfq perturb 10
run_tc qdisc add dev ethext parent 1:14 handle 14: sfq perturb 10

And within each class, we use SFQ (Stochastic Fairness Queueing), which I believe does a reasonable job of splitting bandwidth fairly between streams.
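
You can check the SFQs really are attached with a plain qdisc listing - output abridged and illustrative :

tc qdisc show dev ethext
# qdisc htb 1: root ... default 12
# qdisc sfq 11: parent 1:11 limit 127p quantum 1514b perturb 10sec
# qdisc sfq 12: parent 1:12 limit 127p quantum 1514b perturb 10sec
# ...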

Now we add some traffic control for individual customers :

# Customer 1 (128kbps)
# Out
run_tc class add dev ethext parent 1:1 classid 1:20 htb \
  rate 128kbit ceil 1024kbit burst 16k cburst 16k prio 1

run_tc class add dev ethext parent 1:20 classid 1:21 htb \
  rate 120kbit ceil 1024kbit burst 12k cburst 12k prio 1
run_tc class add dev ethext parent 1:20 classid 1:22 htb \
  rate 2kbit ceil 1024kbit burst 12k cburst 12k prio 2
run_tc class add dev ethext parent 1:20 classid 1:23 htb \
  rate 2kbit ceil 1024kbit burst 12k cburst 12k prio 3
run_tc class add dev ethext parent 1:20 classid 1:24 htb \
  rate 2kbit ceil 1024kbit burst 12k cburst 12k prio 4

run_tc qdisc add dev ethext parent 1:21 handle 21: sfq perturb 10
run_tc qdisc add dev ethext parent 1:22 handle 22: sfq perturb 10
run_tc qdisc add dev ethext parent 1:23 handle 23: sfq perturb 10
run_tc qdisc add dev ethext parent 1:24 handle 24: sfq perturb 10

This works exactly the same way as our general traffic setup - only the rates are different. In this case, the customer is guaranteed 128kbps, and allowed to burst up to 1Mbps.

If you are following, you will realise that we now have a tree that (if it doesn't get mangled !) looks like this :

ethext - root htb - class 1:1 -+- class 1:10 -+- class 1:11 - SFQ
                               |              +- class 1:12 - SFQ
                               |              +- class 1:13 - SFQ
                               |              +- class 1:14 - SFQ
                               |
                               +- class 1:20 -+- class 1:21 - SFQ
                               |              +- class 1:22 - SFQ
                               |              +- class 1:23 - SFQ
                               |              +- class 1:24 - SFQ
                               |
                               ... (and so on, one branch per customer)

Note that you need to be able to do basic arithmetic when setting your rates. The sum of the rates for classes 1:11-1:14 must NOT exceed the rate for class 1:10, and none of the ceilings for classes 1:11-1:14 may exceed the ceiling for class 1:10. Similarly, the sum of the rates for classes 1:10, 1:20, ... must not exceed the rate for class 1:1, and their ceilings must not exceed the ceiling for class 1:1. If you ignore this, then I believe the result is that the queuing takes place in the wrong class and you lose the prioritisation under heavy traffic conditions.
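
As a worked check against the classes above :

# children of 1:10 : 500 + 600 + 200 + 100 = 1400kbit = rate of 1:10   -> OK
# children of 1:20 : 120 + 2 + 2 + 2      =  126kbit <= 128kbit (1:20) -> OK
# and the rates of 1:10 + 1:15 + 1:20 + ... (1400 + 500 + 128 + ...)
# must stay within the rate of 1:1, ie $OutSpeed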


That's the classes set up, now for some rules.

tcstart-rule :

# Note - order of rules is significant, first matching rule applies

# Customers rules come first, then other rules.

# Misc Customers
# XYZ
# mail
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.123/32 match ip dport 25 0xffff flowid 100:117
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.123/32 match ip sport 25 0xffff flowid 1:17
# everything else
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.123/32 flowid 100:117
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.123/32 flowid 1:17

# Customer 1
# VOIP (SIP 5060-5072 (5056-5087 = 0x13c0/ffe0),
#   RTP 8000-8051 (8000-8063 = 0x1f40/ffc0))
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.157 match ip dport 5056 0xffe0 flowid 100:121
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.157 match ip sport 5056 0xffe0 flowid 1:21
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.157 match ip dport 8000 0xffc0 flowid 100:121
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.157 match ip sport 8000 0xffc0 flowid 1:21
# Mail
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.157/32 match ip dport 25 0xffff flowid 100:123
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.157/32 match ip sport 25 0xffff flowid 1:23
# Everything else
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.157/32 flowid 100:122
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.157/32 flowid 1:22
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.42/32 flowid 100:122
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.42/32 flowid 1:22
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.19/32 flowid 100:122
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.19/32 flowid 1:22

# General Filters

# VoIP (SIP 5060, RTP 10240-11263, IAX2 4569)
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.110 match ip dport 5060 0xffff flowid 100:111
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.110 match ip dport 10240 0xfc00 flowid 100:111
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dst a.b.c.110 match ip dport 4569 0xffff flowid 100:111
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.110 match ip sport 5060 0xffff flowid 1:11
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.110 match ip sport 10240 0xfc00 flowid 1:11
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip src a.b.c.110 match ip sport 4569 0xffff flowid 1:11

# DNS
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip sport 53 0xffff flowid 100:111
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip dport 53 0xffff flowid 1:11


# Mail (SMTPS 465, Submission 587, POP3 110 & 995,
#   IMAP 143 & 993) is priority 3; bulk SMTP (25) goes in priority 4
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip sport 25 0xffff flowid 100:114
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip dport 25 0xffff flowid 1:14
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip sport 465 0xffff flowid 100:113
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip dport 465 0xffff flowid 1:13
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip sport 587 0xffff flowid 100:113
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip dport 587 0xffff flowid 1:13
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip sport 110 0xffff flowid 100:113
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip dport 110 0xffff flowid 1:13
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip sport 995 0xffff flowid 100:113
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip dport 995 0xffff flowid 1:13
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip sport 143 0xffff flowid 100:113
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip dport 143 0xffff flowid 1:13
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip sport 993 0xffff flowid 100:113
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip dport 993 0xffff flowid 1:13


# RSync traffic (873) priority 4
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip sport 873 0xffff flowid 100:114
run_tc filter add dev ethint parent 100:0 protocol ip prio 1 u32 \
  match ip dport 873 0xffff flowid 100:114
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip sport 873 0xffff flowid 1:14
run_tc filter add dev ethext parent 1:0 protocol ip prio 1 u32 \
  match ip dport 873 0xffff flowid 1:14


# TOS Minimum Delay (ssh, NOT scp) in 1:11:
run_tc filter add dev ethext parent 1:0 protocol ip prio 10 u32 \
  match ip src a.b.c.0/24 match ip tos 0x10 0xff flowid 1:11

# ICMP (ip protocol 1) in the interactive class 1:11 so we
# can do measurements & impress our friends:
run_tc filter add dev ethext parent 1:0 protocol ip prio 10 u32 \
  match ip src a.b.c.0/24 match ip protocol 1 0xff flowid 1:11

# To speed up downloads while an upload is going on, put ACK packets in
# the interactive class:
run_tc filter add dev ethext parent 1:0 protocol ip prio 10 u32 \
  match ip src a.b.c.0/24 match ip protocol 6 0xff \
  match u8 0x05 0x0f at 0 match u16 0x0000 0xffc0 at 2 \
  match u8 0x10 0xff at 33 flowid 1:11


# Internal I/F

# TOS Minimum Delay (ssh, NOT scp)
run_tc filter add dev ethint parent 100:0 protocol ip prio 10 u32 \
  match ip dst a.b.c.0/24 match ip tos 0x10 0xff flowid 100:111

# ICMP (ip protocol 1) in the interactive class so we can do
#   measurements & impress our friends:
run_tc filter add dev ethint parent 100:0 protocol ip prio 10 u32 \
  match ip dst a.b.c.0/24 match ip protocol 1 0xff flowid 100:111

# To speed up downloads while an upload is going on, put ACK
#   packets in the interactive class:
run_tc filter add dev ethint parent 100:0 protocol ip prio 10 u32 \
  match ip dst a.b.c.0/24 match ip protocol 6 0xff \
  match u8 0x05 0x0f at 0 match u16 0x0000 0xffc0 at 2 \
  match u8 0x10 0xff at 33 flowid 100:111


This should also be reasonably easy to decipher. Here we have a customer XYZ that goes in the "misc customers" traffic allocation - basically these customers share a traffic allocation, and only the SFQ protects them from one of their number hogging the bandwidth. In practice they are light users and it's not an issue. Then we have Customer 1, who has their own allocation, and we have rules to allocate their traffic to their own classes (note that they have 3 IP addresses in this example). Finally we have the "anything not already classified" rules - VoIP and DNS go into the priority class, mail goes into the low priority class, and rsync goes into the very low priority class.
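
If the port masks in those filters look like magic, the arithmetic is just rounding the port range out to a power-of-two sized block and masking off the low bits. A quick sketch for the SIP case - the printf lines are only there to show the numbers :

# SIP 5060-5072 rounds out to the 32-port block 5056-5087,
# so match base 5056 with the low 5 bits masked off
printf '0x%04X\n' 5056                  # 0x13C0 - the value to match
printf '0x%04X\n' $(( 0xffff & ~31 ))   # 0xFFE0 - the mask
# hence "match ip dport 5056 0xffe0" matches ports 5056..5087 inclusive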



That sets up the traffic control, now you need to monitor and test it !

You can type something like "/sbin/tc -s class show dev ethext" and you'll get several pages of stats. I knocked up a script that would extract just the basic info to help with testing :
cat show_stats_tc
#!/bin/bash

( /sbin/tc -s class show dev ethext
  /sbin/tc -s class show dev ethint parent 100:
  /sbin/tc -s class show dev ethint parent 101: ) | \
    /bin/sed -e :a -e '$!N;s/\n / /;ta' -e 'P;D' | \
    /bin/sed -r -e "s/^class htb ([0-9]+):([0-9]+) .* [0-9]+ \
pkt .dropped ([0-9]+),.* rate ([0-9K]+)bit .*$/\1 \2 \3 \4/" | \
    /bin/grep -v '^$' | \
  sort -n


For graphing, I collect the data and stuff it into a number of rrd files using a script run from cron :

get_stats_tc :

#!/bin/bash
# Script to extract traffic and drop counts from the tc class stats

cd /var/rrd

Now=`date +%s`

( /sbin/tc -s class show dev ethext
  /sbin/tc -s class show dev ethint ) | \
    /bin/sed -e :a -e '$!N;s/\n / /;ta' -e 'P;D' | \
/bin/sed -r -e "s/^class htb [0-9]+:([0-9]+) .* Sent ([0-9]+) bytes [0-9]+ \
  pkt .dropped ([0-9]+),.*$/\1  \2      \3/" | \
    /bin/grep -v '^$' | \
  (
  while read Class ByteCount DropCount
  do
    Bytes[${Class}]=${ByteCount}
    Dropped[${Class}]=${DropCount}
  done


# Main link
/usr/bin/rrdtool update tc-main-in.rrd ${Now}:${Bytes[110]:-"U"}:\
  ${Dropped[110]:="U"}:${Bytes[111]:-"U"}:${Dropped[111]:="U"}:\
  ${Bytes[112]:-"U"}:${Dropped[112]:="U"}:${Bytes[113]:-"U"}:\
  ${Dropped[113]:="U"}:${Bytes[114]:-"U"}:${Dropped[114]:="U"}
/usr/bin/rrdtool update tc-main-out.rrd ${Now}:${Bytes[10]:-"U"}:\
  ${Dropped[10]:="U"}:${Bytes[11]:-"U"}:${Dropped[11]:="U"}:\
  ${Bytes[12]:-"U"}:${Dropped[12]:="U"}:${Bytes[13]:-"U"}:\
  ${Dropped[13]:="U"}:${Bytes[14]:-"U"}:${Dropped[14]:="U"}

# Misc Customers
/usr/bin/rrdtool update tc-misc-cust-in.rrd ${Now}:${Bytes[115]:-"U"}:\
  ${Dropped[115]:="U"}:${Bytes[116]:-"U"}:${Dropped[116]:="U"}:\
  ${Bytes[117]:-"U"}:${Dropped[117]:="U"}:${Bytes[118]:-"U"}:\
  ${Dropped[118]:="U"}:${Bytes[119]:-"U"}:${Dropped[119]:="U"}
/usr/bin/rrdtool update tc-misc-cust-out.rrd ${Now}:${Bytes[15]:-"U"}:\
  ${Dropped[15]:="U"}:${Bytes[16]:-"U"}:${Dropped[16]:="U"}:\
  ${Bytes[17]:-"U"}:${Dropped[17]:="U"}:${Bytes[18]:-"U"}:\
  ${Dropped[18]:="U"}:${Bytes[19]:-"U"}:${Dropped[19]:="U"}

# Customer 1
/usr/bin/rrdtool update tc-tag-in.rrd ${Now}:${Bytes[120]:-"U"}:\
  ${Dropped[120]:="U"}:${Bytes[121]:-"U"}:${Dropped[121]:="U"}:\
  ${Bytes[122]:-"U"}:${Dropped[122]:="U"}:${Bytes[123]:-"U"}:\
  ${Dropped[123]:="U"}:${Bytes[124]:-"U"}:${Dropped[124]:="U"}
/usr/bin/rrdtool update tc-tag-out.rrd ${Now}:${Bytes[20]:-"U"}:\
  ${Dropped[20]:="U"}:${Bytes[21]:-"U"}:${Dropped[21]:="U"}:\
  ${Bytes[22]:-"U"}:${Dropped[22]:="U"}:${Bytes[23]:-"U"}:\
  ${Dropped[23]:="U"}:${Bytes[24]:-"U"}:${Dropped[24]:="U"}

...
)

This is a bit of the system I'm particularly proud of, having managed to separate the collection of the stats from the tc counters and the insertion of those stats into rrd files. Ie, if the actual classes and this script don't agree, then nothing breaks :-) Eg, if a customer leaves and we delete the classes etc, then updates simply put "U" (unknown) into the RRD database until the script gets modified.
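
The trick is bash's ${parameter:-word} default expansion. A minimal demo of the behaviour the rrdtool update lines rely on :

# if a class disappears, its array entry is never set and "U" is used
unset Bytes
echo "${Bytes[120]:-U}"    # prints: U
Bytes[120]=12345
echo "${Bytes[120]:-U}"    # prints: 12345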

And finally, a script to create the rrd databases :

make_tc :

#!/bin/bash
# Make rrd file for Traffic Shaping stats
#
# cx = traffic count
# dx = drop count
# x=g (global),1-4

# braces, not parentheses - exit must not run in a subshell, or the
# script would just carry on regardless
[ $# -ne 1 ] && { echo "usage: $0 <filename>" ; exit 1 ; }

rrdtool create $1.rrd -s 300 \
  DS:cg:DERIVE:600:0:U \
  DS:dg:DERIVE:600:0:U \
  DS:c1:DERIVE:600:0:U \
  DS:d1:DERIVE:600:0:U \
  DS:c2:DERIVE:600:0:U \
  DS:d2:DERIVE:600:0:U \
  DS:c3:DERIVE:600:0:U \
  DS:d3:DERIVE:600:0:U \
  DS:c4:DERIVE:600:0:U \
  DS:d4:DERIVE:600:0:U \
  \
  RRA:AVERAGE:0.5:1:576 \
  RRA:MAX:0.5:1:576 \
  RRA:AVERAGE:0.5:6:672 \
  RRA:MAX:0.5:6:672 \
  RRA:AVERAGE:0.5:24:732 \
  RRA:MAX:0.5:24:732 \
  RRA:AVERAGE:0.5:288:730 \
  RRA:MAX:0.5:288:730

# CFs for :
#   1 x 576    48hr x 5m
#   6 x 672    14d x 1/2hr
#  24 x 732    61d x 2hr
# 288 x 730   730d x 24hr
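
Usage is one invocation per rrd file, matching the names used in get_stats_tc - eg :

./make_tc tc-main-in
./make_tc tc-main-out
./make_tc tc-misc-cust-in
./make_tc tc-misc-cust-out
./make_tc tc-tag-in
./make_tc tc-tag-out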



So if you've followed that through, and are still awake, then you are now a traffic shaping and accounting guru ;-) - or at least you are now equipped to impress your boss :-)

--
Simon Hobson

Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed
author Gladys Hobson. Novels - poetry - short stories - ideal as
Christmas stocking fillers. Some available as e-books.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users