============================================================================
Prometheus QoS - steal fire from your ISP !
"fair-per-IP" quality of service utility
requires Linux kernel with HTB qdisc enabled
GNU+ Copyright (C) 2007, Michael Polak (xChaos)
Credits: CZFree.Net, Netdave, aquarius
...and Martin Devera (.cz) for his HTB qdisc (of course)
...and Jakub Walczak (.pl) for providing feedback and patches
...and Ing. Jiri Engelthaler (.cz) for bugfixes and the Asus WL500 port
...and Dial Telecom (slightly expensive ISP) for the chance to test it

Feedback: xchaos(at)
Prometheus QoS (Quality of Service) is an IPv4 traffic shaper replacement
for Internet Service Providers (ISPs). Dump your vintage hard-wired
routers/shapers (C|sco, etc.) in favour of a powerful open source and free
solution !
Prometheus QoS generates multiple nested HTB tc classes with various rate
and ceil values, and implements optional daily traffic quotas and data
transfer statistics (as HTML). It is compatible with NAT, both asymmetrical
and symmetrical, yet still provides good two-way shaping and prioritizing
of both upload and download. Prometheus QoS allows both "hard shaping"
(reducing the HTB ceil value for aggressive downloaders) and "soft shaping"
(keeping the HTB ceil, but reducing the HTB prio; probably the optimal
solution for normal users).
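The two shaping modes map directly onto plain tc commands. The sketch below is only illustrative - the device name, class IDs, rates and prio values are invented here, not taken from actual Prometheus QoS output:

```shell
# One HTB leaf class per IP under a common parent (all IDs/rates illustrative)
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 256kbit ceil 2048kbit prio 1

# "Hard shaping": cut the ceil for an aggressive downloader, so the class
# can never burst above 512 kbit even when the line is otherwise idle
tc class change dev eth0 parent 1:1 classid 1:10 htb rate 256kbit ceil 512kbit prio 1

# "Soft shaping": keep the ceil but lower the priority, so the class may
# still borrow spare bandwidth - it just loses to higher-prio classes
# whenever the link is under load
tc class change dev eth0 parent 1:1 classid 1:10 htb rate 256kbit ceil 2048kbit prio 7
```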
Prometheus iGW was written in C<<1, which means it compiles simply with the
GNU C Compiler and doesn't require any external libraries (except libc)
or huge interpreter packages (like Perl or Java) to run. However, it
depends on the HTB algorithm hardcoded in the Linux kernel. It is currently
being tested in a real-world environment, providing QoS services on a
30 Mbps internet gateway and proxy used by 2000+ PCs connected to the
gateway via the CZFree.Net broadband community network.
Advantages over more straightforward traffic control scripts include
HTB fine-tuning features (rate and ceil magic), data transfer statistics,
optional data transfer quotas, full NAT compatibility (both symmetric and
one-way) and optional sharing of bandwidth by IPs in completely
different subnets.
Performance and scaling - current release:
We run Prometheus QoS on a Celeron 2.8 GHz serving around 600 individual
traffic classes (fine tuning uses five user-defined prometheus.conf
keywords) and another 2000 IPs sharing bandwidth with certain other IPs
(the "sharing-" keyword). Prometheus QoS is an especially strong tool if
you want IPs from different subnets to share the same traffic class.
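Sharing one traffic class across unrelated subnets boils down to attaching several filters to the same HTB class. A hand-made sketch of the idea (device, class IDs and addresses are invented for illustration, not generated by Prometheus QoS):

```shell
# One shared class for a group of hosts
tc class add dev eth0 parent 1:1 classid 1:42 htb rate 512kbit ceil 2048kbit

# Hosts from two completely different subnets feed the same class,
# so they compete for (and share) the same rate/ceil budget
tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 \
   match ip dst 10.1.2.3/32 flowid 1:42
tc filter add dev eth0 parent 1:0 protocol ip prio 1 u32 \
   match ip dst 172.16.9.7/32 flowid 1:42
```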
With 30 Mbps (each way) total line capacity, a Cisco Catalyst 2950 on the
ISP side and up to approx. 6000 packets per second, we were running into
some problems with overall system load. We moved QoS from an Athlon XP 1700
to a Celeron 2.8 GHz, and kept all SNAT-related stuff (see the
optinal-tools directory) on the Athlon 1.7 GHz, which allowed for peak
throughput of up to 10000 pps.
Performance fine tuning - history:
With kernel version 2.4.20 and release 0.2 we started to experience
problems at approx. 1500 packets/sec. However, with the new iptables
indexing feature implemented in the 0.3 release, system load seems to be
approximately 10 times lower. The same hardware was later shaping 2000
packets/sec without problems, and it looked like a comparable, relatively
low-end system should be able to do traffic shaping for at least 10000
packets/sec (well, if HotSaNIC was turned off, of course <g>). With the 0.6
release and a dynamically calculated iptables indexing scheme we made it up
to 6000 packets/sec, and then we ran into some performance-related
problems, which may be related to the fact that we are doing SNAT of 1000+
individual IP addresses on the same machine that is also doing the QoS:
something on the way seems to be limited to 34 Mbps HD (half-duplex, sum of
upload and download) no matter what we try. Our ISP claims the fault is not
on their side, so our next step will be to separate traffic shaping and
massive SNAT (IP masquerading) and assign a separate PC-based router to
each task.
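The "iptables indexing" mentioned above amounts to replacing one flat chain of per-IP rules with a tree of dispatch chains, so each packet traverses roughly (number of subnets + hosts in its subnet) rules instead of (total number of hosts) rules. A minimal hand-made sketch of the technique - the chain names, marks and addresses are invented, and this is not the exact scheme Prometheus QoS generates:

```shell
# Per-subnet chain holding the per-IP classification rules (one per /24)
iptables -t mangle -N idx_10_1_1
iptables -t mangle -A idx_10_1_1 -d 10.1.1.5  -j MARK --set-mark 0x105
iptables -t mangle -A idx_10_1_1 -d 10.1.1.77 -j MARK --set-mark 0x14D

# The top-level chain only dispatches by subnet; packets for other
# subnets never touch this subnet's per-IP rules at all
iptables -t mangle -A POSTROUTING -d 10.1.1.0/24 -j idx_10_1_1
```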
Maximum performance observed with prometheus 0.6 and hashtable optimization
of tables with individual SNAT targets was up to 9000 packets/sec at
approx. 40 Mbps half-duplex (more than 20 Mbps full-duplex). However, this
required massive optimization, including

echo -n 65000 > /proc/sys/net/ipv4/ip_conntrack_max
echo -n 21600 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established

and disabling most userspace applications (like e.g. HotSaNIC). At the
same time, the router machine and system was accumulating a wide set of
various performance-related problems, which required us to reboot it at
least
Note: Some time ago it seemed that the maximum index of tc classes was
restricted to 10000 - but I haven't checked this again for quite a while.
Another note: All the echo stuff in the previous paragraph can also be
achieved by adding the following lines to /etc/sysctl.conf, which is a much
cleaner way to do it:

net.ipv4.ip_conntrack_max = 65000
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 21600
Future plans also include setting individual daily limits on the maximum
pps (packets per second) rates allocated to individual IP addresses (this
may be needed partly because of the problems mentioned above).
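One possible way to implement such per-IP pps caps with stock netfilter is the hashlimit match; the sketch below assumes dropping above-limit packets is acceptable, and the 3000/sec threshold is an invented example, not a Prometheus QoS default:

```shell
# Drop forwarded packets from any source IP exceeding ~3000 packets/sec;
# hashlimit automatically keeps one token bucket per source address
iptables -A FORWARD -m hashlimit --hashlimit-above 3000/sec \
         --hashlimit-mode srcip --hashlimit-name ppscap -j DROP
```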
Prometheus QoS is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License as
published by the Free Software Foundation; either version 2.1 of
the License, or (at your option) any later version.
Prometheus QoS is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with Prometheus QoS source code; if not, write to
Michael Polak, Svojsikova 7, 169 00 Praha 6, Czech Republic