On Wed, 29 Sep 2004, Jeff Chan wrote:
On Wednesday, September 29, 2004, 6:59:51 PM, David Funk wrote:
A far better way to effect this is simply to increase the TTL on those long-term black-hat domains. (A static list is effectively an infinite TTL: query once and keep forever.)
But did you see my other comments? The top black-hats change domains frequently; the biggest spammers appear to be the ones who change domains most often. Their domains stay at the top of the heap for only a few days, so among the biggest spammers I don't think there are long-term ones.
Yes, I did see your comments, including the one where you thought that having a static blacklist was "certainly a good idea though".
Did you understand my comments to the effect that a static blacklist was effectively the same as a -very- large TTL?
If you are against having a large TTL for selected black-hats then you should be screaming -against- the whole concept of a static blacklist.
My point was that intelligent use of TTLs will reduce DNS traffic without the inherent inflexibility of static lists. (Not to mention the problem of dealing with FPs that are cast in the concrete of static blacklist files all over the net.)
BIND definitely supports per-record positive TTLs, but I don't think rbldnsd does, at least not in the dnset-type zone files we use for SURBLs, and the majority of the public SURBL name servers are running rbldnsd.
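To illustrate the difference, here is a sketch (hypothetical domain names; syntax recalled from the standard master-file format and the rbldnsd(8) man page, so treat it as approximate). In BIND zone-file syntax the TTL is an optional per-record field, so a long-term black-hat could be given a much larger TTL than a fly-by-night domain:

```
; BIND-style zone fragment (hypothetical domains).
; The TTL field is per-record, so long-term listings can be cached longer.
$TTL 300                                          ; default for records below
longterm-blackhat.example. 604800 IN A 127.0.0.2  ; explicit 1-week TTL
flybynight.example.               IN A 127.0.0.2  ; inherits the 300s default
```

By contrast, an rbldnsd dnset data file takes a single $TTL special entry that applies to every record in the dataset, with no per-record override:

```
$TTL 300
longterm-blackhat.example :127.0.0.2:Listed (hypothetical entry)
flybynight.example :127.0.0.2:Listed (hypothetical entry)
```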
Jeff C.
TANSTAAFL
There is a reason for BIND being a resource hog.
20 years ago BIND was small and light-weight (I deployed my first BIND server in 1987). It grew in size and weight not because some developer wanted to craft 'bloat-ware' but because of the demands of a growing Internet (growing in size, meanness, etc.).
If you want industrial grade features then maybe you need to consider using industrial strength software.