On Wednesday, September 29, 2004, 9:15:33 PM, David Funk wrote:
On Wed, 29 Sep 2004, Jeff Chan wrote:
On Wednesday, September 29, 2004, 6:59:51 PM, David Funk wrote:
A far better way to effect this is to just increase the TTL on those long-term black-hat domains. (A static list is effectively an infinitely large TTL: query once and keep forever.)
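To illustrate (the domain names here are made up), per-record TTLs in a BIND-style zone look something like:

  ; excerpt only; a real zone also needs SOA and NS records
  $TTL 300                                 ; default TTL: 5 minutes
  stable-blackhat.example.  604800 IN A 127.0.0.2  ; long-term listing: 1 week
  fly-by-night.example.            IN A 127.0.0.2  ; churner: default 5-minute TTL

A caching resolver keeps the first entry for a week, so repeat queries for it cost nothing, while the second one expires quickly.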
But did you see my other comments? The top black-hats change domains frequently. The biggest spammers appear to be the ones who change domains most often. Their domains only stay at the top of the heap for a few days. So in terms of the biggest spammers, I don't think there are long-term ones.
Yes, I did see your comments, including the one where you thought that having a static blacklist was "certainly a good idea though"
Did you understand my comments to the effect that a static blacklist was effectively the same as a -very- large TTL?
If you are against having a large TTL for selected black-hats then you should be screaming -against- the whole concept of a static blacklist.
I was just being generous. A static whitelist makes vastly more sense than a static blacklist. The top whitelist entries change very little over the course of months. For example, neither yahoo.com nor w3.org are going away any time soon. In contrast, the top blacklist entries change daily as the biggest spammers abandon their domains and move to others.
My point was that intelligent use of TTLs will reduce DNS traffic without the inherent inflexibility of static lists. (Not to mention the problem of dealing with FPs that are cast in the concrete of static blacklist files all over the net.)
A static whitelist and regular DNS service of a blacklist probably approach the ideal. Blacklist entries with long TTLs don't make as much sense for the reasons given above and earlier.
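As a sketch of that lookup order, assuming a Python client using dnspython (the whitelist entries are just examples; multi.surbl.org is the real combined SURBL zone):

  import dns.resolver

  # Stable domains we never query; kept as a local static whitelist.
  STATIC_WHITELIST = {"yahoo.com", "w3.org"}

  def is_blacklisted(domain):
      """Whitelist-first SURBL check: a local hit means zero DNS traffic."""
      if domain in STATIC_WHITELIST:
          return False
      try:
          # SURBL answers with an A record for listed domains, NXDOMAIN otherwise.
          dns.resolver.resolve(domain + ".multi.surbl.org", "A")
          return True
      except dns.resolver.NXDOMAIN:
          return False

The blacklist side still goes through regular DNS, so listings can change daily and FPs can be pulled immediately, which is exactly the flexibility point above.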
BIND definitely supports per-record positive TTLs, but I don't think rbldnsd does, at least not in the dnset type of zone files we use for SURBLs, and the majority of public SURBL name servers run rbldnsd.
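For what it's worth, the dnset data format takes only a dataset-wide TTL via the $TTL special entry, roughly like this (the entries are invented for illustration):

  $TTL 180
  :127.0.0.2:Blocked, see http://www.surbl.org/
  spammer-domain.example
  .spammer-with-subdomains.example

So every listed domain shares one TTL; there's no way to give a selected black-hat a longer one.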
Jeff C.
TANSTAAFL
There is a reason for BIND being a resource hog.
20 years ago BIND was small/light-weight (I deployed my first BIND server in 1987). It grew in size/weight not because some developer wanted to craft 'bloat-ware' but because of the demands of a growing Internet (growing in size, meanness, etc.).
If you want industrial grade features then maybe you need to consider using industrial strength software.
If the Internet depended on BIND for RBLs, then RBLs would probably be unworkable. The memory and CPU requirements of rbldnsd are much lower than BIND's, and rbldnsd answers queries at least twice as quickly.
rbldnsd is a more appropriate solution to RBLs than BIND. It's smaller, leaner and much better suited to the task.
Jeff C. -- "If it appears in hams, then don't list it."