[SURBL-Discuss] RFC: New SURBL based on exploited senders?

Jeff Chan jeffc at surbl.org
Fri Mar 25 09:19:18 CET 2005

On Thursday, March 24, 2005, 11:30:23 AM, Patrik Nilsson wrote:
> At 03:21 2005-03-24 -0800, Jeff Chan wrote:
>>intensive.)  They may be able to process up to a hundred times as
>>many of their messages for us (i.e. 6M a day) if this moves
>>forward, though even that would be only a small fraction of their
>>trap hits.

> Is there anything we can do to increase this fraction? Donate CPU cycles, etc?

Thanks for your kind offer, but in this case I expect the answer
is no.  Chances are good that they already have access to many
hundreds of servers for processing their existing trap data.

>>Even after whitelisting there are still a few legitimate-looking
>>domains coming through, so one idea would be to list the records
>>up to the 96th or 97th percentile, but for the remaining ones
>>with fewer hits, only list those that also appeared in existing
>>SURBLs,
> The ones in existing SURBLs are not really that interesting, unless we are 
> looking for a confirmation that what is listed should stay listed.

I think confirmation that a given domain, etc. is being
spamvertised through zombies is quite useful.

> The main 
> point of working on this particular setup would be catching additional 
> domains, not confirming already listed ones, right?

Yes, both.
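The percentile idea above could look something like the following minimal sketch. All names, hit counts, and the percentile value are illustrative; the real data source and whitelist/SURBL sets are assumptions, not the actual pipeline.

```python
# Hypothetical sketch of the percentile-threshold listing idea:
# list everything at or above the hit-count cutoff, and keep
# lower-hit domains only when some other source already confirms them.

def select_candidates(hits, existing, percentile=96):
    """Return the set of domains to list.

    hits     -- dict mapping domain -> trap hit count (made-up data)
    existing -- domains already confirmed (e.g. in existing SURBLs/SBL)
    """
    counts = sorted(hits.values())
    # Hit count at the requested percentile of the sorted counts.
    cutoff = counts[min(len(counts) - 1, int(len(counts) * percentile / 100))]
    listed = set()
    for domain, n in hits.items():
        if n >= cutoff or domain in existing:
            listed.add(domain)
    return listed
```

With a toy data set like `{'a.test': 100, 'b.test': 50, 'c.test': 2, 'd.test': 1}` and `existing={'c.test'}`, only the high-hit domain and the already-confirmed low-hit one would be selected, which is the "pull usable data out of the noise" effect described below.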

>>or resolved into sbl.spamhaus.org,

> Might seem like a redundant check for people who are used to running SA 3 
> with uridnsbl, but for people using other SURBL implementations that are
> not implementing anything like the uridnsbl "check dns servers for the 
> domain against SBL", this might be very useful for catching additional spam 
> domains.

1.  Not everyone runs SA 3.
2.  Not every SA 3 user can spare the extra DNS delays to use SBL
in uridnsbl.
3.  SBL URI name server checks result in significantly more
false positives than SURBL URI checks.
4.  SBL correlated with zombie usage probably has far fewer
false positives. 
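For readers not running SA 3's uridnsbl, the basic SURBL-style URI check is just a DNS convention: prepend the URI's registered domain to the list zone and interpret the 127.0.0.x answer as a bitmask. Here is a minimal sketch with no network I/O; the bit assignments are illustrative placeholders, since the actual values are defined by the list operator.

```python
def surbl_query_name(domain, zone="multi.surbl.org"):
    """Build the DNS name to query for a URI-domain check against a
    SURBL-style list: the domain is prepended to the list zone."""
    return f"{domain.rstrip('.')}.{zone}"

# Illustrative bit assignments for a combined list's 127.0.0.x answer;
# check the list's own documentation for the real values.
BITS = {2: "sc", 4: "ws", 8: "ph", 16: "ob", 32: "ab", 64: "jp"}

def decode_answer(addr):
    """Decode a 127.0.0.x A-record answer into the matching sub-lists."""
    last_octet = int(addr.rsplit(".", 1)[1])
    return {name for bit, name in BITS.items() if last_octet & bit}
```

For example, querying `surbl_query_name("spamdomain.example")` and getting back `127.0.0.6` would decode to two sub-lists (bits 2 and 4) under the placeholder mapping above.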

>>or where the sending
>>software was clearly spamware.  Hopefully that would reduce FPs
>>in these records with fewer hits, but still let us "pull some
>>useable data out of the noise" and list some of the less
>>frequently appearing records.

> I think that the important thing for putting efforts into something like 
> this would be to catch more of the zero-hour domains currently slipping by 
> SURBL for a couple of hours, rather than to just confirm current listings. 
> Agreed?

> Patrik 

It would:

1.  Confirm some existing SURBL listings.
2.  Find new SBL gang domains.
3.  Generally find fresh domains being sent through zombies that
aren't in SBL or the SURBLs.

Zombies are the biggest reason for SURBLs IMO, and this new data
source "cuts out the middleman" and gets directly at zombie
usage.  :D 

The main question to me is how to cut down on FPs and that's why
I wanted some comments on the post-processing of data.  It turns
out the source can readily identify and tag records sent using
specific spamware, and those would get "special treatment" to
be much more likely listed.

All would be subject to whitelisting as a final safety valve,
but I'd like to hear more ideas about how to filter the
zombie-heard data.


Jeff C.
"If it appears in hams, then don't list it."
