Hello,
I run a website on a Linux box for an interactive community which has unfortunately been plagued over the years by a certain problem user who has in the past been responsible for harassing and stalking visitors. In spite of my having locked his ISP out of my site at the server level, he still obsessively attempts to regain access (this has been going on for over a year now), and on occasion he succeeds, thanks to web-based HTTP proxy servers like the Anonymizer, which probably number in the hundreds, plus thousands more browser-configured proxies.
I could just update my .htaccess every time I find said user in the logs, but ideally I shouldn't have to deal with this kind of thing in the first place. He may have all day to try and hack his way in; I don't. His ISP has been of no help: I've sent several e-mails to the administrators and even spent an entire morning on the phone with a representative. That pretty much leaves me to deal with it on my own.
Ideally, what I am looking for is some means of having the scripts on my site automatically reject any kind of web-based proxy server -- the browser-configured proxies aren't so much of a problem, since I've already worked out a way of detecting them. I've noticed that libwww-based and nph- Perl scripts seem to be the most common scripts used for anonymous surfing. I'm also aware that email.com's e-mail service is able to deny access from proxies. Does anyone have any information on how this works? Opening a socket connection back to the client's port 80 to see whether an address is running a web server is just too slow, as most of my visitors aren't connected to a T3. I was thinking of checking the HTTP_USER_AGENT, but that only works as long as the proxy script doesn't spoof a real browser type as well.
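For what it's worth, here is roughly the direction I was thinking of taking: many of the CGI-based proxy scripts either keep the default libwww-perl User-Agent string or add forwarding headers (Via, X-Forwarded-For and the like) that a browser hitting the site directly almost never sends. Below is a rough Python sketch of that kind of check, assuming a standard CGI environment where request headers show up as HTTP_* variables. The header names and User-Agent fragments are just my guesses and would need tuning against real logs.

#!/usr/bin/env python
# Rough sketch of a proxy check for a CGI script (Python used here just
# for illustration; the same logic would port to Perl or anything else).
# Assumption: the proxy scripts in question either keep their default
# User-Agent (e.g. "libwww-perl/5.x") or add forwarding headers.

import os

# Headers that many forwarding proxies add to the request; a browser
# hitting the site directly rarely sends these.
PROXY_HEADERS = (
    "HTTP_VIA",
    "HTTP_X_FORWARDED_FOR",
    "HTTP_FORWARDED",
    "HTTP_CLIENT_IP",
    "HTTP_PROXY_CONNECTION",
)

# User-Agent fragments typical of script-driven clients rather than real
# browsers.  This list is a guess and would need tuning against the logs.
SUSPECT_AGENTS = ("libwww-perl", "lwp-", "wget", "python-urllib")

def looks_like_proxy(environ=os.environ):
    """Return True if the current request smells like a web-based proxy."""
    for header in PROXY_HEADERS:
        if environ.get(header):
            return True
    agent = environ.get("HTTP_USER_AGENT", "").lower()
    for fragment in SUSPECT_AGENTS:
        if fragment in agent:
            return True
    return False

if __name__ == "__main__":
    if looks_like_proxy():
        # Short 403 response; the body can be whatever you like.
        print("Status: 403 Forbidden")
        print("Content-Type: text/plain")
        print("")
        print("Access denied.")
    else:
        print("Content-Type: text/plain")
        print("")
        print("Welcome.")

Of course, this only catches proxies that announce themselves through those headers or a default agent string; a proxy that strips everything and spoofs a real browser sails right through, which is exactly the part I still can't solve.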
Any help or pointers would be appreciated. Thanks.