Mitigating ISP disruption

The problem

There was an unexpected challenge while putting together the security weekly news last night:

My ISP mistakenly thought I had not paid my bills last month and decided to disrupt my web browsing experience by displaying a web page that said something like “information page … you have not paid x,y,z .. to accept this letter click on the confirm button ….”.

At first I was thinking: but what the … it’s all paid! This was outrageous in itself, but the focus of this post will be how I managed to get around it.

Initially, I just clicked that “confirm” button and after a little while things worked smoothly, but a few minutes later I got the disruptive message again! I could not work like that, so I decided to investigate and try to get around it.

The investigation

The first thing to do was obviously to investigate what that “confirm” button was doing. The relevant HTML source code on the page looked like this:

A number of things to note here:
– Why is my IP ( sent as a hidden field!? The server could obtain it itself and store it in the session on the server side; that would remove all possibility of client-side tampering.
– It is amazing this works even though the input fields are technically outside of the form!
– Even the ip field does not have a name 😛 (is ip really an HTML attribute? :))

OK, enough purist ranting. At this point it looked like I just needed to submit a similar request to the web server to get my connection back, hopefully minimising the disruption to my browsing experience:

$ while [ 1 ]; do echo 'Sleeping ..'; sleep 30; curl -A MSIE -i -d 'Click=1&ip=' >> confirm.log; done

The command above is doing the following:
– while [ 1 ]; do # infinite loop: “forever”
– echo 'Sleeping ..' # Just displays that on the screen, useful so that I know what the command is doing
– sleep 30 # sleeps 30 seconds before retrying the curl command that comes next
– curl -A MSIE -i -d 'Click=1&ip=' # submits a POST request with the same info as in the page form, making it look like the browser is IE (user agent specified via “-A MSIE”); -i is so that I see the HTTP headers and -d passes the POST data to curl. Finally, the URL for the ISP host handling this goes at the end.
– >> confirm.log # This appends the output of the web server to a log file that I called “confirm.log”
– done # Marks the end of the infinite loop
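
A slightly more structured variant of the loop above could log just a timestamp and the HTTP status code instead of full headers, which makes the confirm log easier to correlate with outages. This is only a sketch: `CONFIRM_URL` is a placeholder, since the real ISP URL is not reproduced here.

```shell
# Hypothetical variant of the confirm loop: log "<date> <status code>" lines.
CONFIRM_URL="http://isp.example/confirm"  # placeholder -- not the real ISP host

confirm_once() {
    # -s silences progress output, -o /dev/null discards the response body,
    # -w '%{http_code}' prints only the HTTP status code
    curl -s -A MSIE -o /dev/null -w '%{http_code}' -d 'Click=1&ip=' "$CONFIRM_URL"
}

log_status() {
    # Prepend a timestamp so entries line up with the monitor log
    echo "$(date) $1"
}

# The loop itself, same shape as above:
# while [ 1 ]; do log_status "$(confirm_once)"; sleep 30; done
```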

So, in human language, I am “clicking” that “confirm” button every 30 seconds through the command line. This worked pretty well, but I noticed I still got some occasional outages, so I decided to monitor what was going on. I put together this quick bash script in a minute:


while [ 1 ]; do # Forever
        num_lines=$(lynx --dump www.google.ie | grep -i google | grep -v http | wc -l) # Number of times the word "google" appears when visiting www.google.ie, after taking out the links
        status="up" # Default to up
        if [ $num_lines -ne 5 ]; then # If the number of lines is not equal to 5
                status="down" # The status is down
        fi
        log_line=$(date) # Get the output of the date command into a variable
        log_line=$(echo "$log_line $status") # Append the status calculated above to the end of the variable
        echo $log_line # Output this variable on the screen
        sleep 5 # Do nothing for 5 seconds
done

The output of this script is as follows:

Thu Dec 16 07:19:45 CET 2010 up
Thu Dec 16 07:19:50 CET 2010 up
Thu Dec 16 07:19:55 CET 2010 up

So what happens is that if I can connect to www.google.ie and get the word “google” 5 times, I assume that my internet connection is not being disrupted; if I don’t, I am seeing the ISP error page.
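
The up/down heuristic can be isolated from the page fetch, which makes it easier to test or to swap lynx for another tool. A sketch (the threshold 5 matches the lynx output described above; with raw HTML from a different tool the right threshold would differ):

```shell
# Classify a dumped page as "up" or "down" using the same pipeline as monitor.sh.
classify() {
    # $1: dumped page text, $2: expected number of matching lines for a healthy page
    num_lines=$(printf '%s\n' "$1" | grep -i google | grep -v http | wc -l)
    if [ "$num_lines" -eq "$2" ]; then echo up; else echo down; fi
}

# Usage with lynx, as in the monitor script:
# classify "$(lynx --dump www.google.ie)" 5
```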

I ran this small monitor script like this:

$ ./monitor.sh > monitor.log

Then monitored the monitor output in another terminal window like this:

$ while [ 1 ]; do clear ; head -1 monitor.log ; grep down monitor.log ; sleep 30; done
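
Since the log lines always end in “up” or “down”, a quick summary of the whole log is also possible. A small sketch (it assumes the “date status” line format produced by monitor.sh):

```shell
# Count the "up" and "down" entries in a monitor log.
summarize() {
    # grep -c counts matching lines; the anchored patterns match the trailing status word
    ups=$(grep -c ' up$' "$1")
    downs=$(grep -c ' down$' "$1")
    echo "up=$ups down=$downs"
}

# Usage: summarize monitor.log
```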

The output was as follows:

Thu Dec 16 07:19:45 CET 2010 up
Thu Dec 16 07:26:06 CET 2010 down
Thu Dec 16 07:56:43 CET 2010 down
Thu Dec 16 08:11:05 CET 2010 down

Thu Dec 16 08:11:40 CET 2010 down
Thu Dec 16 08:26:07 CET 2010 down

Thu Dec 16 08:26:43 CET 2010 down
Thu Dec 16 08:41:06 CET 2010 down

Thu Dec 16 08:41:42 CET 2010 down
Thu Dec 16 08:56:05 CET 2010 down

Thu Dec 16 08:56:41 CET 2010 down
Thu Dec 16 09:11:04 CET 2010 down

Thu Dec 16 09:11:40 CET 2010 down

From this rudimentary monitoring it was clear that my connection was undisrupted for 15 minutes at a time and then stayed disrupted for around 36 seconds. Not bad! But could I do better than this?
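
Thirty-six seconds of outage per 15-minute cycle works out to under 4% downtime; a quick check with awk:

```shell
# Downtime fraction: 36 seconds down out of each (900 + 36)-second cycle.
awk 'BEGIN { printf "%.1f%%\n", 36 / (900 + 36) * 100 }'  # prints 3.8%
```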

I tried sending the “confirm” button’s emulated click more often, but the outage remained the same: 36 seconds of outage every 15 minutes (sleeping 15 seconds instead of 30 did not make a difference):

$ while [ 1 ]; do echo 'Sleeping ..'; sleep 15; curl -A MSIE -i -d 'Click=1&ip=' >> confirm.log; done

In the end, this rudimentary workaround allowed me to have an almost ok web surfing experience until it was late enough in the day for ISP staff to be reachable through their service desk and I could have a conversation with them.

What are the lessons learnt here?
– If a browser can do it, I can do it automatically from the command line too (it does not matter if it is a POST, etc.)
– They could have tried to stop this in a number of ways, but it just becomes an arms race then: for example, if they used a random token as an additional hidden field on the page, I could scrape the page first and get the token from there before submitting the request (so that the token matches and is accepted on the server side, in a similar fashion to CSRF protections). I can also copy-paste any browser’s user agent, etc.
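
The token-scraping counter-move could be sketched like this. Everything here is hypothetical: the field name “token”, the HTML shape, and the URL are assumptions, since the real page is not reproduced in this post.

```shell
# Pull the value attribute of a hypothetical hidden input named "token"
# from HTML arriving on stdin.
extract_token() {
    sed -n 's/.*name="token" value="\([^"]*\)".*/\1/p'
}

# Then resubmit with the scraped token included (CONFIRM_URL is a placeholder;
# the ip value is left empty as in the original command):
# token=$(curl -s "$CONFIRM_URL" | extract_token)
# curl -A MSIE -d "Click=1&ip=&token=$token" "$CONFIRM_URL"
```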

They could have made it more disruptive by:
– Stopping the internet connection completely (probably too disruptive)
– Making the wait time to re-enable the connection longer than 36 seconds
– Detecting multiple clicks from an IP address and increasing the wait time accordingly. I could then just send one click every 15 minutes, but even that could look suspicious and be detected, although the false-positive risk would be higher then.

There was a longer outage later in the day before they removed this, but after that it was all fine.

I am going to have a conversation with them today :).