How Does WordPress Track Clicked Links?

Looking at the Net Codger WordPress blog stats, I see that there is a statistic for clicked links. This represents links on your blog that readers have clicked. But, some of the Codger’s click stats don’t make sense.

A link in a web page is essentially an address (URL) of another page. When the reader clicks the link, his browser (the client) makes an HTTP request to the server at the specified address. The connection is direct from the client to the server. The server records the request in its log files, and that log record is counted as a hit. The client also provides the server with the URL of the referring page, which the server records in the log as well.

When you click on this link to Google, your browser directly retrieves the Google home page. Google’s servers log the request, as well as the fact that it was this Net Codger page that referred you. This is a click.

Since Google’s server receives and responds to the request, it is expected that the Google servers would have a record of the click. But since your client goes directly to the Google server, the transaction should not interact with WordPress in any way. So, in theory, WordPress should have no knowledge of this click, or of any other request/click to a non-WordPress site. Yet WordPress is somehow recording clicks to third party URLs. How is this possible?

Some sites accomplish this kind of click tracking by wrapping the final destination URL in a wrapper URL. The wrapper sends the request first to their own site, which records the click and then redirects the client to the third party site. Google does just this so that they can record clicks from their search results.
http://www.google.com/url?q=http://netcodger.wordpress.com/&sa=...
Above is a link to the Net Codger’s blog from a Google search page. We can see that the URL in fact points to Google, whose servers then redirect your browser to your favorite blog.
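You can see what such a wrapper carries by pulling the destination out of the URL yourself. A minimal shell sketch, using a made-up sa= value since the real tracking parameters vary per search:

```shell
# Hypothetical wrapped URL in the style of the Google example above;
# the sa= value here is made up.
wrapped='http://www.google.com/url?q=http://netcodger.wordpress.com/&sa=X'

# Keep only what follows "q=", then drop any trailing &-separated
# tracking parameters.
dest=${wrapped#*q=}
dest=${dest%%&*}

echo "$dest"    # prints http://netcodger.wordpress.com/
```

The q= parameter is the page you actually end up on; everything else in the wrapper exists only so Google’s server sees the click first.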

But I don’t see evidence of WordPress doing this. Certainly they are not doing such URL wrapping/redirection consistently. So, how is WordPress recording clicks of links to third party sites? The Codgery is perplexed. There is skulduggery afoot!

DNS Hijacking Workaround

I’ve been looking at the stats for the Codger’s blog lately and I see a lot of search hits for an old post I did on internet service provider (ISP) domain name service (DNS) hijacking. This is a subject that hasn’t been on my mind for years because I haven’t been experiencing the problem. But the number of hits tells me that lots of people are encountering the issue and want a way out. So, I thought I’d provide an option.

First, let me point out that these days a lot of malware hijacks your DNS settings. Some malware places entries in your C:\Windows\System32\Drivers\Etc\hosts file. Make sure that there is nothing in there besides the loopback address 127.0.0.1. Other malware changes the DNS servers that your system uses. Since most networks assign IP addresses and DNS servers dynamically using DHCP, if you see manually entered DNS server settings that you didn’t create, you’re likely infected. In both cases, clean off the malware and reset the DNS settings to default, typically Obtain DNS server address automatically.
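A quick way to eyeball a hosts file is to filter out everything that is supposed to be there and see what remains. This is only a sketch; the path argument would be /etc/hosts on Linux, or the Windows file above copied somewhere readable. The demo runs against a throwaway file:

```shell
# List hosts-file entries that are NOT comments, blank lines, or plain
# loopback (127.0.0.1 / ::1) lines; anything left is worth a closer look.
check_hosts() {
    grep -vE '^\s*(#|$|127\.0\.0\.1\b|::1\b)' "$1"
}

# Demo against a throwaway file containing one suspicious redirect.
tmp=$(mktemp)
printf '127.0.0.1 localhost\n# comment\n1.2.3.4 www.mybank.com\n' > "$tmp"
check_hosts "$tmp"    # prints only the 1.2.3.4 line
rm -f "$tmp"
```

If that prints anything on your real hosts file that you didn’t put there yourself, start scanning for malware.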

But if your system is clean, and it is indeed the ISP that is feeding you undesired search pages for mistyped URLs, or your system is being slowed by delays in DNS responses, you have a few options. One such option is to run your own DNS server on your local system or local area network (LAN). If you’re up to that challenge, you’re probably not reading this, so I won’t get into it here. But if, like most people, you have your PC connected directly to your ISP’s “modem” or through a broadband router, it is simple to configure an alternate DNS server.

Which alternate DNS server should you use? That question depends on personal preference, location, ISP and who knows what else. There are several publicly available DNS servers that you could use.

Google
8.8.8.8
8.8.4.4

openDNS
208.67.222.222
208.67.220.220

Level 3 (often listed as Verizon)
4.2.2.1
4.2.2.2
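For what it’s worth, on a Linux box the equivalent change is just a couple of nameserver lines in /etc/resolv.conf, mixing one address from two different services as I do below. Note that many distributions regenerate this file from DHCP, so the change may not stick there; check your distro’s network configuration docs.

```
# /etc/resolv.conf -- one address from each of two different services
nameserver 8.8.8.8
nameserver 208.67.222.222
```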

Directly Connected Hosts
On Windows 7 or Vista, go to Control Panel and click Network and Internet → Network and Sharing Center → Change adapter settings. Then right-click on Local Area Connection and choose Properties. Select the Networking tab and then select Internet Protocol Version 4 (TCP/IPv4). Once that is highlighted, click Properties.

Now, click Use the following DNS server addresses and enter a pair of the addresses from above in the Preferred and Alternate DNS server fields. I like to use one address from each of two different services at the same time.

Broadband Router
The instructions for broadband routers will vary depending on which brand/model you are using. One of the best selling brands is Cisco/Linksys, so I’ll demonstrate that one here.

Log in to your router’s administration page. With Cisco/Linksys, this is done by pointing your web browser to http://192.168.1.1 and logging in with the user ID admin and the password admin.

Under the DHCP server settings, enter the DNS server IP addresses that you wish to use and click Save. Now close your browser.

On your PC, open a command prompt and type ipconfig /renew to immediately pull the new configuration from the router. And that’s it: you’ll now use the chosen servers to resolve your DNS queries, rather than those provided by your ISP.

I hope that this is what all those searchers have been looking for.

Rsync Backup To My Book Live

While putting a Western Digital My Book Live through its paces, I needed to back up a Linux system to the My Book Live, which functions as a NAS. Because it took me more than just a few minutes, I thought I’d share my backup script, and the reasoning behind it, so that others can get going more quickly.

Using the flexible My Book Live, there are a lot of ways that one could back up a Linux system. You could use tar and save the backups to a network share, right out of the box. But most Linux admins prefer rsync for backups these days. Again, because of the My Book Live’s flexibility, you could install the rsync daemon on the My Book Live and have it receive your backups. Or you could install Rsnapshot on it and have it act as a backup server, pulling the backups off of your Linux systems.

When one updates the “firmware” (WD’s Debian install) on the My Book Live, it wipes out any user installed apps and most, though not all, configurations, returning it to a factory vanilla NAS. The Net Codger fully intended to mess with his My Book Live and would almost certainly need to restore it to factory defaults. So, for this backup scenario, I did not want to modify the My Book Live, which eliminated the rsync daemon, Rsnapshot, etc. However, since the SSH configuration is maintained across firmware updates, rsync over SSH was a perfectly viable option. The following instructions therefore push rsync backups to the My Book Live over SSH. The script also rotates the backups and uses file system hard links to maintain numerous full backups that take only seconds to run each day while consuming minimal disk space.

The first step is to enable SSH on the My Book Live. Western Digital even provided a GUI screen that allows you to enable this service, but you have to enter the URL to it yourself. To do it, first log in to the web interface at http://mybooklive

After you’ve been authenticated, enter this case sensitive URL http://mybooklive/UI/ssh and tick the Enable SSH check box. You can now login via ssh and change the root user’s password with the passwd root command. Great job WD.

The next important step will be to enable passwordless SSH logins to the My Book Live. This is just a few simple commands executed on your Linux desktop. It results in secure passwordless SSH logins from your Linux desktop to the My Book Live.

NetCodger@LinuxWkstn:~> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/NetCodger/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/NetCodger/.ssh/id_rsa.
Your public key has been saved in /home/NetCodger/.ssh/id_rsa.pub.
The key fingerprint is:
ab:cd:ef:12:34:ab:fe:dc:ba:98:76:54:7e:ee:4f:eb NetCodger@LinuxWkstn
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
| .               |
| o o             |
| . + +.          |
| o + oS.         |
| o o.+ .         |
| .o +...o        |
| .o.. +o .       |
| .. ...+E        |
+-----------------+
NetCodger@LinuxWkstn:~> ssh root@MyBookLive mkdir -p .ssh
root@MyBookLive's password:
NetCodger@LinuxWkstn:~> cat .ssh/id_rsa.pub | ssh root@MyBookLive 'cat >> .ssh/authorized_keys'
root@MyBookLive's password:
NetCodger@LinuxWkstn:~>
NetCodger@LinuxWkstn:~> ssh root@MyBookLive
root@MyBookLive's password:
MyBookLive:~# chmod -R go-rwx .ssh
MyBookLive:~# exit

A few simple commands and you’re done. From now on, simply typing ssh root@MyBookLive logs you in securely with no password. This is important when you want to use SSH in bash scripts, which is exactly how I’ve chosen to do the backups.

The following script keeps a 30 day rotating backup of my excessively large (135GB) home directory on the My Book Live. But, thanks to rsync’s leveraging of file system hard links, 30 full backups occupy less than 300GB and nightly backups take only seconds to complete.
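To see why the hard-link approach is so cheap, here is a toy demonstration, independent of the script below, that snapshots a directory with cp -al. It hard-links unchanged files, much as rsync’s --link-dest option does:

```shell
# Two "full" snapshots of the same unchanged file share one inode,
# so the second snapshot costs almost no disk space.
work=$(mktemp -d)
mkdir "$work/1.days_ago"
echo "unchanged data" > "$work/1.days_ago/file.txt"

# cp -al copies the directory tree but hard-links the files inside it,
# which is what rsync --link-dest does for files that haven't changed.
cp -al "$work/1.days_ago" "$work/0.days_ago"

# Same inode number = same physical data blocks on disk.
stat -c %i "$work/1.days_ago/file.txt" "$work/0.days_ago/file.txt"

rm -rf "$work"
```

Both stat calls print the same inode number: thirty such “full” snapshots of unchanged data cost barely more than one.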


#!/bin/bash
#
# A backup script based on Rsync that pushes backups onto a NAS.
#
# Directories are rotated n times and rsync is called to
# place a new backup in 0.days_ago/
# Net Codger http://netcodger.wordpress.com 4/17/2012

# ----- Edit these variables to suit your environment -----
n=30                            # Number of backups to retain
NAS=MyBookLive                  # IP address or resolvable hostname of NAS
SrcDir=/home/NetCodger          # Directory to be backed up
DestDir=/shares/Linux/NetCodger # Backup destination on NAS
# ----- End of edits -----

echo
echo =========================================================================
echo "Starting backup.sh....."
date
echo

# Delete the n'th backup.
echo Removing oldest backup.
ssh root@$NAS '[ -d '$DestDir'/'$n'.days_ago ] && rm -rf '$DestDir'/'$n'.days_ago'

# Rename backup directories to free up the 0 day directory.
ssh root@$NAS 'for i in {'$n'..1}; \
do [[ -d '$DestDir'/$(($i-1)).days_ago ]] && \
/bin/mv '$DestDir'/$(($i-1)).days_ago '$DestDir'/${i}.days_ago; done'

echo

# Run the rsync command. nice is used to prevent rsync from hogging the CPU.
# --link-dest creates hard links so that each backup run appears as a full
# backup even though it only copies files changed since 1.days_ago
nice rsync -av \
--delete \
--link-dest=../1.days_ago \
$SrcDir root@$NAS:$DestDir/0.days_ago/

echo
echo =========================================================================
echo "Completed running backup.sh"
date

Simple and secure, storing 30 backups in roughly 1/15th of the space. This is a good backup script. Save it wherever you like; I like /home/NetCodger/backup.sh. Don’t forget to make it executable:

chmod +x /home/NetCodger/backup.sh

The only thing left is to use cron to make the script run each night. Type crontab -e and add a line like the following.

0 1 * * * /home/NetCodger/backup.sh > /tmp/backup.log 2>&1

This runs the backup script every morning at 1:00am and redirects the output to a log file in the /tmp directory. I like to do this just in case there is some problem with the backup script that I’d like to troubleshoot.
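Since the script runs unattended, it’s worth glancing at that log now and then. Here is a small sketch, assuming the log path from the crontab line above; rsync’s -v output ends every successful run with a “speedup is” summary line, so its absence means something went wrong:

```shell
# Check whether last night's rsync run reached its summary line.
log=/tmp/backup.log
if grep -q 'speedup is' "$log" 2>/dev/null; then
    echo "Backup completed: $(tail -n 1 "$log")"
else
    echo "Backup may have failed -- check $log"
fi
```

Drop this in a second cron job, or just run it by hand over morning coffee.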

Now, contrary to all the too-common bragging about rsync, rsync is actually quite slow, and rsync over SSH is slower still. So the initial run of this script could take a very long time, depending on how much data you are pushing. The Net Codger’s initial 135GB backup took a ludicrous 5 hours! But all subsequent backups consist of only the files changed since the previous backup, which is rarely more than a few gigabytes. So subsequent rsync over SSH backups take from just a few minutes to as little as a few seconds. Last night’s run completed in 47 seconds.

That’s it, nightly Rsync backups over SSH to the WD My Book Live, or any SSH enabled storage that you choose.

The Awesome Little Western Digital My Book Live

The Net Codger is mad as hell! For years Western Digital has been selling a tiny network attached hard drive called the My Book which I’ve duly ignored. “Cheap consumer stuff, bah!” Well, last week I purchased one, the Western Digital My Book Live 2 TB drive, and put it through its paces. Why didn’t anyone tell me that, for years, I had been ignoring a true gem? Why didn’t anyone tell me how fantastic this little drive is?

If, like me, you didn’t already know: the Western Digital My Book Live is a network attached storage (NAS) device that uses Linux as its core OS. Out of the box it offers backup options for Windows and Mac, using the WD software or Time Machine respectively. It acts as a streaming media server. WD provides an app so that you can easily browse its media from an iPhone or iPad. WD includes a free service that allows you to access your files, stored on the My Book in your home, via the cloud (web) from anywhere. It is a really sweet little device for just a few dollars more than a bare drive of the same size.

But there’s more! The WD My Book Live uses an 800MHz PowerPC processor with 256MB of RAM. It also has a gigabit ethernet network interface. The operating system is Debian and, get this, the Lenny and Squeeze repositories are enabled by default! So, I can apt-get install whatever-the-hell-I-want and it just works! Right out of the box! (After enabling SSH at http://mybooklive/UI/ssh)

For the price of a hard drive, I have a fully functional, tiny, fanless, headless Linux PC. Why WD didn’t lock down the OS like most other manufacturers, I’ll never know. But kudos to them for keeping this beauty open! By doing so, they’re providing a fantastic opportunity for the hackers (tinkerers) out there. And a thriving community of My Book tinkerers has developed and is doing all sorts of neat projects with this inexpensive PC that lacks only video and USB ports.

To start off, I decided to test it unmodified, in its factory default state. It comes with a CD that includes the installation and setup software, but I hate having to use a (usually Windows) PC to set up a network appliance. So I went looking for a web interface instead, and the WD My Book did not disappoint. Using only a browser, I was able to fully configure the device, including a static IP address, SAMBA (Windows) network shares, users and passwords, even the online cloud (web) access feature. The interface was full featured, attractive, polished and intuitive.

Within minutes I had the My Book running on the network. Configuring Windows backups to the My Book network shares was a snap, using Windows’ built-in backup application. The WD Smartware monitoring and backup software, available on a share from the My Book and on the CD, was also easy to use and worked great in later testing. Mac’s Time Machine backup application worked natively with the My Book, and at only half the price of an Apple Time Capsule!

Next up was Linux and, as always, things got a little less easy, the reason being that most Linux distributions don’t have a slick, ready-to-use backup application like Time Machine installed and ready to go. Don’t get me wrong, there are lots of backup tools and options, from the old stalwart tar to the now preferred rsync. But they are all fiddly command line tools that take a little effort to set up initially. I’ll detail that process in another post, so it will be easy for you. Once they’re configured, they work great without a second thought. Out of the box, Linux can back up to a SAMBA network share on the My Book. But after enabling SSH on the My Book, other target options include FTP, SFTP, SSH, NFS, the rsync daemon and possibly more that I haven’t thought of yet.

Now, let’s be clear: the WD My Book has no redundancy or data protection (available in the Duo version), so you would be stupid to use it as your solitary storage device. Sooner or later the WD Green drive inside will fail.

But using the My Book as a backup target, where you store a redundant copy of your data, is the safe option it was designed for. It is a great backup strategy for multiple devices that can protect against hardware failure, loss or theft, as well as a grab-and-go option in the event of a natural disaster like a storm, fire or flood.

The My Book Live is not super fast. The 800MHz processor is small by today’s PC standards, but it’s the Green drive that limits performance, as I discussed in this previous post. Despite this, the performance is perfectly adequate for a home storage device.

But, the WD My Book does what it says on the box for a great price. And thanks to WD leaving open access to its Linux OS, it is a fantastic little device for a wide selection of other uses! You can get the Western Digital My Book Live 2 or 3TB drive for a great price from Amazon and I’d recommend one for every home network.

Western Digital My Book & USB 3 Performance

I just got a Western Digital (WD) 2TB My Book USB 2/3 external hard drive, and I thought I’d check its performance over the two different USB protocols.

I wasn’t able to find what specific type of drive was in the My Book anywhere on WD’s website or on the web in general. Google didn’t know! Nonetheless, a screwdriver and a few minutes of codgerly grumbling let me see that the My Book uses WD’s Green drives.

WD Green drives are generally lower speed / variable speed spindles. This supposedly reduces power consumption and heat, making you feel like you are saving the planet. But you’re accessing your data much more slowly. Whatever the case may be, I wasn’t expecting a lot of actual performance from this drive, even via USB 3.0 and I wasn’t very surprised by the results.

To perform this test, I repeatedly wrote and read two different data sets to and from the disk. First I used a single 8GB ISO file to test sequential throughput without random access head seek overhead. Then I tested with a 3.1GB directory containing 250 files varying in size from 1 to 25MB containing random binary data. All tests were performed from the same source disk in the same PC with both USB 2 and USB 3 interfaces.
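For the curious, the timing itself is nothing fancy. Here is a scaled-down sketch of the sequential write test using dd; TARGET stands in for a directory on the drive under test (a hypothetical mount point), and defaults to a temp dir so the sketch runs anywhere:

```shell
# Time a 100MB sequential write and report the throughput.
TARGET=${TARGET:-$(mktemp -d)}

start=$(date +%s)
dd if=/dev/zero of="$TARGET/testfile" bs=1M count=100 2>/dev/null
sync                                   # flush caches so the timing is honest
elapsed=$(( $(date +%s) - start ))
if [ "$elapsed" -eq 0 ]; then elapsed=1; fi   # avoid divide-by-zero on fast disks

echo "Wrote 100MB in ${elapsed}s ($((100 / elapsed)) MB/s)"
rm -f "$TARGET/testfile"
```

The real runs used an 8GB file and a 3.1GB directory tree, and averaged several passes in each direction.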

Here are the averaged results:

Write

Data Type           Protocol   Time (s)   Speed (MB/s)
8GB file            USB 3      178        45
8GB file            USB 2      251        31.9
3.1GB, 250 files    USB 3      83         38.4
3.1GB, 250 files    USB 2      113        28

Read

Data Type           Protocol   Time (s)   Speed (MB/s)
8GB file            USB 3      72         111
8GB file            USB 2      232        34.5
3.1GB, 250 files    USB 3      38         80
3.1GB, 250 files    USB 2      99         32
hdparm              USB 3      3          115
hdparm              USB 2      3          34.5

As you can see, the same drive is faster over USB 3 than over USB 2. When writing, however, the difference is not great. USB 3 really shines with this drive when reading, especially sequential data that does not require a lot of head seeking.

For just a few dollars more, WD offers a network version of this drive called the My Book Live. It serves as a NAS, a media server and has cloudy remote file access. I think I’ll get one for the house and retire the large, hot and loud NAS that is presently there.

I wonder what would happen if I put a WD Black drive in the My Book? Where’d that screwdriver get to?

GroupWise 2012 Backup Time Stamps

I did a series of posts about backing up GroupWise 2012.

Backing up GroupWise 2012 with dbcopy
Backing up GroupWise 2012 with BackupExec
Backing up GroupWise with Acronis vmProtect 7

None of the methods relied on a dedicated GroupWise backup agent; they were all basically file copy operations of various sorts. They all worked fine and could be used as an effective backup strategy. However, after a time, I discovered that they all lacked a critical step.

Amongst the many things that GroupWise tracks in its various databases is when a mailbox was last backed up. Dedicated GroupWise backup agents set this attribute after completing a backup operation. The last backup time stamp attribute is important when retention policies are implemented. One commonly used policy is the restriction on emptying the mailbox Trash folder until after a backup has been performed. This is a critical policy in many industries for regulatory compliance, as it prevents deletion of messages without a record of the message.

Unfortunately, I failed to take this last backup date/time stamp into account in my various backup methods, with the result that users were unable to empty their Trash. This is, however, not an uncommon issue, and Novell has long provided a command line utility, /opt/novell/groupwise/agents/bin/gwtmstmp, to deal with it.

Using the gwtmstmp utility, it is possible to set or reset several time stamp attributes stored within GroupWise. Using it in a script, in conjunction with the previously discussed backup methods, allows everyone to empty their Trash folders again. The command we want looks like this:

gwtmstmp -b -s -p /gwsystem/gwpo

This sets (-s) the backup time stamp (-b) for all mailboxes in the specified post office (-p) to the current date and time. If we needed to specify a particular date, the command would look like this:

gwtmstmp -b -s -d 03/31/2012 -p /gwsystem/gwpo
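One wrinkle: if the backup job finishes after midnight, you may prefer to stamp mailboxes with the previous day’s date. GNU date can generate the mm/dd/yyyy string that -d expects. A small sketch, which only prints the resulting command since gwtmstmp exists only on the GroupWise server:

```shell
# Build yesterday's date in the mm/dd/yyyy format shown above.
STAMP=$(date +%m/%d/%Y -d "yesterday")

# Print, rather than run, the resulting gwtmstmp command.
echo "gwtmstmp -b -s -d $STAMP -p /gwsystem/gwpo"
```

Swap the echo for the real call inside the backup script on the server itself.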

Taking the dbcopy script from this post as an example, we wind up with something like this:

#!/bin/bash
# Differential backup.
# Runs a dbcopy of files dated since Sunday or newer.

cd /opt/novell/groupwise/agents/bin || exit 1

SUNDAY=`date +%m-%d-%Y -d "last Sunday"`
TODAY=`date +%Y-%m-%d -d "today"`

./dbcopy -t 10 -i $SUNDAY /gwsystem/gwdom /mnt/backupserver/gwbackups/$TODAY/gwdom
./dbcopy -t 10 -i $SUNDAY /gwsystem/gwpo /mnt/backupserver/gwbackups/$TODAY/gwpo

# Stamp all mailboxes as backed up so users can empty their Trash.
./gwtmstmp -b -s -p /gwsystem/gwpo

Now everybody is happy.

Using vmProtect 7 To Backup GroupWise 2012

A somewhat more modern backup method is system imaging. It typically allows for complete system backups and restores that are also very fast, because they are block based rather than file based. In this case, the GroupWise system is running in a few VMware virtual machines, and we will be using Acronis vmProtect 7 to back them up.

The newly released Acronis vmProtect 7 offers some new features at a new, reduced price. Fundamentally, it takes image based backups of virtual machines (VMs). vmProtect 7 can utilize the Changed Block Tracking (CBT) system of VMware vSphere to back up only changed blocks, so a relatively fast initial backup is followed by much faster subsequent backups. This, combined with the low load on the host system, means that you can run backups frequently throughout the day.

vmProtect 7 can be installed in two different ways. One installation method is as a virtual machine appliance. The other is as a “Windows Agent”, an application/service on a Windows machine. The VM appliance offers compactness and reduced cost, as it does not require additional hardware or operating system (OS) licenses. However, it does add a slight load to your VM host in the form of consumed datastore space, CPU cycles and memory. It also prevents the use of pluggable media like USB drives, as these devices do not pass through to the VM well.

Using the Windows agent version, on physical hardware, works just as well and is operated in exactly the same way as the virtual appliance. But, it also allows us to use USB drives and other backup destinations. All without additional load to the VM host.

The overall process is fairly simple: snapshots are created for the VMs, vmProtect copies the snapshotted VMs to your chosen destination, and then the snapshots are deleted (consolidated). And it works this way even with live, running GroupWise systems.

One important caveat concerns Acronis vmProtect 7’s ability to recover individual files from backup images of supported file systems. The list of supported file systems is fairly extensive and does include the standard NTFS and EXT3. However, Windows Dynamic Disks and Linux’s Logical Volume Manager (LVM) are NOT supported. So, if your GroupWise system resides on an LVM based volume, as it should, you will not be able to extract individual files from the backup. You can still perform full system restores, as well as run the VM in place directly from the backup. But individual file extraction will not work from backups of LVM volumes.

Quick, simple and fast, vmProtect 7 is an excellent backup method for GroupWise or any VM based system.

Note
Acronis vmProtect 7 works only with paid license versions of VMware ESXi and vCenter. vmProtect does NOT work with the free ESXi.

Update
I missed an important issue with GroupWise backup time stamps. See this post for more details: GroupWise 2012 Backup Time Stamps
