I’ve never been a fan of actually using GPS navigation. Sure, I’ve always thought it was insanely cool that it was possible; I just didn’t want to use it myself. For unfamiliar destinations I generally prefer researching a route first, and for familiar ones I generally prefer just relying on my local knowledge. But I’ve found something that I do like using it for: Traffic.

I recently started a new job, exchanging a fairly short commute for a ~40-mile trek across the Los Angeles freeway system. Under ideal conditions, it’s about 45 minutes. When the freeways are bogged down (i.e. when I’m actually going to be driving), it can take an hour and a half or more.

When I landed the job, I replaced my phone with a G2. It’s a heck of a lot faster than my old phone, plus it can handle newer software…like Google’s turn-by-turn navigation app for Android. After experimenting with a couple of different routes the first few days, I gave the app a try…and discovered that it factors in live traffic data when calculating the remaining time.

The upshot: I can walk out the door, start up the app, and figure out which of three main routes will get me there fastest. (Well, least slowly, anyway.)

Of course, it’s not perfect. It’s based on traffic now, and over the course of a predicted hour-plus, the route could easily get more congested. That’s not even counting potential accidents. It does seem to update frequently, though, and knowing I’ve avoided a 100-minute drive in favor of 70 minutes really outweighs the annoyance of a mechanical voice telling me how to get to the freeway from home.

I do have to remember not to rely on it too heavily at the end of the trip, though. I left it on by mistake after selecting my route to the LA Convention Center for Adobe MAX this morning, and instead of turning it off, I let it direct me straight past the parking garage.

Oops.

Lately I’ve been linkblogging via Twitter, and using Alex King’s Twitter Tools to build a weekly digest in WordPress. The problem is that since I’m pulling the posts from Twitter, I’m stuck with Twitter’s limitations: Short descriptions, cryptic URLs, and unreadable links.

So I wrote a plugin to process the links. When Twitter Tools builds a digest, the plugin calls out to the remote site, follows redirects, retrieves the final URL and (if possible) extracts the page title. Then it replaces the cryptic-looking link with a human-readable link, transforming this:

Check out this site: http://bit.ly/9MhKVv

into this:

Check out this site: Flash: Those Who Ride the Lightning

If it can’t retrieve a title, it uses the final hostname. If it can’t connect at all, it leaves the link unchanged.
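For the curious, the core of the idea looks something like this. It’s a simplified sketch rather than the actual plugin code: the function name is made up, the five-hop redirect cap is arbitrary, and it assumes WordPress’s HTTP API will hand back the redirect response (rather than follow it) when you pass 'redirection' => 0.

function nicelinks_resolve( $url ) {
    // Follow redirects ourselves so we end up knowing the final URL.
    // (Relative Location headers are ignored here for brevity.)
    $final = $url;
    for ( $hop = 0; $hop < 5; $hop++ ) {
        $head = wp_remote_head( $final, array( 'redirection' => 0 ) );
        if ( is_wp_error( $head ) ) {
            return false; // Couldn't connect at all: leave the link unchanged.
        }
        $location = wp_remote_retrieve_header( $head, 'location' );
        if ( empty( $location ) ) {
            break; // No more redirects; this is the final URL.
        }
        $final = $location;
    }

    // Try to pull a human-readable title out of the final page.
    $title    = '';
    $response = wp_remote_get( $final );
    if ( ! is_wp_error( $response )
         && preg_match( '#<title[^>]*>(.*?)</title>#is', wp_remote_retrieve_body( $response ), $match ) ) {
        $title = trim( html_entity_decode( $match[1] ) );
    }

    // No usable title? Fall back to the final hostname.
    if ( '' === $title ) {
        $title = parse_url( $final, PHP_URL_HOST );
    }

    return '<a href="' . esc_url( $final ) . '">' . esc_html( $title ) . '</a>';
}

The real plugin does this kind of lookup on each shortened URL it finds when Twitter Tools builds the digest.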

The download is here, and that’s where I’ll put future versions:
» Plugin: Twitter Tools – Nice Links.

Future

One thing I’d like to add at some point is cleaning up the title a bit. Titles can get really long, even without people trying to stuff keywords and descriptions in for SEO purposes. All it takes is a page title plus a site title, like this one. That’s a much more complicated problem, though, since there isn’t any sort of standard for which part of a title is the most important. I suppose I could just clip it to the first few words.
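If I do go the clipping route, it could be as simple as something like this (made-up function name, arbitrary eight-word limit):

// Rough sketch: keep only the first few words of an overly long title.
function nicelinks_clip_title( $title, $max_words = 8 ) {
    $words = preg_split( '/\s+/', trim( $title ) );
    if ( count( $words ) <= $max_words ) {
        return $title;
    }
    return implode( ' ', array_slice( $words, 0, $max_words ) ) . '…';
}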

I’d also like to clean up duplicate text. Often the link title and tweet content are going to be the same, or at least overlap, especially if the tweet was generated by a sharing button or extension. That should be easier to check.
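Something along these lines would probably catch the obvious cases (again, just a sketch):

// Rough sketch: does the tweet text already contain the title, or vice versa?
function nicelinks_is_duplicate( $tweet_text, $title ) {
    $tweet = strtolower( trim( $tweet_text ) );
    $title = strtolower( trim( $title ) );
    if ( '' === $tweet || '' === $title ) {
        return false;
    }
    return false !== strpos( $tweet, $title ) || false !== strpos( $title, $tweet );
}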

I suppose I can understand putting one of those “If this is an emergency, please hang up and call 911” messages on a health insurance phone menu. But if you’re going to have one, shouldn’t you put it before the five-minute member identification/sign-in process, not after?

Admittedly, the process only took that long because their voice recognition system wasn’t getting along with my voice, but still, isn’t the point to route people to the fastest response in an emergency?

After years of piggybacking on employers’ web servers (with permission, of course!), I’ve moved my personal websites to a third-party web host. It’s kind of weird to be dealing with a web server that I don’t fully control, but DreamHost is really flexible and (most importantly) specifically supports WordPress.

The only thing I’ve really missed so far is Apache’s mod_speling [sic], which automatically corrects a single typo or capitalization error in a requested filename. It’s nice to have, but far from critical.
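For reference, turning it on is just a couple of lines of Apache configuration once the module is available (the module path varies by install, and a shared host may not let you load modules at all):

LoadModule speling_module modules/mod_speling.so
CheckSpelling On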

Apparently there are websites out there that are redirecting Internet Explorer users to the Alternative Browser Alliance. This is, IMHO, both counter-productive and counter to the open spirit of the web.

For all the same reasons that you shouldn’t block visitors using Firefox, Safari, Chrome, Opera, or anything else unless there’s an actual, genuine technical reason (and unless you’re doing serious multimedia with no fallback option, there rarely is one), you shouldn’t be blocking visitors using Internet Explorer…

Because you’re not going to change them. You’re just going to make them angry.

They arrived at your site looking for something. Slapping them in the face and sending them off to another site is not going to get them to change their behavior and come back. It’s just going to make them look somewhere else for someone offering the same thing who won’t make them jump through hoops.

Case Study

Last week I received a message through the Alternative Browser Alliance’s contact form asking, “What does this have to do with cpanel?” I wanted to reply, “Nothing, why do you ask?”…but the person who asked the question hadn’t left an email address, just the name “King Kong.”

(Tip: If you want an answer to a question, give people a way to contact you!)

So I checked the server logs and saw that he(?) had arrived on the Why Alternative Browsers? page and had left no referrer. Great, another dead end.

I was ready to write it off as spam, but then I decided to search the logs for cpanel, and found several hits referred by a cpanel tutorial. I visited the page and didn’t see any links to my site, but when I looked at the source, I spotted this script:

if(navigator.userAgent.indexOf("MSIE")!= -1)
{
   window.location = "http://www.alternativebrowseralliance.com/why.html";
}

Wow. They just redirected all IE users with no explanation — not even pointing out that they were being shunted off to another website! Imagine opening the front door of a computer repair shop and walking inside to find a political activist’s office instead!

Presumably “King Kong” had searched for cpanel, followed a link to this tutorial, and found himself looking at a page about alternative web browsers. No wonder he didn’t leave a contact address. He didn’t want an answer. He was angry and blowing off steam — at me, for something that someone else did.

And did badly, I might add: Three of the five visits I could actually identify in the logs claimed to be Opera Mini, not Internet Explorer. I don’t recall whether Opera Mini can masquerade as another browser (the current Android version doesn’t offer the option, but this claimed to be an older Java version), but the desktop version certainly can. Older versions of Opera used to deliberately identify themselves as IE (with a tag adding that, no, actually it’s Opera), and would have been caught by this script!

The User-Agent isn’t a reliable indicator. It was never intended to be. If you must single out Internet Explorer for some reason, use conditional comments. That’s what they’re designed for.
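For example, an IE-only notice (or stylesheet) can be wrapped in a conditional comment; every other browser just sees an ordinary HTML comment and ignores it:

<!--[if IE]>
<p>Heads up: this page may not work properly in Internet Explorer.</p>
<![endif]-->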

If what you want to do is block IE visitors, though, think about what you’re really accomplishing. And please, don’t just silently shove the “problem” visitors onto someone else.

I found a sneaky type of spambot this morning. It was impersonating regular commenters on Speed Force, using their names and (at first glance) email addresses to blend in.

The names weren’t terribly surprising, but the email addresses were. Where had it gotten them? WordPress shouldn’t reveal them, unless there’s a bug somewhere. Was one of my plugins accidentally leaking email addresses? Had someone figured out a way to correlate Gravatar hashes with another database of emails?

As I looked through the comments, I realized that in most cases, it wasn’t the commenter’s usual email address. Here’s what the spambot was doing:

  1. Extract the author’s name and website from an existing comment.
  2. Construct an email address using the author’s first name and the website’s domain name.
  3. Post a comment using the extracted name, the constructed email, and a link to the spamvertised site.

The actual content (if you can call it that) of the comments was just a random string of numbers, and the site was a variation on “hello world,” leading me to suspect that it might be a trial run. Certainly they could have been a lot sneakier: I’ve seen comment spam that extracts text from other comments, or from outbound links, or even from related sites to make it look like an actual relevant comment.

I’d worry about giving them ideas, but I suspect it’s already the next step in the design.

Update: They came back for a second round, this time here at K2R, and I noticed something else: It only uses the first name for the constructed email address, but does so naively, just breaking the name by spaces. This is particularly amusing with names like “Mr. So-and-so,” where it creates an address like mr@example.com, and with pingbacks, where the “name” is really the title of a post.
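Putting the pieces together, the address construction appears to boil down to something like this (my reconstruction of the bot’s apparent logic, not its actual code):

// Reconstruction of the spambot's apparent logic, not its actual code.
function fake_commenter_email( $author_name, $author_url ) {
    // Naive "first name": everything before the first space.
    // "Mr. So-and-so" yields "mr", and a pingback's post title yields its first word.
    $pieces = explode( ' ', trim( $author_name ) );
    $first  = strtolower( rtrim( $pieces[0], '.' ) );

    // Pair it with the domain from the commenter's website field.
    $domain = parse_url( $author_url, PHP_URL_HOST );

    return $first . '@' . $domain; // e.g. mr@example.com
}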