Usability researchers like Jakob Nielsen have long contended that people read differently online than on paper: they tend to skim, follow links, and jump off quickly if they don’t find what they want. (Rather than being judgmental, Nielsen simply advises that you tailor your online writing to this.)

Now the Washington Post reports that people are observing a spillover effect, with online skimming habits interfering with concentration when it comes to more in-depth reading…even when reading print.

Sadly, the article only distinguishes between two modes of reading:

  • Online skimming of casual articles and social media on screens.
  • Serious reading of novels and in-depth content in print.

There’s nothing to distinguish, for instance, reading on a dedicated e-reader (without the siren call of Facebook or hyperlinks) from reading on your laptop. Nor is there anything to distinguish casually reading on your phone’s tiny screen from reading at your computer, or sitting on the couch and reading on a book-sized tablet.

A tablet and a breaking paperback.

Personally, I’ve found display size to be the biggest factor in maintaining attention. I get tired of reading on my phone, and I tend to skim on a desktop or laptop, but a handheld tablet (I have a Nexus 7, which is close to the size of a paperback book) works out about right. I’d much rather read a long article on that tablet than on a computer, even with the browser window sized for optimal column width.

So yeah, it’s much easier to concentrate on something long in a book than on a desktop or cell phone…but an ebook reader or a tablet is a lot closer.

There are still tradeoffs: My Les Misérables re-read changed drastically when I switched from a paperback to my tablet, not because I couldn’t concentrate on long passages, but because my method of note-taking changed. My reading actually sped up, but my commentary slowed down. (It did ultimately take me longer than I expected to finish the book, but only because I had so many other books I wanted to read, and ended up taking breaks and reading them instead.)

The main problem I do have with a tablet isn’t continuing to read, but starting to read. Even without a wifi signal, it’s tempting to catch up on email, or saved articles in Pocket, and before I know it, I’m done with lunch and I haven’t even opened the book I’d planned on reading.

(Via Phi Beta Kappa)

I was reading up on wearable computing today, and with the SDCC badge presale looming, I found myself wondering whether a smartwatch would be useful for Comic-Con. (No plans to actually buy one, I’m just thinking.) I don’t normally wear a watch these days, but it does get annoying to have to reach into my pocket when I want to check the time. For this reason, I make a point of wearing a watch at conventions so that I can see the time at a glance and avoid missing events or meetup times.

So, keeping in mind that the current generation of smartwatches (Pebble, Galaxy Gear, etc.) mostly pair up with a phone to do the heavy lifting…what might a smartwatch do better for a con than a phone (or a regular watch)?

1. Messages. Between the noise and the walking, it’s already too easy to miss calls or even texts when you’re out on the floor of the convention. It’s easier to notice a buzz on your wrist than a buzz in your pocket, and less intrusive to glance at your wrist to see if it’s something urgent when you’re interacting with people in the real world. You can also tell instantly when you’re crowd-weaving to meet someone whether that text they just sent is “I’m here,” “Running late,” or “Change of plans, meet me at Hall G lobby.”

2. Schedule reminders. Put the event, time, and room number on the screen. How to make it more awesome: pull down the floorplan and use your location to calculate how long it’ll take to get there, and notify you far enough ahead of time that you can make it, Google Now-style. This is more useful for smaller conventions or at least smaller panels at SDCC, since the big ones require you to line up way ahead of time anyway.

3. Wi-Fi hotspot detector. Even if the watch doesn’t support wi-fi, your phone does, and it can ping the watch to let you know.

4. Breaking news alerts. Ironically, I feel like I miss more news when I’m at Comic-Con than when I’m following along from home. This would have to be very well filtered in order to be useful without pulling you out of actually experiencing the convention.
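Just to make the schedule-reminder idea in #2 concrete, here’s a toy sketch of the “buzz me early enough” math. To be clear: this is entirely back-of-the-envelope, in Python, with made-up walking speeds and floorplan coordinates; it doesn’t resemble any real watch SDK or convention app.

```python
from datetime import datetime, timedelta

# Made-up constants, not measured values.
WALKING_SPEED_M_PER_MIN = 60   # conservative pace through a crowded hall
BUFFER_MIN = 5                 # slack for escalators, crowds, wrong turns

def walking_minutes(here, there):
    """Estimate walking time between two (x, y) floorplan points,
    in meters, using straight-line distance."""
    dx = there[0] - here[0]
    dy = there[1] - here[1]
    distance_m = (dx * dx + dy * dy) ** 0.5
    return distance_m / WALKING_SPEED_M_PER_MIN

def reminder_time(event_start, here, room_location):
    """When the watch should buzz so you can still make the panel."""
    lead = walking_minutes(here, room_location) + BUFFER_MIN
    return event_start - timedelta(minutes=lead)

# A 2:00 PM panel 300 m away: 5 min walk + 5 min buffer = buzz at 1:50.
start = datetime(2013, 7, 20, 14, 0)
buzz = reminder_time(start, here=(0, 0), room_location=(300, 0))
```

A real version would route along corridors instead of drawing a straight line, and pad the buffer for the big Hall H-style lines, but the shape of the calculation is the same.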

A step counter would be interesting, but I can probably just find an app for that on my phone.

I doubt I’d use a wrist-mounted camera like the one on Samsung’s Galaxy Gear much. Google Glass would be more practical for the blink-and-you’ll-miss-it moments, and if you have time to compose a shot, you have time to pull out a phone or dedicated camera. OTOH, a wrist camera is probably a little less creepy than Glass. (On the gripping hand, maybe not.)

Of course, the absolute best use of a smartwatch at Comic-Con:

5. Get one that can actually handle calls, and wear it with a Dick Tracy costume.

What uses can you think of?

Interesting idea: The Human Body as Touchscreen Replacement. The downside to using a touchscreen over something with physical controls is that you lose that instant feedback of where the buttons are. (Skip a song on an old-school iPod while driving? Easy. Do the same on a touchscreen? That’s trickier.) Your own location sense plus knowing exactly what part of your hand (or, in another prototype, ear) you’ve touched could really improve usability for applications that are suited for it.

Apple and Amazon have settled their two-year legal dispute over the term “app store.”

It’s about time common sense prevailed. Even though Apple had the gall to deny it, “app store” is as obviously descriptive of a store selling apps as “book store” is of a store selling books, or “grocery store” of a store selling groceries. Insisting on trademark protection was ridiculous.

Actually, that reminds me of the time way back when that Barnes & Noble (I think it was B&N, anyway) tried to bring a false advertising claim against Amazon for saying that they were the world’s largest book store. The idea was that since Amazon didn’t have a physical storefront, they weren’t a book store, but a book seller. I seem to recall that didn’t stick either, but it took a similarly ridiculous amount of time to settle.

I’ve been reading a Slashdot thread where people who don’t and won’t use tablets argue over why they don’t count as personal computers, because they supposedly aren’t useful for anything except consuming media (not that they’ve tried, I imagine, except maybe the two minutes they spent typing on an iPad that one time at Fry’s or Best Buy without allowing themselves time to get used to the onscreen keyboard), and therefore can’t possibly have any valid use case. (And besides, if we admit that a tablet is a computer, then Apple wins!)

You can certainly make a distinction based on form factor. You can maybe make a distinction based on OS, but then you have to define what makes a PC operating system and what makes a tablet/smartphone/whatever operating system, and things are going to get blurry when you look at, say, Windows 8.

You can sort of make a distinction based on whether you can develop and install your own software, but even that isn’t hard and fast. You can write code in an editor. Compiling is a matter of whether a compiler is available, not something intrinsic to the device itself. Installing software from outside the walled garden is easy on Android, not so much on iOS. (Incidentally, this is the main reason I’ve chosen Android over iOS.) Both have large software ecosystems that developers can contribute to and the average user can install from, which is what actually matters to the average user. (The funny thing is, I remember plenty of arguments about how hard it is to install third-party software on Linux where the counter-argument was that with apt-get, you mostly don’t need to.)

But a lot of Slashdotters are spouting gems on the order of “It doesn’t have a keyboard!” OK, neither does your desktop until you plug one in. Which you can do with a lot of tablets. Or “It doesn’t have a mouse!” – Really? Are you serious? They’ve merged a trackpad with a screen. “I can’t upgrade the parts!” Well, that rules out a lot of consumer-focused desktops, doesn’t it? “PCs have applications, tablets have apps.” – Is there really any meaningful distinction between the two terms?

Pair a Bluetooth keyboard and mouse with your tablet. Hook it up to an external monitor. Or don’t, since the typical tablet already has a better screen than an SE/30. Now you’ve got a workstation, with no more hardware than you would have hooked up to your desktop box. Install an office suite, an image editor, a coding editor — heck, a tax program. At this point the key difference in what’s useful is which applications are available. Wow, I’m having a flashback to all those old Windows vs. Mac vs. Linux arguments.

And yet people insist that these devices are “only toys.”

I still can’t get over the fact that a tech discussion site like Slashdot is so full of neophobes…but then, they always have been. Look back at the “who would want a touch screen?” debates from a few years ago, or the “wow, this iPod thing is lame” initial reviews.

There’s a bubble a lot of geeks live in where they don’t think about other people’s use cases or workflows. That touch screen debate was full of talk about arm strain from vertical monitors, not considering horizontal or handheld screens, and not considering touch as a complement to keyboard & mouse. (My two-year-old wants to touch the screen on the desktop and laptop, and I keep having to explain that they don’t work that way.) There are people out there who consider GUIs to be useful only for opening multiple terminals. And let’s not even get started on the decisions driving Gnome 3, eliminating things like files on the desktop or the minimize button because who uses those?

I learned my lesson when the iMac came out and I thought it was ridiculous. Who would want such a limited computer? As it turned out, lots of people…because they wanted and needed different things from a computer than I did.

So these days, when I see a piece of technology I can’t fathom the use for, I try not to rant about how useless it is. Instead, I wait and see what other people come up with. Sometimes it really is useless (though even the CueCat found a second life as a scanner for LibraryThing), but sometimes the failure isn’t in the technology, but in my own imagination.