A few weeks ago, Szczezuja asked the GeminiSpace community: How were you using the Internet in 1991-1995 and 1995-2005?

This may be a bit longer than asked for, and I thought about breaking it into smaller pieces, but I decided it would be more appropriate for a Gemini post to be a single unit.

1991-1995: Discovery

By 1990 my family had moved on from Atari’s home computer line to what was then known as an “IBM Compatible” PC. I missed out on the BBS era, except for one time we had to download a software patch. My first taste of being online came through walled gardens during my last year of high school:

Prodigy, which I seem to remember having a GUI frame around a mostly text interface (except for banner ads in the frame). I think it even ran under DOS. I remember looking at some message boards about theater, but that’s about it.

AOL, which at the time was much friendlier to use, ran on Windows, and had its own system of message boards, email, etc. But again I don’t remember much about what I did with it until later on.

September

Then I got to college and discovered “Mosaic” at the computer labs. This web thing was really cool! There was a database of movies that I could search, I could find all kinds of sites on this collection of categorized links called Yahoo!, and people were posting things like fan pages collecting all of the Animaniacs cultural references!

Egad! Keeper’s Cartoon Files is still online!

There was a campus-wide Unix network that you could connect to through a dial-up terminal app, or the WYSE terminals scattered around campus. Major departments had Windows and/or Mac computer labs. The engineering, computer science, etc. labs also had bullpens full of graphical Unix terminals (I think they were the classic Sparc “pizza boxes” running SunOS and later Solaris), which was how I first encountered Mosaic and Netscape.

Back at my dorm, though, I had to dial up to a terminal. I could use text-based applications like Lynx for web browsing, or PINE for email. Sometimes I’d check my email (a string of auto-generated letters and numbers based on my major, at a fourth-level domain for the department that handled student email) at one of the text-based terminals in the computer labs or scattered around campus.


Once upon a time, the idea that “only the code mattered” was sold as a way to be inclusive. No one would be shut out if their code was good.

But building software is more than code. It’s design. Planning. Discussion. It’s figuring out use cases, misuse cases, and failure modes. It’s interacting with people.

And if you allow some people to treat others like crap because only the code matters, you end up causing harm and driving people away.

Which obviously isn’t inclusive.

If you mistreat people or violate ethics to make your “technically perfect” software, those people have still been mistreated. Those ethics have still been violated. People have created marvels of engineering and fantastic art by abusing or exploiting others. People have done the same while abusing or exploiting people on the side. And people have created wonders while trying very hard not to abuse or exploit others.

The accomplishment doesn’t erase the exploitation or abuse. And if you can accomplish something incredible without mistreating others, it obviously doesn’t justify the mistreatment.

But the culture of “only the code matters” turned into a culture of tolerating assholes because they were good at their job. The ends justify the means. From trying to enhance freedom, to embracing Machiavelli.

It certainly didn’t help that 90s hacker culture had a significant BOFH element to it, with its built-in disdain for those with less technical knowledge. The Free part tended to prioritize programmers and sysadmins over “lusers.” It was Animal Farm with computer users. Sure, we tried to throw off the corporate overlords who were dictating how people could use their computers. But some computer users were more equal than others.

So a lot of people who could have become part of the Free Software community found a hostile environment and left in disgust. Or fear. And even if you don’t care about the harm done to them, consider their potential contributions. Free Software has always had a problem with coverage: Programmers work on problems that they find interesting or useful. The boring parts, the use cases that they personally don’t use, tend to fall by the wayside.

Yeah, your code is good…but the spec’s incomplete because you pushed away the people who would have pointed out a common use case, or just how easy it would be for a feature to be misused. You didn’t think they were worth listening to because they weren’t rockstar coders. But they also had information you didn’t.

Not that throwing off the corporate shackles has worked out all that well. Every platform now has its own walled garden. Microsoft is less dominant than it once was, but we have new mega-corps who’ve managed to leverage an internet built on Free/libre and open-source software into their own positions of dominance. And trying to maintain services for people who’ve come to expect free/gratis has brought us to the point where adware is the norm, and surveillance is everywhere…to better target those ads. And the majority of computing devices out there are locked down, preventing ordinary users from tinkering with them and developing that technical competence that might bring them into the fold…

If we’ll even let them join.

The FCC wants to eliminate net neutrality, the principle that ISPs should treat all traffic the same, and not block, throttle, or promote data based on what service you’re using or who you’re connecting to. But we can stop them.

What’s Net Neutrality? Simple: your cable company shouldn’t decide where you get your news, what businesses you buy from, which video chat services and streaming services you use, or who you talk to.

Why do we need it? It used to be an unofficial rule, underlying the way the Internet was built over the years, until ISPs started to break it. For example:

  • Multiple ISPs intercepted search queries and sent them to their own portals.
  • AT&T blocked Skype on the iPhone.
  • Verizon blocked tethering apps.
  • Multiple carriers blocked Google Wallet in favor of their own payment services.

In 2015, after a public advocacy campaign, the FCC made it official: ISPs in the United States are now required to treat all traffic equally.

So what’s the problem? There’s a new chairman in charge, and he wants to remove the rule.

No doubt cable and phone companies will go back to their old tricks. Plus they could slow down access to news sites that disagree with them, or charge websites extra for the privilege of reaching their audience (when they already pay for their upload connection), or slow down services owned by competitors (consider: Verizon owns Tumblr and Flickr now, and Comcast owns NBC) in favor of their own.

That’s right: free speech, fair competition, and the price you pay for your internet service are all protected by net neutrality.

Rolling back net neutrality doesn’t help you, doesn’t help business, doesn’t help anyone but the existing carriers.

That’s why I’m joining the Battle for the Net — and you can, too. The FCC’s public comment period is still open. Contact the FCC and Congress (here’s a form), and tell them why Net Neutrality matters to you. Then spread the word.

Keeping the internet open is critical. Let’s work to keep it!

Back in the old days, before you could upload photos straight to Facebook or Twitter or Tumblr, if you wanted to share pictures online you had to host them yourself. Or if you used something like LiveJournal, you could use their limited image galleries. But with space and bandwidth at a premium in those days, you could run into limits fast.

That’s where sites like Photobucket and Imgur came in. You could upload your images there, and then put them on your fan site, or your journal, or whatever. They were also good for posting anonymously, as in communities like Fandom!Secrets. And they’re still good for posting images in places like eBay listings, or online forums (yes, they still exist) that don’t provide their own hosting.

But you know the problem with hosting your stuff with a third party. You can’t guarantee they’ll stick around. And while Photobucket isn’t closing up shop yet like GeoCities did (taking with it an entire generation of online fandom), they’ve suddenly blocked hotlinking (the main way people used it!)…unless you pay $399/year for an advanced account. BuzzFeed minces no words, calling it “ransom”.

So an awful lot of images across the internet have stopped working overnight.

I’m starting to think about all my photos that are hosted on Flickr, now that Verizon owns it. I don’t think they’re likely to do something similar, and Flickr’s paid service is a lot cheaper than Photobucket’s. But Yahoo was never quite sure what to do with it, and Verizon… well…

It might be time to move my “pull in remote Flickr embeds” project off the back burner, just in case.
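If I do get around to it, the core of the project is straightforward: scan each post for hotlinked Flickr images, download a local copy of each one, and point the post at that copy instead. Here’s a rough sketch in Python — the URL pattern, paths, and function name are my illustrative assumptions, not actual project code:

    # Rough sketch: mirror hotlinked Flickr images locally.
    # The regex, paths, and names are illustrative assumptions.
    import re
    import urllib.request
    from pathlib import Path

    FLICKR_IMG = re.compile(r'https?://[a-z0-9.]*staticflickr\.com/\S+?\.jpg')

    def mirror_images(html: str, dest: Path = Path("mirrored")) -> str:
        dest.mkdir(exist_ok=True)
        for url in set(FLICKR_IMG.findall(html)):
            local = dest / url.rsplit("/", 1)[-1]
            if not local.exists():
                urllib.request.urlretrieve(url, local)  # save a local copy
            html = html.replace(url, str(local))  # rewrite the embed
        return html

Even if Flickr never pulls a Photobucket, having local copies means the posts keep working no matter what happens upstream.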

I’ve just launched SpeedForce.org, a companion blog to the website, Flash: Those Who Ride the Lightning.

Since I started adding news items to the front page of Ride the Lightning, it’s started to get a bit crowded. I thought about converting it to a Delicious feed, but then I realized it really ought to be a blog. There hasn’t been a major Flash-focused blog out there since Crimson Lightning shut down, so I figured I’d step in and fill the gap. And I could use the domain I picked up last year!

I’ll be posting Flash-related news there, including a weekly round-up of Flash comics, as well as articles that might not fit into the existing site structure, and (eventually) reviews as well. Some stuff that I would have posted here will end up on the new site. Certainly Flash news, but I may start shifting more comics-related commentary over there as well.

I’ll be refining the look and features over the next couple of weeks, and cross-linking it more into Ride the Lightning. I might keep the current theme with a few tweaks, or I might try to match Ride the Lightning, or I might build something else entirely.

So please, check it out and let me know what you think! I’m open to suggestions as to content, design, etc. And of course bug reports.

The WaSP Buzz article on a new mobile web browser test mentioned phones that can read QR codes, one of several types of 2-D bar codes that you see on things like shipping labels. In this case, the idea is that you can point your phone’s camera at the QR code and it’ll decode it and send you to the appropriate URL.
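That’s really all there is to the format: the code is just a short string, usually a URL, drawn as a grid of pixels. As a minimal sketch (using the third-party qrcode Python package and a made-up URL, not anything from the article), generating one takes a few lines:

    # Minimal sketch: encode a URL as a QR code image.
    # Requires the third-party "qrcode" package (pip install qrcode[pil]).
    import qrcode

    url = "https://example.com/product/1234"  # made-up example URL
    img = qrcode.make(url)   # the payload is just this string
    img.save("product.png")  # a phone camera decodes it back to the URL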

My first thought was that this was just like the CueCat, which was a bar code scanner that you could plug into your computer’s USB port, then scan bar codes in magazines, or on cans of soda, or whatever, and it would tell your computer to bring up relevant information. It was marketed in the late 1990s, during the tech boom… and it was a total flop. No one wanted them. The company went under and had millions of the little scanners sitting around unsold.

But now there are multiple schemes in use for object hyperlinking. In addition to graphical codes, there are RFID tags, GPS coordinates, and short text codes that you can easily type into an SMS message or a web portal.

So why is this sort of thing working now, 10 years later? Is it a societal change? Was the CueCat ahead of its time?

I think there are two reasons:

  • CueCat was a single-purpose device. All the applications listed involve smartphones or other multi-purpose handheld devices. No one wanted a device that would only scan bar codes, but a phone/camera/browser/MP3 player/bicycle that also scans bar codes? Sure, why not?
  • CueCat was tied to the desktop. Sure, you could plug it into a laptop computer, but you’d still have to take the object over to your computer to scan the bar code. Unless you’re a lousy typist, swiping the CueCat across your can of Coke isn’t that much easier than typing in www.coke.com. As a home user, you’re not likely to be scanning a dozen objects in a row (unless you’re cataloging all of your books for LibraryThing).

All the applications listed on that page are mobile. A tagging scheme does give you an advantage when you’re out walking down the street and see something interesting. It’s much easier to punch in a short number than to try to type a URL on most phones, easier still to point your camera at a graphic, and dead simple to pick up an RFID tag or pull in GPS coordinates.

Update 2024: It’s funny: in the early 2010s I remember jokes about how no one outside of a marketing department had ever scanned a QR code, but now they’re all over the place, both for linking objects (a sign on a fast food door to go to their online ordering service, a code in an instruction manual to jump to a site with any changes since printing) and for sending data between devices (communications apps, 2FA apps, starting a download on a mobile device using a QR code shown on a desktop display).
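Those 2FA enrollments are a good example of how little magic is involved: the QR code is typically just an otpauth:// URI carrying the shared secret. A hedged sketch, reusing the qrcode package from above (the account, issuer, and secret here are all made-up example values):

    # Sketch: the payload behind a typical TOTP enrollment QR code.
    # Account, issuer, and secret below are made-up example values.
    import qrcode

    payload = ("otpauth://totp/Example:alice@example.com"
               "?secret=JBSWY3DPEHPK3PXP&issuer=Example")
    qrcode.make(payload).save("enroll.png")  # scan with any TOTP app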