WP Tavern summarizes the conversation around WordPress losing CMS marketshare for the first time in ages, and what various people have cited as likely causes.

Personally, I’m finding its increasing complexity to be a major frustration.

  • Writing on WordPress has gotten somewhat more complicated.
  • Maintaining a WordPress site has gotten more complicated.
  • Developing for WordPress has gotten more complicated.
  • The resulting page code (including CSS and JavaScript) has gotten a lot more complicated. As I’ve noted before, there’s no good reason to require 450K of data to display a 500-word post. Or a single link with a one-sentence comment.

The move towards Gutenberg blocks and full-site editing complicates things on several levels, and feels like an attempt at lock-in as well.

Ironically, I’ve been moving toward Eleventy, which has also been very frustrating…but only in building the layout I want.

On one hand…

  • I have to develop a lot of the components I want from scratch. More than I would have thought. Though I suspect there are enough pre-built layouts out there for most people’s use cases.
  • The documentation is sorely lacking. (Eventually I’ll get around to helping with that.)
  • Dynamic features like comments need to be handled by another program.

But on the other…

  • I can fine-tune things a lot more easily than I could with a WordPress theme.
  • Once I’m done building the layout, adding a new post is almost as easy as it is on WordPress.
  • My actual post content is portable.
  • There’s essentially no attack surface, so if I have a site that’s “done” I can just build it one last time and leave it as-is — and not worry about spam, maintenance, or security (beyond general webserver security).
  • I don’t have to send extra JavaScript libraries along with every page, so it can use a tenth of the bandwidth and load faster on slow connections.

With Eleventy, setting up the layout and features has been super complicated…but once it’s set up, it’s smooth, easy to deal with, and does the job well. It’s kind of like running Linux back in the 1990s.

But with WordPress, there’s complexity in every layer.

Sometimes it’s worth it.

Sometimes it’s not.

Sometimes it takes longer to automate something than it would to just repeat it yourself. Calvin designing a robot to clean his room, for instance. The method of estimating how long it takes to do the thing, how many times you have to do the thing, and then how long it would take to automate doing the thing, is a pretty good guideline.
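That estimation method boils down to simple arithmetic. As a sketch (all the numbers below are hypothetical, not from any real task):

```java
// Back-of-the-envelope check: is automating a task worth the time?
// The heuristic: automate when (time per run × expected runs) exceeds
// the time it takes to build the automation. All figures are made up.
public class BreakEven {
    static boolean worthAutomating(double minutesPerRun, int expectedRuns,
                                   double minutesToAutomate) {
        return minutesPerRun * expectedRuns > minutesToAutomate;
    }

    public static void main(String[] args) {
        // 5 minutes per run, ~20 expected runs, 90 minutes to automate:
        // 100 minutes saved vs. 90 spent, so it's (barely) worth it.
        System.out.println(worthAutomating(5, 20, 90));
    }
}
```

Of course, as the rest of this post argues, raw time isn’t the only input.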

But there are other factors: Like, can you include it in a checklist? If not, what are the chances that you’ll forget to do the thing? And what happens if you forget? What if you might hand things over to someone else and three people down the line, the fact that you need to do the thing doesn’t get passed along?

Or what if you have a situation like Desmond at the Dharma Initiative numbers station, and they know the step is “required,” but don’t know why? (Not that you’re likely to have quite so severe a failure mode!)

Anyway, today I automated some post-processing on a site that I hardly ever change. Not because it’s a pain to do the post-processing. Not because it takes a long time. But simply because if I don’t build it into the process, the next time I change something a year down the line I’ll probably have forgotten that I need to do the post-processing!

Once upon a time, the idea that “only the code mattered” was sold as a way to be inclusive. No one would be shut out if their code was good.

But building software is more than code. It’s design. Planning. Discussion. It’s figuring out use cases, misuse cases, and failure modes. It’s interacting with people.

And if you allow some people to treat others like crap because only the code matters, you end up causing harm and driving people away.

Which obviously isn’t inclusive.

If you mistreat people or violate ethics to make your “technically perfect” software, those people have still been mistreated. Those ethics have still been violated. People have created marvels of engineering and fantastic art by abusing or exploiting others. People have done the same while abusing or exploiting people on the side. And people have created wonders while trying very hard not to abuse or exploit others.

The accomplishment doesn’t erase the exploitation or abuse. And if you can accomplish something incredible without mistreating others, it obviously doesn’t justify the mistreatment.

But the culture of “only the code matters” turned into a culture of tolerating assholes because they were good at their job. The ends justify the means. From trying to enhance freedom, to embracing Machiavelli.

It certainly didn’t help that 90s hacker culture had a significant BOFH element to it, with its built-in disdain for those with less technical knowledge. The Free part tended to prioritize programmers and sysadmins over “lusers.” It was Animal Farm with computer users. Sure, we tried to throw off the corporate overlords who were dictating how people could use their computers. But some computer users were more equal than others.

So a lot of people who could have become part of the Free Software community found a hostile environment and left in disgust. Or fear. And even if you don’t care about the harm done to them, consider their potential contributions. Free Software has always had a problem with coverage: Programmers work on problems that they find interesting or useful. The boring parts, the use cases that they personally don’t use, tend to fall by the wayside.

Yeah, your code is good…but the spec’s incomplete because you pushed away the people who would have pointed out a common use case, or just how easy it would be for a feature to be misused. You didn’t think they were worth listening to because they weren’t rockstar coders. But they also had information you didn’t.

Not that throwing off the corporate shackles has worked out all that well. Every platform now has its own walled garden. Microsoft is less dominant than it once was, but we have new mega-corps who’ve managed to leverage an internet built on Free/libre and open-source software into their own positions of dominance. And trying to maintain services for people who’ve come to expect free/gratis has brought us to the point where adware is the norm, and surveillance is everywhere…to better target those ads. And the majority of computing devices out there are locked down, preventing ordinary users from tinkering with them and developing that technical competence that might bring them into the fold…

If we’ll even let them join.

Kiddo’s been wanting to learn programming, with the ultimate goal of modding Minecraft. We’ve done some Ruby, but he’s impatient, so last night we started Java with a simple program that repeats a println X times.
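Something along these lines (the actual code isn’t in front of me, so the names here are approximate):

```java
// A minimal "print a line X times" starter program, the classic
// first loop. Class and message text are placeholders.
public class Repeat {
    static String repeatLine(String line, int times) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < times; i++) {
            sb.append(line).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Default to 3 repeats; pass a count on the command line to
        // override. Integer.MAX_VALUE is 2,147,483,647 -- a lot of printlns.
        int times = args.length > 0 ? Integer.parseInt(args[0]) : 3;
        System.out.print(repeatLine("Hello!", times));
    }
}
```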

He wanted to pass it the integer limit.

After a few minutes, I suggested we watch a movie and check back later.

After dinner, he decided to stop it and we timed some shorter runs.

I think he has a better understanding of scale now!

I’ve written about the trouble with using mobile apps in dead zones before, so I’m happy to see that I’m not the only one thinking about the problem. Hoodie wants to design for offline first, and is starting a discussion project around the issue.

Offline reading is an obvious application. Most eBook readers handle that just fine, though it’s easy because you spend a lot of time in each book so it doesn’t need to predict what you’ll read next. It would be great if Feedly would sync new articles for offline reading. Heck, I’d like it if Chrome on Android would let me re-open recent pages when the connection dies.

Beyond reading, many actions can be handled offline too. Kindle will sync your notes and highlights. GMail will let you read, write, label, archive, delete, and even send messages without a network connection. All your actions are queued up for the next sync.

There’s no reason this approach can’t be taken with other communications apps for messages that don’t require an immediate response, even with services like Facebook and Twitter. Short notes of the “don’t forget to pick up milk” variety. Observations. Uploads to Dropbox. Photos going to Instagram or Flickr. Buffer would be perfect for this, since you’re not expecting the post to go out immediately in the first place. It shouldn’t give you an “Unable to buffer” error; it should just save the post for later.
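The queue-and-sync pattern behind all of this is simple. A rough sketch (class and method names are mine, not from any real app’s API):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A minimal sketch of the "queue actions offline, flush on reconnect"
// pattern described above. Real apps would persist the queue to disk
// and handle send failures; this just shows the shape of the idea.
public class OfflineQueue {
    private final Queue<String> pending = new ArrayDeque<>();

    // Instead of failing with "Unable to send" while offline,
    // save the action for later.
    public void submit(String action, boolean online) {
        if (online) {
            send(action);
        } else {
            pending.add(action);
        }
    }

    // Called when connectivity returns: replay queued actions in order.
    public void onReconnect() {
        while (!pending.isEmpty()) {
            send(pending.remove());
        }
    }

    public int pendingCount() {
        return pending.size();
    }

    private void send(String action) {
        System.out.println("sent: " + action);
    }
}
```

The point is that “no connection” becomes an ordinary state the app plans for, not an error it throws back at you.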

I’d like to be able to do work in a place where there’s no connection, have that work persist, and fire things off as I finish them instead of having to come back to all of them the next time I’m within range of a cell tower or a coffee shop with wifi. I’d also like to be able to post in the moment, hit “Send,” and move on with my life, instead of having to hang onto that extra context in my mind as I walk around.