A tech list is discussing EAGAIN errors, and I keep misreading it as EGEANIN.
The Twitpocalypse Explained in Layman’s Terms
It’s a trending topic, but there’s a lot of misunderstanding about the Twitpocalypse. Here’s what’s going on, in layman’s terms (I hope).
What’s happening?
- Every Twitter post has an ID number that goes up by 1 each time.
- When a computer program stores a number, it sets aside a certain amount of space for it. Bigger numbers take more space because they have more digits.
- One common format is called a “signed integer.” It has 32 binary digits (1 or 0 only), with one digit set aside to indicate a minus sign. The biggest number it can store is 2,147,483,647. (There’s a quick code sketch of this right after the list.)
- Twitter’s status IDs are approaching that number.
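To make the mechanics concrete, here’s a minimal C sketch (hypothetical, of course; Twitter’s backend doesn’t look like this) of what happens when a 32-bit signed counter gets pushed past that ceiling:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t status_id = 2147483647;  /* the biggest value a 32-bit signed int can hold */
    printf("last good ID: %d\n", (int)status_id);

    /* One more post. Signed overflow is technically undefined behavior in C,
     * so this sketch does the addition unsigned; on ordinary two's-complement
     * hardware the counter wraps around to a large negative number. */
    status_id = (int32_t)((uint32_t)status_id + 1u);
    printf("next ID:      %d\n", (int)status_id);  /* typically -2147483648 */
    return 0;
}
```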
So what’s the likely impact?
- Twitter itself can handle bigger numbers and will be fine.
- Third-party apps that store the ID in a bigger format will be fine.
- Third-party apps that store the ID as text instead of a number will be fine.
- Third-party apps that store the ID in this particular format will end up with bad IDs as they try to cram a big number into a small space.
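In code, the safe and broken cases look something like this (another hypothetical C sketch; no real client is this simple, and the variable names are mine):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void) {
    const char *id_from_api = "2147483648";  /* one past the 32-bit signed max */

    /* Bigger format: a 64-bit integer holds it with room to spare. */
    int64_t big_id = strtoll(id_from_api, NULL, 10);
    printf("64-bit: %lld\n", (long long)big_id);

    /* Text: a string doesn't care how many digits the number has. */
    printf("string: %s\n", id_from_api);

    /* 32-bit signed: the value doesn't fit. On typical machines the
     * narrowing conversion wraps to -2147483648, which is exactly the
     * "bad ID crammed into a small space" problem. */
    int32_t small_id = (int32_t)big_id;
    printf("32-bit: %d\n", (int)small_id);
    return 0;
}
```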
If I were to guess, the most likely breakage would be that replies might be attached to the wrong previous post — but again, only with apps that use this particular format for numbers.
Twitter itself will probably sail through cleanly (and has been planning to move up the schedule so that affected app developers don’t have to fix things in the middle of the night), so don’t expect any fail whales. Unless so many clients have problems that lots of people switch to the website.
Update: Not surprisingly, most Twitter clients are unaffected by the Twitpocalypse. I’ve used both Twidroid and Twhirl with no problems since Twitter passed the mark. I figured a few would get tripped up, but the real surprise is that it hit Twitterrific. One of the most popular clients on the iPhone? There is an update out, but a lot of people are unable to connect.
Improving Browser Reliability
The IEBlog recently posted about their efforts to improve reliability in Internet Explorer 8, particularly the idea of “loosely-coupled IE” (or LCIE). The short explanation is that each tab runs in its own process, so if a web page causes the browser to crash, only that tab crashes — not the whole thing. (It’s a bit more complicated than that, but that’s the principle.) Combine that with session recovery (reloading the same set of web pages, ideally with the form data you hadn’t quite finished typing), and you massively reduce the pain of browser crashes.
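To show the principle in miniature (a toy POSIX sketch in C, nothing like IE8’s actual implementation), here’s a parent process surviving the crash of one of its per-tab children:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    for (int tab = 0; tab < 3; tab++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child process: "tab" 1 crashes; the others load fine. */
            if (tab == 1)
                abort();
            printf("tab %d: page loaded\n", tab);
            exit(0);
        }
    }
    /* Parent ("the browser"): a crashed tab is just an exit status,
     * not a crash of the whole program. */
    int status;
    pid_t child;
    while ((child = wait(&status)) > 0) {
        if (WIFSIGNALED(status))
            printf("tab process %d crashed (signal %d); browser still running\n",
                   (int)child, WTERMSIG(status));
    }
    return 0;
}
```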
I’d like to see something like this picked up by Firefox and Opera as well. They both have crash recovery already, but it still means restoring the entire session. If you have 20 tabs open, it’s great that you don’t have to hunt them down again. But it also means you have to wait for 20 pages to load simultaneously. It would be much nicer to only have to wait for one (or, if I read the IE8 article correctly, three).
Edited to add:
On a related note, I’ve run into an interesting conflict between crash recovery and WordPress’ auto-save feature. If you start a new post, WordPress will automatically save it as a draft. If the browser crashes, crash recovery will bring up the new-post page (not the saved draft), but with most of the form data you filled in restored. So the title, the text of your post, etc. will all be there. But WordPress will see it as a new post, and you’ll end up with a duplicate.
This wasn’t a major problem when I encountered it — I had to reset the categories, tags, and post slug after I hit publish (since I hadn’t noticed that they’d been reset to defaults), and I just deleted the older, partial version of the post — but if I’d uploaded an image gallery, I imagine I would have been rather annoyed, since there’s no way (that I’ve noticed) to move images from one post to another. You can reuse them, sure, but not in a way that keeps the gallery feature working.
Cleaning up Firefox’s Memory Usage
One of the biggest complaints about Firefox since the release of 1.5 has been its high memory usage. Go to any forum and you’ll find people griping: “have they fixed the leak yet?”
It is, of course, much more complicated than that. There are caches, fragmentation, places where memory is used inefficiently, bunches of small leaks, leaks that only happen under specific circumstances, leaks in extensions, leaks triggered by combinations of extensions, etc.—not one single leak that can be fixed. And then there was the unfortunate post in which one Mozilla developer (I’m too lazy to look up who) pointed out that 1.5 stored more information in memory, and that probably had a bigger impact on total memory size than actual leaks, which many people on the Internet jumped on as “It’s not a bug, it’s a feature.” (Why should they bother to read what was actually stated, when they can just read a misleading but sensational summary?)
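For one contrived example of the “leaks that only happen under specific circumstances” category (this is illustrative C, not actual Firefox code), here’s the classic error-path leak:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical: leaks 4096 bytes, but only when handed a bad URL. */
char *load_page(const char *url) {
    char *buf = malloc(4096);
    if (buf == NULL)
        return NULL;
    if (strncmp(url, "http", 4) != 0)
        return NULL;  /* BUG: early error return skips free(buf) */
    snprintf(buf, 4096, "contents of %s", url);
    return buf;
}

int main(void) {
    load_page("not-a-url");  /* this call leaks */
    char *page = load_page("http://example.com/");
    if (page != NULL) {
        puts(page);
        free(page);
    }
    return 0;
}
```

Leaks like that pass every test that only feeds in well-formed input, which is part of why they pile up and why there’s no single “the leak” to fix.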
A lot of the small leaks were patched in bugfix releases for 1.5 and 2.0, but really big changes are coming in Firefox 3. Mozilla’s Pavlov has written a detailed post on Firefox 3 Memory Usage, describing the different categories of memory improvements that have been made in the Firefox 3 development cycle.
I wouldn’t be surprised to find that this is one of the big reasons Firefox 3 has taken so much longer than previous releases. I suspect it’s time well spent, though, and users will be happier with a later, lighter Firefox than with one that shipped earlier, but used just as much memory.
A little scripting humor
After updating some links, the following dialogue occurred to me:
Sallah: Indy, why does the web… move?
Indiana: Give me the URL.
(The location looks like a Python script)
Indiana: Snakes. Why did it have to be snakes?
Sallah: ASP. Very dangerous. You go first.
(Actually, I have to credit Katie for the Python reference. The first and last lines just popped into my head, though.)
Pixels as Magic Numbers
All the Linux desktop action these days is in KDE and GNOME, but on older hardware, servers, or anything else where you need to squeeze every last ounce of performance from the box, something lighter is needed.
My Linux box at work — a 300 MHz Pentium II — runs WindowMaker. It’s familiar, it stays out of the way, and it doesn’t tie up the memory or CPU that a modern version of KDE or GNOME (or Windows, for that matter) would. But it’s minimal, so you need to add applets like a clock or a desktop pager yourself. They’re easy enough to find — I ended up using the aptly named wmclock and wmpager — but there’s a significant problem with both. WindowMaker lets you change the size of the dock icons, but when I shrank the dock to get more space, I discovered that both applets have a hard-coded size of 64×64 pixels.
The result is what you’d expect: a 64×64 applet just doesn’t work in a 48×48 space. It surprised me, though, since these dockapps are designed specifically for WindowMaker, and it’s WindowMaker itself that lets you change the size. You open up Preferences, change the size, and restart WindowMaker. Just menus and buttons. No config files, no registry, no third-party add-on. This isn’t an esoteric hack that takes serious effort to find; it’s a basic feature. You might as well design a Mac program that assumes the Dock is on the bottom of the screen. For most people it will be, but it’s not rocket science to move it.
In my ICS classes, they always discouraged us from using “magic numbers” — just throwing a number in the code without identifying or abstracting it. There are two very good reasons for this. The first is that you might forget what this 64 is doing. The second is that you might decide to change it later on, and it’s much easier to change one SIZE=64 definition than to track down every 64 and hope you’ve neither missed any you need to change nor changed any you need to leave alone.
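In C terms, the difference looks something like this (a made-up dockapp sketch; ICON_SIZE and draw_pixel are my names for illustration, not anything from wmclock or wmpager):

```c
#include <stdio.h>

/* Stand-in for whatever actually puts a pixel on the dock icon. */
static void draw_pixel(int x, int y) {
    (void)x; (void)y;
}

/* Magic-number version: 64 appears twice here and dozens more times in
 * a real dockapp, and not every 64 necessarily means "icon size". */
static void draw_face_bad(void) {
    for (int y = 0; y < 64; y++)
        for (int x = 0; x < 64; x++)
            draw_pixel(x, y);
}

/* Named version: one definition to change instead of a scavenger hunt. */
#define ICON_SIZE 64

static void draw_face_good(int icon_size) {
    for (int y = 0; y < icon_size; y++)
        for (int x = 0; x < icon_size; x++)
            draw_pixel(x, y);
}

int main(void) {
    draw_face_bad();
    draw_face_good(ICON_SIZE);  /* or 48, once the dock is shrunk */
    return 0;
}
```

And even the named constant is only half the fix: since WindowMaker can change the dock size at runtime, the size really wants to be a parameter read from its settings, not something compiled in at all.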
Those dock applets are stuck at 64×64 pixels because the programmers were thinking in terms of the pixel grid, not in terms of actual display size.