Dana Blankenhorn has thoughts about latest moves of news services behind paywalls.

While elite journalists bemoan the right-wing propaganda machine, seeing it as an existential threat to democracy and liberty, they’re defending an industry rapidly going behind paywalls. Nearly all daily newspapers are already there. Now the news services are doing the same. The latest is Reuters, which erected its paywall this weekend.

The criticism is apt. The refusal to do anything about it is criminal.

Paywalls have been around in various forms for years, but I’ve noticed that they’re getting more and more in-your-face nowadays — sometimes not even letting you glimpse an article before demanding you sign up for a month.

Dana proposes a ‘day-pass’ model: for a small fee, readers get access for a limited period, or a set number of articles, before the paywall closes again.

Instead of demanding the commitment of a subscription, newspapers can offer day passes that let people behind the paywall for 24 hours or even less. When I tweeted this idea, however, a reporter for the Columbia Journalism Review specializing in digital journalism instantly dismissed it, calling it unworkable. (I felt like throttling the bastard.)

The technical means are there; all that is lacking, it seems, is the will.

A news story that’s not read doesn’t exist. Every reporter knows that, yet we stand idly by while our bosses leave readers with nothing but propaganda to read. During my first lecture at Northwestern’s Medill School, in 1977, our teacher suggested we might prefer the nearby Kellogg School of Business. I should have taken him up on it.

Without a functioning business model that puts readers first, journalism doesn’t exist. Medill is now an “integrated marketing” program. I should throw my MSJ into the nearest garbage can.

Nilay Patel wrote a fun piece over at The Verge recently about the brief history of netbook computers, and pondered their effect on the technology landscape.

There were two products that arrived in 2007 that fundamentally changed computing: one, of course, was the iPhone. The second, obviously more important product was the $399 Eee PC 701. It originally ran a custom Linux operating system that reviewers loved (Laptop Mag’s Mark Spoonauer said it was “ten times simpler to use than any Windows notebook”) and was generally heralded as a new kind of computer with tremendous mass appeal. Spoonauer: “Pound for pound, the best value-priced notebook on the planet.” 

Again, this was a weirdo little two-pound plastic laptop that ran a custom Linux distro that was basically a front for various websites. (We hadn’t invented the phrase “cloud services” yet.)

I was sorely tempted by the Eee PC and the other netbooks that appeared later, but in the back of my mind I did wonder what the catch would be. It literally sounded too good to be true.

In hindsight, it didn’t help that the processors used for these netbooks traded performance for battery life. But what really did in the netbook market was Microsoft pushing manufacturers to make them run a cut-down version of Windows 7.

(Not mentioned in the linked article, but relevant to this discussion, are the Pocket PCs of the 1990s and 2000s which in many ways were the netbook’s forebears. Those were also something I’d pondered buying, but passed on due to concern over whether I actually needed one.)

Between 2007 and 2010, netbooks were seemingly the future of portable technology, at least in the eyes of technology journalists and bloggers. Nilay Patel’s article captures some of the fervour that was around at the time.

Then Apple launched the iPad in early 2010. I was initially unconvinced that it was worth buying, but as more apps became available to take advantage of it I could start to see uses for it, and got my first iPad in October that year.

Did any of this even happen? Is this real? I remember it all, but I can’t tell if it meant anything, or if we all just believed Microsoft and Intel were so mysteriously powerful that we had to live in their product frameworks and 160GB of maximum hard drive space. Did anyone actually buy a netbook? The only people I ever met who had netbooks were other tech writers; at one memorable trade show my colleague Adi Robertson showed up with both a gigantic gaming laptop and a tiny netbook, two laptops both perfectly ill-suited for the tasks at hand.

I asked Joanna, who is now a senior personal technology columnist at the WSJ, about all this, and she replied: “Let’s be clear here. Apple’s coming event this week is actually about netbooks. The iPad Pro is an outgrowth of the netbook movement a decade ago.” Was she joking? I don’t know, and she wouldn’t tell me.

I wouldn’t describe the iPad as a spiritual descendant of the netbooks — while it’s a portable device, Apple approached things from a completely different angle. It really is the technology of the iPhone scaled up, along with all the strengths and weaknesses. The past decade has seen Apple addressing those, to varying degrees of success, and today’s iPads and iPhones are more like evolutionary branches sharing a common ancestor.

A more compelling case could, I think, be made for Microsoft’s Surface devices being the netbook’s legacy, if only because both hardware and software issues have dogged them since the beginning.

The final paragraph of Nilay’s article suggests that perhaps netbooks did succeed, in as much as portable devices are all around us now. I think he’s stretching things there, as the portable devices he refers to don’t share much if any technology DNA with the netbooks of yore. In fact, I’d say that they have succeeded precisely because they avoided the mistakes and compromises that the netbooks made.

I didn’t watch Apple’s online event yesterday, preferring instead to digest the blogging that followed. The announcement of the new 24-inch M1-powered iMacs in multiple colours is what piqued my interest. My initial reaction is that it may not be what I would upgrade to, but it would be perfect for my Mum as an eventual replacement for her 2014 21.5-inch iMac.

The lack of USB-A ports will require some cable replacements, and perhaps a new USB hub, but otherwise wouldn’t be a dealbreaker. The absence of the SD card slot won’t matter much, as Mum has never used the one on her Mac and I can count on the fingers of one hand the times I’ve needed to plug in an SD card to mine.

The things that hold me back from considering these new Macs aren’t what you might expect. Since I no longer have a need to run virtual machines, I could probably get by just fine with 16GB of memory. And the folks at Rogue Amoeba have already sorted out their various audio apps for the M1 Macs.

The big question mark, for me, is how well Second Life will run on the M1 iMac and Big Sur. Since Apple announced several years ago that they’re deprecating, but not removing, OpenGL support in macOS, I know that Linden Lab have been evaluating how to address this. In theory, Rosetta 2 should allow me to continue accessing Second Life using my current viewer app, but I don’t know for certain if that’ll be the case.

In any case, my current 2017 27-inch iMac should be good for several more years yet, so it’s not a pressing concern.

There is every reason to believe that vaccination is making short work of the pandemic in the UK, but it is always worth learning lessons. I’ll remember to trust the competence of the government a little less, to trust mathematical models a little more and to have some respect for the decency of ordinary people.

Tim Harford

The idea that a focus group might have good ideas when no one has had the courage to put them out there first of all is absurd.

The world is not changed by asking for consent. It is always changed by those who refuse to give their consent to what the consensus might be.

Richard Murphy

Tom Fishburne

An unfortunate side-effect of ad-blocking and tracker-blocking is that I get confronted by various kinds of CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) a lot, particularly for site sign-ups or shopping check-outs.

I’ve recently started thinking about what to do with my current mobile devices — an original iPhone SE and an iPad Mini 4, both bought refurbished — once they stop receiving support from Apple. That might be happening very soon, depending on what the hardware requirements will be for iOS 15.


Jacob Kaplan-Moss:

We’re an industry obsessed with automation, with streamlining, with efficiency. One of the foundational texts of our engineering culture, Larry Wall’s virtues of the programmer, includes laziness.

I don’t disagree: being able to offload repetitive tasks to a program is one of the best things about knowing how to code. However, sometimes problems can’t be solved by automation. If you’re willing to embrace the grind, you’ll look like a magician.

Over the last few years I’ve realised that there’s a balance to be struck between automating tasks versus manually tackling stuff. For instance, I have rules set up in Hazel, and macros set up in Keyboard Maestro, to perform tasks when I hit a set of keys or apply a tag to some files, and those save me some time and make my life a bit easier. But sometimes I’ll make time to go in and organise files myself, transfer them elsewhere if need be, or delete them if they’re no longer required. And I will still print things out occasionally, for the simple reason that having it on paper beside me makes it easier for me to cross-reference between what’s on there and what’s on the screen.
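The kind of file-filing task described above, which tools like Hazel handle with rules, can be sketched in a few lines of code. This is purely an illustrative example, not anything from the post — the folder layout and the sort-by-extension rule are my own assumptions:

```python
# Illustrative sketch of a Hazel-style filing rule: move every file in a
# folder into a subfolder named after its extension. The rule itself
# (sorting by extension) is a hypothetical example.
from pathlib import Path
import shutil


def sort_by_extension(folder: Path) -> dict[str, list[str]]:
    """Move each file into a subfolder named after its lowercased
    extension; return a mapping of extension -> moved file names."""
    moved: dict[str, list[str]] = {}
    for item in folder.iterdir():
        if not item.is_file():
            continue  # leave subfolders (including ones we create) alone
        ext = item.suffix.lstrip(".").lower() or "no-extension"
        dest_dir = folder / ext
        dest_dir.mkdir(exist_ok=True)
        shutil.move(str(item), str(dest_dir / item.name))
        moved.setdefault(ext, []).append(item.name)
    return moved
```

Running it once over a downloads folder does in a second what would otherwise be a few minutes of dragging and dropping — which is exactly the trade-off being weighed here: the script saves time, but it will never notice the one file that actually belonged somewhere else.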

I’ve also reduced the number of online services that I rely on, because their utility is contingent on an internet connection plus the servers staying up. Neither of those is an absolute certainty. Local resources may not be able to perform all of the magic, but at least if something goes wrong I’m in a position to fix them or find workarounds.

And while I have a lot of respect for the command line, I find the concept of wanting to not have to leave that environment to be almost fetishistic. I’m old enough to remember working at a physical terminal, and that is not a place I want to be 24/7!

Let’s Not Dumb Down the History of Computer Science

An edited transcript of a talk given in 2014 by Donald Knuth (of The Art of Computer Programming fame), where he discusses the paucity of study and analysis of how computers, the software that runs on them, and the development of said software have progressed over the years.

GIVING THIS TALK might be the greatest mistake in my life, because I’m going to talk about controversial things. I generally go out of my way to avoid argument whenever possible. But I feel so strongly about this that I just have to vent and say it.

Although there is “history” in the title, I’m not going to tell you about the history of computer science. Instead, I’m going to talk about historians of computer science, about historiography. This is meta-history. I’m going to try to explain why I love to read works on history, and why I’m profoundly disturbed by recent trends in what I’ve been reading.

Knuth gives examples of the gaps that exist in our knowledge of this history, and offers up suggestions of where to look for the information to plug those gaps.

There are many wonderful algorithms and source codes whose histories are completely untouched. If we technicians can study and explain them in depth, then historians will at least have material to which they can later add the breadth.