As he described in a webinar last week, Ian Roberts, professor of epidemiology at the London School of Hygiene & Tropical Medicine, began to have doubts about the honest reporting of trials after a colleague asked if he knew that his systematic review showing that mannitol halved death from head injury was based on trials that had never happened. He didn’t, but he set about investigating and confirmed that the trials had never taken place. They all shared a lead author who purported to come from an institution that didn’t exist, and who killed himself a few years later. The trials were all published in prestigious neurosurgery journals and had multiple co-authors. None of the co-authors had contributed patients to the trials, and some didn’t know they were co-authors until after the trials were published. When Roberts contacted one of the journals, the editor responded that he “wouldn’t trust the data.” Why, Roberts wondered, had he published the trial at all? None of the trials have been retracted.
The background buzz of permission dialogues is deafening, deadening. Cookie notices, consent forms, allow/reject. Should I trust this? Can I install that? Do you want to try OneDrive? Dropbox needs to update again. You need to restart your browser. Are you sure you don’t want to try Edge? You really shouldn’t be installing untrusted dangerous software. This trusted approved software wants to know your location always. Allow/Decline?
Your privacy is very important to us. We would like to know what you are doing at all times. Accept / Ask me again later.
Yeah. I couldn’t give a good reason why anyone should trust, or even like, software the way it typically works these days.
Ask me again later. Not right now. More information. Are you sure?
This article is from 2008 but is, if anything, even more relevant today: a history of how we’ve been persuaded to work longer hours and replace our leisure time with consumption.
As with most headlines that end in a question mark, the answer is a resounding ‘no’.
“Software research is a train wreck,” says Hillel Wayne, a Chicago-based software consultant who specialises in formal methods, citing as an example the received wisdom that bugs are far more expensive to fix once software is deployed.
Wayne did some research, noting that “if you Google ‘cost of a software bug’ you will get tons of articles that say ‘bugs found in requirements are 100x cheaper than bugs found in implementations.’ They all use this chart from the ‘IBM Systems Sciences Institute’… There’s one tiny problem with the IBM Systems Sciences Institute study: it doesn’t exist.”
Laurent Bossavit, an Agile methodology expert and technical advisor at software consultancy CodeWorks in Paris, has dedicated some time to this matter, and has a post on GitHub called “Degrees of intellectual dishonesty”. Bossavit referenced a successful 1987 book by Roger S Pressman called Software Engineering: a Practitioner’s Approach, which states: “To illustrate the cost impact of early error detection, we consider a series of relative costs that are based on actual cost data collected for large software projects [IBM81].”
Unlike in the first item in this post, from the British Medical Journal, there’s no suggestion here of fraud or dishonesty. This is more a case of sloppy writing and reporting, exacerbated by the web.