Imagine, if you will, a proud Isaac Newton holding a feather and a stone next to one another, at equal heights.
“Now watch carefully,” he says, “see which one hits the ground first.”
Onlookers are far from surprised to find themselves watching the feather float lazily toward the ground, where the stone had landed several seconds earlier.
“Well, theoretically they land at the same time,” says Newton. “Your physics must be broken.”
“Should have bought a Mac,” sniggers Robert Hooke.
Now notice how this (probably) didn’t happen. That’s because ‘theoretically, in this simplified model’ tends not to be much use when it differs so greatly from reality. Yet within Computer Science, nothing ever has to work; you merely have to say “Well, it works in theory” and your unfinished work is instantly a work of pure genius. And you wonder why people say CS isn’t a ‘real’ subject.
I’m not saying simplified models aren’t useful, but there has to be a limit: a point at which a model is so naive that it ceases to be useful and becomes merely ‘cool and interesting’. Computer Science, it seems, is interested almost exclusively in the ‘cool and interesting’.
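As it happens, the gap between the vacuum model and reality is easy to demonstrate. Here’s a rough Python sketch; the drop height and drag-to-mass ratios are invented illustrative numbers, and linear drag is itself (fittingly) a simplified model:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time_vacuum(height):
    """The idealised model: no air, so everything lands together."""
    return math.sqrt(2 * height / G)

def fall_time_with_drag(height, drag_per_mass, dt=1e-4):
    """Crude Euler integration of dv/dt = g - k*v, where k = drag/mass."""
    v = fallen = t = 0.0
    while fallen < height:
        v += (G - drag_per_mass * v) * dt
        fallen += v * dt
        t += dt
    return t

h = 2.0  # drop height in metres (made up)
print(f"in theory, both land after {fall_time_vacuum(h):.2f} s")
print(f"stone   (low drag-to-mass):  {fall_time_with_drag(h, 0.01):.2f} s")
print(f"feather (huge drag-to-mass): {fall_time_with_drag(h, 20.0):.2f} s")
```

In this toy model the stone’s time is essentially indistinguishable from the theoretical 0.64 s, while the feather takes over four seconds: exactly the several-second gap the onlookers saw.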
When you’re having some sort of software issue, perhaps the least useful remark anyone can offer is “Well, it works at my end.” It’s about as helpful as “it works in theory”, and it is essentially the same claim. The fact that it passes as a valid response to any problem is surely indicative of a larger problem with the attitude of software developers: if you write software, it’s not your users’ fault (and it certainly shouldn’t be their problem) that you failed to anticipate their hardware configuration or operating system version; it’s yours. Few developers seem to like this idea, possibly because the Computer Science approach to making things work (i.e. not making them work) is far easier than doing a thorough job.
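The engineering alternative is unglamorous: check the assumptions your code makes about its environment up front, and complain usefully when they don’t hold. A minimal sketch of the idea (the minimum version and platform list here are invented examples, not anyone’s real requirements):

```python
import platform
import sys

def check_environment(min_python=(3, 8)):
    """Collect environment problems early, instead of failing
    mysteriously on some user's machine later."""
    problems = []
    if sys.version_info < min_python:
        problems.append(
            f"needs Python {'.'.join(map(str, min_python))}+, "
            f"found {platform.python_version()}"
        )
    if platform.system() not in ("Linux", "Darwin", "Windows"):
        problems.append(f"untested platform: {platform.system()!r}")
    return problems

if __name__ == "__main__":
    for problem in check_environment():
        print(f"warning: {problem}", file=sys.stderr)
```

An explicit warning about an untested platform isn’t as satisfying as actually supporting it, but it’s a long way better than “works at my end.”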
And that is why I prefer to think of myself as an engineer rather than a computer scientist.