One consequence of working with security software is an awareness of many of the web’s insecurities. A worrying number of websites and web apps have clear flaws.

image: montmartre

There are, I think, a number of reasons for this low quality:

  • There is a dangerous interdependency between independent websites: if just one shared component is breached, many sites can fall. Web developers pull in scripts from common third-party libraries willy-nilly, so malicious actors target and corrupt those libraries. This nightmare of irresponsibility goes unfixed because that is simply the way things are done.
  • The web is a complex space, but that’s no reason not to test a web app properly, even though the testing is a lot of work. Yes, there are dozens of browsers; yes, most people use browsers based on one designed by a company whose business model requires breaching (information) security; yet so many web developers are too lazy to ensure their site works on alternative browsers (for example, sncf, which doesn’t work on a hardened Firefox). Ironically, the one browser that seems to work with everything is the venerable Netscape, whose suite survives today as SeaMonkey.
  • Companies and organisations do attempt to help users avoid the dangers that fall out of the awful quality of so much found on the web. But even those efforts fail. For example, an online course from SoSafe intended to help people improve their web security only works if they actually reduce their web security (by permitting third-party cookies). I should be shocked that a supposed security course doesn’t understand basic security, but, to be honest, it reflects the generally dreadful standards of web software.
  • Finding and fixing flaws is difficult and resource-intensive, and brings no reward that a user can see and praise. The only time the user sees the results of such testing is when it hasn’t been done sufficiently, when something horrible happens. So how can a company know when sufficient testing has taken place, and how can it decide how many resources to put into such testing? It is so easy to take shortcuts, because the consequences may not be felt for years (often after those who took the shortcut have moved on). To be honest, I think the solution here is legal, not just technical.
  • The languages used are over-complex and, by design, encourage poor coding. As a very minor example, HTML allows programmers not to bother closing elements, even though that is a cause of inefficiency; the HTML standard itself warns against the practice (“Errors that result in disproportionately poor performance”). There’s no easy way out of this; it would require the online world to retool.
  • Web developers often cannot independently (of a web editor) check that their website is error-free, because no tool existed to do so, at least no tool that checked everything rather than just certain surface details. That, at any rate, was the case five years ago, which is why I started to write the static site checker. Unlike all the other tools I found then, it checks internal links properly, it checks attribute values, and so on. That kind of tool should have been available from the beginning of the commercialisation of the web, yet it was not.
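The internal-link check mentioned in that last point can be sketched in a few lines. This is a hypothetical simplification using Python’s standard-library HTML parser, not the actual static site checker: it collects every `id` declared in a page and flags fragment links (`href="#…"`) that point at ids which don’t exist.

```python
# Hypothetical sketch of an internal fragment-link check, not the real ssc:
# gather declared ids and fragment hrefs, then report the orphans.
from html.parser import HTMLParser


class LinkAndIdCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.ids = set()        # every id declared in the page
        self.fragments = []     # every href that targets a fragment

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "id" in a:
            self.ids.add(a["id"])
        href = a.get("href") or ""
        if href.startswith("#"):
            self.fragments.append(href[1:])


def broken_fragments(html: str) -> list:
    """Return fragment targets with no matching id, i.e. broken internal links."""
    collector = LinkAndIdCollector()
    collector.feed(html)
    return [f for f in collector.fragments if f not in collector.ids]


page = '<h1 id="top">Hi</h1><a href="#top">ok</a><a href="#missing">bad</a>'
print(broken_fragments(page))  # -> ['missing']
```

A real checker would, of course, also follow links between files, validate attribute values against the specification, and much more; this only shows why the check is cheap enough that there is no excuse for shipping broken anchors.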

image: montmartre

As a user of the web, I can and do take precautions against the consequences of its general poor quality. Yet I now find sites punishing people for doing just that. Adverts, for example, are a real problem. I don’t like them, not because I dislike advertising (it’s useful to know which products are so poor they don’t sell themselves), but because the adverts are usually dangerous in themselves. They’re not controlled by the site owner, they include cross-site scripting, they deliberately distract users’ attention from security (among other things) with infantilising animations, and they try to trick users into buying bad products.

And now many websites punish people who apply basic security practices. It’s as though these sites want to corrupt the people who use them. The latest I’ve encountered is vide-greniers.org, which blocked users who block poor security practices. I know they’re paid to post information about brocantes, but I’m beginning to wonder if they’re also paid by identity thieves to make it easier to commit their crimes. They claim it’s all about advertising, yet if they cared about secure practice they’d offer security-aware adverts. Note: since I first wrote that, the site has realised its mistake; it now merely asks people to reduce their online security rather than requiring it.

Even one of the paragons of computer security, openbsd, has a website with myriad flaws. It’s far better than many, but it still has broken internal links, syntactically corrupt elements, and it drowns in inefficiency. None of those errors is a security risk, so far as I know, but the fact that they remain, despite the project having been given access to a tool that can identify and even fix them (ssc), says a lot about the attitudes that lie behind the state of the web.

The web is an essential tool for modern life, but so much of it is childishly and haphazardly prepared. This needs sorting out. I’m not sitting on my arse: I’ve built a tool that should help that happen, but the tool, in itself, can’t overcome the lazy attitudes that underlie the problems. The tool has to be used! The web will remain shoddy, ramshackle, and unpleasant to use until web companies want otherwise, or, more likely, are forced to do otherwise. Again, I believe the solution is legal, not technical.

Finally, I should point out that this bad practice is not universal. There are many good examples of properly constructed websites that work well. They seem mostly to come from big corporates, with clear exceptions, suggesting the deep problem underneath it all is the difficulty and expense of producing good-quality sites.