Category Archives: Testing

Posts relating to software testing, or to software development in general.

What are the parallels between music criticism and software testing?

Regular readers of this blog don’t really need to be told that I’m a very keen music fan and amateur rock critic. Writing about a small club-based scene, I’ve come to know quite a few band members over the years. I’ve even had people suggest I should quit working in the IT industry and become a full-time music writer. But while being on the fringes of the music scene can be a great experience, I’m not convinced I want to jump ship and join the circus.

But I can see a lot of parallels between music criticism and my professional career as a software tester.

Not that I’m suggesting that testing and reviewing are exactly the same. To start with, music is inherently more subjective than software. But just as it can be a judgement call as to whether or not a piece of software is fit for purpose, it’s never completely subjective whether a record or performance is good, bad or indifferent. There are those who claim that all opinions are equally valid when it comes to reviews, and that there is no such thing as an objectively good or bad record. If you believe that, you clearly haven’t heard Lou Reed’s appalling collaboration with Metallica. It seems to me that both testing and reviewing are something many people can attempt, and just about anyone can do badly, but they take skill and experience to do well. You only have to look at the reviews on websites to which anyone can post without moderation to realise there are bad reviewers out there just like there are bad testers.

To review a record or concert requires both an understanding of what the artist is trying to achieve, and an honest assessment of how well they’ve succeeded in achieving it. That in turn requires the equivalent of domain knowledge. Just as a lot of indie-pop reviewers come horribly unstuck attempting to review progressive rock or metal releases, ask me to review a dubstep or free-jazz record and I wouldn’t know where to start. But just as testers from different backgrounds will approach things from different angles and uncover different bugs, a reviewer with deep specialist knowledge of a specific genre will have a quite different perspective from one whose taste is far broader. Something that’s meant to have crossover appeal benefits from both viewpoints.

Then there is the issue of speaking truth to power, which can require both courage and diplomacy. Egos even bigger than those of developers go with the territory. When an artist has poured their heart and soul into making a record, they don’t always appreciate being told how their work could have been better. Much like the way developers don’t always appreciate being told the code they’ve slaved over is riddled with bugs they really ought to have picked up in their own unit testing. And if you’ve ever had the misfortune to work in a dysfunctionally political environment where project managers surround themselves with yes-men and shoot the messenger whenever the news is bad, then you’ll recognise those over-zealous fans who sometimes try to vilify anyone who attempts constructive criticism.

It’s true that there are a lot of rock critics out there who exhibit exactly the same sort of adversarial behaviour that gives some testers a bad name. Yes, writing and reading excoriating reviews of mediocre records can occasionally be cathartic, but informed and honest constructive criticism is far more valuable in the long run. Just as software testing is a vital part of making sure software is fit for purpose, constructive criticism has a role in making music better.

Perhaps it’s my tester’s ability to see patterns, but what I hope the above goes to show is that sometimes what you do in your “day job” and an apparently unrelated activity you do in your spare time can have more in common than you think. Certainly there are transferable skills, especially those softer ones which are much in demand.


Facebook’s New Look - A Tester’s Perspective

If you’re on any social network you’ll know that Facebook rolled out some major changes to their system over the last couple of days. To say it’s gone down like a lead balloon would be an understatement. Facebook users have always been a bit small-c conservative and don’t like change, but the rage I’m seeing this time round is a lot more intense.

Having a background in software testing gives me some insight into how and why they’ve annoyed so many people so badly this time.

What appears to have happened is that they’ve launched some potentially powerful new features without really bothering to explain to anyone how they work or how they should be used. Smart Lists are a good example: they’re similar to the Circles in Google+, and were almost certainly implemented as a response to them. But again, they haven’t made clear the implications of adding people to certain types of list. This probably explains why we’ve seen more than one rock band adding all their fans as employees. Once could be a mistake; twice looks like careless UI design.

As we’ve come to expect from Facebook by now, they’ve set the defaults for most things to values that aren’t the ones you’d have chosen. And it goes without saying that every new data-sharing feature is opt-out, with the relevant option hidden in a rusty filing cabinet marked “Beware of the Leopard”. I also don’t think they bothered to test it properly before rolling the changes out. Although in this case it’s not so much that the actual software is buggy as that the design isn’t as intuitive to ordinary people as the designers seem to think it is.

Facebook’s problem is that a large proportion of its user base isn’t made up of tech-savvy computer nerds, but people like your mum. They’re not the least bit interested in performing unpaid exploratory testing of new and occasionally half-baked software products. They just want to share pictures of grandchildren.


Testing an Internet Radio Station

Over the past few days I’ve been helping to test some themed internet radio stations. The focus was more on the overall customer experience than on bug-hunting. But I’m a software testing professional as well as a music fan, so that’s going to have an effect on how I approach things.

Being a huge progressive rock fan, I was naturally drawn to their Prog channel. I listened to it for several hours while doing other work on the PC. Most of the music clearly fell into that genre, even when it was by artists I’d never heard of, and there was a good mix of classic 70s music and more contemporary acts. So far, so good, and the feedback I gave was positive.

But the odd track sounded completely out of place: dance-pop acts or singer-songwriters whose music fell well outside even the broadest possible definition of progressive rock. On further investigation, all of them turned out to be obscure European artists who shared names with better-known prog-rock acts whose own music wasn’t in the station’s library. It’s the same artist disambiguation issue that plagues last.fm once you get beyond household names signed to major labels.
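Out of professional curiosity, here’s a rough sketch of what I suspect is going on. The track data and MusicBrainz-style IDs below are invented purely for illustration, but it shows how a library keyed on artist names alone produces exactly this kind of collision, while keying on a unique artist identifier avoids it:

```python
# Hypothetical illustration of the artist-name collision described above.
# The tracks and IDs are made up; a real service would use something like
# MusicBrainz artist IDs to tell same-named artists apart.

library = [
    # Two different artists who happen to share a name.
    {"artist": "Focus", "artist_id": "mbid-0001", "title": "Hocus Pocus", "genre": "prog"},
    {"artist": "Focus", "artist_id": "mbid-0002", "title": "Club Anthem", "genre": "dance-pop"},
]

def prog_playlist_by_name(tracks):
    """Naive selection: any track whose artist *name* matches a known prog artist."""
    prog_artist_names = {"Focus"}  # derived from the prog catalogue
    return [t for t in tracks if t["artist"] in prog_artist_names]

def prog_playlist_by_id(tracks):
    """Safer selection: match on a unique artist identifier instead."""
    prog_artist_ids = {"mbid-0001"}  # only the prog-rock Focus
    return [t for t in tracks if t["artist_id"] in prog_artist_ids]

print([t["title"] for t in prog_playlist_by_name(library)])  # ['Hocus Pocus', 'Club Anthem'] - collision
print([t["title"] for t in prog_playlist_by_id(library)])    # ['Hocus Pocus']
```

Keying everything on a unique identifier rather than a display name is the obvious fix, though of course that only works if the catalogue has those identifiers in the first place.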

Nice to be able to combine skills learned as a software tester with knowledge acquired as a music fan.
