The evolution of quality measurement

I’ve worked for Argogroup for the last five years or so, and we’ve been focussed on trying to help mobile data services take off – by helping operators and content providers improve quality.

We did that using ‘active test’ techniques. Basically, you simulate real users accessing your services and see what experience they would have had. It’s a great improvement on (or complement to) passive techniques that measure data traffic within the network to deduce quality.

But what’s next? I think even ‘active test’ needs to evolve. After all, how can you truly represent millions of users synthetically?

So I’m thinking a lot about this. “Quality of Service v2.0” I’m calling it. Basically, why not use the power of the ‘crowd’ (your users) to measure your quality for you?

Instead of benchmarking your web site, or mobile application, you’d ‘crowdmark’ it.

Or to put it in grandiose terms, imagine every terminal, every web browser, every laptop, every phone, hooked up to a vast, virtual, peer-to-peer instrumentation space. Every application or service comes with an embedded quality deduction tool – one that not only measures the application’s own performance, but also encourages the user to record their own experience, and so shares and collaborates in measuring the performance of this and other applications.
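To make that concrete, here’s a minimal sketch of what such an embedded tool might look like in the browser. Everything here is hypothetical – the QualityReport shape, the measuredFetch wrapper and the collector endpoint are invented for illustration, not a real product or API:

```typescript
// Hypothetical shape of a single crowd-sourced quality measurement.
interface QualityReport {
  service: string;   // which service was exercised
  latencyMs: number; // how long the request took
  ok: boolean;       // did it succeed, from the user's point of view
  timestamp: number; // when the measurement was taken
}

// Hypothetical shared collector that aggregates reports from many users.
const COLLECTOR_URL = "https://qos.example.com/reports";

// Wrap an ordinary fetch so that every real user interaction doubles
// as a measurement probe.
async function measuredFetch(service: string, url: string): Promise<Response> {
  const start = Date.now();
  const response = await fetch(url);
  const report: QualityReport = {
    service,
    latencyMs: Date.now() - start,
    ok: response.ok,
    timestamp: start,
  };
  // Fire and forget: sharing the measurement must never slow the user down.
  fetch(COLLECTOR_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  }).catch(() => { /* measurement is best-effort */ });
  return response;
}
```

The point is that measurement rides along with normal use: no synthetic traffic, no separate test harness.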

If I am using an application like Placeopedia, imagine if my session were able to give something back to the underlying ‘Internet Operating System’ services that it uses. Record the user experience, for example, and share it back through the mashed-up APIs to Google Maps, Digital Globe, NAVTEQ, Wikipedia, Yahoo broadband, BT Wifi and so on. My simple behaviour has added a small drop to the QoS data ocean. But that’s the essence of web2.0: act locally, interact globally.
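As a sketch of what ‘giving something back’ could mean in practice, imagine the session timing each upstream service it touches and posting the result to a shared pool. The service names, URLs and endpoint below are all made up for illustration:

```typescript
// Hypothetical per-service timings for one mash-up session.
type ServiceTimings = Record<string, number>; // service name -> latency in ms

// Time a single upstream call and record it against the service's name.
async function timeService(
  name: string,
  url: string,
  timings: ServiceTimings,
): Promise<void> {
  const start = Date.now();
  try {
    await fetch(url);
  } finally {
    timings[name] = Date.now() - start;
  }
}

// One session fans out to several upstream services; each call adds
// its own small drop to the shared QoS data ocean.
async function runSession(): Promise<void> {
  const timings: ServiceTimings = {};
  await Promise.all([
    timeService("maps", "https://maps.example.com/tiles?z=12", timings),
    timeService("wiki", "https://wiki.example.com/article/London", timings),
  ]);
  // Share the session's per-service view back to the collective pool.
  await fetch("https://qos.example.com/sessions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ timings, timestamp: Date.now() }),
  });
}

runSession().catch(console.error);
```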

And it’s not just about tracking protocol-level quality. Talk to the user, ask them how they are doing. Ask them to rate what they see. Let them tag their user experience!
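A sketch of what that might look like, pairing a measured number with the user’s own rating and tags – again, the report shape and endpoint are invented for illustration:

```typescript
// Hypothetical report pairing what the instrumentation measured with
// what the user actually says about it.
interface ExperienceReport {
  pageLoadMs: number;             // the protocol-level measurement
  userRating: 1 | 2 | 3 | 4 | 5;  // e.g. from a five-star widget
  tags: string[];                 // free-form tags on the experience
}

// Post the combined report to a (made-up) collector endpoint.
function submitExperience(report: ExperienceReport): Promise<Response> {
  return fetch("https://qos.example.com/experience", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}

// Example: the page loaded in 1.8 seconds; the user rates it 4/5 and tags it.
submitExperience({
  pageLoadMs: 1800,
  userRating: 4,
  tags: ["maps-slow", "search-fast"],
}).catch(console.error);
```

The subjective half is the interesting bit: the numbers tell you what happened, but the rating and tags tell you whether anyone cared.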