Tuesday, March 03, 2020

Testing, testing

The longest stint of my career was working for a small software company that, during the latter half of my nearly decade-long tenure, had one product: a tool that automated software testing. When the product came out, the market for testing software was new, and most software testing was done by hand. Quality assurance personnel walked down paths, walked through scenarios, trying to make sure they tested all possible nooks and crannies of the software. It was an exacting and tedious process, and QA folks were happy to have it automated. Even with a product like ours, which was unfriendly and unwieldy. 
But in marketing this product, I learned a ton about marketing - or at least about B2B (business to business) and T2T (my own coinage: techie to techie) marketing.

I learned that if you have a product that's more expensive than the competition, it isn't totally a bad thing. A high price has a halo effect, conferring value on a product.

I learned that the value it conferred was worth an unearned and undeserved price premium of 10-15% over market. Anything more than that, and the market figured it out and went with a lower priced alternative. 

I learned that justifying your higher prices based on the fact that your cost structure was too high was a recipe for failure.

(Figuring how to market overpriced products and services stood me in good stead throughout my career, as for whatever reason I gravitated to companies that sold at the higher end of the price scale.)

I learned how to use the word "robust" without laughing. Okay, part of the reason we all laughed when we used the word robust was the "you must, you must, you must develop the bust" mantra of old. But the other part of the reason we had to resist laughing when we called our product "robust" was that we all knew that "robust" was just a code word for "difficult to use."

And our product was spectacularly difficult to use. Unless you knew how to write code - and most QA people weren't coders - you would be hard pressed to do much of anything with our tool.

This led my company to develop a rather robust - hah! - consulting practice. We would sell a very pricey license to a company, and then send our consultants in to write the code that would enable the company to actually get any use out of it.

Having a product that was nearly impossible for anyone to use had one minor - but cherished - benefit. Just as having a high price automatically confers value, having a brutal-to-use product (i.e., a product that's "robust") signals that it's designed for really important, complex applications. So, if your software did something really important and complex you would, of course, need our really important and complex software to test it. Industrial strength testing, we'd say, for industrial strength applications.

Having a product that was nearly impossible for anyone to use occasionally brought us into humorous marketing situations.

We had sold a hefty license to AT&T, and the senior fellow who signed off on the purchase was never going to have to do anything with it. This senior fellow was a great guy. As I recall - and it's been 25 years or so - Mike M. did a client case study with us and was a press reference, always willing to talk about how great our product was and how AT&T had gotten such a remarkable return on investment using it. Gold!

Anyway, we did a lot of informational (read: sales) seminars, and the AT&T story was one we always talked about in our presentation. Most of those who came to our seminars were prospects, but we always invited local customers to come, too. They got a free breakfast, got to learn about what was coming in future releases, talk to our client services guys, etc.

At a breakfast seminar in Dallas, after I'd just gone through the AT&T story - including a glowing quote from Mike M. - I asked the guy from AT&T who was supposedly deriving all the benefits of our product to talk about his experience using it. "Oh," he told me (and the crowd of prospects) "No one at AT&T has actually used it yet." Okey-dokey.

While I learned a ton about marketing (good, bad, and ugly; the pitfalls, the pratfalls), I also learned quite a bit about software testing.  And one of the things I learned is that you have to do end-to-end testing of an entire application. You just can't test the piece parts separately and assume it's all going to work together once you turn those piece parts into a whole.
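To make that concrete, here's a minimal, hypothetical sketch in Python - the function names are made up for illustration and have nothing to do with our old product or with Boeing's code. Two modules each pass their own unit tests, but they quietly disagree about units, and only wiring them together end-to-end exposes the problem.

```python
# A minimal, hypothetical sketch of "piece parts pass, the whole fails."
# All names here are invented for illustration.

def burn_duration_seconds(burn_minutes: float) -> float:
    """Engine module: converts a burn time to seconds."""
    return burn_minutes * 60.0

def build_flight_plan(burn_duration: float) -> dict:
    """Flight-plan module: quietly assumes the duration it receives is in minutes."""
    return {"burn_minutes": burn_duration}

# Unit tests: each module passes on its own.
assert burn_duration_seconds(2) == 120.0
assert build_flight_plan(2) == {"burn_minutes": 2}

# End-to-end check: wire the pieces together the way the real system would.
plan = build_flight_plan(burn_duration_seconds(2))
if plan["burn_minutes"] != 2:
    # Only the end-to-end test exposes the seconds-vs-minutes mismatch.
    print(f"Integration bug: expected 2 minutes, got {plan['burn_minutes']}")
```

Every unit test is green, and the system is still wrong. That, in miniature, is the whole argument for end-to-end testing.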

So if I know this, you'd think that engineers and quality assurance professionals at Boeing would know this.

But, no.

Boeing has been getting quite a bit of (earned, deserved) bad press of late for the shoddiness of their 737 Max, which has experienced a couple of fatal crashes due to software errors on Boeing's part.

And now we're learning that they also screwed up the NASA astronaut capsule they launched last year.
A software error left the Starliner capsule in the wrong orbit in December and precluded a docking with the International Space Station. Another software flaw could have ended up destroying the capsule had it not been fixed right before reentry.
A Boeing vice president, John Mulholland, said both mistakes would have been caught if complete, end-to-end testing had been conducted in advance and actual flight equipment used instead of substitutes.
“We know that we need to improve,” he said. (Source: AP)
Yes, John, you most certainly do need to improve. 

Wish I could recommend a high-priced, robust software testing tool for you, but my old company is long gone (acquired) and its testing tool put out of its misery. But there are plenty of them out there. Maybe you can ask Mike M. of AT&T for a reference.

Best of luck!
