Test the software, debug the software, test the software, debug the software… repeat as often as necessary, until the software is clean and bug-free, or at least as close to bug-free as you can get it.
Software testing is a chore – an essential chore, but a chore nonetheless. At least one company saw a gap in the market: a way to clean up the software before manual testing had even begun. The company is Tricentis, and CEO Sandeep Johri spoke with ZDNet to explain how it’s done.
ZDNet: You are rated very highly in software testing by Gartner and other analysts. What’s your secret?
Johri: Software testing is not a new thing. Every time you develop software you have to test it before you release it. But most of the time, believe it or not, the testing is done manually in large enterprises. About 50 percent is done offshore, but most of it is done manually.
And that doesn’t work when people are thinking about things like digital transformation and doing Agile development. Then you are building faster: you’re doing daily development and manual testing doesn’t keep up.
We offer a continuous testing platform that gets you highly automated, and we can pretty much automate any software. That gets you to the point where your testing runs at the same cadence as your development cycle, so you can do daily testing and testing doesn’t become a bottleneck.
We have what we think is a unique platform that will deliver a 10x improvement over a traditional manual testing solution.
We not only increase speed by reducing the time to test; because it’s highly automated, we also reduce the cost of testing, which usually accounts for around 40 percent of the cost of an application platform.
Analysts estimate that enterprises spend more than $30 billion on testing, and most of it has been shipped off to India or the Philippines or some such place, where they put the software on computers and do the tests.
The main differentiator – the reason we can do it and others can’t – is that most of the commercial tools are script-based, and while you can automate anything you want with a script, scripts can be very fragile and break all the time. That means that if you are going through a rapid development cycle, your scripts will break so often that you won’t know whether it’s the script or the software that’s not working.
So you give up on that testing, give up on that automation and go back to manual testing. We are a script-less solution – that is at the core of what we do – and because of that our solution is resilient.
There are other capabilities that you need to get to this level of automation. Those are all around test-data management – making sure that every scenario is covered – and that means you’ll end up having to prep a lot of data. Think of the banks who have to test every scenario, every circumstance for every type of customer. Test data becomes a big issue, but we have the capability to make it much more efficient.
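The scenario-coverage problem Johri describes can be made concrete: for a bank, the test-data matrix is essentially the cross product of customer attributes. A minimal sketch (the attribute names are illustrative, not from any Tricentis product):

```python
from itertools import product

# Illustrative customer attributes a bank might need to cover in test data.
customer_types = ["retail", "business", "private"]
account_states = ["active", "dormant", "overdrawn"]
regions = ["UK", "EU", "US"]

# Exhaustive coverage: every combination becomes one test-data record,
# which is why the amount of data to prep grows so quickly.
test_matrix = [
    {"type": t, "state": s, "region": r}
    for t, s, r in product(customer_types, account_states, regions)
]

print(len(test_matrix))  # 3 * 3 * 3 = 27 combinations
```

Even three attributes with three values each already require 27 records; real banking scenarios multiply far beyond that, which is why test-data management becomes its own discipline.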
For example, when you are trying to test an application, not every piece of that application is available all of the time. Say you’re an insurance company and you want to redo your claims processing. That business process touches almost every application in the insurance company. So, how do you test that? In this complex environment we can virtualise applications, so you can still test your little sub-application without having all the other applications available.
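The virtualisation idea can be sketched in miniature: the claims-processing logic under test depends on a policy system that may not be deployed in the test environment, so a virtual stand-in with canned responses is substituted. Class and method names here are purely illustrative, not Tricentis APIs:

```python
class PolicyService:
    """The real downstream application -- assumed unavailable during testing."""
    def lookup(self, policy_id: str) -> dict:
        raise ConnectionError("policy system not deployed in test environment")

class VirtualPolicyService:
    """A virtualised stand-in that returns canned responses, so the
    claims sub-application can be tested in isolation."""
    def lookup(self, policy_id: str) -> dict:
        return {"policy_id": policy_id, "status": "active", "cover": 100_000}

def process_claim(policy_service, policy_id: str, amount: int) -> str:
    """The sub-application under test: decide a claim using policy data."""
    policy = policy_service.lookup(policy_id)
    if policy["status"] != "active":
        return "rejected"
    return "approved" if amount <= policy["cover"] else "referred"

# The claims logic is exercised without the real policy system being up.
result = process_claim(VirtualPolicyService(), "P-123", 50_000)
print(result)  # approved
```

The same claims code runs unchanged against the real service later; only the dependency is swapped, which is the essence of testing against virtualised applications.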
You personally have a very distinguished background, I notice. You were co-chair of President Bill Clinton’s National Information Infrastructure Advisory Council. What drew you to the testing area?
The main reason was that I spent about eight years at HP – 2003-2011 – where I had various senior roles in IT management and was also responsible for acquisitions. While I was there we did 14 acquisitions, valued at almost $7 billion, but the largest acquisition we did was a company called Mercury Interactive. It was one of the best acquisitions that HP did – and this predated Autonomy, which you will know turned out to be a complete disaster.
Mercury was a $4.5 billion company when we bought it and it was dominant in its space. Every company in the world was using its software.
I wound up driving the acquisition of Mercury; I got to know the business and by 2010/11 HP stopped investing in Mercury. They had lost interest and I felt that Mercury, while it was still dominant, was not keeping up with the more recent trend, which was mainly around automation.
Customers were adopting Agile and testing was becoming a bottleneck, and my thesis was that there was an opportunity for a next-generation testing platform. That sort of opportunity often happens in technology.
Agile development and automatic testing were becoming ever-more important with the advent of DevOps. So, that’s what kind of brought me here: I was in the business and I was aware of all the vendors. And I was aware of what the customers were looking for that Mercury was not delivering.
I notice that one of the areas you are interested in is the automation server Jenkins. Why the interest in what many would see as a relatively obscure choice?
Jenkins is really interesting. It’s open source, and it’s the tool of choice for doing builds and orchestrating releases. We actually integrate with Jenkins; what ends up happening is you get a bunch of people sitting around developing stuff, and once they commit all the code, Jenkins picks it up, you do a new build and move on to the next cycle.
We integrate with Jenkins, and so do our customers; Jenkins picks up the code, which triggers our application, Tosca, to kick off automated tests. Now, you can have Jenkins basically say, ‘a new build is ready, so let’s kick it off’.
Now we have customers like WorldPay, a UK company that’s now global, and the largest credit-card processor. Their US arm used to be a customer of ours, and what they have done is go from six- to eight-week release cycles for all platforms to one every two weeks.
They develop on Agile but now, every day at the end of the day, when developers have done all their code, Jenkins picks it up and kicks off the automated tests. And this happens every day.
Every day you are doing a new build and, even though you’re not releasing it, you run a bunch of automated tests on it. That means that by the time you get to the release date, you’ve dealt with most of the bugs.
Otherwise, what used to happen was that every day the developers would do their little bit of testing and everything would look good; then, at the end of two weeks, when they were about to release for the first time, everything would come together and all hell would break loose.
And that’s where Jenkins is so good. Think of it as a release automation – or release orchestration – tool which on the one hand is helping to do the build and on the other, for us, is triggering us to kick off a bunch of tests.
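The build-then-test loop Johri describes can be sketched as a simple post-build hook: when the nightly build finishes, a listener runs every registered automated suite against the new artifact. This is a hypothetical illustration of the handoff, not Jenkins or Tosca code (`BuildEvent`, `on_build_complete` and the sample suites are invented names):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BuildEvent:
    """Notification that a CI build has finished."""
    build_id: str
    artifact: str  # path or URL of the freshly built artifact

def on_build_complete(event: BuildEvent,
                      suites: List[Callable[[str], bool]]) -> dict:
    """Run every registered automated suite against the new build and
    report the outcome, mimicking a post-build test trigger."""
    results = {suite.__name__: suite(event.artifact) for suite in suites}
    return {"build": event.build_id,
            "passed": all(results.values()),
            "results": results}

# Stand-ins for real smoke and regression suites.
def smoke_test(artifact: str) -> bool:
    return artifact.endswith(".jar")

def regression_test(artifact: str) -> bool:
    return "nightly" in artifact

# The nightly build lands, and the tests kick off automatically.
report = on_build_complete(BuildEvent("build-42", "app-nightly.jar"),
                           [smoke_test, regression_test])
print(report["passed"])  # True
```

Running this on every daily build is what moves bug discovery from the end of the two-week cycle to the day the code is written.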
What do you see happening as you move forward? Are there other areas that you are going to expand into?
We are now recognised as a leader in the space. We have over 1,000 customers, from very small to some of the largest enterprises – some of the largest banks, some of the largest insurance companies, the largest telcos. We are growing really nicely and we hope to maintain the leadership position.
Also, we have been doing acquisitions – three over the last 18 months – to fill out the portfolio.
Like others, we are building AI and machine-learning capability, just to make the platform smarter. As you make the tests more resilient you can automatically work out which tests are not needed and which tests are the most important – there is some really cool innovation going on around machine learning and AI.
We have a whole set of capabilities we have developed on that front.
Could you tell us about any customers that you want to highlight?
HSBC is a customer, as is Exxon. In the UK we have Centrica, British Gas; and down in Australia, Telstra, along with three of the four largest banks in Australia.
You briefly mentioned DevOps: how do you think that whole area is panning out? Do you think DevOps is the way to go?
What has happened there is that it is predominantly being used as a methodology in an enterprise – what people call ‘Enterprise DevOps’. When people think about DevOps, they think of a Netflix doing 20 releases a day or whatever it is they do.
When you think about large enterprises, it’s not about doing 20 releases a day, but the ideas are the same: how do you get the releases out?
So, on the Dev side people are using other methodologies like behaviour-driven development, test-driven development, and then there are some newer methodologies. But when they have done all that, they still have issues on the test side of things – and that’s where we come in.
We are developing faster and testing faster, but we don’t yet release faster – release is the next bottleneck, and we have to fix that. Even with fully automated testing, you can’t really release any faster until that’s addressed.
And on the Ops side, people are moving into the cloud and putting in real-time monitoring, which means that when you push things out into production, you have better insights into how things are being used.