Effectively Measure the Quality of Outsourced Software Development Vendors with These Key Metrics
In an earlier blog I touched on how to successfully introduce DevOps enabled change within multi-vendor landscapes. I highlighted some of the mechanisms for empowering resources to adopt DevOps practices and behaviours, whilst also stressing the importance of role-specific metrics in ensuring your vendors are delivering high quality software solutions. The latter is a personal area of interest for me, as I am a huge advocate of data-driven decision making as a way to drive DevOps adoption in enterprise organizations.
More often than not, organizations that try to manage their outsourced vendors effectively are either measuring the wrong data sets or, in some cases, not measuring them at all! Indeed, many organizations try to measure and optimize cost and time first and foremost. However, if they instead focused on measuring and enhancing quality in the first instance, time and cost would invariably improve, albeit indirectly.
This is especially prevalent in the development stages of the software development lifecycle (SDLC), where what ‘good code’ looks like and how to measure ‘great service delivery’ are particularly subjective. However, there are ways to extract, analyse and quantify data in order to manage vendors effectively, such that you can sweat your assets and ensure your IT function is getting true value for money in the development domain. This is all the more applicable within a Service Integration and Management (SIAM) framework.
By their very nature, software engineers are traditionally analytical, conscientious but introverted characters who have strong cognitive abilities. Indeed, some might argue that the onset of the DevOps movement has now created the need for them to be more open, collaborative and agreeable than they had been prior to the age of connectivity.
Nonetheless, in order to deliver innovative technical solutions, software development engineers need to be trusted and to have the autonomy and capacity to try bleeding-edge approaches. This makes measuring their output all the more difficult: as soon as a developer catches wind that ‘Big Brother’ is watching them, there is a tendency for them to stop sharing information and become less transparent about their modes of delivery.
In short, how can organizations maintain a degree of quality control around the source code they are receiving from their vendors’ software engineers? As I touched on earlier, many organizations tend to use the wrong metrics to measure their development engineers’ productivity and quality throughput. In doing so, they engender the wrong behaviours and principles, which in turn can lead to sub-optimal coding practices and, ultimately, poor software solutions.
Listed below are some suggestions on the type of metrics your organization should consider utilizing, when trying to measure the delivery quality of your software development vendors.
What Metrics Should You Measure Your Software Development Vendors On?
Prevention of Defect Leakage:
Vendors should be measured on improving your organization’s capacity to identify defects during development, where they are typically cheaper and faster to fix. They should be charged with instilling a culture of shifting testing left, which can be achieved by adopting test-driven development (TDD) practices or utilizing technology solutions like Service Virtualization in order to enable simulated end-to-end testing during the development stages of the SDLC.
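One common way to quantify how well defects are being caught early is a defect leakage rate: the share of total defects that escaped the development stage into later test (or production) phases. The sketch below uses a hypothetical pair of counts; your own defect-tracking tool would supply the real figures.

```python
def defect_leakage_rate(found_in_dev: int, found_after_dev: int) -> float:
    """Share of all defects that escaped the development stage."""
    total = found_in_dev + found_after_dev
    if total == 0:
        return 0.0
    return found_after_dev / total

# Hypothetical quarter: 80 defects caught during development, 20 escaped.
rate = defect_leakage_rate(found_in_dev=80, found_after_dev=20)
print(f"Defect leakage: {rate:.0%}")  # prints "Defect leakage: 20%"
```

A falling leakage rate over successive releases is a reasonable signal that a vendor's shift-left practices are taking hold.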
Limit Cyclomatic Complexity:
Simply put, the more complex your code, the more likely you are to experience a significant number of defects across the SDLC. A number of studies have shown a positive correlation between the number of paths through code and the proportion of defects in the code. A by-product of these studies was the establishment of the Cyclomatic Complexity Number (CCN). Essentially, the greater the CCN, the more complex your code is. This will mean further difficulties with the on-going maintenance of your code base once it has been delivered into a customer-facing environment. As such, vendors should be measured on limiting the methods and paths within your source code so that on-going maintenance is easier to execute and more cost effective.
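For intuition, the CCN of a piece of code is essentially one plus the number of decision points (each branch adds a path). The following is a minimal, approximate sketch using Python's standard `ast` module; production tools such as SonarQube apply more complete rules, and the set of node types counted here is a simplifying assumption.

```python
import ast

# Decision-point node types: each adds one independent path through the code.
_DECISIONS = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate CCN: 1 + number of decision points found in the source."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, _DECISIONS) for node in ast.walk(tree))
    # 'and'/'or' chains add one extra branch per operand beyond the first.
    decisions += sum(len(node.values) - 1
                     for node in ast.walk(tree)
                     if isinstance(node, ast.BoolOp))
    return decisions + 1

snippet = """
def grade(score):
    if score >= 90 and score <= 100:
        return 'A'
    elif score >= 70:
        return 'B'
    return 'C'
"""
print(cyclomatic_complexity(snippet))  # prints 4
```

A function scoring above a contractually agreed threshold (10 is a commonly cited limit) is a candidate for refactoring before acceptance.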
Limit Code Duplication:
Code duplication is often the result of lazy development practices whereby the good old ‘copy and paste’ function has been used to deliver a software solution. As such, code duplication can lead to increased maintenance costs due to the need to discover, analyse and report on software defects multiple times. Indeed, if you are simply duplicating code, you are also going to have to test each function multiple times. This means more money being spent on testing, which will also hinder your organization’s time-to-market. Duplicate code can also degrade the performance of your application, leading to sluggish responses and sub-optimal speed, which is absolutely critical in today’s application-centric ecosystem. Subsequently, vendors should be measured on limiting code duplication within the software solutions they deliver for your organization.
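Duplication detectors typically work by normalizing the source and looking for repeated windows of consecutive lines. This is a minimal sketch of that idea, assuming whitespace-normalized, line-level matching; real tools also detect renamed or reordered clones.

```python
from collections import defaultdict

def duplicate_blocks(source: str, window: int = 3) -> dict:
    """Find identical `window`-line blocks (whitespace-normalized) that
    occur more than once; returns each block with its start positions."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        seen[tuple(lines[i:i + window])].append(i)
    return {block: starts for block, starts in seen.items() if len(starts) > 1}

code = """
total = 0
for x in items:
    total += x
print(total)
total = 0
for x in items:
    total += x
print(total)
"""
print(len(duplicate_blocks(code, window=3)))  # prints 2
```

The ratio of duplicated lines to total lines is the figure you would feed into a vendor scorecard.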
Target Dead Code Removal:
Dead code is code that can never be executed during an application’s lifecycle. In essence, dead code idly sits within your codebase, taking up space and, ultimately, limiting the maintainability of your application. Indeed, every line of code within your application requires maintenance, and the more code you have to maintain, the longer it will take and the more expensive it will become. On time and materials contracts especially, dead code is very much an area that you want to focus on in order to ensure you are getting value for money from your software vendors. Simplifying the codebase and removing dead code will help reduce cost overheads in maintaining your applications, ensuring you are not being taken to the cleaners on time and materials engagements!
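A crude but illustrative form of dead-code detection is to flag top-level functions that are defined but never referenced anywhere else in a module. This sketch uses the `ast` module and is a simplifying assumption; real analysers must account for dynamic dispatch, reflection and cross-module references.

```python
import ast

def unreferenced_functions(source: str) -> set:
    """Top-level function names never referenced elsewhere in the module."""
    tree = ast.parse(source)
    defined = {node.name for node in tree.body
               if isinstance(node, ast.FunctionDef)}
    # Every bare-name reference in the module, including call targets.
    used = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    return defined - used

module = """
def used():
    return 1

def dead():
    return 2

print(used())
"""
print(unreferenced_functions(module))  # prints {'dead'}
```

Trending the count of such findings per release gives a simple proxy for whether a vendor is actively pruning dead code.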
Code Standards and Compliance:
Code standards are a surprisingly easy and effective way to measure the quality output of your software development vendors. You can utilize standards and static code analysis techniques as a mechanism to identify coding violations so that your organization can proactively manage vendors to resolve poorly developed code. By implementing OWASP, SQALE and other common-sense principles into the codebase standards, your organization can then feed these into the contractual SLAs and OLAs agreed with your vendors.
In the event that coding violations are flagged, you can then start to see which vendors are offering value for money and which are questionable in their delivery. For instance, if you are paying higher resource rates for certain vendors but seeing poor code being delivered, then you can actively manage that vendor more effectively or remove them from your preferred suppliers list! Code analytics tools like SonarQube and Fortify can help in this space. I have also seen some interesting cases using Semmle in order to enable software engineering analytics within multi-vendor landscapes specifically.
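To compare vendors fairly, raw violation counts from such tools are usually normalized by codebase size, for example violations per thousand lines of code (KLOC). The vendor names and figures below are hypothetical; the totals would come from your static-analysis tooling.

```python
def violations_per_kloc(violations: int, lines_of_code: int) -> float:
    """Normalize a violation count by codebase size (per 1,000 lines)."""
    return violations / (lines_of_code / 1000)

# Hypothetical static-analysis totals for two vendors.
vendors = {
    "vendor_a": {"violations": 45, "loc": 30_000},
    "vendor_b": {"violations": 120, "loc": 25_000},
}
scores = {name: violations_per_kloc(v["violations"], v["loc"])
          for name, v in vendors.items()}
print(scores)  # {'vendor_a': 1.5, 'vendor_b': 4.8}
```

Here vendor_b delivers more than three times the violation density of vendor_a, despite a smaller codebase, which is exactly the kind of like-for-like comparison an SLA conversation needs.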
Optimal Code Coverage:
Optimal code coverage is a touchy subject area when it comes to writing and building source code for software applications. Some studies have suggested that increasing code coverage above 70-80% is time consuming and therefore leads to a relatively slow bug detection rate. Indeed, your organization’s goal should depend on the risk assessment and economics of the project or the application which you are making changes to. As such, vendors should collaborate closely with your internal risk and test management functions to understand the optimal coverage targets for each initiative and then be measured accordingly to ensure they are adequately testing your applications before formal system test and system integration test cycles commence.
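One way to operationalize risk-based coverage targets is a simple tiered lookup agreed with your risk and test management functions. The tiers and thresholds below are illustrative assumptions, not prescribed values; the measured coverage would come from your coverage tooling.

```python
# Hypothetical risk tiers mapped to agreed minimum line-coverage targets.
COVERAGE_TARGETS = {"high": 0.80, "medium": 0.70, "low": 0.60}

def meets_coverage_target(risk_tier: str, measured_coverage: float) -> bool:
    """Check a vendor's measured coverage against the tier's agreed floor."""
    return measured_coverage >= COVERAGE_TARGETS[risk_tier]

# A payments component (high risk) at 75% coverage misses its target,
# while the same figure passes for a medium-risk internal tool.
print(meets_coverage_target("high", 0.75))    # prints False
print(meets_coverage_target("medium", 0.75))  # prints True
```

The point is not the specific percentages but that the target is set per initiative, then measured consistently.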
Problems per Change Introduced:
By the very nature of the work they do, developers introduce huge amounts of change across the SDLC. When writing source code, you want to find bugs and identify defects. However, it is imperative to ensure this happens during the development phase of the SDLC, whilst preventing defect leakage into latter stages of the testing cycle. Subsequently, in order to ensure you are getting the right level of delivery from your development vendors, you should explore avenues of correlating the number of problems introduced per change executed. This would enable your organization to more thoroughly audit your vendors and make more informed decisions around the effectiveness of your partnerships. This can help increase quality, enhance time-to-market and manage cost more effectively.
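The ratio itself is straightforward once your change and problem records can be joined, for example via a change reference on each problem ticket. The figures below are hypothetical; in practice they would come from your ITSM tooling.

```python
def problems_per_change(changes: int, problems: int) -> float:
    """Average number of problems attributed to each change delivered."""
    if changes == 0:
        return 0.0
    return problems / changes

# Hypothetical quarter of delivery data for one vendor:
# 240 changes executed, 18 problems traced back to them.
ratio = problems_per_change(changes=240, problems=18)
print(round(ratio, 3))  # prints 0.075
```

Tracked per vendor over time, a rising ratio flags deteriorating change quality long before it shows up in customer-facing incidents.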
Code Cohesion:
The cohesion of your codebase can be simply described as how tidily the codebase is organized. To give this more substance, cohesion is the principle of grouping components with common interests together, whilst those without associated common interests are grouped elsewhere. By tracking the aggregate cohesion score of your vendor’s delivery throughput, you will have a better understanding of your code cohesion. Ultimately, more cohesive code is easier to debug than unstructured source code. This, in turn, means a decrease in the effort required to debug, maintain and troubleshoot code issues, which take time and cost your organization money.
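A classic way to put a number on class-level cohesion is LCOM (Lack of Cohesion of Methods): count the pairs of methods that share no attributes, subtract the pairs that do, and floor at zero, so higher means less cohesive. The sketch below takes a pre-extracted mapping of methods to the fields they touch; the class names and fields are illustrative assumptions.

```python
from itertools import combinations

def lcom(method_attrs: dict) -> int:
    """LCOM1: pairs of methods sharing no attributes, minus pairs that do,
    floored at zero. 0 = cohesive; higher = more scattered."""
    p = q = 0  # p: disjoint pairs, q: overlapping pairs
    for a, b in combinations(method_attrs.values(), 2):
        if set(a) & set(b):
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical classes: each method mapped to the fields it touches.
cohesive = {"read": ["buf"], "write": ["buf"], "flush": ["buf"]}
scattered = {"read": ["buf"], "bill": ["invoice"], "email": ["smtp"]}
print(lcom(cohesive), lcom(scattered))  # prints 0 3
```

The scattered class is really three unrelated responsibilities sharing a namespace, which is exactly what a high LCOM score surfaces.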
What Metrics Should You Not Measure Your Vendors On and Why?
Having provided some examples of suitable metrics to utilize in order to effectively measure your software development vendors, let's now look at some not-so-good metrics!
More Lines of Code Doesn’t Mean Good Quality!
Just because someone is writing lots of code doesn’t mean what they are delivering is high quality. As we spoke of earlier, copy and paste techniques can limit the performance of an application and really highlight poor development practices. Therefore, steer clear of measuring your vendor’s throughput and rate of delivery purely on the number of lines of code they are writing and committing to your source code repositories.
Number of Unit Tests Written:
We spoke earlier about optimal code coverage being key to ensuring that high quality software solutions are being delivered by your vendors. Conversely, merely measuring your vendors on the number of unit tests they write can lead to poor engineering practices being applied. Namely, developers begin to write meaningless unit tests, which take up valuable time during the software delivery lifecycle and will cost your organization more money.
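One tell-tale sign of such gaming is test functions that contain no assertions at all: they inflate the test count and may even pad coverage, yet verify nothing. A minimal sketch of detecting them with the `ast` module, assuming the common convention that test function names start with `test_`:

```python
import ast

def assert_free_tests(source: str) -> list:
    """Names of test functions that contain no assert statements."""
    tree = ast.parse(source)
    suspicious = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_"):
            if not any(isinstance(n, ast.Assert) for n in ast.walk(node)):
                suspicious.append(node.name)
    return suspicious

tests = """
def test_real():
    assert 1 + 1 == 2

def test_padding():
    value = 1 + 1
"""
print(assert_free_tests(tests))  # prints ['test_padding']
```

A non-trivial count of assertion-free tests in a vendor's deliverable is worth a conversation, whatever the headline test count says.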
Number of Code Commits:
By looking at code commits in isolation, organizations can begin to build an inaccurate view of their vendor’s delivery throughput. Again, there is no guarantee that what your vendors are committing is of a high quality. As such, by merely executing unnecessary commits as a means of illustrating productivity and throughput, your organization will have no tangible reference point by which to validate true business value.
Some Final Thoughts
The key thing to emphasize is that there is no single metric that, applied in isolation, can flawlessly measure outsourced development vendors. Most importantly, many of the metrics highlighted in this blog focus, first and foremost, on driving quality improvements across the SDLC. The list provided is by no means complete, and it is worth mentioning that the types of quality metrics utilized will vary across industries, depending on data protection and auditing needs, application performance prerequisites and business requirements.
Furthermore, in order to manage your vendors effectively, these metrics should not be applied as a means of critiquing their delivery capabilities. Rather, they should be seen as a transparent method for continuously improving quality of service during the development stages of the SDLC. By moving away from a ‘blame game’ culture between customer and supplier, the relationship, language and measurement practices can centre on partnership, coaching and the delivery of change that continuously improves quality.
However, it is important to stress that metrics alone will not ensure software engineering excellence across your SDLC. Standards, methods, process controls and technology solutions must be combined in order to drive a more stringent focus on quality during development cycles.