
How DevOps technologies influence developer productivity and software delivery performance

DevOps, as an operational philosophy, emphasises collaboration between development and operations teams to automate and optimise software production, thereby increasing productivity. Developer teams of all sizes can find useful principles and practices in the DevOps tradition. However, not all practices are equally appropriate for every situation, and unselective implementation can have minimal effect and lead to overcomplicated processes.


Determining which of these practices is appropriate for a specific team is not a straightforward task. In late 2023, we published a report on developer productivity in which we used multiple measurements and metrics to examine which factors are associated with productive software development teams. In this report, we used data from our Developer Nation survey run in Q3 2023 and found that productivity can be extremely context-dependent. Around the same time, Google published its 2023 State of DevOps report, which also highlighted how team productivity is contingent on several factors.


The focus of this blog post is to offer insight into how select DevOps technologies are associated with productivity across various developer team sizes. Below, we first outline how we quantify productivity and then delve into how contextual factors such as team size and tool/technology use affect developer productivity.


How we measure developer productivity


In both of the aforementioned reports – SlashData’s productivity report and Google’s DevOps report – the DORA metrics were used, among other measures, to gauge developer productivity. Specifically, the DORA metrics measure software delivery performance, and while they should not be used to directly compare two distinct teams, they can act as a useful benchmark, offering insight into how successful teams working in similar contexts organise their workflows. More information about the DORA metrics can be found at the end of this blog post.

[Data graph: how developers are split into software delivery performance clusters]

How developer team size and technology choices affect software delivery performance


We select six DevOps technologies that vary in application and popularity and explore how their use in developer teams of different sizes is associated with software delivery performance.

[Data graph: global usage of select DevOps technologies]

For each team size, we independently model how the use of these technologies affects the odds (the ratio of the probability of the outcome to the probability of its absence) of a developer having better software delivery performance, compared to those who don't use the technology. To account for varying contexts, all models control for experience level, geographical location, and the type of projects developers are involved in. A minimal sketch of this kind of model is shown after the team-size breakdown below.


We segment professional developers involved in at least one DevOps activity and working at companies with at least one other developer into three groups:  


  • Small teams (5 or fewer developers)

  • Medium teams (6-50 developers)

  • Large teams (more than 50 developers)
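To make the modelling approach concrete, below is a minimal sketch of the kind of per-team-size logistic regression described above, run on synthetic data. The column names, effect sizes, and data are invented for illustration and do not reflect our actual survey pipeline or results.

```python
# Minimal sketch (not SlashData's actual pipeline): a logistic regression that
# estimates the odds of being a higher software delivery performer given use of
# a DevOps technology, controlling for context, fitted separately per team size.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 3000

# Hypothetical survey-like data: every column and effect below is made up.
df = pd.DataFrame({
    "uses_incident_mgmt": rng.integers(0, 2, n),
    "experience_years": rng.integers(1, 21, n),
    "region": rng.choice(["NA", "EU", "APAC"], n),
    "team_size": rng.choice(["small", "medium", "large"], n),
})
# Synthetic outcome: in this toy example, the tool only helps in large teams.
logit_p = (
    -0.5
    + 0.03 * df["experience_years"]
    + 0.6 * df["uses_incident_mgmt"] * (df["team_size"] == "large")
)
df["high_performer"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Fit one model per team size and report the odds ratio for technology use.
for size, group in df.groupby("team_size"):
    model = smf.logit(
        "high_performer ~ uses_incident_mgmt + experience_years + C(region)",
        data=group,
    ).fit(disp=False)
    odds_ratio = np.exp(model.params["uses_incident_mgmt"])
    print(f"{size:>6} teams: odds ratio for tool use = {odds_ratio:.2f}")
```

In the actual analysis, the outcome is the consolidated software delivery performance class and the controls also include the type of projects developers work on; the sketch only shows the overall shape of such a model.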


For several of the DevOps technologies, there is no significant difference or distinct pattern across team sizes, and for all but one technology (discussed below) the odds are either positive or show no tangible effect.


Examining the top technology first: the use of agile project management tools is significantly associated with increased odds of superior software delivery performance across teams of all sizes. The use of managed CI/CD services and self-hosted CI/CD tools likewise generally improves the odds of a team being productive.



The three remaining technologies – incident management tools, application performance monitoring/observability tools, and collaboration/knowledge-sharing tools – have a greater impact on productivity the larger the team. The improvement in odds is significantly higher for large teams with more than 50 developers.


Hence, vendors of incident management tools, application performance monitoring/observability tools, and collaboration/knowledge-sharing tools should take note of these results and highlight them in their messaging when targeting enterprise-sized developer teams. It is in these larger teams that their products will have the greatest impact.


Taking incident management tools (applications designed to support developer teams in managing unexpected circumstances) as an example, we see the biggest difference across team sizes. For small teams, our analysis shows that the use of this technology slightly decreases the odds of being as productive in software delivery. This is likely because a team of five or fewer developers that requires an incident management tool is either tackling very complex projects or has over-complicated its workflow and services, either of which would demand more time for software delivery.



For larger teams, however, we see the substantial positive impact that incident management tools have on the odds of being more productive. Larger teams typically work on more extensive and complex projects and are therefore more likely to encounter incidents. Incident management tools help handle the increased volume and intricacy of issues by providing a structured and scalable approach. This is clearly shown in our analysis, where the use of this technology significantly increases the odds (by 1.82 times) of developers and their teams having better software delivery performance, compared to other large teams that do not use it.
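To put that figure in perspective, the short sketch below converts an odds ratio of 1.82 into probabilities for a few hypothetical baseline rates of being a higher performer; the baseline values are illustrative only and are not taken from our data.

```python
# Convert an odds ratio into probabilities for a few hypothetical baselines.
# The 1.82 odds ratio comes from the analysis above; the baseline rates are made up.
ODDS_RATIO = 1.82

for p_baseline in (0.2, 0.4, 0.6):
    odds_baseline = p_baseline / (1 - p_baseline)        # odds = p / (1 - p)
    odds_with_tool = ODDS_RATIO * odds_baseline           # scale the odds, not the probability
    p_with_tool = odds_with_tool / (1 + odds_with_tool)   # convert back to a probability
    print(f"baseline {p_baseline:.0%} -> with incident management tools {p_with_tool:.0%}")
```

Note that multiplying the odds by 1.82 is not the same as multiplying the probability by 1.82; the probability gain shrinks as the baseline rate grows.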

    

[Data graph: odds of having greater software delivery performance when using various DevOps technologies]

In this blog post, we examined how the use of various DevOps technologies correlates with software delivery performance depending on team size. If you are interested in other tools, technologies, or team differences, get in touch with us! We would be happy to delve into the data with you.


Extra Information: Consolidated DORA metrics


Below are the four DORA metrics and the classification of exceptional, good, intermediate, and poor performers. Using these metrics, a practitioner can be classed, for example, as exceptional in terms of lead time but a poor performer in deployment frequency. Our consolidated classification provides a more general overview by awarding points based on the category that a practitioner falls into for each of the metrics.


| DORA Metric | 4 points (exceptional) | 3 points (good) | 2 points (intermediate) | 1 point (poor) |
|---|---|---|---|---|
| Lead time for changes | Less than one day | One day to one week | One week to one month | More than one month |
| Deployment frequency | Multiple deploys per day | Once per hour to once per week | Once per week to once per month | Less frequently than once per month |
| Mean time to restore service | Less than one hour | One hour to one day | One day to one week | More than one week |
| Change failure rate | 15% or less | 16-30% | 31-45% | More than 45% |


The maximum score for a practitioner is 16, and the lowest is 4. The following cumulative scores then characterise our high, medium, and low performer categories:

  • Low = 4-7 points;

  • Medium = 8-12 points;

  • High = 13-16 points.
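For readers who want to reproduce the consolidated score, here is a minimal sketch of the scoring logic described above. The answer labels are shortened versions of the categories in the table; the point values and performer bands follow the scheme in this post.

```python
# Consolidated DORA scoring: each metric earns 1-4 points and the total (4-16)
# maps to a low/medium/high performer band. Answer labels are abbreviated here.
POINTS = {
    "lead_time": {"<1 day": 4, "1 day-1 week": 3, "1 week-1 month": 2, ">1 month": 1},
    "deploy_frequency": {"multiple per day": 4, "hourly-weekly": 3, "weekly-monthly": 2, "<monthly": 1},
    "time_to_restore": {"<1 hour": 4, "1 hour-1 day": 3, "1 day-1 week": 2, ">1 week": 1},
    "change_failure_rate": {"<=15%": 4, "16-30%": 3, "31-45%": 2, ">45%": 1},
}

def consolidated_score(answers: dict[str, str]) -> tuple[int, str]:
    total = sum(POINTS[metric][answer] for metric, answer in answers.items())
    if total <= 7:
        band = "low"
    elif total <= 12:
        band = "medium"
    else:
        band = "high"
    return total, band

# Example: exceptional lead time but poor deployment frequency lands in the medium band.
print(consolidated_score({
    "lead_time": "<1 day",
    "deploy_frequency": "<monthly",
    "time_to_restore": "1 hour-1 day",
    "change_failure_rate": "16-30%",
}))  # -> (11, 'medium')
```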
