How we’re adapting the Harvard Kennedy School’s ‘Model for Digital Maturity’ for use in Australian government

Lara Stephenson
Code For Australia
Nov 29, 2019


This post is about the context and methodology behind Code for Australia’s Digital Maturity Indicator program. We have also published a post capturing our findings and recommendations from the first six months of the program, which you can read here.

Recently, researchers from Code for Australia have been pioneering the Harvard Kennedy School of Government’s Model for Digital Maturity in Australia, under our Digital Maturity Indicator (DMI) program.

The model was designed by civic tech and government experts to capture best practice and enable benchmarking of digital capability in the public service worldwide.

What is the model?

The model “captures the organisational development of digital services [and helps] provide a shared set of definitions for maturity” in the public sector.¹ It was created in response to a recognised need to “contextualise progress made and milestones yet to achieve”² in new spaces such as digital transformation.

It helps agencies see where they currently stand and what the next steps are towards greater digital and delivery capability, with the end goal of delivering more value to citizens.

By assessing agencies and teams against the model, we can compare their results with others in the public sector and reflect on their progress over time. Celebrating and recognising success, and identifying next steps, are a key part of the journey to continued improvement of services and products.

How is it being used?

The model has been built around digital service units, such as the Digital Transformation Agency in Australia, the Government Digital Service (GDS) in the UK, and 18F in the USA.

We are adapting it to suit departments and agencies in all spheres of government, giving teams a snapshot of their current digital maturity and actionable recommendations for improvement.

Our pilot work has been with smaller agencies (up to about 150 staff in each), and we have subsequently run the program with larger teams as well. The Public Service Commission in New South Wales commissioned the pilot and is partnering with us to roll out the DMI and subsequent recommendations across the state.

We are using the DMI to measure the level of digital maturity government agencies are currently experiencing, as well as assessing their aspirations and works in progress. The results of our assessments are shared back with the agency and research participants, providing a baseline of current digital maturity along with a roadmap of ‘next steps’ tailored to each agency’s needs and priorities. For example, we may list actions that are easy to implement and that larger recommendations can build on, or some ‘quick wins’ that demonstrate success and gather momentum for implementing further recommendations.

The recommendations we provide, and the qualities and operations we assess, are all related to the six key areas of the model, as explained below:

A short rundown on what the Digital Maturity Indicator measures:

Political Environment: Support from political or executive sponsors for projects and digital transformations, and codifying standards for delivery of digital services.

Institutional Capacity: Availability of budget for digital projects, support for carrying out large-scale replicable digital projects, sharing best practices and changing the way other government agencies work.

Delivery Capability: Access to digital tools, the ability to work in the open and share progress as you create, and whether the agency can deliver alpha, beta and live versions of new products with staff and citizens.

Skills and Hiring: The ability to create new roles for new kinds of working, strong partnerships with the private sector and universities to bring in staff with new skills, and proactive support for growth and skills development across the workforce.

User-Centred Design: Involving user experience (UX) design in projects, clear product manager roles, use of shared design patterns (such as digital design standards or pattern libraries).

Cross-Government Platforms: Availability of data in an accessible format to citizens, ability to share private data between departments, shared digital platforms between government agencies, shared governance and ownership of digital platforms.
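To make these areas concrete, below is a minimal sketch of how an assessment result could be recorded against them. The area names come from the model as listed above; the grading labels, data structure and example agency data are illustrative placeholders of our own, not the actual DMI instrument.

```python
from dataclasses import dataclass, field

# The six key areas of the model, as described above.
DMI_AREAS = [
    "Political Environment",
    "Institutional Capacity",
    "Delivery Capability",
    "Skills and Hiring",
    "User-Centred Design",          # we refer to this as Human-Centred Design
    "Cross-Government Platforms",
]

# Illustrative grading scale only: the model grades each marker from
# "Low" through to "Future State", but the intermediate labels here
# are placeholders, not the published wording.
GRADES = ["Low", "Emerging", "Developing", "Future State"]

@dataclass
class AreaResult:
    area: str
    grade: str                                            # one of GRADES
    evidence: list[str] = field(default_factory=list)     # interview notes
    next_steps: list[str] = field(default_factory=list)   # tailored recommendations

def summarise(results: list[AreaResult]) -> dict[str, str]:
    """Return a simple baseline snapshot: area -> current grade."""
    return {r.area: r.grade for r in results}

# Hypothetical example for a single area.
example = AreaResult(
    area="Cross-Government Platforms",
    grade="Emerging",
    evidence=["Open data published, but no shared platforms with other agencies"],
    next_steps=["Adopt a whole-of-government design system as a first shared platform"],
)
print(summarise([example]))
```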

Putting it into practice: rolling out the model for the first time in Australia

As researchers, we needed a fair amount of thought and conceptual mapping to get to know these six areas and to translate their aspirational markers into questions to ask participants. To get meaningful answers, we created questions that open up stories about interviewees’ working practices, which starts to indicate the nature of the agency’s environment and where it sits in the digital transformation journey.

This involved some translation from the literal meanings of the gradings into questions that explore day-to-day practices and operations.

An example of one of the markers is below, with the gradings illustrating progressive stages. Sometimes they map to real experiences, and sometimes they don’t apply to the operations we’re researching.

[Image: one of the markers under Cross-Government Platforms, with gradings from Low to Future State and a description of what might happen at each point in the digital maturity journey.]

This matrix is the base for formulating questions, but once we start working with an agency we further tailor the questions to their specific environment, their estimated digital maturity based on the context they provide, and our interviewees’ responses.
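As a rough illustration of that tailoring step, the sketch below maps a single, hypothetical marker’s gradings to open, story-inviting prompts, and picks prompts around an agency’s estimated grading. The marker wording, grading labels and questions are examples written for this post, not the model’s published wording or our actual interview guide.

```python
# Hypothetical marker under Cross-Government Platforms, with one open
# question per grading. In practice we start from prompts like these and
# tailor them to the agency's context and earlier answers.
marker = "Sharing of digital platforms between agencies"

question_bank = {
    "Low":          "Tell me about the last time your team needed something built or hosted by another agency. What happened?",
    "Emerging":     "Which platforms or services do you currently reuse from elsewhere in government, and how did that come about?",
    "Developing":   "Who decides how the shared platforms your team relies on are governed and funded?",
    "Future State": "How does your team contribute back to platforms that other agencies also depend on?",
}

def prompts_for(estimated_grade: str) -> list[str]:
    """Start the conversation around the agency's estimated grading, plus the
    next grading up, so the interview surfaces both current practice and
    near-term aspirations."""
    grades = list(question_bank)
    i = grades.index(estimated_grade)
    return [question_bank[g] for g in grades[i : i + 2]]

print(prompts_for("Emerging"))
```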

While we worked, we also:

  • Simplified concepts into their parts, particularly when the understanding of this concept varied — when assessing ‘sharing data’ we broke it into the ability to receive data, send data, or simply communicate across agencies.
  • Sometimes used differing language for familiarity within agencies and consistency with other Code for Australia programs — User-Centred Design or UX became Human-Centred Design.
  • Avoided centring individual technical indicators, or the model’s own language, in our questions, so that people didn’t get bogged down in jargon or in concepts that have different meanings in different places (for example, how ‘agile’ may work in the private sector vs the public sector).
  • Attempted to weight or prioritise recommendations that would have good flow-on effects, such as prioritising staff up-skilling in digital work practices and establishing a shared definition of ‘digital’ with staff before taking a bigger or more abstract step.

An ecosystem of digital maturity

As we worked, we found it important to illustrate how all the components work together and why those relationships matter.

The original model does not address how the components interrelate or whether any hierarchy exists between them; in practice we have found strong interrelations and an order of operations.

We found that Skills and Hiring is an important central or first focus, since people are the drivers of change. From there, Delivery Capability and Cross-Government Platforms flow outwards. Building on this is Institutional Capacity, and at that point Human-Centred Design can flow. Those five components of the model are all supported by the framing of the Political Environment.
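One way to express that ordering is as a simple dependency graph, which is roughly how we think about sequencing focus areas. The sketch below is our own interpretation of the relationships described above, not part of the published model.

```python
# Each component lists the components we treat as its prerequisites.
# Political Environment sits underneath everything as the supporting frame.
DEPENDS_ON = {
    "Skills and Hiring": [],
    "Delivery Capability": ["Skills and Hiring"],
    "Cross-Government Platforms": ["Skills and Hiring"],
    "Institutional Capacity": ["Delivery Capability", "Cross-Government Platforms"],
    "Human-Centred Design": ["Institutional Capacity"],
}
SUPPORTING_FRAME = "Political Environment"

def focus_order() -> list[str]:
    """Topological order: a rough sequence for where to focus effort first."""
    ordered, remaining = [], dict(DEPENDS_ON)
    while remaining:
        ready = [c for c, deps in remaining.items() if all(d in ordered for d in deps)]
        ordered.extend(ready)
        for c in ready:
            del remaining[c]
    return ordered

print(focus_order())            # Skills and Hiring first, Human-Centred Design last
print("Supported by:", SUPPORTING_FRAME)
```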

In the recommendations we present to agencies, we break this down into concrete examples and steps for their specific situation, based on this ordering and on their own environment, which makes the interrelated model more practical, actionable and understandable.

Reflections on the model itself

While working with the model as published by HKS, our experience and some of our participants’ responses raised relevant questions that the model didn’t directly address. These included:

  • Accessibility, and in particular refining the ‘access to tools’ under Delivery Capability to include accessible tools
  • Data ethics
  • Data literacy
  • Digital literacy
  • Defining ‘digital’ or illustrating by example some meanings of ‘digital’
  • Broadening the definition of Human-Centred Design to a business methodology for identifying and solving problems, rather than just testing services/products with participants
  • Broadening the inclusion of ‘users’ to agency staff, as well as citizens and other public sector actors. The model prioritises citizens as users and we were keen to capture engagement between digital and non-digital team members.
  • Interviewing a mix of senior, mid and operational staff for a more holistic interpretation of the agencies’ digital maturity.
  • Renaming ‘future state’ to ‘advanced state’, because any progress from current baseline could be viewed as a future state.
  • Reconsideration of indicator titles: we found the titles of some of the six indicator sections hard to grasp, even after months of intensive work (for example, Delivery Capability vs Institutional Capacity). It’s still easy to confuse them or switch the wording around. We would love clear, easy-to-understand titles for each of the indicators.

There will always be a certain level of flexibility needed in adapting the model to an agency and their participants, particularly with individual staff roles not encompassing everything from project management to budgeting to user-testing, and some agencies being very early stage on aspects of digital maturity.

Our research questions also covered the preparatory groundwork for each indicator, and in our assessment we counted the clear, communicated aspirations agencies have as well as their actual work in progress.

Some things that haven’t translated to agency teams (in our experience so far)

  • The ‘digital units’ that informed the model don’t always exist within, or directly support, the agencies we worked with, so we couldn’t comment on their capability to influence things such as budgets, role definitions or project scale. In our work we changed the concept of ‘digital unit’ to ‘the agency’s digital function’, or ‘the delivery team’ when we were talking to individual project owners, and assessed from there.
  • Sharing data can be so limited or non-existent that it is hard to comment accurately on cross-government platforms. There may also be a discrepancy between what the DMI means by data and what government categorises as data: do we need a more specific definition of ‘data’ that reflects how the term is defined in government legislation?

What’s next?

We believe that an agency conducting a DMI assessment is a positive step towards digital maturity in itself — they are modelling the change they want to see happen and become widespread.

In six months, we have already conducted several assessments and are starting more, in different states of Australia and across different levels of government. We’re working on a repeatable framework to make the assessment even smoother to roll out in different contexts, and building a data bank of results that becomes more useful with every completed assessment.

So far, the program has been well received, and we plan to check in with participants some time after they receive their recommendations, to evaluate how the recommendations have been applied and what the results have been.

You can read about the initial findings here, and we’ll be sharing further updates from the program on a regular basis. Stay tuned!

References

1 and 2: “Part 2: Proposing a Maturity Model for Digital Services”, from the Harvard Kennedy School’s digital HKS blog.

About the Authors

This piece is written by Lara Stephenson and Alex Crook, researchers with Code for Australia.

Get in touch

If you work inside Government and would like to learn more about the Digital Maturity Indicator, you can register your interest here.

We’re keen to hear from researchers with experience in digital government who are interested in helping us roll out the DMI across Australia in coming months. Please contact us directly.

Curious about something in this post? Leave us a comment or question in the comments or via Typeform, and we’ll be happy to answer all your burning questions!
