Gonski and other government programs should be piloted first

James Paterson — Australian Financial Review — 30 April, 2018

When it comes to taxpayers’ money, good intentions are not enough. Australia needs a more rigorous process for testing the rollout of new, big-spending plans.

Even the most carefully designed policy ideas often fail when put into practice.

One famous example from the United States is the Cambridge-Somerville Youth Study. In 1939, 506 disadvantaged boys from eastern Massachusetts were placed in two groups: one which was left alone, and one which received significant extra support in the form of counselling, tutoring and summer camps. Follow-up studies of the participants in 1978 and 1981 found the interventions had no positive impact on juvenile arrest rates – one of the objectives of the program. Remarkably, rates of alcoholism, mental illness, stress-related disease and unemployment were worse among the participants who received the extra support.

Modern governments don’t seem to have learnt from this study – or the litany of other failed policy interventions like it. They continue to roll out national programs costing billions of dollars on the assumption they will work as planned.

Instead of continuing with this approach, we should learn from Silicon Valley and do small-scale testing before rolling out initiatives on a national scale.

Google engineers ran their first so-called “A/B test” in the year 2000 on the number of search results that were displayed on each page. Since then, A/B testing has become ubiquitous throughout Silicon Valley, with companies like Google, Amazon and Netflix constantly testing everything from page margins to the order that tabs are displayed.

The idea behind A/B testing is to have a small-scale roll-out of a change to test how it performs against an alternative. In the online world, this means users are randomly shown different versions of the same webpage and data about their activity is then used to determine which version is better.
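The logic of an A/B test can be sketched in a few lines of code. Everything below is purely illustrative – the variant names and the 10 and 12 per cent conversion rates are made-up figures, not data from any real experiment:

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

# Randomly assign each simulated user to variant A or B and record whether
# they "converted" (e.g. clicked the sign-up button). The 10% and 12%
# underlying conversion rates are hypothetical.
outcomes = {"A": [], "B": []}
for _ in range(10_000):
    variant = rng.choice(["A", "B"])
    p_convert = 0.10 if variant == "A" else 0.12
    outcomes[variant].append(rng.random() < p_convert)

def conversion_rate(results):
    """Fraction of users in a variant who converted."""
    return sum(results) / len(results)

rate_a = conversion_rate(outcomes["A"])
rate_b = conversion_rate(outcomes["B"])
print(f"A: {rate_a:.3f}  B: {rate_b:.3f}  -> ship the better variant")
```

The crucial ingredient is the random assignment: because users are split by chance rather than by choice, any sizeable difference between the two groups’ conversion rates can be attributed to the change being tested rather than to who happened to see it.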

This data-driven approach was brought to the political world when a former Google product manager named Dan Siroker jumped ship to the Obama campaign. As Wired magazine reported in 2012, Siroker implemented A/B testing with spectacular results. A test of the name and design of the “sign up” button helped lift clicks by 18.6 per cent, with “Learn More” proving more effective than “Join Us Now” and “Sign Up Now”. By the end of the campaign it was estimated that A/B testing had netted the Obama campaign an additional $75 million in fundraising and 4 million additional email addresses – almost a third of their total of 13 million.

Often, the version they expected to perform better in fact performed worse. Human beings are complex and respond to interventions in unexpected and unanticipated ways.

Unsurprisingly, this A/B testing is now standard practice in all US political campaigns. And many traditional businesses are picking it up, as well.

Sadly, however, this kind of small-scale testing remains rare in the government sector. And it’s not hard to see why. Government bureaucracies are notoriously slow and resistant to change. And many politicians have a preference for imposing top-down solutions to solve society’s ills – failing to recognise the complexity of society, the limits of their own human rationality, and their inability to alter the choices other people make.

There are some exceptions, such as the Turnbull government’s cashless welfare card, which has been progressively rolled out on a trial basis since 2016, starting in the remote communities of Kununurra and Wyndham in WA, and Ceduna in SA. So far, the results have been impressive, and a new trial is being rolled out in the WA Goldfields region.

This is an approach that should be emulated across far more areas of policy.

Take the Gonski education reforms, for example. Rather than a nationwide rollout of a $23.5 billion package, the government could have first tested whether the extra funding would actually deliver the desired results. This could have been done by randomly selecting a small percentage of Australian schools and adjusting their funding commensurate with what they will now receive. If the funding reforms led to improvements in educational outcomes, the program could be rolled out nationwide; if they didn’t, the policy could be modified and tested again, ensuring $23.5 billion of taxpayers’ money is spent as effectively as possible.
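The randomised school pilot described above follows the same template as an A/B test. The sketch below is purely hypothetical – the number of schools, the baseline scores and the assumed two-point funding effect are invented for illustration, not estimates about real schools:

```python
import random

rng = random.Random(7)  # fixed seed; all figures below are hypothetical

# A made-up population of schools, each with a baseline test score.
schools = [{"id": i, "baseline": rng.gauss(60, 10)} for i in range(2_000)]

# Randomly select 5% of schools to pilot the extra funding.
pilot_ids = {s["id"] for s in rng.sample(schools, k=len(schools) // 20)}

# Simulated follow-up scores: assume (for illustration only) that the
# extra funding adds 2 points on average, on top of year-to-year noise.
for s in schools:
    effect = 2.0 if s["id"] in pilot_ids else 0.0
    s["followup"] = s["baseline"] + effect + rng.gauss(0, 3)

def mean_gain(group):
    """Average change in test score for a group of schools."""
    return sum(s["followup"] - s["baseline"] for s in group) / len(group)

pilot = [s for s in schools if s["id"] in pilot_ids]
control = [s for s in schools if s["id"] not in pilot_ids]
print(f"pilot gain: {mean_gain(pilot):.2f}, "
      f"control gain: {mean_gain(control):.2f}")
```

Comparing the average gain of the pilot schools against the unchanged control schools is what reveals whether the funding itself made a difference – something a nationwide rollout, with no control group, can never show.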

Rather than rolling out programs on a national scale from day one, Australian governments should learn from Silicon Valley and the Cambridge-Somerville Youth Study and run more pilot programs. Doing so will save taxpayers money and deliver far more effective policies in the long run.


This article originally appeared in the Australian Financial Review.
