By Indra de Lanerolle
First published in The Conversation – Africa

Civic technology initiatives are on the rise. They are using new information and communication technologies to improve transparency, accountability and governance – faster and more cheaply than before.

In Taiwan, for instance, tech activists have built online databases to track political contributions and create channels for public participation in parliamentary debates. In South Africa, the anti-corruption organisation Corruption Watch has used online and mobile platforms to gather public votes on Public Protector candidates.

But research I recently completed with partners in Africa and Europe suggests that few of these organisations may be choosing the right technological tools to make their initiatives work.

We interviewed people in Kenya and South Africa who are responsible for choosing technologies when implementing transparency and accountability initiatives. In many cases, they’re not choosing their tech well. They often only recognised in retrospect how important their technology choices were. Most would have chosen differently if they were put in the same position again.

Our findings challenge a common mantra which holds that technological failures are usually caused by people or strategies rather than technologies. It’s certainly true that human agency matters. However powerful technologies may seem, choices are made by people – not the machines they invent. But our research supports the idea that technology isn’t neutral. It suggests that sometimes the problem really is the tech.

Code is law

This isn’t a new discovery. As the technology historian Melvin Kranzberg put it: “Technology is neither good nor bad; nor is it neutral.”

US legal professor Lawrence Lessig made a similar case when he argued that “Code is Law”.

Lessig pointed out that software – along with laws, social norms and markets – can regulate individual and social behaviour. Laws can make it compulsory to use a seat belt. But car design can make it difficult or impossible to start a car without a seat belt on.

Our study examined initiatives with a wide array of purposes. Some focused on mobile or online corruption reporting, others on public service monitoring, open government data publishing, complaints systems or public data mapping and budget tracking.

They also used a range of different technological tools. These included “off-the-shelf” software; open-source software developed within the civic tech community; bespoke software created specifically for the initiatives; and popular social media platforms.

Fewer than one-quarter of the organisations were happy with the tools they’d chosen. Many encountered technical issues that made their tool hard to use. Half the organisations we surveyed discovered that their intended users did not use the tools to the extent that they had hoped. This uptake failure was often linked to the tools’ specific attributes.

For instance: if an initiative uses WhatsApp as a channel for citizens to report corruption, the messages will be protected by “end-to-end” encryption. This makes it far harder for governments or other actors to read those messages. If Facebook Messenger is used instead, content will not be encrypted in the same way by default. Such differences can affect the risks users face and influence their willingness to use a particular tool.

Other applications, like YouTube and Vimeo, may differ in how much data they consume. One may be more expensive than the other for users to access. Organisations need to consider this when choosing their primary platform.

It’s not always easy to choose between the many available technologies. The differences between them are often not transparent. The effects of those differences, and their relevance to an initiative’s aims, may be uncertain. Many of the people we spoke to had very limited technical knowledge, experience or skills, which constrained their ability to understand the differences between options.

One of the most common frustrations interviewees reported was that the intended users didn’t use the tool they had developed. This uptake failure is not only common in the civic tech fields we examined. It has been noted since at least the 1990s in the worlds of business and development.

Large corporations’ IT departments introduced “change management” techniques in answer to this problem: they changed employees’ work practices to fit the new technologies being introduced. In civic tech, the users are rarely employees who can be instructed or even trained. Tools need to be adapted to the intended users, rather than users being made to adapt as they might be within a structured organisation.

Try before you buy

So what should those working in civic technology do about improving tool selection? From our research, we developed six rules for better tool choices. These are:

  • first work out what you don’t know;
  • think twice before building a new tool;
  • get a second opinion;
  • try it before you buy it;
  • plan for failure; and
  • share what you learn.

Possibly the most important of these recommendations is to try or “trial” technologies before making a final selection. This might seem obvious. But it was rarely done in our sample.

Testing in the field is a chance to explore how a specific technology and a specific group of people interact. It often brings issues to the surface that are initially far from obvious. It exposes explicit or implicit assumptions about a technology and its intended users.

Failure can be OK. Silicon Valley’s leading tech organisations fail regularly. But if transparency and accountability initiatives are going to improve their use of technology, they are going to need to learn from this and from other research – and from their own experiences.

Indra de Lanerolle runs the Network Society Lab in the Journalism and Media Programme of the University of the Witwatersrand.