Contact tracing apps, part of the solution or adding to the problem?


Our collective challenge?

One of our key challenges with Covid-19 is to control an infectious disease that is at its most transmissible before its symptoms become visible. In such a context, early warning of a potential contamination is critical, assuming of course that the warning is heeded afterwards. Contact tracing apps are presented as the solution to enable early warning (and early quarantine), which would help to keep the disease under control and, by doing so, also give society a greater sense of security.

What are these apps about?

Contact tracing apps are often mentioned as a solution to the above challenge. The concept is relatively simple to understand. Your phone/app/dedicated device emits a unique but anonymous key through its Bluetooth signal (GPS is not accurate enough), and whenever your signal bumps into the signal of someone else, the key of the other person is compared with a list of “known contaminated” keys in a central (government?) database. You would then be warned of a potential contact, and the government would gain valuable insights to shape its safety, health or social policies.
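To make that matching step concrete, here is a minimal sketch in Python. It is an illustration only: the function and data names are invented for this article, real apps wrap the rotating keys in far more elaborate cryptography, and designs differ on whether the comparison happens on the device or on a central server.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Encounter:
    key: str              # anonymous rotating key overheard from another phone
    seen_at: datetime     # when the Bluetooth "bump" was recorded
    rssi: int             # signal strength, a rough (and noisy) proxy for distance

def check_exposure(encounters, infected_keys):
    """Return the encounters whose key appears in the published list of infected keys."""
    return [e for e in encounters if e.key in infected_keys]

# Example with made-up data: two overheard keys, one of which was later
# published by the health authority as belonging to an infected user.
encounters = [
    Encounter(key="a3f9", seen_at=datetime(2020, 4, 20, 9, 15), rssi=-62),
    Encounter(key="7bc1", seen_at=datetime(2020, 4, 20, 13, 40), rssi=-80),
]
infected_keys = {"7bc1"}

for match in check_exposure(encounters, infected_keys):
    print(f"Potential exposure on {match.seen_at:%d %B %Y}: consider self-isolating.")
```

The whole mechanism boils down to that set-membership check; everything else (key rotation, upload consent, where the database lives) is what the debates below are about.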

Great idea, right?

In the ongoing societal debate, the above application often gets confused with the OTHER application of such apps, which aims to give people a green or red rubber stamp based on their presumed immunity and, by doing so, grant them access to people, places or services. It does not help that both the press and solution providers add to this confusion, depending on whichever agenda suits them commercially.

So the first caveat is NOT to mix up the two applications, because they have a different objective and a different level of intrusiveness into privacy. In the remainder of this article, I focus on the first application, i.e. attempting to reduce the R0 (reproduction) rate of the pandemic.

The debates

Focusing on the first application, there are two major debates to be had. The first is whether this is really a solution to the problem we are trying to solve. The second is whether the solution can actually be achieved, and whether it can be achieved in a way that is not disproportionately intrusive into people’s privacy.

The right solution to the right problem?

“Contact tracing” is not a new practice. It has long been done manually to combat other infectious diseases (e.g. tuberculosis, Ebola, measles). Trust in the tracing authority and the reliability of the data are crucial. I'm talking about trust in the ability to protect the data, but even more so, trust in the ability to collect data accurately and to handle and interpret it with the required expertise. It is expected that automated solutions will generate a lot of “noise”, i.e. false data, which will in the first instance trigger a lot of false reactions. These so-called “false positives” will induce distrust in the solution, resulting in a reduced adoption rate. Scientists argue that the “noise” in the data might have strong adverse societal effects, ranging from panic to scepticism to negligence, finally resulting in non-adoption (we all know the collection of “dead applications” on our smartphones).

Causes of noise in the data include the fact that Bluetooth cannot really measure distance: reception depends on obstacles and on the position of the phone, there are time gaps between the detection of signals bumping into each other, etc. So there is no conclusive proof that Bluetooth “bumps” are a sufficiently solid proxy for (infectious) social contact.
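To illustrate how shaky the distance inference is, here is a back-of-the-envelope sketch using the standard log-distance path-loss model. The reference signal strength at one metre and the environment exponent are assumed values chosen purely for illustration; a few decibels of variation (a pocket, a wall, a body in between) already moves the estimate by several metres.

```python
# Log-distance path-loss model: rssi = rssi_at_1m - 10 * n * log10(distance).
# Inverting it gives a distance estimate that is very sensitive to the
# measured RSSI and to the assumed environment exponent n.

def estimate_distance(rssi, rssi_at_1m=-60.0, n=2.0):
    """Rough distance in metres; rssi_at_1m and n are assumed calibration values."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * n))

for rssi in (-66, -70, -74):        # a few dB of spread from pockets, walls, bodies
    for n in (1.8, 2.0, 2.5):       # plausible indoor/outdoor path-loss exponents
        print(f"RSSI {rssi} dBm, n={n}: ~{estimate_distance(rssi, n=n):.1f} m")
```

With these assumed values the estimate swings roughly between two and six metres, which is exactly the range in which “close contact” is supposed to be decided.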

In typical "big data" applications, we do not care about a few thousand false positives out of millions of true observations, because those applications typically do not trigger direct behaviour or conclusions. In this case, we are using approximate technology to trigger individual actions with potentially far-reaching consequences, such as people putting themselves in quarantine.

The innovation department of the UK NHS states that 80% of mobile phone users, or 55-60% of citizens (ratio valid for the UK; Financial Times, April 20; New Scientist, April 17), should participate for this technique to have any effect other than a placebo/perceived-safety effect. If the adoption rate is lower, the health effect can be neglected. All that remains then is a false sense of security at best and an unnecessary data collection at worst.
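The arithmetic behind that threshold is easy to illustrate: a contact can only be detected when both people involved carry the app, so, under the simplifying assumption that installation is independent of whom you meet, the share of contacts the system can see falls roughly with the square of the adoption rate.

```python
# Simplified illustration: with adoption rate p, a contact between two
# random people is only detectable when BOTH carry the app, i.e. with
# probability roughly p * p (assuming installations are independent).

for p in (0.8, 0.6, 0.4, 0.2):
    print(f"adoption {p:.0%} -> roughly {p * p:.0%} of contacts detectable")
```

At 60% adoption only about a third of contacts are visible to the system; at 20% adoption, a mere 4%.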

From a macro perspective, the collection of these data can still be relevant. But then we need to be transparent about the purpose of the apps and not roll out a massive data-trawling net under the guise of a personal protection instrument.

Can it be done, in a proportional way?

On the second debate, trust from a privacy point of view, there is a lot of confusion. One side (the technologists) will argue that the real identity of people does not need to be known, only the anonymous unique key. Technically speaking, this is correct. The tech providers would know the key (in fact, they claim not even to know this: from their point of view it would remain local to your device), and they would not be the guardians of the database of interactions. Another party (e.g. the government) knows the interactions but does not know who the persons behind these interactions are. Privacy activists argue that the jump to combining these elements is very small, once a government finds the appropriate societal narrative (if it even bothers to do so). As data on social interaction is likely to be one of the most powerful instruments of mass surveillance (on top of analysing social media interactions), the fear of abuse can be easily understood. So we must differentiate between an optimistic belief in a clean technological architecture, with encrypted keys, etc., and a more pessimistic belief of abuse. Where each member of society puts his or her belief is up to him or her.

Adding to the second debate is the fact that the public does not always understand the above choices and architectures. So a real or perceived hack or data leak, even if the compromised data are useless and/or anonymous, is likely to lead to massive distrust in contact tracing in general. Scientists are wary of squandering this precious trust, which would put them in an even darker spot than before. The Ada Lovelace Institute in the UK estimates that currently 65% of the public is favourable to such a solution (recall that 80% is needed for it to be relevant), but also underlines that this is a very fragile number that could quickly shrink if solutions are rolled out in a hurry and prove to have real OR perceived privacy deficiencies. And if trust evaporates, adoption rates will become too low and the technique will generate a false feeling of safety (see the paragraph above).

Sufficient regulatory context?

On top of the two previous elements, it should be pointed out that there is no clear regulatory guidance, specific and relevant to the context we all find ourselves in these days, on what type of application is allowed (contact tracing, anonymised or not, a system of access to services or not, retention time of data, etc.). Also, there is no clear view on a separation of information (see above). If one party holds both the identities and the tracings, there is an unacceptable concentration of knowledge. And there is no clear answer on who would be the trusted “second party”.

It's not about deep state or other conspiracies

To conclude: this should not be a debate about pro or con, optimist or pessimist, belief or non-belief, conspiracy or techno-philanthropy. Setting aside the privacy debates, it is much more a scientific debate about the essence of data management, analytics and human behaviour. Data must be reliable and the level of noise must be low; we are not dealing with a solution where "getting it almost right" is good enough. The laws of statistics must be respected, i.e. you need a sufficient adoption rate to combat the virus effectively. Finally, solutions should not make societal problems bigger than the problems they are trying to solve in the first place.

Isabelle Detienne

Senior Project & Change Manager | Transformation Program | Human dynamics & transition | Learning & Development | Organizational culture & strategy| Innovation …& AI gamification apprentice


Very good article; it makes things clear. However, because privacy is a fundamental right and legally binding, it is part of the question, not only the scientific approach. Science, society and how we make it are one.


Thanks for this article, Jo. It really clarifies the debate and shows where the focus should be, and surely where it shouldn't be.
