Coronavirus? There's an App for That.

The term “contact tracing” has become mainstream due to the global pandemic of the novel coronavirus SARS-CoV-2, which causes COVID-19.

What is Contact Tracing?

Contact tracing is a public health practice used for infectious disease response. State and local public health systems have performed contact tracing for diseases such as tuberculosis and syphilis. Traditionally, public health authorities are informed of a positive test for an infectious disease and are provided with contact information for the person who tested positive. Public health workers perform contact tracing by reaching out to those persons, usually on the phone, asking how they’re doing, and interviewing them to identify persons they have been in close contact with given the particular characteristics of the pathogen. Authorities then notify close contacts and ask them to isolate or seek care appropriately. In this way, contact tracing can reduce the spread of infectious diseases by identifying, informing, and isolating persons who have been potentially infected before they contribute to further spread of the pathogen.

Digital Contact Tracing

Proposals to combat coronavirus using smartphones largely focus on facilitating the process of “contact tracing.” Contact tracing involves working backward from infected cases to identify people who may have been exposed to disease, so that they can be tested, isolated, and—when possible—treated. Traditional contact tracing is a labor-intensive process of interviews and detective work. Some countries such as Singapore, South Korea and Israel have enlisted technology, including mobile apps, to facilitate contact tracing of coronavirus cases, and this idea is now catching on in the United States. North Dakota and Utah have released voluntary contact-tracing apps that rely on tracking users’ location as they move about, and the consulting firm PwC has begun promoting a contact-tracing tool to permit employers to screen which employees can return to work. Several American technology companies and institutions of higher learning are developing the infrastructure that would permit automated contact tracing of a sort, while also avoiding certain privacy concerns.

The lure of automating the painstaking process of contact tracing is apparent. But to date, no one has demonstrated that it’s possible to do so reliably, despite numerous concurrent attempts. Apps that notify participants of exposure could, on the margins and in the right conditions, help direct testing resources to those at higher risk. Anything else strikes me as implausible at best, and dangerous at worst.


Source: https://blog.ncase.me/onestepahead

Why You Should Be Concerned

Apple and Google have proposed an application programming interface (or “API”) for conducting contact tracing using mobile phones, which they describe as a system for “exposure notification.” The Apple-Google API supports the specific functionality of warning participants if their phone has been near the phone of a person who has been diagnosed with, or self-reports, COVID-19. To be clear, the companies are not planning to develop an app themselves, which would require addressing some of the more challenging issues around how to verify that a user has been infected and what policies to suggest when individuals are alerted to being “in contact” with an infected individual. Ultimately, they have left it up to public health officials, or whoever else develops the apps, to determine their functionality and uses—subject, of course, to the constraints of the platform.

Many others have pointed out a host of pitfalls for voluntary, self-reported coronavirus apps of the kind Apple, Google, and others contemplate. First, app notifications of contact with COVID-19 are likely to be simultaneously both over- and under-inclusive. Experts in several disciplines have shown why mobile phones and their sensors make for imperfect proxies for coronavirus exposure. False positives (reports of exposure when none existed) can arise easily. Individuals may be flagged as having contacted one another despite very low possibility of transmission—such as when the individuals are separated by walls porous enough for a Bluetooth signal to penetrate. Nor do the systems account for when individuals take precautions, such as the use of personal protective equipment, in their interactions with others.
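
To make the distance problem concrete: Bluetooth-based apps generally infer proximity from received signal strength (RSSI) using something like the standard log-distance path-loss model sketched below. This is a minimal illustration in Python, not any particular app’s algorithm; the transmit power and path-loss exponents are assumed values. The point is that one RSSI reading maps to very different distances depending on walls, bodies, and pockets.

```python
def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Log-distance path-loss model: distance implied by an RSSI reading.
    tx_power_dbm is the expected RSSI at 1 meter (an assumed calibration)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# The same -75 dBm reading implies anywhere from ~3 m to ~8 m depending on
# the environment's path-loss exponent (open air vs. cluttered indoors):
for n in (1.8, 2.0, 3.0):
    print(f"n={n}: -75 dBm -> {estimate_distance_m(-75, path_loss_exponent=n):.1f} m")
```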

Even among true contact events, most will not lead to transmission. Studies suggest that people have on average about a dozen close contacts a day—incidents involving direct touch or a one-on-one conversation—yet even in the absence of social distancing measures the average infected person transmits to only 2 or 3 other people throughout the entire course of the disease. Fleeting interactions, such as crossing paths in the grocery store, will be substantially more common and substantially less likely to cause transmission. If the apps flag these lower-risk encounters as well, they will cast a wide net when reporting exposure. If they do not, they will miss a substantial fraction of transmission events. Because most exposures flagged by the apps will not lead to infection, many users will be instructed to self-quarantine even when they have not been infected. A person may put up with this once or twice, but after a few false alarms and the ensuing inconvenience of protracted self-isolation, we expect many will start to disregard the warnings. Of course this is a problem with conventional contact tracing as well, but it can be managed with effective direct communication between the contact tracer and the suspected contact.
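
A back-of-the-envelope calculation makes the imbalance concrete. It uses the figures above, plus an assumed seven-day window of contacts receiving notifications; the window is an assumption for illustration, not a published parameter.

```python
# Rough arithmetic on alert precision, using the figures from the text.
close_contacts_per_day = 12        # "about a dozen close contacts a day"
notification_window_days = 7       # assumed window of contacts that get alerted
transmissions_per_case = 2.5       # "only 2 or 3 other people" per infection

flagged = close_contacts_per_day * notification_window_days
share_never_infected = 1 - transmissions_per_case / flagged
print(f"{flagged} close contacts alerted; ~{share_never_infected:.0%} never infected")
# -> 84 close contacts alerted; ~97% never infected
```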

At least as problematic is the issue of false negatives—instances where these apps will fail to flag individuals as potentially at risk even when they’ve encountered someone with the virus. Smartphone penetration in the United States remains at about 81 percent—meaning that even if we had 100 percent installation of these apps (which is extremely unlikely without mandatory policies in place), we would still only see a fraction of the total exposure events (roughly 0.81 × 0.81 ≈ 65 percent, since both parties to an encounter must be carrying a capable phone). Furthermore, people don’t always have their phones on them. Imagine the delivery person who leaves her phone in the car. Or consider that the coronavirus can be transmitted via the surfaces on which it lingers long after a person and their phone have left the area. The people in the highest-risk groups—the aging or under-resourced—are perhaps least likely to download the app while needing safety most. Others may download the app but fail to report a positive status—out of fear, because they are never tested, or because they are among the significant percentage of carriers who are asymptomatic.
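
The 65 percent figure is just squaring: an exposure event is only visible if both parties carry a participating phone, so coverage scales with the square of adoption. A minimal sketch:

```python
def pairwise_coverage(adoption: float) -> float:
    """Fraction of two-person encounters in which BOTH phones participate."""
    return adoption ** 2

for adoption in (0.81, 0.60, 0.40):
    print(f"{adoption:.0%} adoption -> {pairwise_coverage(adoption):.0%} of encounters covered")
# Even 81% adoption (every US smartphone) covers only about two-thirds of
# encounters; realistic, voluntary uptake covers far less.
```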

Ultimately, contact tracing is a public health intervention, not an individual health one. It can reduce the spread of disease through the population, but does not confer direct protection on any individual. This creates incentive problems that need careful thought: What is in it for the user who will sometimes be instructed to miss work and avoid socializing, but does not derive immediate benefits from the system?

The Apple-Google “decentralized” architecture isn’t completely free of privacy and security concerns, however, and actually opens apps based on these APIs to new and different classes of privacy and security vulnerabilities. For example, because these contact-tracing systems reveal health status in connection with a unique (if rotating) identifier, it is possible to correlate infected people with their pictures using a stationary camera connected to a Bluetooth device in a public place.

The Risk of Malicious Use is Paramount

The apps built on top of Apple and Google’s new system will not be a ‘magic bullet’ techno-solution for the pandemic.

The issue of malicious use is particularly acute given the current climate of disinformation, fake news, and political manipulation. Imagine an unscrupulous political operative who wanted to dampen voting participation in a given district, or a desperate business owner who wanted to stifle competition. Either could falsely report cases of coronavirus without much fear of repercussion. Trolls could sow chaos for the malicious pleasure of it. Protesters could trigger panic as a form of civil disobedience. A foreign intelligence operation could shut down an entire city by falsely reporting COVID-19 infections in every neighborhood. There are a great many vulnerabilities underlying this platform that have yet to be explored.

There is also a very real danger that these voluntary surveillance technologies will effectively become compulsory for any public and social engagement. Employers, retailers, or even policymakers can require that consumers display the results of their app before they are permitted to enter a grocery store, return to work, or use public services—as is slowly becoming the norm in China and Hong Kong, and is even being explored for visitors to Hawaii.

Taken with the false positive and “griefing” (intentionally crying wolf) issues outlined above, there is a real risk that these mobile-based apps can turn unaffected individuals into social pariahs, restricted from accessing public and private spaces or participating in social and economic activities. The likelihood that this will have a disparate impact on those already hardest hit by the pandemic is also high. Individuals living in densely populated neighborhoods and apartment buildings—characteristics that are also correlated with non-white and lower-income communities—are likelier to experience false positives due to their close proximity to one another.

The Electronic Frontier Foundation (EFF) has warned that, as it stands now, there’s no way to verify that the device sending contact-tracing information is actually the one that generated it. Thus, malicious actors could potentially harvest the data over the air and then rebroadcast it, undermining the system entirely.

Technical Details of These Apps

When two people who have opted into contact tracing are in close contact for a certain period of time, their phones will exchange anonymous identifier beacons, known as rolling proximity identifiers (RPIDs). If one of the two is later diagnosed with the coronavirus, that infected person can enter the test result into an app, such as a compatible app from a public health authority.
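
For the technically curious, the published Apple-Google Exposure Notification cryptography specification derives these beacons roughly as follows: a random 16-byte daily key is expanded via HKDF into an AES key, which encrypts the current 10-minute interval number to produce the RPID. The Python sketch below follows the constants in that published specification, but it is an illustration of the scheme, not the code that ships on any phone.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def en_interval_number(unix_ts: int) -> int:
    """10-minute interval count since the Unix epoch; RPIDs rotate per interval."""
    return unix_ts // 600

def new_daily_key() -> bytes:
    """Temporary Exposure Key: 16 random bytes, one per 24-hour period."""
    return os.urandom(16)

def rpid_for_interval(daily_key: bytes, interval: int) -> bytes:
    """Derive the RPID broadcast during one 10-minute interval."""
    rpik = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"EN-RPIK").derive(daily_key)
    padded = b"EN-RPI" + bytes(6) + interval.to_bytes(4, "little")
    enc = Cipher(algorithms.AES(rpik), modes.ECB()).encryptor()
    return enc.update(padded) + enc.finalize()
```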

Then, the infected person can consent to uploading to the cloud the last 14 days of his or her diagnosis keys, the daily keys from which the broadcast beacons are derived. Any other person who has been in close proximity to someone infected will then be notified via their phone that an exposure to someone who has tested positive for coronavirus took place.
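
Matching then happens on the phone itself: it re-derives every RPID that each published diagnosis key could have produced and checks them against the beacons it overheard. Continuing the sketch above (and ignoring the real system’s weighting by duration and signal strength):

```python
def rpids_for_day(daily_key: bytes, day_start_ts: int) -> set:
    """All 144 RPIDs a daily key yields over its 24-hour validity window."""
    start = en_interval_number(day_start_ts)
    return {rpid_for_interval(daily_key, start + i) for i in range(144)}

def check_exposure(heard_rpids: set, published_keys: list) -> bool:
    """heard_rpids: beacons this phone logged over the last 14 days.
    published_keys: (daily_key, day_start_ts) pairs from the registry."""
    return any(rpids_for_day(key, ts) & heard_rpids
               for key, ts in published_keys)
```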

A top security issue at this point, according to the EFF, is that there is currently no way to verify that the device sending an RPID is actually the one that generated it, so trolls could collect RPIDs from others and rebroadcast them as their own.

Imagine a network of Bluetooth beacons set up on busy street corners that rebroadcast all the RPIDs they observe. Anyone who passes by a ‘bad’ beacon would log the RPIDs of everyone else who was near any one of the beacons. This would lead to a lot of false positives, which might undermine public trust in proximity-tracing apps—or worse, in the public-health system as a whole.
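
Continuing the sketch above, a toy replay shows why this works: nothing in an advertised RPID authenticates the device that sent it, so a relayed beacon matches exactly like the original. The names and scenario here are hypothetical.

```python
import time

# Alice's phone broadcasts an RPID at corner A; a malicious beacon relays
# the raw bytes to corner B, where Bob's phone logs them as a "contact."
now = int(time.time())
day_start = (now // 86400) * 86400            # start of the current UTC day
alice_key = new_daily_key()
relayed_rpid = rpid_for_interval(alice_key, en_interval_number(now))

bob_heard = {relayed_rpid}                    # Bob only heard the relay

# Later, Alice tests positive and her daily key is published. Bob is
# flagged as exposed despite never having been near Alice:
print(check_exposure(bob_heard, [(alice_key, day_start)]))  # True
```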

Another concern about the proximity-tracking system proposed by Apple and Google is that it leaves open the possibility that the contacts of an infected person will figure out which of the people they encountered is infected. This poses a privacy risk.

Taken to an extreme, bad actors could collect RPIDs en masse, connect them to identities using face recognition or other tech, and create a database of who’s infected.

The plan to have infected users publicly share their once-per-day diagnosis keys—instead of just their every-few-minute RPIDs—also could expose people to what are called linkage attacks, according to the EFF.

A well-resourced adversary could collect RPIDs from many different places at once by setting up static Bluetooth beacons in public places, or by convincing thousands of users to install an app. With just the RPIDs, the tracker has no way of linking its observations together… But once a user uploads their daily diagnosis keys to the public registry, the tracker can use them to link together all of that person’s RPIDs from a single day.

Linking together multiple RPID pings could expose users’ daily routines, such as where they live and work, leaving this information open to exploitation.
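
Continuing the earlier sketch, a hypothetical tracker that has logged (RPID, place, time) sightings from its beacons could join them against a published diagnosis key like so, turning individually unlinkable pings into one person’s daily trajectory:

```python
def link_trajectory(sightings, diagnosis_key: bytes, day_start_ts: int):
    """sightings: (rpid, place, time) tuples logged by the tracker's beacons.
    Returns the subset that the published key proves belong to one person."""
    start = en_interval_number(day_start_ts)
    owned = {rpid_for_interval(diagnosis_key, start + i) for i in range(144)}
    return sorted((t, place) for rpid, place, t in sightings if rpid in owned)

# The result reads as one person's day: a residential block in the morning,
# an office tower at noon, revealing where they likely live and work.
```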

At the end of the day, no clever technology—standing alone—is going to get us out of this unprecedented threat to health and economic stability. At best, the most visible technical hope we have is that these apps DO NO HARM.

 

Apple and Google API
Google & Apple's Contact Tracing API Explainer
