Guest Post: “Using Digital Surveillance To Combat COVID-19: What the 9/11 Era Can (and Cannot) Teach Us”

Today’s post is by my colleague, Prof. Shane Stansbury, and his post could not be more timely: can technology be used to facilitate contact tracing in a way that gives public health authorities the information they need to stem the COVID-19 pandemic, and at the same time protect civil liberties?  Is there anything to be learned from the post-9/11 surveillance efforts?

Shane is the Robinson Everett Distinguished Fellow in the Center and a Senior Lecturing Fellow at the Law School. He’s a former federal prosecutor in the Southern District of New York, where he focused on counterterrorism and national security matters and worked closely with the intelligence community.

Here’s Shane’s essay (photos and illustrations added):

Using Digital Surveillance To Combat COVID-19: What the 9/11 Era Can (and Cannot) Teach Us

by

Shane Stansbury

I recently had the pleasure of speaking on a virtual panel about the use of surveillance tools and personal data collection to assist in mitigating the effects of COVID-19.  (On May 12, I will be speaking on a similar panel, and I invite you to join the discussion.  Details and registration information are available here.)

Our discussion focused in part on the lessons we could draw from the government’s surveillance efforts in the years after 9/11, and I thought it was worth sharing and expanding on some observations in writing.

Some Background

Much has been written in recent weeks about the use of mobile phones for the purpose of achieving faster and more comprehensive contact tracing.  The idea has gained traction following the introduction of such measures in South Korea, Singapore, and elsewhere.

Apple and Google have added to the momentum with their recent announcement that they are cooperating to facilitate interoperability between their operating systems (which will allow apps released by public health authorities to interact with most mobile phones) and to build default tracing functionality into their underlying platforms.

The move toward apps or other technology-based solutions for contact tracing—or even more aggressive measures like enforcing quarantine orders—is understandable.  Most of the country has been operating under shelter-in-place orders for the better part of a month now.  Our economy is in a free fall, and the pressure to get people back to work—at least in a limited capacity—is mounting.

The hope is that by combining some form of enhanced contact tracing with other measures (e.g., more traditional contact tracing measures, increased testing, etc.), we will be able to pave a quicker path back to a functioning society.  And, so the reasoning goes, why not make use of a device that most people carry with them every hour of every day?

There is much to be discussed about the feasibility of such an approach, but that is not the purpose of this post.  I’ll just say for now that the devil, as always, is in the details.  The objectives and methods of the tools being developed vary widely, and those details matter in evaluating their efficacy and the equities involved.

Some apps are being used for the purpose of contact tracing, while others are being used for case management or quarantine compliance.  Some rely on location data (e.g., GPS coordinates or cell site location data), while others rely on Bluetooth technology.  Some are being created by governments, while others are being created by private parties.  You get the picture.

Looming over all of this are some pretty harsh realities.  We still know relatively little about how this virus operates, and we have yet to employ testing at a scale that would help such tech-based solutions succeed.  Nor will a mobile app or similar software eliminate the important, painstaking work of traditional contact tracing and other measures like social distancing.  In other words, we need to be sober in our expectations.

Notwithstanding these caveats, there are some general lessons that we can glean from the past.  If we are thinking of adopting some form of surveillance, particularly one that might involve government (whether through the limited collection of anonymized data or through more invasive measures like mandating the use of apps or using electronic surveillance for quarantine enforcement), we are not working from a blank slate.  We have been learning in real time for the last two decades.

The 9/11 Experience in Context

I’ll start with a couple of observations about the context in which we find ourselves.  As I have been considering the question of how surveillance can help us combat the pandemic, I’ve been struck by two conditions that inform our current crisis and that resemble those from 19 years ago.

First, we are facing what I refer to as a crisis of information.  We have an imminent threat that is calling for quick and decisive action.  We need information fast, and we need it at scale.  Just as we needed to know as much as possible in the days after 9/11 to prevent the next attack, we need to know as much as possible, and as quickly as possible, about who is being affected by the coronavirus and how we can prevent its spread.

Second, if we are thinking about employing data to help us resolve that crisis of information, we must face the reality that private parties, not the government, control much of the relevant information we need.  This may be an obvious point, but it drives the whole discussion of how we resolve our information deficit.

After 9/11, we spent the better part of two decades trying to resolve how to work within these dual constraints.  By September 11, 2001, a good portion of the global telecommunications traffic that could help fill our information void was being routed through the U.S., and information that might traditionally have been collected overseas was now in the hands of U.S. providers.  So, if we wanted to collect communications data at scale to prevent another terrorist attack—and to do so in a way that preserved our democratic values—we would have to revisit our existing laws and our assumptions about how information is collected.

Nowhere was this clearer than in the context of the Foreign Intelligence Surveillance Act (FISA).  This law, which was premised on notions of individualized suspicion and 1970s-era technology, was a helpful tool in the fight against terror, but it could not do all of the work that was required at the speed and scale that we needed.  Our surveillance tools had to adapt to the times, and in fits and starts, they did so, as evidenced by measures introduced in the FISA Amendments Act and other laws.  (Of course, we are still finding our way, as demonstrated by debates over several recently expired surveillance tools.)

I provide this brief framework because it helps us remember that conducting effective surveillance in a democratic society can be done, but it is often messy and may take time to get right.  We have to grapple with the needs and expectations of three very different constituencies:  the government agencies that seek to protect society from harm; the entities that hold relevant information; and the citizens whose liberties must be protected.

Of course, there also are important differences between this crisis and the one we faced after 9/11, and we should keep them in mind.  Here are a few:

  • The data. If we are considering surveillance for the purpose of contact tracing, infection tracking, or quarantine enforcement, the information we want is not communications data; it is location data, or at least data that serves as a proxy for location.  (I am putting to the side sensitive health data that might be used for case management or epidemiological purposes, which present a separate set of issues.)  That difference matters both as a legal matter (collection of more precise location data for a longer period of time is more likely to trigger Fourth Amendment concerns) and as a logistical matter (particularly in the context of GPS or cell site location data, which may not provide the kind of precision necessary for effective contact tracing).
  • The agencies. The most relevant government stakeholders are not federal law enforcement and intelligence agencies, but rather state and local health authorities (and, depending on the objectives, select federal bodies such as the Centers for Disease Control and Prevention (CDC)).  Although this makes life easier in some ways (e.g., sources and methods are not a concern), a decentralized collection process with multiple, independent stakeholders poses unique challenges for achieving scale at speed with the needed level of accountability.
  • The targets. Unlike in the 9/11 era, when a good portion of our intelligence collection focused on foreign-based organizations and targets, the target of surveillance for contact tracing or enforcement purposes is essentially the entire U.S. population—or at least a substantial enough portion to have a public health benefit.  (One leading researcher estimates that about 60% of a country’s population would have to download a contact tracing app for it to be effective.)  That changes the stakes for the American public and, as a legal matter, impacts the type of data government entities would be able to collect.

A final note on context.  As was the case two decades ago, scalability and speed are essential.  But unless we are talking about enforcement measures (e.g., collecting data for enforcing quarantine orders), there may be room for a smaller government footprint in our current crisis.  Unlike in the counterterrorism context, where identities and communication details are critical to understanding and countering the threat, if all we want to do is keep infected people away from those who are not infected, anonymized and less intrusive data collection could conceivably still be helpful.

It is for this reason, and to avoid more intrusive measures like those taken in China and South Korea, that contact tracing apps combining Bluetooth technology, voluntary consent, and anonymized data have gained traction in recent weeks.

Again, the apps take different forms, but the basic idea is that if two people who have downloaded the app come within a certain distance of one another, the phones use Bluetooth to exchange unique, anonymized identifiers and the phones log the information locally (typically in encrypted form).  If one of the individuals becomes infected by the virus, a notification can be sent to those with whom the person has been in proximity.
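To make the mechanics concrete, here is a minimal Python sketch of the proximity-logging idea just described.  It is a toy model under stated assumptions: the identifier size, the names, and the publish-and-compare flow are invented for illustration and do not reproduce any actual protocol (such as the Apple/Google specification).

```python
import secrets
import time

# Toy model of Bluetooth proximity logging.  All names and parameters
# are illustrative assumptions, not an actual protocol.

def new_identifier():
    """Generate a random, unlinkable identifier to broadcast over
    Bluetooth; real apps rotate these frequently to prevent tracking."""
    return secrets.token_bytes(16)

class ContactLog:
    """Local, on-device log of identifiers heard from nearby phones."""

    def __init__(self):
        self._entries = []  # (identifier, timestamp); encrypted at rest in practice

    def record(self, heard_id, timestamp):
        self._entries.append((heard_id, timestamp))

    def was_exposed(self, published_ids):
        """Compare the local log against identifiers voluntarily published
        by infected users; the check runs on-device, so raw contact
        history never leaves the phone."""
        return any(heard in published_ids for heard, _ in self._entries)

# Two phones come within Bluetooth range and exchange identifiers.
alice_id, bob_id = new_identifier(), new_identifier()
alice_log, bob_log = ContactLog(), ContactLog()
alice_log.record(bob_id, time.time())
bob_log.record(alice_id, time.time())

# Later, Alice tests positive and publishes her recent identifiers.
print(bob_log.was_exposed({alice_id}))  # True: Bob can be notified
```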

I do not want to get into the merits of this or that approach.  I simply want to point out that, even if one prefers an approach that is anonymized and highly decentralized (e.g., identifiers are stored locally on individual phones), some form of government involvement may be inevitable.

Public health authorities will need some meaningful amount of data with some meaningful level of specificity for the information to be helpful and to provide notifications.  And if widespread voluntary adoption of an app is necessary to make it effective, a government mandate to adopt the app is not out of the question.  I also would not dismiss the possibility of a heavier government hand down the road if more decentralized approaches do not pan out or if effective enforcement of quarantine orders is needed.

What the 9/11 Era Can Teach Us

The following is not meant as a comprehensive list.  Rather, these are a few themes that came up in our panel discussion as my colleagues and I were thinking of how our experiences from the 9/11 era might be relevant to the use of digital surveillance to combat the pandemic—particularly if we were to consider a role for the federal government.  (I also recommend this piece by Peter Swire, in which he thoughtfully lays out his own list of lessons learned, some of which overlap with the themes below.)

  1. More information is not always better.  And when it is, consider the costs.

Before we start mandating or even incentivizing data collection from U.S. citizens, we of course have to make sure that the data will actually be helpful to public health officials.  That means we have to ensure that the information collected is accurate and of a type that can readily be put to use against the virus (not a given in the case of location tracking information, as most data held by third parties has been collected for commercial purposes, not public health purposes).  It also means verifying that the costs of collection will not outweigh the benefits.

After 9/11, the common refrain was that government needed the proverbial “haystack” of information to find the needle that could help protect American lives.  That mindset was understandable.  On September 11, 2001, our country experienced the deadliest attack on U.S. soil in its history, partly because of intelligence failures.  We needed as much information as possible, as quickly as possible, to prevent another attack.

This mindset is still understandable.  The fact that our country has not experienced another major attack since 2001 is no accident.  It is a testament to the hard work and diligence of thousands of professionals who study the intelligence and connect the proverbial dots on a daily basis.  Those connections cannot be made without a vast amount of information.

In a moment of crisis, it is indeed often the case that more information is better.  But we must also recognize that more information has costs.  Consider the following cost categories drawn from the counterterrorism context:

  • Civil liberties and privacy. Any data collection at a comprehensive level, particularly when done in the United States, will necessarily increase the risk that individuals’ civil liberties and privacy rights will be violated.  We struggled early on to get this balance right in the counterterrorism context, and our surveillance policies are still a work in progress.
  • Risk of abuse and overreach. Even if we are able to develop what most agree are the right policies, there is always the risk of poor execution or abuse.  And even if everyone is acting in good faith, there is the risk of unintended consequences, such as the data being used by government or third parties for purposes other than what was originally intended (or other than what society is prepared to accept).
  • Resource allocation. Minimizing collection (i.e., monitoring only what is legally permissible) and sifting through irrelevant or unhelpful information can be hard to do, and more data makes the job harder.  Doing the job right takes time and resources away from other priorities.  The process may be worth it, but sometimes it is not.

The last two decades are filled with examples of our society trying to weigh each of these types of costs against the security benefits of increased surveillance.

Readers of this blog are amply familiar with the civil liberties and privacy debates that have animated the 9/11 era, so I do not need to rehash them here.  Suffice it to say that since October 2001, when Congress passed the USA PATRIOT Act and expanded the government’s surveillance capabilities, we have been in a constant push-and-pull to ensure that law enforcement and intelligence agencies can get the information they need to keep the country safe while protecting Americans’ constitutional and statutory rights.

That is not a bad thing.  It is the sign of a healthy democracy.  But, as I said before, it can be messy, so when we expand surveillance powers we need to be prepared to address the potential risks that those powers present and also recognize that it may take time to get the balance right (time that we may not have in the current crisis).

As for the second category—guarding against abuse and overreach—the 9/11 era has shown that we frequently have to operate by trial-and-error.  Abuses will occasionally occur, and we have to build guardrails to prevent those abuses.  But even if everyone is acting in good faith and in accordance with the letter of the law, we cannot predict all of the downstream consequences of a policy until it is executed.

Consider Section 215 of the USA PATRIOT Act, which expired this past March.  Prior to its enactment, FISA authorized orders that directed a very limited category of third parties (e.g., common carriers, storage facilities, vehicle rental facilities) to provide business records to the government.  Section 215 eliminated restrictions on the types of businesses that could be subject to such orders, and also eliminated the requirement that the person to whom the records pertain be an agent of a foreign power.  Most importantly, Congress expanded the scope of what could be collected by authorizing orders for the production of “tangible things,” not just business records.

In 2006 (following debates over whether Section 215 was too permissive), Congress amended the statute to require, among other things, a statement of facts in any application to the Foreign Intelligence Surveillance Court (FISC) that there are reasonable grounds to believe that the tangible things sought are “relevant” to an authorized national security investigation.

That language would quickly be interpreted by the executive branch to justify applications for orders compelling the production of telephone records of millions of Americans.  Under the NSA’s bulk telephone metadata program—which years later became a subject of leaks by Edward Snowden—telecommunications providers were directed to turn over certain non-content detail records on an ongoing daily basis.  The NSA then analyzed such data to attempt to identify communications among known and unknown terrorism suspects.

In justifying the program (which gained the blessing of the FISC), the government construed broadly the Section 215 requirement that the production of tangible things be “relevant” to an authorized investigation.  The reasoning was that even if the requested information could not be directly connected to a specific investigation, because bulk collection of metadata could potentially lead to identifying individuals calling or receiving calls from suspected terrorists, all such metadata—i.e., the entire haystack—was “relevant.”  After all, the relevance of any particular phone number might not be known until later, and one needed a comprehensive database to conduct contact-chaining (e.g., mapping the contacts of a given phone number).

After Edward Snowden revealed the program’s details to the public in 2013, an intense public debate ensued, ultimately resulting in the USA Freedom Act of 2015, which, among other things, modified Section 215 so as to end the NSA’s existing bulk collection program and established a procedure for more limited collection of so-called “call detail records” in terrorism investigations.

The point here is not to relitigate the merits of the telephone metadata program, but rather to emphasize that placing boundaries on collection and use is hard to do.  When information becomes available and restrictions are not explicit, agencies acting in good faith will want to use it to accomplish their missions.  These downstream consequences don’t always match up with the public’s expectations.

The final cost category—the logistical realities of effective surveillance—often does not receive enough attention.  Protections and safeguards can be put in place to avoid overreach by the government (or the private sector), but the effort needed to make good use of the data can still outweigh the benefits.  If it is too cumbersome or time consuming to separate the wheat from the chaff, the task may not be worth it.

Consider again the history of Section 215.  One of the results of the changes imposed by the USA Freedom Act was to place much of the data retention and contact-chaining in the hands of telecommunications companies, with the government required to seek authorization with respect to specific selectors (e.g., a phone number).  In essence, the NSA would have to rely on certain telecommunications providers to do some of the work that it had traditionally done in-house.

The transition was not, shall we say, seamless.  In June 2018, the NSA announced that it would have to delete the massive amount of data it had collected since 2015 because of “technical irregularities.”  Some of the telecommunications providers had improperly produced call detail records that the NSA was not authorized to receive and, critically, the NSA found it “infeasible to identify and isolate properly produced data.”  Erring on the side of caution, the NSA decided to throw in the towel by deleting the whole database.

Although the NSA’s announcement assured the public that it had addressed the “root cause” of the problem for purposes of future collection, the agency continued to struggle, later failing to purge all of the unlawfully acquired data.  All of this was on top of other compliance headaches that the NSA faced as it tried to implement that program and other programs.

Again, responsible intelligence collection in a democratic society can be hard.  That is not to say it is not worth it, but we must be honest in our assessment of the costs and benefits, particularly when a surveillance program must be implemented quickly.

If we are talking about collecting data from cell phones to battle COVID-19, the details matter.  For example:

  • Precisely what data will be transferred to public health authorities, and how?
  • If the data sent to government agencies is anonymized, what measures will need to be implemented to protect that anonymization?
  • If contact tracing apps rely on anonymized and self-reported data, will health authorities be able to accurately and efficiently eliminate false negatives and false positives without burdening our already-strained health care system?
  • If health authorities do not receive sufficiently detailed or personalized information from the apps, will they feel the need to combine the data with information from other sources, such as third-party data brokers, in an effort to make use of the data (e.g., to follow up with potentially infected individuals)?

Implementation challenges like these are hard enough at the federal level with generous funding.  They take on a new meaning if we are talking about a highly decentralized system in which cash- and resource-strapped states and localities are establishing their own apps and collection protocols.

This is not to say that digital surveillance should not be done.  We should just be sure that it will provide our experts with data they can use, and we should be realistic about the price of doing it right.

  2. Transparency and public buy-in are essential.

There are many reasons why the public (rightly or wrongly) might trust a company like Google with their data more than they trust the government, but one of those reasons is Edward Snowden.  Suspicion of government is built into our republic’s DNA, and few things will spark public outrage like learning that the government has been snooping around in a way that the public did not anticipate.

We learned this several times over in the 9/11 era.  The news of the NSA’s bulk telephone metadata program was one example, but of course there were others, such as the revelation in late 2005 that the executive branch had secretly authorized what would become known as the Terrorist Surveillance Program (TSP).  In that program, the NSA, without FISA authorization, had monitored the contents of U.S. communications where one end of the communication was outside the United States and there was believed to be a link to al Qaeda or an associated terrorist organization.

Although the public blowback following such revelations tells us something about the level of government intrusion that Americans are willing to tolerate, it may tell us more about the importance of transparency and public buy-in.

Following the 2005 revelations, our country began a lengthy public debate that led ultimately to the enactment of the FISA Amendments Act (FAA).  The FAA, through Section 702, created a new framework that allowed the government to seek FISC authorization for up to one year to conduct programmatic electronic surveillance targeting non-U.S. persons located outside the United States.

Unlike in the case of traditional FISA, the new framework would not require the government to identify an individual target of surveillance or establish probable cause that a particular target is an agent of a foreign power.  This framework allowed the government to fill some of the surveillance gaps that had led to the TSP.

Although the FAA is not without controversy, its enactment served as a release valve of sorts for much of the public anger that followed revelations of the TSP.  To be sure, the 2013 Snowden leaks (which included unauthorized disclosures about how Section 702 operates) stirred additional controversy, but involvement by the legislature provided a certain legitimacy to specific types of intelligence collection that had not previously existed.

We saw these lessons play out again in the case of the Section 215 bulk telephone metadata program.  Although, like the TSP, the program was destined to be controversial given that it threatened the privacy of ordinary Americans (not just terrorism suspects), learning of the secret program after the fact added to the public’s resentment.

The Section 215 experience also shows us that legislation is not always a proxy for public buy-in:  a legal framework for surveillance can add legitimacy, but it has to be crafted and executed in a way that matches public values.  I described above the example of well-meaning government officials interpreting the language in Section 215 (“relevant” to an authorized investigation) to justify a program that the public had not embraced, but the lesson goes further than that.

Both the executive branch officials implementing the bulk telephone metadata program, and the FISC that authorized it, built in numerous safeguards to ensure that the program was not unnecessarily intrusive or expansive.

For example, an NSA analyst could query metadata only if one of 22 designated NSA officials had agreed that there was “reasonable, articulable suspicion” that a phone number or other selection term was associated with terrorism.

Analysis was also limited to up to three “hops” away from the original selection term, or “seed” (e.g., an approved phone number).  In simplified terms, the first “hop” referred to the set of numbers directly in contact with the seed; the second “hop” referred to the set of numbers in contact with the first “hop” numbers; and the third “hop” referred to numbers in contact with the second “hop” numbers.
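To make the hop mechanics concrete, here is a short Python sketch of hop-limited contact-chaining as a breadth-first search over a toy call graph.  The graph, the names, and the way the three-hop limit is applied are invented for the example; none of this is drawn from actual NSA tooling.

```python
from collections import deque

def contact_chain(call_graph, seed, max_hops=3):
    """Return every number reachable from the seed within max_hops,
    mirroring the hop-limited analysis described above."""
    reached = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        number, hops = frontier.popleft()
        if hops == max_hops:
            continue  # do not chain past the hop limit
        for contact in call_graph.get(number, ()):
            if contact not in reached:
                reached.add(contact)
                frontier.append((contact, hops + 1))
    return reached - {seed}

# Toy call records: A called B, B called C, C called D, D called E.
graph = {"A": ["B"], "B": ["C"], "C": ["D"], "D": ["E"]}
print(sorted(contact_chain(graph, "A")))  # ['B', 'C', 'D']; E is a fourth hop
```

Each widening of the hop limit expands the set of people swept into the analysis dramatically, which is one reason the number of permitted hops was itself a contested policy choice.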

These and other constraints were imposed by well-meaning government officials seeking to act in the public’s best interest, but as the fallout from the 2013 revelations showed, the public had a different understanding of what level of surveillance was acceptable.

So far, the digital surveillance measures being contemplated in the United States to combat COVID-19 do not envision a level of government intrusion on the scale of the TSP or the bulk telephone metadata program.  But one does not need to contemplate the most extreme scenario (e.g., a massive collection of location data in a centralized, NSA-style database) for the lessons of those programs to be relevant.

In part because of our experiences since 9/11, the public will be suspicious of any government collection.  And even the smallest instance of misuse of data could have an outsized effect on the public’s willingness to trust what could be vital tools in our public health arsenal.  The bargain with the public must be struck at the outset, and its terms must be transparent.

These lessons also apply to the private sector.  The efforts being made by Google, Apple, and other tech companies to combat the pandemic are commendable, and their expertise and infrastructures will be essential if we are to deploy effective digital surveillance tools.  But they will need to maintain public trust to be successful.

There are two uncomfortable truths about the current moment.  The first is that, for all of the public’s concerns about government as Big Brother, the entities controlling our most sensitive and personal data—including location-related information—are corporations with relatively little public accountability.  The second is that, in the absence of government directives, these corporations are taking the lead in addressing the most significant public health threat of our time.

These conditions call out for smart public policy, even if it is not in the form of a comprehensive, national surveillance strategy.  Legislatures can and should help ensure corporate accountability and build public confidence in digital surveillance methods (e.g., by enacting federal data protection legislation or by otherwise expanding privacy protections in the context of public health emergencies).

But corporations also have a role to play.  As Facebook recently learned in the wake of the Cambridge Analytica scandal, the public’s trust is not a given.  It has to be earned and preserved, and transparency is an essential part of that process.

  3. Oversight actually works.

Oversight can mean different things to different people, and in government it can sometimes mean little more than bureaucratic rear-end protection.  But oversight can also be effective and meaningful, particularly when it involves multiple branches of government and public participation.  The 9/11 era taught us that.

Perhaps the best example is the work of the Privacy and Civil Liberties Oversight Board (PCLOB).  The PCLOB, a product of the 9/11 Commission’s recommendations, originally was created as part of the Intelligence Reform and Terrorism Prevention Act of 2004.  It was later reconstituted as an independent agency as part of the Implementing Recommendations of the 9/11 Commission Act of 2007.  Essentially, its job is to ensure that privacy and civil liberties concerns are properly considered when the government takes actions to protect against terrorism.

The body provides advice to the President and executive branch agencies, and also reports to Congress.  All five board members are appointed by the President with the advice and consent of the Senate, for staggered six-year terms.  No more than three of the five board members may be from the same political party.

Although the PCLOB’s operations have not been without difficulty, such as when its vacancies have not been filled, the agency provides a model for how effective oversight can work.  Nowhere was its work on better display than in its 2014 reports and recommendations relating to the government’s operations pursuant to Section 215 of the USA PATRIOT Act and Section 702 of the FAA, both described above.  One does not have to agree with all of the PCLOB’s recommendations (many of which have made their way into legislation or executive policy) to agree that it provides an important vehicle for maintaining public trust and accountability.

There is also an important role for inspectors general and other traditional audit mechanisms.  As demonstrated by the recent reports in December and March by the Justice Department’s inspector general regarding irregularities in numerous FBI applications for FISA authorizations, an active watchdog can call attention to practices that suffer from carelessness, malfeasance, or both.

  4. Sunset provisions are your friend.

It is not lost on me that several important foreign intelligence surveillance tools with sunset provisions were allowed to lapse in March due to congressional inaction.  But sunset clauses, which are common in national security laws enacted after 9/11, can serve important purposes.

Building in an expiration date for legislation can serve as a check against rushed and ill-conceived laws.  It guards against panic-induced decision-making by forcing a legislature to revisit a law at a future (and potentially calmer) date.  It also arms lawmakers with more information.  They can consider what has and has not worked and account for changed circumstances, which can result in better policy.

The USA PATRIOT Act and the FAA both provide examples of provisions that we were forced to reconsider due to sunsets.  The former, enacted in the weeks after 9/11, introduced sweeping changes to American surveillance law and was revisited in 2006, when Congress made permanent some provisions but not others (e.g., Section 215) and increased oversight over certain surveillance programs.

As noted above, the FAA was a legislative byproduct of the TSP.  The FAA’s precursor, the Protect America Act, was enacted in 2007 to provide a solution to the surveillance gap that had led to the TSP.  It had a six-month sunset clause, which effectively made the law a placeholder while Congress could debate a more permanent legislative solution.

The FAA that was ultimately enacted had its own sunset clause, and key provisions were set to expire at the end of 2012.  It has since been extended (most recently two years ago to 2023).  Although the debates surrounding the FAA’s reauthorization have not resolved all of the law’s lingering controversies, sunset provisions have at least forced us to periodically reconsider whether and how we want programmatic foreign surveillance to be conducted.

Any law that is enacted in response to the current pandemic would benefit from a clear, short-term sunset provision.  As in October 2001, when Congress was enacting the USA PATRIOT Act, we are operating in the middle of a crisis with limited information about the threat we face and with little experience to guide us.

As in the case of counterterrorism surveillance, we are considering employing technology in ways it has not previously been used.  Although all signs point to a form of surveillance that would involve less intrusive techniques than the tools employed in the counterterrorism context, the privacy and civil liberties concerns are magnified in our current crisis given that the targets of surveillance are our fellow citizens, and on a potentially massive scale.  These factors weigh in favor of a rapid sunset.

We should also consider measures to limit data retention.  Unlike in the counterterrorism context, where arguments could be made for longer-term retention to assist in understanding and identifying targets, the justification for retaining data such as the identifiers used in Bluetooth-enabled contact tracing apps is less obvious.  There may be a case for retaining anonymized and aggregated data for epidemiological purposes, but data at the individual level should be expunged as quickly and as permanently as possible.
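As a toy illustration of that retention principle, here is a brief Python sketch in which individual-level records are permanently dropped after a fixed window while only identifier-free aggregate counts survive for epidemiological use.  The 21-day window and every name here are assumptions invented for the example, not a recommendation.

```python
import time

RETENTION_SECONDS = 21 * 24 * 3600  # assumed retention window

class RetentionStore:
    """Toy store that expunges individual-level data on a schedule."""

    def __init__(self):
        self._records = []      # (identifier, timestamp) pairs
        self.daily_counts = {}  # aggregate, identifier-free statistics

    def add(self, identifier, timestamp):
        self._records.append((identifier, timestamp))
        day = int(timestamp // 86400)
        self.daily_counts[day] = self.daily_counts.get(day, 0) + 1

    def expunge_expired(self, now):
        """Drop individual records older than the window; the aggregates
        survive, the raw identifiers do not."""
        cutoff = now - RETENTION_SECONDS
        self._records = [(i, t) for i, t in self._records if t >= cutoff]

store = RetentionStore()
store.add(b"id-1", time.time() - 30 * 24 * 3600)  # a month-old contact
store.add(b"id-2", time.time())                   # a contact from today
store.expunge_expired(time.time())                # id-1 is gone; counts remain
```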

The COVID-19 threat will hopefully carry its own sunset provision if an effective vaccine is developed and if other, more traditional (and less intrusive) public health measures can take effect.  Until that day, we should proceed cautiously and humbly with the lessons of the last two decades to guide us.

Still, remember our Lawfire® mantra: gather the facts, examine the law, evaluate the arguments – and then decide for yourself!