For the last three months, I’ve been working on a white paper, “Government-issued Credentials and the Privacy Landscape.” The paper aims to inspire thought and provoke useful conversations about enhancing online privacy between the people who set privacy laws and regulations and the people who write technical standards. The paper is still a work in progress, though realistically, the topic will never be finished; I’ll just find a suitable stopping place where the next paper(s) can pick up.
This is essential work that I’m honored to be a part of. It’s also work that makes me glad I am neither a policymaker nor a standards architect. It’s not so much that their work is cut out for them as it is that their work is like a never-ending game of Whac-A-Mole.
Privacy Laws
Governments large and small are including privacy in their legal frameworks. Most, if not all, of these laws and regulations have been influenced by the Organisation for Economic Co-operation and Development (OECD) Privacy Guidelines, originally published back in 1980 as the Guidelines on the Protection of Privacy and Transborder Flows of Personal Data and updated in 2013. They codify everything from collection limitation (don’t collect what isn’t directly needed) to openness (make sure people know your policies regarding their data). They also include words like “appropriate,” “reasonable,” and “to the extent necessary.”
Those words are judgment calls, which makes sense but also makes consistent application impossible. There must be room to adapt to circumstance; privacy is always contextual. Anything less will marginalize individuals and groups that don’t fall into a given norm. The counterbalance, however, is that where there is room for interpretation, people and organizations will interpret the rules in the manner most beneficial to themselves. That, in turn, leads to inconsistent application and the marginalization of individuals and groups.
Technical Standards
Where policy provides room for legal recourse, technology provides functionality and automatic protections. That an individual can log in to a service without sharing any information beyond the fact that they are a valid user on that system is the result of careful standards development. (That feature isn’t always used, but it does exist.) It supports the data minimization requirements found in privacy legislation.
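One concrete example of that kind of standards work is the pairwise pseudonymous subject identifier in OpenID Connect Core (section 8.1): each relying party sees a different, stable identifier for the same user, so services can’t correlate accounts while still confirming “this is a valid user.” Below is a minimal Python sketch of the example derivation the spec suggests; the function name and sample values are my own illustration, not part of any standard API.

```python
import hashlib

# Sketch of the pairwise subject identifier derivation suggested in
# OpenID Connect Core 1.0, section 8.1:
#   sub = SHA-256(sector_identifier || local_account_id || salt)
def pairwise_subject_id(sector_identifier: str, local_account_id: str, salt: str) -> str:
    material = (sector_identifier + local_account_id + salt).encode("utf-8")
    return hashlib.sha256(material).hexdigest()

# The same user gets a different, stable identifier at each relying party,
# so the two services cannot correlate their accounts.
print(pairwise_subject_id("rp-one.example", "user-12345", "long-random-salt"))
print(pairwise_subject_id("rp-two.example", "user-12345", "long-random-salt"))
```

The salt is kept secret by the identity provider, which is what prevents an outside observer from reversing the mapping back to the local account.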
The technical standards can do a lot to help enable privacy, but they can’t do everything. For every attempt to restrict information, another “appropriate,” “reasonable,” or “necessary” requirement comes into play where additional information must be shared and collected about an individual on the network.
Where Morality and Technology (Don’t) Meet
As soon as one enters the land of “it’s ok in this real-world context but not that one,” one starts to slide down the razor blade of life (thank you, Tom Lehrer, for that imagery). Let’s take a brief look at the concept of Data Protection Officers (DPOs) in the European Union and Trusted Referees as proposed in the U.S.
DPOs in Europe
The European Union has what is sometimes considered the gold standard of privacy regulations, the General Data Protection Regulation (GDPR). The whole point of the GDPR is to protect personal data online. One of the provisions of the GDPR (Article 38) establishes the position of the DPO. The DPO is expected to be an expert in all things covered by the GDPR. They are the first stop for all questions about whether an organization is doing the right thing regarding data protection. And no one can tell a DPO what to do.
But what happens when data protection experts disagree? Dig into that and you realize that, for whatever functionality the technology enabled, someone believed it was the wrong thing to do. Tie that back to the people developing the technology standards and you have quite the quandary. The technology has to be able to do everything, yet the decision as to when “everything” is appropriate will change from organization to organization and region to region. I have no idea how to standardize technology such that it can make that kind of judgment call.
Trusted Referee in the U.S.
But wait, there’s more! Last week I wrote about the draft of NIST Special Publication 800-63-4. It’s absolutely trying to do the right thing by emphasizing the importance of equity in technology. NIST 800-63-4 recognizes that technology can only go so far before human judgment must come into play. And so it describes the role of the Trusted Referee.
Equity isn’t the same as privacy, but they tie together in many ways. Personal data collected about a person might result in inequitable decisions. That’s why it’s held as private, sensitive data and protected under so many governments’ privacy laws.
Trusted Referees will be the DPOs of the Diversity, Equity, and Inclusion (DEI) space. Every government entity (and their contractors, and their subcontractors) that provides credentials will be required to assign and train one. More likely, there will be departments of them. And each Trusted Referee will need to be an expert on all things related to digital identity and the correct ways to prove a person is who they say they are when they can’t meet the technical requirements for that proof.
The Trusted Referee will need a common, clear playbook describing what they can and can’t do. They will also need the flexibility to handle cases that no one expected when putting together that playbook. And, of course, there will need to be an arbitration process for when an individual believes the Trusted Referee made the wrong decision.
The whole reason to go down this challenging path is that technology can’t always get it right. It has limitations and cannot make all necessary judgment calls.
Wrap Up
The scope of the white paper is limited to just government-issued credentials; credentials issued by your favorite social media giant are out of scope. And yet, that still covers so much. Can privacy legislation be written with a better understanding of what’s possible with technology? Can technology support stronger privacy protections, but only in the appropriate context and without confusing the individual trying to use that technology?
Honestly, I don’t know. I hope that I will ultimately frame the discussion so that others with more knowledge, experience, and vision in this space will see what I do not and take those necessary steps toward closing these uncomfortable gaps between policy and technology when it comes to privacy.