Does having published my first Forrester research report and done my first quarterly teleconference mean I’ve made my analyst bones? Hmm. You can read about my identity assurance coverage here. (Regular readers may recall that I wrote about identity assurance on Pushing String last fall, batting around ideas with Paul Madsen and others.)
In the absence of any other controls, relying parties for identity info would like to be handed as much user data as they can get. It can’t hurt to have a little extra, right? But as we pointed out in the UMA webinar a few weeks ago, when web apps think they’ve gotten something valuable out of us, sometimes they’re just mistaken. When a site wants too much info and makes us supply it in a self-asserted fashion (oh, those asterisked fields!), we just…lie. In fact, you can tell a site doesn’t do anything really important with that info if you can lie and get away with it.
Case in point: The crap that fills the fields of 77% of domain name registrations. (The Register’s headline: Whois in charge? ICANN’t tell. Heh.)
This is where “attribute assurance” could come in, involving a federated identity system that arranges for the data to be supplied by trusted issuers in some fashion. Attribute assurance is akin to identity assurance (as discussed previously here), except that it’s about the quality of specific types of information and their binding to the individual in question. The world hasn’t yet come up with a generic way of handling such assurance, though it’s been a topic of serious discussion. The Tao of Attributes workshop was a great start.
In the domain name registration case, one of the big reasons why people don’t like to supply their real information is that it’s published far and wide — anyone can learn what your address is if you provide your real one. Hence the lying, at least in quite a lot of the cases. This is a real-world situation where needs for level of assurance (LOA) are in a tug-o’-war with needs for level of protection (LOP).
What’s LOP? In short, it’s the reciprocal of LOA. Whereas relying parties want to ensure that the data they’re getting is good when they get it, data subjects and their identity providers want to ensure that the data will be protected and treated with respect when it gets there.
(You can read more about LOP, and some of the elements that need to be lined up to solve it in an Internet-scale way, in The Open Identity Trust Framework (OITF) Model, a white paper I was honored to co-author along with Tony Nadalin, Drummond Reed, Don Thibeau, and our illustrious managing editor Mary Rundle. The proposed model suggests some ways to organize the Pushmi-pullyu nature of federated identity partnerships to raise the quality, and possibly tamp down the quantity, of identity attributes floating around.)
Everybody’s talking about identity assurance these days, meaning, generically, the confidence a relying party needs to have in the identity information it’s getting about someone so that it can manage its risk exposure.
A lot of the conversation to date has revolved around NIST Special Publication 800-63 (newer draft version here) and its global cousins, which boil down assurance into four levels — hence all the loose talk of LOA (for “level of assurance” or sometimes AL for “assurance level”), even when people aren’t focusing on specific levels or even systems of assurance numbering. NIST 800-63 is intended to answer the use cases defined in OMB Memo 04-04, which deals with making sure users of the U.S. Federal government’s online systems are who they purport to be. Here’s an example given in OMB M-04-04 for one particular need for level 3 assurance:
A First Responder accesses a disaster management reporting website to report an incident, share operational information, and coordinate response activities.
And here’s how NIST 800-63 defines assurance (I’m quoting the Dec 2008 draft here; strangely, the official Apr 2006 version doesn’t include a formal definition):
In the context of OMB M-04-04 and this document, assurance is defined as 1) the degree of confidence in the vetting process used to establish the identity of an individual to whom the credential was issued, and 2) the degree of confidence that the individual who uses the credential is the individual to whom the credential was issued.
So there’s an identity proofing component at registration time that nails down the precise real-world human being in question, and there’s a security/protocol soundness/authentication component at run time that establishes that the credential is being waved around legitimately. These get added up into four levels defined roughly like this (leaving aside the security and protocol soundness factors):
(Here, “same unique user” means that the same user can be correlated by the RP across sessions. And “verified name provided” means that the user’s real-world name is exposed to the RP, versus some sort of pseudonym; level 1, where no proofing is done, is implicitly pseudonymous, while level 2 offers a choice.)
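The two-component definition can be sketched as a tiny model in which the rolled-up level is capped by the weaker of its proofing and authentication parts. This is my illustration of the idea, not the spec’s exact calculus, and the function name is made up:

```python
# Sketch: a NIST 800-63-style assurance level as two components rolled
# into one number. The min() roll-up is my reading of how the levels
# combine -- an assurance chain is no stronger than its weakest link.

def overall_assurance(proofing_level: int, authentication_level: int) -> int:
    """Roll the registration-time and run-time components into one level."""
    for level in (proofing_level, authentication_level):
        if level not in (1, 2, 3, 4):
            raise ValueError("assurance levels run from 1 to 4")
    return min(proofing_level, authentication_level)

# A strong credential issued after weak identity proofing still yields
# low overall assurance:
print(overall_assurance(proofing_level=1, authentication_level=3))  # 1
```

The point of the `min()` is that buying stronger authentication tokens can’t compensate for sloppy proofing, and vice versa.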
I don’t mean at all to criticize this rolled-up four-level approach. It seems to have met the needs set out in M-04-04, and it predated both the “user-centric” movement (Dale Olds has a nice rundown of its use cases here) and truly modern notions of online privacy.
But I think we need more clarity about assurance use cases and terminology, for two reasons: One is to help ensure that identity providers can give RPs what they need, rather than what might just be a poor approximation based on NIST 800-63’s fame. The other is to help ensure that IdPs give RPs only what they need, since more assurance is likely to involve more personal information exposure.
To that end, let me explain some assurance use case buckets I’m seeing in the wild, and their relationship to the NIST requirements and each other. First, here are some use case buckets hiding in plain sight in the NIST levels:
Simple cross-session correlation: While NIST 800-63 doesn’t formally include “same unique user” as a goal, it’s in there:
Level 1 – Although there is no identity proofing requirement at this level, the authentication mechanism provides some assurance that the same claimant is accessing the protected transaction or data.
Funnily enough, cross-session correlation (without the baggage of proofing) is a key requirement of many enterprise and Web federated identity interactions. Lots of sites don’t need or want to know you’re a dog; they just need to know you’re the same dog as last time. This way, they can authorize various kinds of ongoing access and give you something of a personalized experience across sessions. Though NIST treats this as an also-ran and couples it with weak authentication in level 1, other use cases may have reason to match up “mere correlation” with higher authentication.
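One common way an identity provider delivers “same dog as last time” without leaking a globally correlatable identifier is to hand each RP its own stable pseudonym, in the spirit of SAML 2.0 persistent NameIDs or OpenID Connect pairwise subject identifiers. A minimal sketch, with a hypothetical IdP-held key and an illustrative HMAC derivation:

```python
# Sketch: pairwise pseudonyms for cross-session correlation. The HMAC
# construction and the secret below are illustrative, not any particular
# deployment's scheme.
import hmac
import hashlib

IDP_SECRET = b"idp-private-pseudonym-key"  # hypothetical IdP-held secret

def pairwise_pseudonym(user_id: str, relying_party: str) -> str:
    """Stable per-RP identifier: same user + same RP -> same value,
    while two different RPs can't correlate their identifiers."""
    msg = f"{user_id}|{relying_party}".encode()
    return hmac.new(IDP_SECRET, msg, hashlib.sha256).hexdigest()

# The same user looks consistent to one RP across sessions...
assert pairwise_pseudonym("alice", "rp.example") == pairwise_pseudonym("alice", "rp.example")
# ...but unrelated to a different RP:
assert pairwise_pseudonym("alice", "rp.example") != pairwise_pseudonym("alice", "other.example")
```

The RP gets exactly the correlation it needs for authorization and personalization, and nothing that maps back to a real-world name.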
Identity proofability: If an RP can trust that it’s dealing with a human being who has some level of serious representation in civil society, it’s a powerful kind of assurance for lots of purposes. More about this below.
Real-world identity mapping: When level 3 or 4, or verified-name level 2, is used, this means a user’s real name is used to build up the unique identifier that the RP sees, and this verified name leaks PII like crazy, even if it’s not itself unique. (As far as I know, I’m the only Eve Maler out there…) This is strong stuff, and in a modern federated identity environment, it is to be hoped that most RPs simply don’t need this information. (John Bradley — that is, the John Bradley who works with the U.S. government on its ICAM Open Identity Solutions program — tells me he believes pseudonyms should be an acceptable choice all up and down the four levels, indicating that this use case bucket is fairly rare.)
Now things get really interesting, because there are other use case buckets that you can sort of see in this matrix if you squint, but really they’re just different:
Anonymous authorization/personalization: This is the flip side of cross-session correlation. OMB M-04-04 talks about “attribute authentication” and the potential for user attributes to serve as “anonymous credentials” (where an RP simply can’t know if this is the same unique user coming back but can still base its authorization decisions and personalization actions on the veracity of the attributes it’s getting). The attributes in question can range from “this user is over 18” to “this user is a student at University ABC” to “this user is of nationality XYZ”.
Ultimately M-04-04 puts the whole area of attribute authentication firmly out of scope, but lots of folks have been picking at the general problem of attribute assurance in the last several months — like Internet2 in its Tao of Attributes workshop, and the Concordia group in a forthcoming survey (stay tuned for more on that).
This bucket often requires being able to check who issued some assertion or claim, and considering whether they’re properly authoritative for that kind of info. The way I think about this is: Who has the least incentive to lie? That’s why you can be said to be truly authoritative for self-asserted preferences such as “aisle vs. window”. Any other way lies madness (“What is your favorite color?” “Blue. No yel– Auuuuuuuugh!”).
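One way an RP can act on the “least incentive to lie” heuristic is to keep a registry of which issuers it considers authoritative for which attribute types, and accept a claim only from an appropriate source. A minimal sketch; the registry contents and names are hypothetical, and a real deployment would also verify the claim’s signature:

```python
# Sketch: accept an attribute claim only if its issuer is one the RP
# considers authoritative for that attribute type. Registry entries are
# hypothetical examples.

AUTHORITATIVE_ISSUERS = {
    "age_over_18": {"dmv.example.gov"},           # a civil authority
    "student_at": {"registrar.abc.example.edu"},  # the university itself
    "seat_preference": {"self"},                  # least incentive to lie: the user
}

def accept_claim(attribute: str, issuer: str) -> bool:
    """Is this issuer properly authoritative for this kind of info?"""
    return issuer in AUTHORITATIVE_ISSUERS.get(attribute, set())

assert accept_claim("seat_preference", "self")    # aisle vs. window: trust the user
assert not accept_claim("age_over_18", "self")    # self-asserted age is lie-prone
```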
Of course, there are cases where an RP really does need attribute assurance along with other kinds, like correlation or identity mapping. And don’t forget that it takes precious little in the way of personal information for an RP to figure out “who you really are” anyway. (Check out this cool Tao of Attributes diagram, which touches on all these points.)
Financial engagement: Sometimes an RP just wants some assurance they’re dealing with someone who has sufficient ties to the world’s legitimate financial systems not to screw them over entirely. It turns out that identity proofability can often be a serviceable proxy for this kind of confidence. (Financial account numbers are one kind of proofing documentation in NIST 800-63.) And the reverse is also true: financial engagement can sometimes give a modicum of confidence in identity proofability.
Interestingly, this bucket can be useful even without any of the other kinds, partly because the parties can lean on a mature parallel financial system instead of just lobbing identifiers and attributes all over the place. For example, users often “self-assert” credit card numbers (which RPs then validate out of band with the card issuer), or use third-party payment services like PayPal (where the service provider does a lot of the risk-calculation heavy lifting).
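Before any out-of-band validation with the card issuer, RPs typically run the Luhn mod-10 checksum over a self-asserted card number as a local sanity check. A sketch; note this confirms only that the number is well-formed, not that the account exists or belongs to the user:

```python
def luhn_valid(number: str) -> bool:
    """Luhn mod-10 checksum: catches typos in a self-asserted card number.
    It says nothing about whether the account is real -- that still takes
    out-of-band validation with the issuer."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 2:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

assert luhn_valid("4111 1111 1111 1111")      # well-formed test number
assert not luhn_valid("4111 1111 1111 1112")  # one-digit typo fails the check
```

It’s a nice small example of the larger pattern: the RP does cheap checks locally and leans on the mature financial system for the actual risk calculation.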
No doubt there are other assurance use cases. Understanding them more deeply can, I think, help us get better at sharing the truth and nothing but the truth about people online — without having to expose the whole truth.