The notion of “user-centric identity” has been getting a real blog-workout lately, as Paul Madsen notes. Like Paul, I suspect it’s been used in buzzword fashion more than as a crisp concept. I can think of two different obvious ways to define it: either direct user hosting of their own identity information, or (perhaps indirect) user control over another party’s use of it.
As Robin Wilton has said a number of times, insisting that users personally manage and host all of their own identity services is unrealistic and unnecessary — like hiding one’s cash under the mattress. This isn’t to say that architectures giving a user this choice (for example, using your phone’s SIM card as a source of credentials and attributes) would be a bad idea — on the contrary, they’re quite interesting and useful. But since this definition would clearly draw too hard a line, let’s concentrate on the softer form: user control over another party’s use. However, even this can’t be absolute.
Paul points out an irrefutable case of identity information about a user that, by rights, the user doesn’t control:
Other than the enterprise deployments above, where it can be argued that the user’s control over their identity are scoped by their employment contract, I believe all of the above can be user-centric. [emphasis mine]
There are many such cases. Some traits and characteristics (iris pattern, favorite color) are truly your own, but many of your characteristics (employee ID, role at work) are not yours to change or to obscure from others’ view, and still others (uh, felony convictions?) you might only have partial control over, since certain parties will always have a right to access them.
So I’ve begun thinking about the proposition of user-centric identity as just the natural other half of the usual sorts of access control people talk about: you’re authenticating and authorizing applications and services to access information about you, just as they need to authenticate and authorize you to use them.
Now, because you’re a human, the technical methods for achieving this other half don’t necessarily look exactly like the methods used in the traditional half. For example, Liberty’s identity web services framework (which is normally a back-channel, machine-to-machine sort of thing) has what it calls an interaction service, which allows an identity service to check with a human to gain their consent in synchronous fashion before releasing information about them. Robin’s post linked above quotes Kim Cameron, who is commenting on the legal aspects of circles of trust:
Now, perhaps I am just a man with a hammer who sees everything in the world as a nail, but the paper reinforced my thinking that the more our systems are built to guarantee that the user is the conscious agent of information release (rather than having this done on his behalf), the better privacy is served, and the simpler our lives become from a legal and policy point of view.
I certainly agree — user consent is key, through synchronous interaction if necessary, and through application of user policy in other cases. (The Liberty paper he’s commenting on is here.)
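To make the consent flow concrete, here’s a minimal sketch of an interaction-service-style release check: the identity service consults standing user policy first and falls back to asking the human synchronously. All of the names here (`AttributeService`, the policy values, the `ask_user` callback) are hypothetical illustrations, not the actual Liberty ID-WSF API.

```python
# Hypothetical sketch of consent-gated attribute release; not Liberty's real API.

class AttributeService:
    def __init__(self, attributes, policy, ask_user):
        self.attributes = attributes   # identity data held for the user
        self.policy = policy           # per-attribute standing release rules
        self.ask_user = ask_user       # synchronous callback to the human

    def release(self, requester, attribute):
        """Release an attribute only with the user's consent."""
        rule = self.policy.get(attribute, "ask")
        if rule == "allow":            # standing user policy: release
            return self.attributes[attribute]
        if rule == "deny":             # standing user policy: never release
            return None
        # No standing rule: interact with the user synchronously.
        if self.ask_user(requester, attribute):
            return self.attributes[attribute]
        return None

# The user has pre-approved release of their email, but a request for
# their phone number triggers a live consent prompt (declined here).
svc = AttributeService(
    attributes={"email": "user@example.com", "phone": "555-0100"},
    policy={"email": "allow"},
    ask_user=lambda requester, attr: False,
)
print(svc.release("travel-site", "email"))  # released by standing policy
print(svc.release("travel-site", "phone"))  # withheld after interaction
```

The point of the `ask_user` hook is exactly Kim’s: when no policy decides the question, the user becomes the conscious agent of release rather than having it done silently on their behalf.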
So, a modest terminological proposal: We’re used to talking about mutual authentication in the context of setting up an online (machine-to-machine) session. Can we think in terms of mutual authentication and authorization when it comes to users and the applications and services they use?
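A toy sketch of what that symmetry might look like, with both halves checked before a session proceeds — the service’s access-control list governs what the user may do, and the user’s release policy governs what the service may see. Every name here is a hypothetical illustration, not any existing framework’s API.

```python
# Hypothetical "mutual authorization" check: both directions must pass.

def service_authorizes_user(user, action, acl):
    # Traditional half: may this user perform this action?
    return action in acl.get(user, set())

def user_authorizes_service(service, attribute, user_policy):
    # The other half: may this service see this attribute of the user?
    return attribute in user_policy.get(service, set())

def establish_session(user, service, action, needed_attrs, acl, user_policy):
    """Proceed only if both parties' checks succeed."""
    if not service_authorizes_user(user, action, acl):
        return "denied: user not authorized"
    for attr in needed_attrs:
        if not user_authorizes_service(service, attr, user_policy):
            return f"denied: user withholds {attr}"
    return "session established"

acl = {"alice": {"book-flight"}}
user_policy = {"travel-site": {"email"}}  # alice shares email, nothing else

print(establish_session("alice", "travel-site", "book-flight",
                        ["email"], acl, user_policy))
print(establish_session("alice", "travel-site", "book-flight",
                        ["email", "phone"], acl, user_policy))
```

The design choice is the symmetry itself: neither check is privileged, and a failure on either side keeps the session from forming.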