Tag Archives: UMA

Aiming for data usage control

Earlier this week, W3C held a workshop on privacy and data usage control. Among the submitted position papers are quite a few interesting thoughts, and though I couldn’t attend the workshop, it will be good to see the eventual report from it.

I did manage to submit a paper that explores the contributions of User-Managed Access (UMA) to letting people control the usage of their personal data. It was a chance to capture an important part of the philosophy we bring to our work, and the challenges that remain. From the paper’s introduction:

…UMA allows a user to make demands of the requesting side in order to test their suitability for receiving authorization. These demands can include requests for information (such as “Who are you?” or “Are you over 18?”) and promises (such as “Do you agree to these non-disclosure terms?” or “Can you confirm that your privacy and data portability policies match my requirements?”).

The implications of these demands quickly go beyond cryptography and web protocols and into the realm of agreements and liability. UMA values end-user convenience, development simplicity, and web-wide adoption, and therefore it eschews such techniques as DRM. Instead, it puts a premium on user visibility into and control over access criteria and the authorization lifecycle. UMA also seeks at least a minimum level of enforceability of authorization agreements, in order to make the act of granting resource access truly informed, uncoerced, and meaningful. Granting access to data is then no longer a matter of mere passive consent to terms of use. Rather, it becomes a valuable offer of access on user-specified terms, more fully empowering ordinary web users to act as peers in a network that enables selective sharing.

Some of the challenges are technical, some legal, and some related to business incentives. The paper approaches the discussion with what I hope is a sense of realism, along with some justified optimism about near-term possibilities.

(Speaking of which, I like the realism pervading Ben Laurie’s recent criticism of the EFF’s suggested bill of privacy rights for social network users. He cautions them to stay away from implicitly mandating mechanisms like DRM — and, in focusing on broader aims, to be careful what they wish for.)

If you’re so inclined, I hope you’ll check out the paper and the other workshop inputs and outputs.

Making identity portable in the cloud

Yesterday I had the opportunity to contribute to BrightTALK’s day-long Cloud Security Summit with a webcast called Making Identity Portable in the Cloud.

Some 30 live attendees were very patient with my Internet connection problems, which meant that the slides (large PDF) didn’t advance when they were supposed to and I couldn’t answer questions live. However, the good folks at BrightTALK fixed up the recording to match the slides to the audio, and I thought I’d offer thoughts here on the questions raised.

“Framework provider – sounds suspiciously like an old CA (certificate authority) in the PKI world! Why not just call it a PKI legal framework?” Yeah, there’s nothing new under the sun. The circles of trust, federations, and trust frameworks I discussed share a heritage with the way PKIs are managed. But the newer versions have the benefit of lessons learned (compare the Federal Bridge and the Open Identity Solutions for Open Government initiative) and are starting to avail themselves of technologies that fit modern Web-scale tooling better (like the MDX metadata exchange work, and my new favorite toy, hostmeta). PKI is still quite often part of the picture, just not the whole picture.
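To make the hostmeta idea concrete, here's a minimal sketch of parsing a host-meta XRD document and pulling out a typed link. The rel value and template below are invented for illustration; in a live deployment you'd first fetch the document from the host's /.well-known/host-meta location.

```python
# Minimal sketch of host-meta discovery: parse an XRD document
# (of the kind served at /.well-known/host-meta) and extract a typed link.
# The rel value and template here are illustrative, not from any real host.
import xml.etree.ElementTree as ET

XRD_NS = "{http://docs.oasis-open.org/ns/xri/xrd-1.0}"

sample_xrd = """<?xml version='1.0' encoding='UTF-8'?>
<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
  <Subject>example.com</Subject>
  <Link rel='lrdd' template='https://example.com/describe?uri={uri}'/>
</XRD>"""

def find_link_template(xrd_text, rel):
    """Return the template of the first <Link> whose rel matches, or None."""
    root = ET.fromstring(xrd_text)
    for link in root.findall(XRD_NS + "Link"):
        if link.get("rel") == rel:
            return link.get("template")
    return None

print(find_link_template(sample_xrd, "lrdd"))
# -> https://example.com/describe?uri={uri}
```

The appeal for Web-scale tooling is exactly this simplicity: one well-known URL, one small document, no heavyweight metadata exchange required up front.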

“How about a biometric binding of the individual to the process and the requirement of separation of roles?” I get nervous about biometric authentication for many purposes because it binds to the bag of protoplasm and not the digital identity (and because some of the mechanisms are actually rather weak). If different roles and identities could be separated out appropriately and then mapped, that helps. But with looser coupling come costs and risks that have to be managed.

“LDAP, AD, bespoke, or a combination?” Interestingly, this topic was hot at the recent Cloud Identity Summit (an F2F event, unlike the BrightTALK one). My belief is that some of today’s tiny companies are going to outsource all their corporate functions to SaaS applications; they will thrive on RESTfulness, NoSQL, and eventual consistency; and some will grow large, never having touched traditional directory technology. I suspect this idea is why Microsoft showed up and started talking about what’s coming after AD and touting OData as the answer. (Though in an OData/GData deathmatch, I’d probably bet on the latter…)

Thanks to all who attended, and keep those cards and letters coming.

Where web and enterprise meet on user-managed access

Phil Hunt shared some musings on OAuth and UMA recently. His perspective is valuable, as always. He even coined a neat phrase to capture a key value of UMA’s authorization manager (AM) role: it’s a user-centric consent server. Here are a couple of thoughts back.

In the enterprise, an externalized policy decision point represents classic access management architecture, but in today’s Web it’s foreign. UMA combines both worlds with the trick of letting Alice craft her own access authorization policies, at an AM she chooses. She’s the one likeliest to know which resources of hers are sensitive, which people and services she’d like to share access with, and what’s acceptable to do with that access. With a single hub for setting all this up, she can reuse policies across resource servers and get a global view of her entire access landscape. And with an always-on service executing her wishes, in many cases she can even be offline when an access requester comes knocking. In the process, as Phil observes, UMA “supports a federated (multi-domain) model for user authorization not possible with current enterprise policy systems.”

Phil wonders about privacy impacts of the AM role given its centrality. In earlier federated identity protocol work, such as Liberty’s Identity Web Services Framework, it was assumed that enterprise and consumer IdPs could never be the authoritative source of all interesting information about a user, and that we’d each have a variety of attribute authorities. This is the reality of today’s web, expanding “attribute” to include “content” like photos, calendars, and documents. So rather than having an über-IdP attempt to aggregate all Alice’s stuff into a single personal datastore — presenting a pretty bad panoptical identity problem in addition to other challenges — an AM can manage access relationships to all that stuff sight unseen. Add the fact that UMA lets Alice set conditions for access rather than just passively agree to others’ terms, and I believe an AM can materially enhance her privacy by giving her meaningful control.

Phil predicts that OAuth and UMA will be useful to the enterprise community, and I absolutely agree. Though the UMA group has taken on an explicitly non-enterprise scope for its initial work, large-enterprise and small-business use cases keep coming up, and cloud computing models keep, uh, fogging up all these distinctions. (Imagine Alice as a software developer who needs to hook up the OAuth-protected APIs of seven or eight SaaS offerings in a complex pattern…) Next week at the Cloud Identity Summit I’m looking forward to further exploring the consumer-enterprise nexus of federated access authorization.

SMART UMA application: call for testers

The SMART project (Student-Managed Access to Online Resources) at Newcastle University has issued a call for user experience testers for the smartam component of the UMA-based applications they have been building. Participation should take less than a half-hour; if you’re interested, check out the flyer for instructions. To keep up with general news on the project (there’s lots), follow the SMART JISC blog.

This is an exciting milestone in UMA development. Congratulations to the SMART team!

UPDATE: The SMART blog now has an entry that describes the exciting directions smartam is going. Don’t miss it.

Tofu, online trust, and spiritual wisdom

At the European Identity Conference a little while back, Andre Durand gave a downright spiritual keynote on Identity in the Cloud. His advice for dealing with the angst of moving highly sensitive identity information into the cloud? Ancient Buddhist wisdom.

All experiences are marked by suffering, disharmony, and frustration.

Suffering and frustration come from desire and clinging.

To achieve an end to disharmony, stop clinging.

(I can’t wait to hear his pearls of wisdom at the Cloud Identity Summit later this month… I’ll be there speaking on UMA. You going?)

This resonated with another plea I’d just heard from Chris Palmer at the iSEC Partners Open Security Forum, in his talk called It’s Time to Fix HTTPS.

Chris’s message could be described as “Stop clinging to global PKI for browser security because it is disharmonious.” He reviewed the perverse incentives that fill the certificate ecosystem, and demonstrated that browsers therefore act in the way that will help ordinary users least.

Why, he asked, can’t we convey more usable security statements to users along the lines of:

“This is almost certainly the same server you connected with yesterday.”

“You’ve been connecting to almost certainly the same server all month.”

“This is probably the same server you connected with yesterday.”

“Something seems fishy; this is probably not the same server you connected with yesterday. You should call or visit your bank/whatever to be sure nothing bad has happened.”

Perhaps I was the only one not already familiar with his names for the theory that can make these statements possible: TOFU/POP, for Trust On First Use/Persistence of Pseudonym. Neither of these phrases gets any serious Google search love, at least not yet. But I love TOFU, and you should too. (N.B.: I’m not a big fan of lowercase tofu.) The basic idea is that you can figure out whether to trust the first connection with a nominally untrusted entity by means of out-of-band cues or other met expectations — and then you can just work on keeping track of whether it’s really them the next time.
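The TOFU/POP idea can be captured in a few lines: pin a fingerprint of the server's key the first time you see it, then on later connections report how confident you are that it's "the same server". This toy sketch (my own illustration, not Chris's code) uses a plain dict where a real client would use persistent storage like SSH's known_hosts file:

```python
# Toy sketch of TOFU/POP (Trust On First Use / Persistence of Pseudonym):
# pin a server's public-key fingerprint on first contact, then grade later
# connections by whether the pinned fingerprint still matches.
import hashlib

known_hosts = {}  # host -> pinned fingerprint

def fingerprint(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()

def check_host(host, public_key_bytes):
    fp = fingerprint(public_key_bytes)
    pinned = known_hosts.get(host)
    if pinned is None:
        known_hosts[host] = fp  # trust on first use
        return "first contact: pinning this server's key"
    if pinned == fp:
        return "almost certainly the same server as last time"
    return "key changed: probably NOT the same server -- verify out of band"

print(check_host("bank.example", b"key-A"))  # first contact
print(check_host("bank.example", b"key-A"))  # same key: reassuring message
print(check_host("bank.example", b"key-B"))  # mismatch: warn the user
```

Notice that the messages map directly onto the graded statements above: the system never claims perfect trust, only continuity of pseudonym.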

The neat thing is, we do this all the time already. When you meet someone face-to-face and they say their Skype handle is KoolDood, and later a KoolDood asks to connect with you on Skype and describes the circumstances of your meeting, you have a reasonable expectation it’s the right guy ever after. And it’s precisely the way persistent pseudonyms work in federated identity: as I’ve pointed out before, a relying-party website might not know you’re a dog, but it usually needs to know you’re the same dog as last time.

Knowing of the desire to cling to global PKI in an environment where it’s simply not working for us, Chris proposes letting go of trust — and shooting for “trustiness” instead. If it successfully builds actual Internet trust relationships vs. the theoretical kind, hey, I’m listening. There’s a lot of room for use cases between perfect trust frameworks built on perfect certificate/signature mechanisms and plain old TOFU-flavored trustiness, and UMA and lots of other solutions should be able to address the whole gamut.

Surely inner peace is just around the corner.

OpenID and OAuth: As the URL Turns

In Phil Windley’s initial IIW wrap-up, he alluded to the soap-opera nature of the OpenID wrangling that went on last week. It’s an apt description.


In the spirit of real ones:

Margo wanted Parker to get an attorney before making a confession but he insisted on telling the truth anyway. Margo quickly called Jack with the latest development so he and Carly rushed to the station. Jack ordered his son to keep quiet but Parker said he was going through with his confession. Carly was brokenhearted that Parker couldn’t be silenced and Margo took Jack off the case. [ATWT]

…I present the soap-opera synopsis of the goings-on:

David showed up at the Mountain View party with OpenID Connect, which had been hanging around with OAuth in a way that seemed promiscuous. Having insisted last year that it was ready to change, OpenID quickly got busy. OpenID Artifact Binding was brokenhearted that its quiet yet effective nature wasn’t enough to get it noticed. UMA and CX couldn’t help putting in their two cents when they heard what the problem was.

The OpenID specs list discussion is now hopping, and so far it’s been relatively free of pique and getting more productive as people understand each other’s use cases and requirements better. Now we just need to come up with a list of in-scope ones…and realize that the best ideas for solving each one could come from anywhere.

So: Can we try and combine the grand vision and breadth of community of the OpenID.next process, the rigor and security of OpenID AB, and the speed and marketing savvy of OpenID Connect — rather than (ahem) the speed and rigor of the OpenID.next process, the grand vision and marketing savvy of OpenID AB, and the security and breadth of community of OpenID Connect?

UPDATE on 10 July 2010: This post has been translated into Belarusian by PC.

Comparing OAuth and UMA

UMA logo

The last few weeks have been fertile for the Kantara User-Managed Access work. First we ran a half-day UMA workshop (slides, liveblog) at EIC that included a presentation by Maciej Machulak of Newcastle University on his SMART project implementation; the workshop inspired Christian Scholz to develop a whole new UMA prototype the very same day. (And they have been busy bees since; you can find more info here.)

Then, this past week at IIW X, various UMAnitarians convened a series of well-attended sessions touching on the core protocol, legal implications of forging authorization agreements, our “Claims 2.0” work, and how UMA is being tried out in a higher-ed setting — and Maciej and his colleague Łukasz Moreń demoed their SMART implementation more than a dozen times during the speed demo hour.

In the middle of all this, Maciej dashed off to Oakland, where the IEEE Symposium on Security and Privacy was being held, to present a poster on User-Managed Access to Web Resources (something of a companion to this technical report, all with graphic design by Domenico Catalano).

Through it all, we learned a ton; thanks to everyone who shared questions and feedback.

Because UMA layers on OAuth 2.0 and the latter is still under development, IIW and the follow-on OAuth interim F2F presented opportunities for taking stock of and contributing to the OAuth work as well.

Since lots of people are now becoming familiar with the new OAuth paradigm, I thought it might be useful to share a summary of how UMA builds on and differs from OAuth. (Phil Windley has a thoughtful high-level take here.) You can also find this comparison material in the slides I presented on IIW X day 1.


UMA settled on its terms before WRAP was made public; any overlap in terms was accidental. As we have done the work to model UMA on OAuth 2.0, it has become natural to state the equivalences below more boldly and clearly, while retaining our unique terms to distinguish the UMA-enhanced versions. If any UMA technology ends up “sedimenting” lower in the stack, it may make sense to adopt OAuth terms directly.

  • OAuth: resource owner; UMA: authorizing user
  • OAuth: authorization server; UMA: authorization manager
  • OAuth: resource server; UMA: host
  • OAuth: client; UMA: requester


I described UMA as sort of unhooking OAuth’s authorization server concept from its resource-server moorings and making it user-centric.

  • OAuth: There is one resource owner in the picture, on “both sides”. UMA: The authorizing user may be granting access to a truly autonomous party (which is why we need to think harder about authorization agreements).
  • OAuth: The resource server respects access tokens from “its” authorization server. UMA: The host outsources authorization jobs to an authorization manager chosen by the user.
  • OAuth: The authorization server issues tokens based on the client’s ability to authenticate. UMA: The authorization manager issues tokens based on user policy and “claims” conveyed by the requester.
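That last distinction — tokens issued based on user policy and requester claims rather than mere client authentication — is the heart of UMA. Here's a hypothetical sketch of the decision an authorization manager makes; the claim names and policy format are invented for illustration, since the actual "Claims 2.0" work was still being defined:

```python
# Hypothetical sketch of an UMA-style authorization manager deciding
# whether to issue a token: the user's policy demands certain claims,
# and the requester must supply satisfying values. Claim names and the
# policy format are invented for illustration only.
import secrets

alice_policy = {
    "photos": {"required_claims": {"over_18": True, "agrees_to_nda": True}}
}

def issue_token(resource, supplied_claims, policy=alice_policy):
    required = policy[resource]["required_claims"]
    missing = {k: v for k, v in required.items()
               if supplied_claims.get(k) != v}
    if missing:
        # Tell the requester which demands remain unsatisfied.
        return {"error": "claims_required", "missing": sorted(missing)}
    return {"access_token": secrets.token_urlsafe(16), "scope": resource}

print(issue_token("photos", {"over_18": True}))                          # refused
print(issue_token("photos", {"over_18": True, "agrees_to_nda": True}))   # token issued
```

The refusal path matters as much as the success path: by naming the unmet demands, the AM turns authorization into a negotiation rather than a yes/no gate.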

Dynamic trust

UMA has a need to support lots of dynamic matchups between entities.

  • OAuth: The client and server sides must meet outside the resource-owner context ahead of time (not mandated, just not dealt with in the spec). UMA: A requester can walk up to a protected resource and attempt to get access without having registered first.
  • OAuth: The resource server meets its authorization server ahead of time and is tightly coupled with it (not mandated, just not dealt with in the spec). UMA: The authorizing user can mediate the introduction of each of his hosts to the authorization manager he wants it to use.
  • OAuth: The resource server validates tokens in an unspecified manner, assumed locally. UMA: The host has the option of asking the authorization manager to validate tokens in real time.


UMA started out life as a fairly large “application” of OAuth 1.0. Over time, it has become a cleaner and smaller set of profiles, extensions, and enhanced flows for OAuth 2.0. If any find wider interest, we could break them out into separate specs.

  • OAuth: Two major steps: get a token (with multiple flow options), use a token. UMA: Three major steps: trust a token (host/authorization manager introduction), get a token, use a token.
  • OAuth: User delegation flows and autonomous client flows. UMA: Profiles (TBD) of OAuth flows that add the ability to request claims to satisfy user policy.
  • OAuth: Resource and authorization servers are generally not expected to communicate directly; they interact only through the access token passed via the client. UMA: The authorization manager gives the host its own access token; the host uses it to supply resource details and request token validation.
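The host-AM relationship in that last bullet can be sketched as a simple exchange. The endpoint shape and wire format below are invented for illustration; the draft spec's actual messages differed and were still in flux at the time:

```python
# Sketch of the host outsourcing token validation to the authorization
# manager, per the "trust a token / get a token / use a token" outline.
# The method names and JSON wire format are invented for illustration.
import json

class AuthorizationManager:
    def __init__(self):
        self.issued = {}  # token -> {"resource": ..., "active": ...}

    def register(self, token, resource):
        self.issued[token] = {"resource": resource, "active": True}

    def validate(self, request_json):
        """Real-time validation call made by the host."""
        req = json.loads(request_json)
        info = self.issued.get(req["token"])
        ok = bool(info and info["active"]
                  and info["resource"] == req["resource"])
        return json.dumps({"valid": ok})

am = AuthorizationManager()
am.register("tok-123", "calendar")

# Host side: before serving the resource, ask the AM about the token.
print(am.validate(json.dumps({"token": "tok-123", "resource": "calendar"})))
print(am.validate(json.dumps({"token": "tok-999", "resource": "calendar"})))
```

Real-time validation is what lets the AM revoke access after the fact — something a host doing purely local token checks can't offer the user.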

Much work remains to be done; please consider joining us (it’s free!) if you’d like to help us make user-managed access a reality.

Munich fuel

To get through the intense European Identity Conference last week in Munich (thanks, Kuppinger Cole folks!), I had to make sure to drink lots of fluids. I’m referring, of course, to coffee, beer, and one extraordinary whisky (thanks, Ping Identity folks!).

Bavarian coffee cup – gift from a local friend

The 2010 edition of the conference was lively and valuable. Here are just a couple of stories about encounters I had there, with more thoughts and info to come.

I had the good fortune to meet Christian Scholz in person for the first time; we participate in the Data Without Borders podcast series together, but in the way of the modern world, had never occupied the same room. Christian was serving as a credentialed event blogger. We hung out together during many EIC sessions, and I learned a lot by seeing the enterprise IdM world through his eyes; we seem to share a strong interest in the idea of radically simplifying IT. (I also learned how he came by the moniker Mr. Topf…) Don’t miss his conference musings.

And I had the great pleasure of meeting UMA’s own Graphics/UX Editor, the talented Domenico Catalano — though I already felt I knew him well! Domenico’s graphical and intellectual work graces a lot of the UMA material (and if you’re going to IIW next week, you’ll see even more of it). What a delight to cement friendships by meeting IRL.

The erudite and prolific author Vittorio Bertocci kindly gave me a copy of his new book, A Guide to Claims-Based Identity and Access Control — and I couldn’t resist asking for an autograph. (Though I was forced to sleep off the week’s excesses on the plane rather than read, this tome is next on my list.)

Finally, I had the opportunity to participate in three panels (data portability, privacy-enhancing technologies, and trust frameworks), and really appreciated the skillz and charm of moderators Dave Kearns and John Hermans.

Thanks and congratulations again to KC+P gang; it was a heck of a show, and they were ever the gracious hosts. Stay tuned here for more about the week’s events from my perspective.

UMA learns how to simplify, simplify

It seems like a good time to review where we’ve been and where we’re going in the process of building User-Managed Access (UMA).

The introduction to our draft protocol spec reads:

The User-Managed Access (UMA) 1.0 core protocol provides a method for users to control access to their protected resources, residing on any number of host sites, through an authorization manager that makes access decisions based on user policy.

For example, a web user (authorizing user) can authorize a web app (requester) to gain one-time or ongoing access to a resource containing his home address stored at a “personal data store” service (host), by telling the host to act on access decisions made by his authorization decision-making service (authorization manager). The requesting party might be an e-commerce company whose site is acting on behalf of the user himself to assist him in arranging for shipping a purchased item, or it might be his friend who is using an online address book service to collect addresses, or it might be a survey company that uses an online service to compile population demographics.

While this introduction hasn’t changed much over time, the UMA protocol itself has undergone a number of changes, some of them dramatic. The changes came from switching “substrates”, partly in response to the recent volatility that has entered the OAuth world with the introduction of WRAP and partly in an ongoing effort to make everything simpler.

Here’s an attempt to convey the once and future UMA-substrate picture:


Back in the days of ProtectServe, we did try to use “real OAuth” to associate various UMA entities, but in one area we “cheated” (that is, used something OAuth-ish but not conforming) to get protocol efficiencies and to ensure the privacy of the authorizing user. Hence the wavy line.

Once the UMA effort started in earnest at Kantara, we discovered a way to use three instances of “real OAuth” instead, and decided to (as we thought of it) “get out of the authentication business” entirely — thus theoretically allowing people to build UMA implementations on top of OAuth as a fairly thin shim. However, the trick we had discovered to accomplish this, along with (it must be admitted) an over-enthusiasm for website metadata discovery loops, made the spec pretty dense and hard to understand, and we weren’t satisfied with that either.

As of last week, we have moved to a WRAP substrate on an interim basis, and that approach is what’s in our spec now. Although WRAP’s entities sound an awful lot like UMA’s entities (authorization manager/authorization server, host/protected resource, requester/client, authorizing user/resource owner), it took us a while to disentangle the different use cases and motivations that drove the development of each set, and in retrospect I’m glad we don’t have the same exact language. It allows us to have a meaningful conversation about the ways in which WRAP can gracefully assume a “plumbing” role for UMA (and other) use cases, and the ways in which we need to extend and profile it.

Here’s a diagram that summarizes the current state of the UMA protocol and the role(s) WRAP plays in it (click to enlarge):

UMA protocol summary

Given our use cases, it pretty much flows naturally, and it solves the problems we were previously tying ourselves in knots over. (Do check out the details if you have that sort of inquiring mind. Though reading our spec now benefits from an existing understanding of the WRAP paradigm — we plan to add more detail to make it stand better on its own — it’s now a lot more to-the-point.)

So all in all, I’m very pleased about our direction, and about the increasing interest we’re seeing from all over the place. But we’re not there yet, and it’s partly because over at the IETF they’re still working on OAuth Version 2.0, and that’s what we’re really after. We have been active in contributing use cases and feedback to the OAuth WG, and I think next week’s meeting at IETF 77 will be productive for all.

Come to the Kantara UMA workshop at the beginning of the European Identity Conference on May 4! (fixed; used to say May 3)

Though UMA isn’t fully incubated yet, it also seems like a good time to give a shout-out to all the UMAnitarians (join us… don’t be afraid…) who are helping with the process, with special thanks to our leadership team: Paul Bryan, Domenico Catalano, Maciej Machulak, Hasan ibne Akram, and Tom Holodnik.

The Economist and “ecto gammat”

Remember in The Fifth Element when Leeloo threatens to shoot Korben Dallas for stealing a kiss, saying “ecto gammat”? Turns out it means “never without my permission”. A good rallying cry for personal data sharing in today’s world!

The Economist has a thoughtful article called The Data Deluge on the benefits, and the privacy risks, of making better use of the torrent of data (it mostly focuses on, but doesn’t ever say, “personal” data) being generated in all kinds of business and marketplace endeavors. My favorite part, ’cause I share this assumption with the author:

The best way to deal with these drawbacks of the data deluge is, paradoxically, to make more data available in the right way, by requiring greater transparency in several areas. First, users should be given greater access to and control over the information held about them, including whom it is shared with.

This article makes a great companion to this meaty blog post by Iain Henderson laying out a serious vision for the notion of a personal datastore as a personal data warehouse. Iain knows whereof he speaks; he’s been in the CRM business a long time, and runs the Kantara InfoSharing work group (along with Joe Andrieu, another thoughtful guy who’s passionate about this stuff). I’m lucky to have both of them on my entirely complementary User-Managed Access group, UMA serving as a technological match for InfoSharing use cases.

I tried to add a comment to the Economist article about an aspect it didn’t cover: the quality of the personal data that’s floating around. Either this commenting effort completely failed, or in the fullness of time three copies of the same comment will appear — sigh. In the spirit of using this blog as my pensieve, here’s the main bit:

Volatile data goes stale. Excessive data collected directly from people is often larded with, to put it bluntly, lies. (To acquire a comment account on this site, I was required to provide my given name, surname, email address, country of residence, gender, and year of birth. If everyone were totally honest when signing up, that’s a powerful set of facts with which to locate and track them pretty precisely. You can tell which fields are excessive by looking at which ones people lie to…) And data collected silently through our behavior is, at best, second-hand and can never know our true intent.

Privacy is not secrecy (says digital identity analyst Bob Blakley). It is context, control, choice, and respect. Ideal levels of personal data sharing may actually be higher in total than now — but more selective. And they won’t be interesting to people without offering convenience at the same time.

Wouldn’t it be great to get out of the defensive crouch of “never without my permission” and turn it into “with my permission, sure, why not, it’ll help me just as much as it will help you”?

(Any bets on whether I told the truth and nothing but the truth when I registered at the Economist site?)