Tag Archives: OAuth

Where web and enterprise meet on user-managed access

Phil Hunt shared some musings on OAuth and UMA recently. His perspective is valuable, as always. He even coined a neat phrase to capture a key value of UMA’s authorization manager (AM) role: it’s a user-centric consent server. Here are a couple of thoughts back.

In the enterprise, an externalized policy decision point represents classic access management architecture, but in today’s Web it’s foreign. UMA combines both worlds with the trick of letting Alice craft her own access authorization policies, at an AM she chooses. She’s the one likeliest to know which resources of hers are sensitive, which people and services she’d like to share access with, and what’s acceptable to do with that access. With a single hub for setting all this up, she can reuse policies across resource servers and get a global view of her entire access landscape. And with an always-on service executing her wishes, in many cases she can even be offline when an access requester comes knocking. In the process, as Phil observes, UMA “supports a federated (multi-domain) model for user authorization not possible with current enterprise policy systems.”

Phil wonders about privacy impacts of the AM role given its centrality. In earlier federated identity protocol work, such as Liberty’s Identity Web Services Framework, it was assumed that enterprise and consumer IdPs could never be the authoritative source of all interesting information about a user, and that we’d each have a variety of attribute authorities. This is the reality of today’s web, expanding “attribute” to include “content” like photos, calendars, and documents. So rather than having an über-IdP attempt to aggregate all Alice’s stuff into a single personal datastore — presenting a pretty bad panoptical identity problem in addition to other challenges — an AM can manage access relationships to all that stuff sight unseen. Add the fact that UMA lets Alice set conditions for access rather than just passively agree to others’ terms, and I believe an AM can materially enhance her privacy by giving her meaningful control.

Phil predicts that OAuth and UMA will be useful to the enterprise community, and I absolutely agree. Though the UMA group has taken on an explicitly non-enterprise scope for its initial work, large-enterprise and small-business use cases keep coming up, and cloud computing models keep, uh, fogging up all these distinctions. (Imagine Alice as a software developer who needs to hook up the OAuth-protected APIs of seven or eight SaaS offerings in a complex pattern…) Next week at the Cloud Identity Summit I’m looking forward to further exploring the consumer-enterprise nexus of federated access authorization.

OpenID and OAuth: As the URL Turns

In Phil Windley’s initial IIW wrap-up, he alluded to the soap-opera nature of the OpenID wrangling that went on last week. It’s an apt description.


In the spirit of real ones:

Margo wanted Parker to get an attorney before making a confession but he insisted on telling the truth anyway. Margo quickly called Jack with the latest development so he and Carly rushed to the station. Jack ordered his son to keep quiet but Parker said he was going through with his confession. Carly was brokenhearted that Parker couldn’t be silenced and Margo took Jack off the case. [ATWT]

…I present the soap-opera synopsis of the goings-on:

David showed up at the Mountain View party with OpenID Connect, which had been hanging around with OAuth in a way that seemed promiscuous. Having insisted last year that it was ready to change, OpenID quickly got busy. OpenID Artifact Binding was brokenhearted that its quiet yet effective nature wasn’t enough to get it noticed. UMA and CX couldn’t help putting in their two cents when they heard what the problem was.

The OpenID specs list discussion is now hopping, and so far it’s been relatively free of pique and getting more productive as people understand each other’s use cases and requirements better. Now we just need to come up with a list of in-scope ones…and realize that the best ideas for solving each one could come from anywhere.

So: Can we try and combine the grand vision and breadth of community of the OpenID.next process, the rigor and security of OpenID AB, and the speed and marketing savvy of OpenID Connect — rather than (ahem) the speed and rigor of the OpenID.next process, the grand vision and marketing savvy of OpenID AB, and the security and breadth of community of OpenID Connect?

UPDATE on 10 July 2010: This post has been translated into Belorussian by PC.

Comparing OAuth and UMA

UMA logo

The last few weeks have been fertile for the Kantara User-Managed Access work. First we ran a half-day UMA workshop (slides, liveblog) at EIC that included a presentation by Maciej Machulak of Newcastle University on his SMART project implementation; the workshop inspired Christian Scholz to develop a whole new UMA prototype the very same day. (And they have been busy bees since; you can find more info here.)

Then, this past week at IIW X, various UMAnitarians convened a series of well-attended sessions touching on the core protocol, legal implications of forging authorization agreements, our “Claims 2.0” work, and how UMA is being tried out in a higher-ed setting — and Maciej and his colleague Łukasz Moreń demoed their SMART implementation more than a dozen times during the speed demo hour.

In the middle of all this, Maciej dashed off to Oakland, where the IEEE Symposium on Security and Privacy was being held, to present a poster on User-Managed Access to Web Resources (something of a companion to this technical report, all with graphic design by Domenico Catalano).

Through it all, we learned a ton; thanks to everyone who shared questions and feedback.

Because UMA layers on OAuth 2.0 and the latter is still under development, IIW and the follow-on OAuth interim F2F presented opportunities for taking stock of and contributing to the OAuth work as well.

Since lots of people are now becoming familiar with the new OAuth paradigm, I thought it might be useful to share a summary of how UMA builds on and differs from OAuth. (Phil Windley has a thoughtful high-level take here.) You can also find this comparison material in the slides I presented on IIW X day 1.

Terms

UMA settled on its terms before WRAP was made public; any overlap in terms was accidental. As we have done the work to model UMA on OAuth 2.0, it has become natural to state the equivalences below more boldly and clearly, while retaining our unique terms to distinguish the UMA-enhanced versions. If any UMA technology ends up “sedimenting” lower in the stack, it may make sense to adopt OAuth terms directly.

  • OAuth: resource owner; UMA: authorizing user
  • OAuth: authorization server; UMA: authorization manager
  • OAuth: resource server; UMA: host
  • OAuth: client; UMA: requester

Concepts

I described UMA as sort of unhooking OAuth’s authorization server concept from its resource-server moorings and making it user-centric.

  • OAuth: There is one resource owner in the picture, on “both sides”. UMA: The authorizing user may be granting access to a truly autonomous party (which is why we need to think harder about authorization agreements).
  • OAuth: The resource server respects access tokens from “its” authorization server. UMA: The host outsources authorization jobs to an authorization manager chosen by the user.
  • OAuth: The authorization server issues tokens based on the client’s ability to authenticate. UMA: The authorization manager issues tokens based on user policy and “claims” conveyed by the requester.
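
To make that last bullet concrete, here's a toy sketch of the kind of decision an AM could make: check the requester's claims against the user's policy for a resource, then either mint a token or report which claims are still needed. The function name, the policy and claims shapes, and the "claims_required" answer are all inventions of mine for illustration, not anything our drafts define.

```python
# Toy sketch only: the names and data shapes are invented, not the
# formats defined in the UMA drafts.
import secrets

def issue_token(resource_id, requester_claims, user_policies):
    """Mint a token if the requester's claims satisfy the user's policy."""
    required = user_policies.get(resource_id, {})
    missing = [name for name, value in required.items()
               if requester_claims.get(name) != value]
    if missing:
        # Rather than flatly refusing, the AM can say which claims it
        # still needs, so the requester gets a chance to supply them.
        return {"status": "claims_required", "required_claims": missing}
    return {"status": "ok", "access_token": secrets.token_urlsafe(32)}

# Alice's (stand-in) policy: requesters must agree to her data-handling terms.
policies = {"travel-calendar": {"agrees_to_terms": "delete-after-30-days"}}
print(issue_token("travel-calendar", {}, policies))
print(issue_token("travel-calendar",
                  {"agrees_to_terms": "delete-after-30-days"}, policies))
```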

Dynamic trust

UMA has a need to support lots of dynamic matchups between entities.

  • OAuth: The client and server sides are assumed to have met outside the resource-owner context ahead of time (not mandated, just not dealt with in the spec). UMA: A requester can walk up to a protected resource and attempt to get access without having registered first.
  • OAuth: The resource server meets its authorization server ahead of time and is tightly coupled with it (not mandated, just not dealt with in the spec). UMA: The authorizing user can mediate the introduction of each of his hosts to the authorization manager he wants it to use.
  • OAuth: The resource server validates tokens in an unspecified manner, assumed locally. UMA: The host has the option of asking the authorization manager to validate tokens in real time.
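
To illustrate that last option, here is roughly how a host might phone home to its AM before honoring a requester's token. The endpoint path, request fields, and response shape are assumptions made for this sketch only; the spec doesn't pin any of this down.

```python
# Hypothetical host-side check; the endpoint path, request fields, and
# response shape are assumptions made for this sketch.
import requests  # third-party HTTP client

AM_BASE = "https://am.example.com"               # the AM Alice chose for this host
HOST_TOKEN = "token-issued-during-introduction"  # credential the AM issued to the host earlier

def requester_token_is_valid(requester_token, resource_id):
    """Ask the AM, in real time, whether this token grants access."""
    resp = requests.post(
        f"{AM_BASE}/token/validate",             # made-up endpoint
        headers={"Authorization": f"Bearer {HOST_TOKEN}"},
        json={"token": requester_token, "resource": resource_id},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("valid", False)
```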

Protocol

UMA started out life as a fairly large “application” of OAuth 1.0. Over time, it has become a cleaner and smaller set of profiles, extensions, and enhanced flows for OAuth 2.0. If any find wider interest, we could break them out into separate specs.

  • OAuth: Two major steps: get a token (with multiple flow options), use a token. UMA: Three major steps: trust a token (host/authorization manager introduction), get a token, use a token.
  • OAuth: User delegation flows and autonomous client flows. UMA: Profiles (TBD) of OAuth flows that add the ability to request claims to satisfy user policy.
  • OAuth: Resource and authorization servers are generally not expected to communicate directly, other than implicitly via the access token passed through the client. UMA: The authorization manager gives the host its own access token; the host uses it to supply resource details and request token validation.

Much work remains to be done; please consider joining us (it’s free!) if you’d like to help us make user-managed access a reality.

UMA learns how to simplify, simplify

It seems like a good time to review where we’ve been and where we’re going in the process of building User-Managed Access (UMA).

The introduction to our draft protocol spec reads:

The User-Managed Access (UMA) 1.0 core protocol provides a method for users to control access to their protected resources, residing on any number of host sites, through an authorization manager that makes access decisions based on user policy.

For example, a web user (authorizing user) can authorize a web app (requester) to gain one-time or ongoing access to a resource containing his home address stored at a “personal data store” service (host), by telling the host to act on access decisions made by his authorization decision-making service (authorization manager). The requesting party might be an e-commerce company whose site is acting on behalf of the user himself to assist him in arranging for shipping a purchased item, or it might be his friend who is using an online address book service to collect addresses, or it might be a survey company that uses an online service to compile population demographics.

While this introduction hasn’t changed much over time, the UMA protocol itself has undergone a number of changes, some of them dramatic. The changes came from switching “substrates”, partly in response to the recent volatility that has entered the OAuth world with the introduction of WRAP and partly in an ongoing effort to make everything simpler.

Here’s an attempt to convey the once and future UMA-substrate picture:

uma-history

Back in the days of ProtectServe, we did try to use “real OAuth” to associate various UMA entities, but in one area we “cheated” (that is, used something OAuth-ish but not conforming) to get protocol efficiencies and to ensure the privacy of the authorizing user. Hence the wavy line.

Once the UMA effort started in earnest at Kantara, we discovered a way to use three instances of “real OAuth” instead, and decided to (as we thought of it) “get out of the authentication business” entirely — thus theoretically allowing people to build UMA implementations on top of OAuth as a fairly thin shim. However, the trick we had discovered to accomplish this, along with (it must be admitted) an over-enthusiasm for website metadata discovery loops, made the spec pretty dense and hard to understand and we weren’t satisfied with that either.

As of last week, we have moved to a WRAP substrate on an interim basis, and that approach is what’s in our spec now. Although WRAP’s entities sound an awful lot like UMA’s entities (authorization manager/authorization server, host/protected resource, requester/client, authorizing user/resource owner), it took us a while to disentangle the different use cases and motivations that drove the development of each set, and in retrospect I’m glad we don’t have the same exact language. It allows us to have a meaningful conversation about the ways in which WRAP can gracefully assume a “plumbing” role for UMA (and other) use cases, and the ways in which we need to extend and profile it.

Here’s a diagram that summarizes the current state of the UMA protocol and the role(s) WRAP plays in it (click to enlarge):

UMA protocol summary

Given our use cases, it pretty much flows naturally, and it solves the problems we were previously tying ourselves in knots over. (Do check out the details if you have that sort of inquiring mind. Though reading our spec now benefits from an existing understanding of the WRAP paradigm — we plan to add more detail to make it stand better on its own — it’s now a lot more to-the-point.)

So all in all, I’m very pleased about our direction, and about the increasing interest we’re seeing from all over the place. But we’re not there yet, and it’s partly because over at the IETF they’re still working on OAuth Version 2.0, and that’s what we’re really after. We have been active in contributing use cases and feedback to the OAuth WG, and I think next week’s meeting at IETF 77 will be productive for all.


Come to the Kantara UMA workshop at the beginning of the European Identity Conference on May 4! (fixed; used to say May 3)


Though UMA isn’t fully incubated yet, it also seems like a good time to give a shout-out to all the UMAnitarians (join us… don’t be afraid…) who are helping with the process, with special thanks to our leadership team: Paul Bryan, Domenico Catalano, Maciej Machulak, Hasan ibne Akram, and Tom Holodnik.

Discovery and OAuth and UMA – oh my

If you saw the ProtectServe status update from the Internet Identity Workshop in May, but haven’t taken a look since then, you’ll want to check out our progress on what has become User-Managed Access (“UMA”, pronounced like the actress…).

The proposition still centers on helping individuals gain better control of their data-sharing online, along with making it easier for identity-related data to live where it properly should — rather than being copied all over the place so that all the accuracy and freshness leak out.

On our wiki you’ll now find a fledgling spec that profiles OAuth along with the emerging XRD and LRDD discovery mechanisms. We’re also starting to collect a nice little bunch of diagrams and such, to help people understand what we’re up to. Click on the authorization flowchart to get to our “UMA Explained” area:

Access flowchart
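
Since the spec leans on that discovery machinery, here's a hedged sketch of what an LRDD-style lookup involves: fetch a host's /.well-known/host-meta document and read the XRD Link elements it advertises. The host name is made up, and which link relations UMA will end up registering is still an open question, so treat this purely as an illustration.

```python
# Illustrative only: the host name is invented, and the link relations
# UMA would actually register were still undecided at this point.
import urllib.request
import xml.etree.ElementTree as ET

XRD_NS = "{http://docs.oasis-open.org/ns/xri/xrd-1.0}"

def discover_links(host):
    """Fetch the host-meta XRD for a host and list its Link relations."""
    with urllib.request.urlopen(f"https://{host}/.well-known/host-meta") as resp:
        xrd = ET.fromstring(resp.read())
    return [(link.get("rel"), link.get("href") or link.get("template"))
            for link in xrd.findall(f"{XRD_NS}Link")]

# e.g. discover_links("calendar.example.com") might reveal where that
# host's authorization manager or resource descriptors live.
```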

Thanks to the Kantara Initiative participation rules, it’s easy and free to join the UMA group. If you’re interested in contributing use cases, design thoughts, or implementation talents, consider coming on board.

A Venn of identity in web services, now with OAuth

In the past week, several people approached me with the idea of incorporating OAuth somehow into the Venn view of identity. Feels like more of that “destiny” Ashish invoked a couple of weeks ago — especially since I had already developed just such a Venn for my XML Summer School talk last week.

My very first Venn of Identity blog post also included a second diagram, covering something like “identity in web services”. It was little-noticed, I think, because the deployment of the more esoteric pieces of WS-* and ID-WSF was pretty low. I’ve been itching to add OAuth to it, given its wildfire-esque spread. Last week gave me my excuse, and with further feedback (thanks Paul and Dom!), I’ve continued to revise it. So here’s a new version for your perusal (click to enlarge).

VennOfBCID-Oct2009

As with the original version, the relative heights and sizes are significant: they indicate roughly how voluminous, vertically applicable, and far away from “plumbing” each solution gets. (Unlike the original, however, this one seems to give off a Jetsons vibe.)

Some thoughts from space-age 2009:

OAuth is helping many app developers meet their security and access goals with minimal fuss (80/20 point, anyone?), and by providing for user mediation of service permissions, it is easily as “user-centric” as any other technology claiming the title. It’s these lovable qualities that led the ProtectServe/User-Managed Access effort to use OAuth as a substrate.

ID-WSF still provides identity services functionality that nothing else does, and some folks I’ve been talking to lately still chafe at the lack of more widespread support for these features. But obviously it’s still a “rich” solution vs. a “reach” one.

WS-*, ah yes, what to say?… It uniquely solves certain issues, but do all of them really need solving? My Summer School trackmate Paul Downey had some choice words about this, and his WS-TopTrumps class exercise proved that the star in WS-* really does match everything possible — that’s too much. And trackmate Marc Hadley pointed out lots of benefits you get “for free” with a REST approach, which it was hard not to notice when we all chose to design REST interfaces for his class exercise despite having a SOAP option.

To be fair, Paul and Marc and also trackmate Rich Salz — who has an uncanny ability to explain complex security concepts simply — stressed the value of the core pieces for message security if you’re using SOAP. It would be interesting indeed if OAuth, or extensions to it with the same pure-HTTP design center, were to “grow leftward” to accommodate the use cases covered by the WS-*/ID-WSF intersection.

(Anyone think the new REST-* effort will win in this space anytime soon? I’m a bit dubious, myself. Its name sure didn’t inspire any love in our lecture room.)

ProtectServe news: User-Managed Access group

After a few weeks’ worth of charter wrangling, I’m delighted to announce the launch of a new Kantara Initiative work group called User-Managed Access (UMA). Quoting some text from the charter that may sound familiar if you’ve been following the ProtectServe story:

The purpose of this Work Group is to develop a set of draft specifications that enable an individual to control the authorization of data sharing and service access made between online services on the individual’s behalf, and to facilitate the development of interoperable implementations of these specifications by others.

Quite a few folks have expressed strong interest in using this work to solve their use cases and in implementing the protocol (speaking of which, sincere thanks to the dozen-plus people who joined with me in proposing the group). With a basic design pattern that is as generative as ProtectServe seems to be, and with the variety of communities we’ll need to engage, it could be tricky to stay focused on a core set of scenarios and solutions, but I intend to work hard to do just that. Better to boil a small pond than…well, you know. Stay tuned for more thoughts on how I think we can accomplish this.

If you’d like to contribute to the continuing adventures of ProtectServe, please check out the User-Managed Access WG charter and join up! Here’s where to go to subscribe to the wg-uma list, which is read-only by default, and to become an official participant in the group, which gains you list posting privileges. (In case you’re wondering, there is no fee whatsoever for Kantara group participation.)

By the end of this week we’ll start using the list to figure out a first-telecon time slot, and I’ll provide updates on various group milestones here. If you’ve got any questions at all, feel free to drop me a line.

Relationship, authorization, make up your mind

Having been properly chastised by Paul for creating some confusion around the “relationship manager” (RM)/”authorization manager” (AM) terminology, I want to make amends, explain, and ultimately jump into the very interesting social-authorization discussion that Paul and George have been conducting.

In a nutshell, an RM is an application, and an AM is a ProtectServe protocol entity. An RM at a minimum has to serve as an AM endpoint but could also provide SP and even Consumer endpoints.

We’ve formally conceived of an RM app because data-sharing relationship management is only valuable if we have some app saving and applying a user’s desired policies, auditing and archiving a bunch of information, and handling other tasks entirely out of the view of other “computer entities” in the picture. A computer protocol is important for interoperability but, obviously, isn’t sufficient for success; app functionality and user experience have to be given equal prominence alongside interop.
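
If it helps to see that division of labor in code form, here's a toy sketch: the RM is an application object that always implements the AM protocol role, may optionally bundle SP and Consumer endpoints, and keeps its policy store and audit archive as pure app-level value-add. The class and method names are mine, invented for illustration.

```python
# Toy model of the RM-vs-protocol-roles distinction; all names are
# illustrative, not drawn from the ProtectServe drafts.
class AuthorizationManagerEndpoint:
    """The one protocol role every RM must implement."""
    def decide(self, consumer, resource):
        ...

class ServiceProviderEndpoint:
    """Optional: the RM can also serve some of the user's resources itself."""
    def serve(self, resource):
        ...

class ConsumerEndpoint:
    """Optional and futuristic: the RM tracks data shared *with* the user too."""
    def fetch(self, other_users_resource):
        ...

class RelationshipManager:
    def __init__(self, sp=None, consumer=None):
        self.am = AuthorizationManagerEndpoint()   # minimum RM
        self.sp = sp                               # fancier RM adds a local SP
        self.consumer = consumer                   # all-singing, all-dancing RM
        self.audit_log = []                        # the "archive" box: app value-add,
        self.policies = {}                         # no protocol-compliance job
```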

A “minimum RM” could be represented like this.

1_simple_rm

The gray “archive” box stands for the pieces of RM functionality that don’t have a protocol-compliance job. Our mockup screenshots show how such an RM could be used — managing access to calendars that live at a “foreign” online calendaring SP.

Of course, all these foreign SPs could host any sort of data on the user’s behalf. One of those SPs could be, say, a web app that helped you package up your various sets of resources (living wherever) into convenient Atom feeds. Any resources referenced in feed entries, or even buried inside other types of resources (like imgs inside HTML pages), might reside at different SPs, such that the Consumer would have to interact with a bunch of SPs in attempting access (just as it would have to do GETs against a bunch of different servers today).

A fancier RM could be represented like this.

2_rm_amsp

This RM serves as a ProtectServe SP endpoint, providing on-board serving/packaging of some resources, along with being an AM. For example, let’s say you self-host your RM at your blog, and your RM plug-in has a feature for storing “self-asserted” info right there. It also offers Atom packaging for, say, typical items needed during e-commerce account registration: name, default shipping address with phone number, and credit card data.

The RM might optimize the implementation of AM/SP interactions in the case of its “local” SP because everything’s internal to one app, but all “foreign” SPs have to be treated according to the formal protocol, and any auditing and archiving should be done for all SPs, including the local one.

Here’s an all-singing, all-dancing RM.

3_rm_amspcons

Now the RM functions as a ProtectServe Consumer too. This is much more futuristic territory, but: if it’s useful for a person to track the data she shares, couldn’t it also be useful to track the data that others have shared with her, and the terms she had to meet to get that latter data? If the protocol could successfully handle payment demands and receipts and such, we’d now have a peer-to-peer data-commerce network.

This deployment scenario offers one way to handle the social-authorization use cases being discussed by Paul and George. Bob’s RM (in the role of a Consumer) could interact with Alice’s RM (in the role of an AM) to register to get future authorized access to Alice’s resources (hosted at whatever SPs). Alice’s RM could perhaps become an OAuth (or ProtectServe!) Consumer of some SP that supports the PoCo API, which could — especially if extended in the way George and Paul have been discussing — help Alice manage her universe of potential data recipients in a standardized way. At the very least, her RM might offer on-board functionality to organize sets of Consumers by hierarchy, tagging, or other means, making it possible to control authorization policies en masse.
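
As a purely hypothetical sketch of that en-masse idea, picture the RM keeping Consumer groups and group-level policies in simple tables like these (the group names, Consumers, and policy fields are all made up):

```python
# Illustrative only: group names, Consumers, and policy fields are invented.
groups = {
    "family":    {"bob-rm.example", "carol-addressbook.example"},
    "merchants": {"3rdnational.example", "surveyco.example"},
}

group_policies = {
    "family":    {"resources": ["calendar", "photos"], "terms": None},
    "merchants": {"resources": ["shipping-address"],
                  "terms": "delete after the transaction completes"},
}

def policy_for(consumer):
    """Look up the policies a Consumer inherits from its group(s)."""
    return [group_policies[g] for g, members in groups.items() if consumer in members]

print(policy_for("3rdnational.example"))
```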

One final note: Nothing in the ProtectServe protocol prevents a user from deploying multiple RMs, something that could be an important tool for empowerment, data portability, privacy, and so on. Depending on the use cases you want to cater to, however, this might have protocol implications; today our design doesn’t allow SPs to parcel out authorization tasks to different AMs for different subsets of a user’s resources.

ProtectServe draft protocol flows

In previous posts I’ve described the basic ProtectServe/Relationship Manager proposition and use cases. Here is a set of web protocol flows we’ve developed to support our goals. This design is definitely not fully baked, but we hope it’s suggestive and that it generates some useful feedback.

Keep in mind that the mockup screenshots shared earlier illustrate a lot more than just bare protocol interactions. In the flows below, dotted-line arrows reflect steps outside the protocol per se, and each one of those arrows hints at lots of activity that lives in the “application value-add” world rather than the “interoperable protocol” world.

The Players

The User (working through a User Agent) is involved in the protocol per se at two key points: personally introducing the Authorization Manager and Service Provider, and optionally exercising a “right of refusal” in agreeing to a contract to release data to a requesting Consumer. In addition, the User has several duties, like policy-setting, that take place out of band. In our mockups, the User is Alice Adams.

The Authorization Manager (AM) is the hub that keeps track of the User’s registered Service Providers and all of the Consumer services that attempt to retrieve User data. It also applies the User’s policies to this retrieval. In our mockups, the AM Alice has chosen is CopMonkey.

The Service Provider (SP) is a service where the User goes to create and maintain some content that she may want to share selectively. The SP gets introduced to the User’s desired AM and thereafter knows where to send authorization requests before releasing data to Consumers. In our mockups, the SP is an online calendaring service called Schedewl.

The Consumer is a service that tries to retrieve some of the User’s data (in the form of a web resource, which may itself reference many other resources that are possibly unprotected or served out of other SPs), on a one-time or ongoing (i.e. feed) basis. Before it can retrieve the data successfully, it needs to meet the contract terms that the User has chosen to demand for that resource. In our mockups, the Consumer is the 3rd National credit-card site, asking for Alice’s travel calendar.

The Steps

There are four steps, which we have numbered in zero-based fashion (natch). The links are to WebSequenceDiagram.com flows. This tool rocks!

  • Step 0: User introduces SP (Schedewl) to AM (CopMonkey) – done once per SP/AM pair
  • Step 1: AM (CopMonkey) provisions consumer key and access token to Consumer (3rd National) – done once per AM/Consumer pair
  • Step 2: Consumer (3rd National) meets contract terms for access to resource (residing at Schedewl) – done if Consumer hasn’t yet met terms
  • Step 3: Consumer (3rd National) is granted access to resource (at Schedewl) by AM (CopMonkey) – done every time
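
If you'd rather read code than swim lanes, below is a toy, in-memory walk-through of the four steps using the same cast. Every method name and message shape in it is invented for illustration; the real protocol messages are in the flow diagrams that follow.

```python
# In-memory stand-ins only; nothing here is the actual ProtectServe wire protocol.
import secrets

class CopMonkey:                         # Authorization Manager
    def __init__(self):
        self.sp_keys = set()             # SPs Alice has introduced (step 0)
        self.consumers = {}              # consumer key -> access token (step 1)
        self.met_terms = set()           # Consumers that satisfied the contract (step 2)
        self.policy = "promise to discard data after 90 days"  # Alice's stand-in policy

    def introduce_sp(self):              # Step 0, once per SP/AM pair
        key = secrets.token_urlsafe(8)
        self.sp_keys.add(key)
        return key

    def provision_consumer(self):        # Step 1, once per AM/Consumer pair
        key, token = secrets.token_urlsafe(8), secrets.token_urlsafe(16)
        self.consumers[key] = token
        return key, token

    def present_terms(self):             # Step 2, if the Consumer hasn't met terms yet
        return self.policy

    def accept_terms(self, consumer_key, agreement):
        if agreement == self.policy:
            self.met_terms.add(consumer_key)

    def authorize(self, sp_key, consumer_key, token):   # consulted during step 3
        return (sp_key in self.sp_keys
                and self.consumers.get(consumer_key) == token
                and consumer_key in self.met_terms)

class Schedewl:                          # Service Provider (calendars)
    def __init__(self, am):
        self.am = am
        self.sp_key = am.introduce_sp()  # Step 0: Alice points Schedewl at CopMonkey
        self.calendars = {"travel": "Alice's travel calendar feed"}

    def get(self, resource, consumer_key, token):        # Step 3, every access
        if self.am.authorize(self.sp_key, consumer_key, token):
            return self.calendars[resource]
        return "403: ask CopMonkey first"

am = CopMonkey()
sp = Schedewl(am)
consumer_key, token = am.provision_consumer()            # Step 1: 3rd National signs up
print(sp.get("travel", consumer_key, token))             # denied: terms not yet met
am.accept_terms(consumer_key, am.present_terms())        # Step 2: 3rd National agrees
print(sp.get("travel", consumer_key, token))             # Step 3: access granted
```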

I’m putting the actual flow diagrams below the fold because they blow out my margins. Note that you may not be able to see all of the pretty swim-lane pictures inline, depending on what browser you use; in that case, click on the step links above. [UPDATE: Or if you're reading this post in a feed, try going to the real web page.]

[...]

To protect and to serve


In the last year, I’ve done a lot of thinking about the permissioned data sharing theme that runs through everything online, and have developed requirements around making the “everyday identity” experience more responsive to what people want: rebalancing the power relationships in online interactions, making those interactions more convenient, and giving people more reason to trust those with whom they decide to share information.

In the meantime, I’ve been fortunate to learn the perspectives of lots of folks like Bob Blakley, Project VRM and VPI participants, e-government experts, various people doing OAuth, and more.

Together with some very talented Sun colleagues (special shout-out to team members Paul Bryan, Marc Hadley, and Domenico Catalano), I started to get a picture of what a solution could look like. And then we started to wonder why it couldn’t apply to pretty much any act of selective data-sharing, no matter who — or what — the participants are.

So today I’m asking you to assess a proposal of ours, which tries to meet these goals in a way that is:

  • simple
  • secure
  • efficient
  • RESTful
  • powerful
  • OAuth-based
  • identity system agnostic

We call the web protocol portion ProtectServe (yep, you got it). ProtectServe dictates interactions among four parties: a User/User Agent, an Authorization Manager (AM), a Service Provider (SP), and a Consumer. The protocol assumes there’s a Relationship Manager (RM) application sitting above, acting on behalf of the User — sometimes silently. At a minimum, it performs the job of authorization management.

We’re looking for your input in order to figure out if there are good ideas here and what should be done with them. (The proposal is entirely exploratory; my employer has no plans around it at the moment, though our work has been informed by OpenSSO — particularly its ongoing entitlement management enhancements.)

Read on for more, and please respond in this thread or drop me a note if you’re interested in following or contributing to this work. If there’s interest, we’re keen to join up with like-minded folks in a public forum.

[...]