Tag Archives: EIC

Tofu, online trust, and spiritual wisdom

At the European Identity Conference a little while back, Andre Durand gave a downright spiritual keynote on Identity in the Cloud. His advice for dealing with the angst of moving highly sensitive identity information into the cloud? Ancient Buddhist wisdom.

All experiences are marked by suffering, disharmony, and frustration.

Suffering and frustration come from desire and clinging.

To achieve an end to disharmony, stop clinging.

(I can’t wait to hear his pearls of wisdom at the Cloud Identity Summit later this month… I’ll be there speaking on UMA. You going?)

This resonated with another plea I’d just heard from Chris Palmer at the iSEC Partners Open Security Forum, in his talk called It’s Time to Fix HTTPS.

Chris’s message could be described as “Stop clinging to global PKI for browser security because it is disharmonious.” He reviewed the perverse incentives that fill the certificate ecosystem, and demonstrated that browsers therefore act in the way that will help ordinary users least.

Why, he asked, can’t we convey more usable security statements to users along the lines of:

“This is almost certainly the same server you connected with yesterday.”

“You’ve been connecting to almost certainly the same server all month.”

“This is probably the same server you connected with yesterday.”

“Something seems fishy; this is probably not the same server you connected with yesterday. You should call or visit your bank/whatever to be sure nothing bad has happened.”

Perhaps I was the only one not already familiar with his names for the theory that can make these statements possible: TOFU/POP, for Trust On First Use/Persistence of Pseudonym. Neither of these phrases gets any serious Google search love, at least not yet. But I love TOFU, and you should too. (N.B.: I’m not a big fan of lowercase tofu.) The basic idea is that you can figure out whether to trust the first connection with a nominally untrusted entity by means of out-of-band cues or other met expectations — and then you can just work on keeping track of whether it’s really them the next time.
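
To make the idea concrete, here’s a minimal TOFU/POP sketch in Python, in the spirit of SSH’s known_hosts file: remember a server’s certificate fingerprint on first connection, then check that later connections present the same one. The pin-store file name and the user-facing messages are my own illustrations, not anything from Chris’s talk.

    # A minimal TOFU/POP sketch: pin the server's certificate fingerprint on
    # first use (like SSH's known_hosts), then compare on later connections.
    # The pin-store file name and messages are illustrative only.
    import hashlib
    import json
    import socket
    import ssl

    PIN_FILE = "known_servers.json"  # hypothetical local pin store

    def cert_fingerprint(host, port=443):
        """Connect and return the SHA-256 fingerprint of the server's cert."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False    # TOFU: deliberately not consulting CA PKI
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check_server(host):
        try:
            with open(PIN_FILE) as f:
                pins = json.load(f)
        except FileNotFoundError:
            pins = {}
        seen = cert_fingerprint(host)
        if host not in pins:
            pins[host] = seen            # trust on first use
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return "First connection; remembering this server."
        if pins[host] == seen:           # persistence of pseudonym
            return "Almost certainly the same server as last time."
        return "Something seems fishy; this is probably NOT the same server."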

The neat thing is, we do this all the time already. When you meet someone face-to-face and they say their Skype handle is KoolDood, and later a KoolDood asks to connect with you on Skype and describes the circumstances of your meeting, you have a reasonable expectation it’s the right guy ever after. And it’s precisely the way persistent pseudonyms work in federated identity: as I’ve pointed out before, a relying-party website might not know you’re a dog, but it usually needs to know you’re the same dog as last time.

Knowing of the desire to cling to global PKI in an environment where it’s simply not working for us, Chris proposes letting go of trust — and shooting for “trustiness” instead. If it successfully builds actual Internet trust relationships vs. the theoretical kind, hey, I’m listening. There’s a lot of room for use cases between perfect trust frameworks built on perfect certificate/signature mechanisms and plain old TOFU-flavored trustiness, and UMA and lots of other solutions should be able to address the whole gamut.

Surely inner peace is just around the corner.

Comparing OAuth and UMA


The last few weeks have been fertile for the Kantara User-Managed Access work. First we ran a half-day UMA workshop (slides, liveblog) at EIC that included a presentation by Maciej Machulak of Newcastle University on his SMART project implementation; the workshop inspired Christian Scholz to develop a whole new UMA prototype the very same day. (And they have been busy bees since; you can find more info here.)

Then, this past week at IIW X, various UMAnitarians convened a series of well-attended sessions touching on the core protocol, legal implications of forging authorization agreements, our “Claims 2.0” work, and how UMA is being tried out in a higher-ed setting — and Maciej and his colleague Łukasz Moreń demoed their SMART implementation more than a dozen times during the speed demo hour.

In the middle of all this, Maciej dashed off to Oakland, where the IEEE Symposium on Security and Privacy was being held, to present a poster on User-Managed Access to Web Resources (something of a companion to this technical report, all with graphic design by Domenico Catalano).

Through it all, we learned a ton; thanks to everyone who shared questions and feedback.

Because UMA layers on OAuth 2.0 and the latter is still under development, IIW and the follow-on OAuth interim F2F presented opportunities for taking stock of and contributing to the OAuth work as well.

Since lots of people are now becoming familiar with the new OAuth paradigm, I thought it might be useful to share a summary of how UMA builds on and differs from OAuth. (Phil Windley has a thoughtful high-level take here.) You can also find this comparison material in the slides I presented on IIW X day 1.

Terms

UMA settled on its terms before WRAP was made public; any overlap in terms was accidental. As we have done the work to model UMA on OAuth 2.0, it has become natural to state the equivalences below more boldly and clearly, while retaining our unique terms to distinguish the UMA-enhanced versions. If any UMA technology ends up “sedimenting” lower in the stack, it may make sense to adopt OAuth terms directly.

  • OAuth: resource owner; UMA: authorizing user
  • OAuth: authorization server; UMA: authorization manager
  • OAuth: resource server; UMA: host
  • OAuth: client; UMA: requester

Concepts

I described UMA as sort of unhooking OAuth’s authorization server concept from its resource-server moorings and making it user-centric.

  • OAuth: There is one resource owner in the picture, on “both sides”. UMA: The authorizing user may be granting access to a truly autonomous party (which is why we need to think harder about authorization agreements).
  • OAuth: The resource server respects access tokens from “its” authorization server. UMA: The host outsources authorization jobs to an authorization manager chosen by the user.
  • OAuth: The authorization server issues tokens based on the client’s ability to authenticate. UMA: The authorization manager issues tokens based on user policy and “claims” conveyed by the requester. (See the sketch after this list.)
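
To make that last contrast concrete, here’s a toy sketch in Python. It is not the UMA spec, and every name in it is hypothetical: the point is only that plain OAuth issues a token because the client can authenticate, while an UMA-style AM issues one because the user’s policy is satisfied by the requester’s claims.

    # Toy contrast, all names hypothetical, not from the UMA spec.
    import secrets

    class ClaimsRequired(Exception):
        """Signal that the requester must convey more claims."""

    def oauth_issue_token(client_id, client_secret, client_registry):
        # Plain OAuth flavor: the deciding factor is client authentication.
        if client_registry.get(client_id) != client_secret:
            raise PermissionError("client authentication failed")
        return secrets.token_urlsafe(32)

    def uma_issue_token(claims, policy):
        # UMA flavor: the deciding factor is the user's policy, evaluated
        # over claims conveyed by the requester.
        missing = [k for k, v in policy.items() if claims.get(k) != v]
        if missing:
            raise ClaimsRequired(f"requester must still satisfy: {missing}")
        return secrets.token_urlsafe(32)

    # The user's policy: "only requesters who claim to be over 18 and agree
    # to my terms get in" (no pre-registration of the requester required).
    policy = {"over_18": True, "agrees_to_terms": True}
    token = uma_issue_token({"over_18": True, "agrees_to_terms": True}, policy)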

Dynamic trust

UMA has a need to support lots of dynamic matchups between entities.

  • OAuth: The client and server sides must meet outside the resource-owner context ahead of time (not mandated, just not dealt with in the spec). UMA: A requester can walk up to a protected resource and attempt to get access without having registered first.
  • OAuth: The resource server meets its authorization server ahead of time and is tightly coupled with it (not mandated, just not dealt with in the spec). UMA: The authorizing user can mediate the introduction of each of his hosts to the authorization manager he wants it to use.
  • OAuth: The resource server validates tokens in an unspecified manner, assumed locally. UMA: The host has the option of asking the authorization manager to validate tokens in real time. (See the sketch after this list.)
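
That last option might look something like this from the host’s side. The validation endpoint, parameters, and response shape here are my assumptions for illustration; the draft protocol’s exact messages may differ.

    # Sketch of real-time validation: the host presents the requester's
    # token to the AM for a verdict, authenticating with its own host
    # access token. Endpoint and response shape are hypothetical.
    import requests

    AM_VALIDATE_URL = "https://am.example.com/token/validate"  # hypothetical

    def host_validates_token(requester_token, host_token):
        resp = requests.post(
            AM_VALIDATE_URL,
            headers={"Authorization": f"Bearer {host_token}"},
            data={"token": requester_token},
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json().get("valid", False)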

Protocol

UMA started out life as a fairly large “application” of OAuth 1.0. Over time, it has become a cleaner and smaller set of profiles, extensions, and enhanced flows for OAuth 2.0. If any find wider interest, we could break them out into separate specs.

  • OAuth: Two major steps: get a token (with multiple flow options), use a token. UMA: Three major steps: trust a token (host/authorization manager introduction), get a token, use a token.
  • OAuth: User delegation flows and autonomous client flows. UMA: Profiles (TBD) of OAuth flows that add the ability to request claims to satisfy user policy.
  • OAuth: The resource server and authorization server are generally not expected to communicate directly, other than via the access token passed through the client. UMA: The authorization manager gives the host its own access token; the host uses it to supply resource details and to request token validation. (See the sketch after this list.)
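
And here’s a sketch of the extra “trust a token” step: the user introduces the host to the AM of her choice, and the host redeems the user-approved authorization code much as any OAuth client would, keeping the resulting host access token for later use. The names follow generic OAuth 2.0 conventions and are illustrations, not the spec.

    # "Trust a token": the host obtains its own access token at the user's
    # chosen AM, for use in registering resource details and validating
    # requester tokens later. Parameter names follow plain OAuth 2.0 style.
    import requests

    def introduce_host_to_am(am_token_url, client_id, client_secret, code):
        resp = requests.post(
            am_token_url,
            data={
                "grant_type": "authorization_code",
                "code": code,  # produced when the user approved the host at the AM
                "client_id": client_id,
                "client_secret": client_secret,
            },
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json()["access_token"]  # the host access token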

Much work remains to be done; please consider joining us (it’s free!) if you’d like to help us make user-managed access a reality.

Data portability and wagon-circling

One of the breakout tracks at EIC last week was Cloud Platforms and Data Portability. Dave Kearns had asked me to speak for a few minutes on the subject of social data portability before joining Drummond and Christian for a panel discussion.

I brainstormed a bit and suggested that I could comment on the notion of data statelessness, and the continuum of individuals’ data portability on the web. That somehow turned into a boldface uppercase talk called Data Statelessness and the Continuum of Individuals’ Data Portability on the Web. :-) (Hmm, maybe in German that boils down to a single long word…) I thought I’d share those thoughts here.

The Web is a teenager already

People have been pouring content onto it since Web 1.0. That’s long enough for major failures of data portability to have occurred already.

For example, GeoCities started in 1994 (with an offer of 2 whole MB free!), and ended its life in 2009 with about 23 million individual pages — which were at risk of being abandoned.


Archive Team is one of the groups that performed “data portability of last resort”; they’ve managed to resurrect more than a terabyte of all that content… at Geociti.es.


DataPortability.org was formed in 2007, and it advocates being able to “take your data with you” to new services.

The Web 2.0 cocktail is even more potent

It’s a mix of some application’s features plus our own data contributions. The more “social” the application — that is, giving us human-to-human connection benefits — the more we drink.

But there’s always an application in the middle. It knows everything we share — and increasingly, selling access to that information is its business model.

Just a reminder…

Take a look at EFF’s compilation of Facebook privacy policies from 2005 to now.

Recall that a newspaper’s readers traditionally were not its real customers; that would, of course, be the advertisers.

Facebook’s end-users are not its customers.

They’re the product.

[Not that I’m picking on Facebook specifically. Though this news about a Facebook all-hands meeting tomorrow afternoon to “circle the wagons” is interesting…]

Solving the password anti-pattern began a new era of data portability

Was it accidental?

In 2008, Robert Scoble famously discovered that Facebook’s terms of service didn’t allow him to bulk-extract his own contact information, and they cut him off (at which point he got involved in the Data Portability effort!).

In the meantime, Facebook and Yahoo! and AOL and Google and many others have discovered how valuable it is to let third-party apps get access to fresh feeds of your data without your having to reveal your username and password.

They couldn’t exactly let these connections happen without your go-ahead, and so user delegation of authorized access was born — or at least standardized.


BBAuth, OpenAuth, and other proprietary solutions led to OAuth (and its proprietary competitor Facebook Connect) — and now the draft OAuth 2.0, which Facebook already supports.

Third-party services getting access to your data with your okay is tantamount to you getting access through an “agent” — and not just one-time export when you leave, either, but regular fresh access for a variety of purposes. This has turned out to be a Good Thing overall for individuals’ chances at data portability.
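
For the record, that delegated-access pattern looks roughly like this from a third-party app’s point of view. This is a sketch in Python with placeholder URLs and client credentials; exact parameter names vary by provider and by OAuth draft.

    # Delegated access, OAuth 2.0 style: the app never sees your password;
    # it sends you to the service to say yes, then trades the resulting
    # code for a token it can use for fresh access. All URLs/credentials
    # are placeholders.
    import requests

    AUTHZ_URL = "https://social.example.com/oauth/authorize"  # placeholder
    TOKEN_URL = "https://social.example.com/oauth/token"      # placeholder
    CLIENT_ID, CLIENT_SECRET = "my-app", "s3cret"             # placeholders
    REDIRECT_URI = "https://my-app.example.com/callback"

    def authorization_link():
        # Step 1: send the user to the service itself to grant (or refuse)
        # access.
        return (f"{AUTHZ_URL}?response_type=code&client_id={CLIENT_ID}"
                f"&redirect_uri={REDIRECT_URI}&scope=contacts.read")

    def redeem_code(code):
        # Step 2: trade the user's go-ahead for an access token.
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        }, timeout=5)
        resp.raise_for_status()
        return resp.json()["access_token"]

    def fetch_contacts(token):
        # Step 3: regular fresh access for as long as the grant lasts.
        resp = requests.get("https://social.example.com/api/contacts",
                            headers={"Authorization": f"Bearer {token}"},
                            timeout=5)
        resp.raise_for_status()
        return resp.json()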

What is data statelessness?

It’s the ability of a third-party service to think in terms of caching rather than replicating your data, because they can get it whenever they need it.

It’s the ability of a third-party service to add value without having to “own” your data.

It’s the ability for a single source of truth to arise — and for you to choose what it is.

Even weirder, it’s the ability for automatic syncing among a variety of sources of truth to arise — and for you to choose where to inject the first copy. (This is the effect when, say, you tell a bunch of your OAuth-enabled location services that they can all read from and write to each other.)
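
One way a service can realize “caching rather than replicating” is with ordinary HTTP validators: keep only a cached copy plus its ETag, and revalidate against your chosen source of truth on every use. A sketch, with a placeholder URL:

    # Keep a cached copy plus its ETag; revalidate with the source of truth
    # on each use instead of replicating the data. URL is a placeholder.
    import requests

    SOURCE_OF_TRUTH = "https://profile.example.com/me"  # placeholder
    _cache = {"etag": None, "body": None}

    def get_profile(token):
        headers = {"Authorization": f"Bearer {token}"}
        if _cache["etag"]:
            headers["If-None-Match"] = _cache["etag"]  # ask: still fresh?
        resp = requests.get(SOURCE_OF_TRUTH, headers=headers, timeout=5)
        if resp.status_code == 304:                    # unchanged: use the cache
            return _cache["body"]
        resp.raise_for_status()
        _cache["etag"] = resp.headers.get("ETag")      # refresh the cache
        _cache["body"] = resp.json()
        return _cache["body"]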


Federated identity management in the enterprise has been striving for just-in-time delivery of user attributes from authoritative sources for a long time; it’s perhaps ironic that consumer-driven web companies seem to be getting there first.

Enter Data Portability Policy

Along with privacy policies, terms of service, and end-user license agreements, sites should have a (good) data portability policy — and the DataPortability.org folks are working on it.

The project is spearheaded by Steve Greenberg (of stevenwonders.com! that’s stevenwonders.com — that’s S, T, E, … sorry, inside joke among our little Data Without Borders podcast crew).

It addresses issues like:

  • Are your APIs and data formats documented?
  • Do people need to create a new identity for this site, or can they use an existing one?
  • Must people import things into this product, or can the product refer to things stored someplace else?
  • Does this product provide an open, DRM-free way for people to retrieve, or to grant third parties access to, all of the things they’ve created or provided?
  • Will this site delete an account and all associated data upon a user’s request?

Having standard templates for policy of this sort is immensely valuable. (And I can’t resist a mention of how UMA may be able to help us demand the kinds of policies we want our services to follow, in an automated fashion vs. ever having to read legalese.)

End of rant

Exit questions:

Is Facebook’s new Open Graph Protocol, openly published and based on semantic web standards, a good thing for data portability? What relationship does that have to privacy?

And do individuals get more empowered, or less, when lots of newer, smaller social apps flood the market looking for user-delegated authorization to connect with your data?

Munich fuel

To get through the intense European Identity Conference last week in Munich (thanks, Kuppinger Cole folks!), I had to make sure to drink lots of fluids. I’m referring, of course, to coffee, beer, and one extraordinary whisky (thanks, Ping Identity folks!).

[Photo: Bavarian coffee cup – gift from a local friend]

The 2010 edition of the conference was lively and valuable. Here are just a couple of stories about encounters I had there, with more thoughts and info to come.

I had the good fortune to meet Christian Scholz in person for the first time; we participate in the Data Without Borders podcast series together, but in the way of the modern world, had never occupied the same room. Christian was serving as a credentialed event blogger. We hung out together during many EIC sessions, and I learned a lot by seeing the enterprise IdM world through his eyes; we seem to share a strong interest in the idea of radically simplifying IT. (I also learned how he came by the moniker Mr. Topf…) Don’t miss his conference musings.

And I had the great pleasure of meeting UMA’s own Graphics/UX Editor, the talented Domenico Catalano — though I already felt I knew him well! Domenico’s graphical and intellectual work graces a lot of the UMA material (and if you’re going to IIW next week, you’ll see even more of it). What a delight to cement friendships by meeting IRL.

The erudite and prolific author Vittorio Bertocci kindly gave me a copy of his new book, A Guide to Claims-Based Identity and Access Control — and I couldn’t resist asking for an autograph. (Though I was forced to sleep off the week’s excesses on the plane rather than read, this tome is next on my list.)

Finally, I had the opportunity to participate in three panels (data portability, privacy-enhancing technologies, and trust frameworks), and really appreciated the skillz and charm of moderators Dave Kearns and John Hermans.

Thanks and congratulations again to the KC+P gang; it was a heck of a show, and they were ever the gracious hosts. Stay tuned here for more about the week’s events from my perspective.