Is it possible for an enterprise to turn itself inside-out? Apparently so. I’ve got a new post up on the Forrester blogs that discusses the “Zero Trust” aspect of enterprise security that a number of companies are addressing with various clever uses of OAuth.
I’ve got a new post up on my Forrester blog, commenting on CardSpace and the important trends to pay attention to at this juncture.
I’ve just made a big change, joining Forrester Research as a Principal Analyst, and this new adventure is sure to be exciting. It’s an honor to join this stellar organization and work with so many talented folks. I’ll be serving security and risk professionals and will focus primarily on identity and access management, so this move feels like a natural outgrowth of work I’ve been involved in for more than ten years now.
My tenure at PayPal was a great learning experience; I’ll never forget my time there, nor the good friends I made. I also managed to learn a few things while “catching up on life” in the few weeks between gigs. Here are some questions folks have been asking me, with answers:
Q: Are you moving back to the east coast?
A: Nope, I’m still based in the Pacific Northwest, but I will likely be out Boston-way somewhat more often. As for other appearances, you’ll definitely be able to find me at Forrester’s IT Forum 2011 in May, and I’ll be figuring out the situation with other events shortly.
Q: Will you continue to blog here?
A: Yes, though the mix of topics will likely change, as I’ll be contributing industry-related posts to the Forrester blog. I’ll post pointers to those here, and my hope is to step up my writing activity on other topics of interest at Pushing String. And I hope you’ll continue to follow my doings at @xmlgrrl (where the #forrester tag will likely make lots of appearances).
Q: What about User-Managed Access and other innovation-oriented work?
A: The plan is for me to continue in my role as “chief UMAnitarian” and to participate in certain other tech leadership activities as time allows. In the last couple of months we’ve gotten a big influx of active UMA contributors, and we’ve had a burst of progress in the last few weeks on defining how to loosely couple “user-centric” policy enforcement points and policy decision points. So I think we’re well on our way to meeting the goals and timing stated in our charter.
Q: So what did you do on your winter vacation?
A: One of my goals was to “learn one big thing”, so I started learning how to play guitar, under the tutelage of my dear old friend Rich. My original use cases were around communicating better with my Mud Junket bandmates who are actual guitarists, but Rich doesn’t fool around: I have to learn good technique and not take any shortcuts. Luckily, the fret-hand callus crop has finally started to come in.
I also read a great book called The Talent Code, which describes what goes on neurologically in people who seem like once-in-a-lifetime geniuses, and discusses how any skill (like guitar-playing!) can be honed more rapidly through “deep practice” that stimulates myelin growth.
With all this plus a healthy dose of R&R, it feels like I’m learning how to learn all over again.
It’s nearly time for the second annual PayPal X Innovate conference — October 26 and 27 at Moscone Center in SF. The PayPal X developer network not only has the coolest domain known to humankind, but also hosts the Innovate conference, which is all about making the future of money happen.
Praveen Alavilli has slipped me a great discount code for y’all to use: “LETSINNOVATE” will get you $100 off the registration fee.
Ashish Jain and I will be there talking about identity services progress and plans, and also listening intently: we’d love to talk with online retailers and e-commerce developers about how you see digital identity playing a role in your apps and your payment needs.
Folks from Janrain will also be there, discussing social sign-on trends in retail. They’ve posted an excellent roundup of everything you can hear and experience at Innovate, and they also share some news about the OpenID Foundation’s new Retail Advisory Committee.
See you there!
Ian Glazer and I were planning a get-together next week at the OASIS Identity Management conference in D.C., and he suggested we make it a tweetup (bona fides established here). So if you’re in town because of the conference, or just…around, join us at Buffalo Billiards next Monday at 6ish.
The agenda looks solid, and since it’s arranged in a single track, should get some intensity going. I’m looking forward to participating in the privacy/identity/cloud computing session led by Jim Harper on Monday.
(For all pool hustlers flying in, remember: cue sticks are prohibited items…)
Yesterday I had the opportunity to contribute to BrightTALK’s day-long Cloud Security Summit with a webcast called Making Identity Portable in the Cloud.
Some 30 live attendees were very patient with my Internet connection problems, which meant that the slides (large PDF) didn’t advance when they were supposed to and I couldn’t answer questions live. However, the good folks at BrightTALK fixed up the recording to match the slides to the audio, and I thought I’d offer thoughts here on the questions raised.
“Framework provider – sounds suspiciously like an old CA (certificate authority) in the PKI world! Why not just call it a PKI legal framework?” Yeah, there’s nothing new under the sun. The circles of trust, federations, and trust frameworks I discussed share a heritage with the way PKIs are managed. But the newer versions have the benefit of lessons learned (compare the Federal Bridge and the Open Identity Solutions for Open Government initiative) and are starting to avail themselves of technologies that fit modern Web-scale tooling better (like the MDX metadata exchange work, and my new favorite toy, hostmeta). PKI is still quite often part of the picture, just not the whole picture.
“How about a biometric binding of the individual to the process and the requirement of separation of roles?” I get nervous about biometric authentication for many purposes because it binds to the bag of protoplasm and not the digital identity (and because some of the mechanisms are actually rather weak). If different roles and identities could be separated out appropriately and then mapped, that helps. But with looser coupling come costs and risks that have to be managed.
“LDAP, AD, bespoke, or a combination?” Interestingly, this topic was hot at the recent Cloud Identity Summit (a F2F event, unlike the BrightTALK one). My belief is that some of today’s tiny companies are going to outsource all their corporate functions to SaaS applications; they will thrive on RESTfulness, NoSQL, and eventual consistency; and some will grow large, never having touched traditional directory technology. I suspect this idea is why Microsoft showed up and started talking about what’s coming after AD and touting OData as the answer. (Though in an OData/GData deathmatch, I’d probably bet on the latter…)
Thanks to all who attended, and keep those cards and letters coming.
Phil Hunt shared some musings on OAuth and UMA recently. His perspective is valuable, as always. He even coined a neat phrase to capture a key value of UMA’s authorization manager (AM) role: it’s a user-centric consent server. Here are a couple of thoughts back.
In the enterprise, an externalized policy decision point represents classic access management architecture, but in today’s Web it’s foreign. UMA combines both worlds with the trick of letting Alice craft her own access authorization policies, at an AM she chooses. She’s the one likeliest to know which resources of hers are sensitive, which people and services she’d like to share access with, and what’s acceptable to do with that access. With a single hub for setting all this up, she can reuse policies across resource servers and get a global view of her entire access landscape. And with an always-on service executing her wishes, in many cases she can even be offline when an access requester comes knocking. In the process, as Phil observes, UMA “supports a federated (multi-domain) model for user authorization not possible with current enterprise policy systems.”
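The "Alice configures one AM, many resource servers ask it for decisions" pattern can be sketched in a few lines. This is purely illustrative — the class and policy shapes below are my invention, not the UMA wire protocol — but it shows the externalized-PDP idea: resource servers act as enforcement points and defer to the AM Alice chose, even when she's offline.

```python
from dataclasses import dataclass, field

@dataclass
class AuthorizationManager:
    """Toy stand-in for Alice's AM: her single hub for access policy.
    Maps resource -> requester -> scopes Alice has allowed."""
    policies: dict = field(default_factory=dict)

    def set_policy(self, resource: str, requester: str, scopes: set) -> None:
        # Alice crafts her own authorization policies here, once,
        # and reuses them across all her resource servers.
        self.policies.setdefault(resource, {})[requester] = scopes

    def decide(self, resource: str, requester: str, scope: str) -> bool:
        # Called by a resource server (the enforcement point) when an
        # access requester comes knocking; Alice can be offline.
        allowed = self.policies.get(resource, {}).get(requester, set())
        return scope in allowed

# Alice sets up one AM and points multiple resource servers at it
am = AuthorizationManager()
am.set_policy("photos.example/album1", "bob", {"view"})
am.set_policy("calendar.example/free-busy", "travel-app", {"read"})

assert am.decide("photos.example/album1", "bob", "view")
assert not am.decide("photos.example/album1", "bob", "edit")
assert am.decide("calendar.example/free-busy", "travel-app", "read")
```

The point of the sketch is the decoupling: the decision logic lives in one place Alice controls, so she gets a global view of her access landscape rather than per-site policy silos.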
Phil wonders about privacy impacts of the AM role given its centrality. In earlier federated identity protocol work, such as Liberty’s Identity Web Services Framework, it was assumed that enterprise and consumer IdPs could never be the authoritative source of all interesting information about a user, and that we’d each have a variety of attribute authorities. This is the reality of today’s web, expanding “attribute” to include “content” like photos, calendars, and documents. So rather than having an über-IdP attempt to aggregate all Alice’s stuff into a single personal datastore — presenting a pretty bad panoptical identity problem in addition to other challenges — an AM can manage access relationships to all that stuff sight unseen. Add the fact that UMA lets Alice set conditions for access rather than just passively agree to others’ terms, and I believe an AM can materially enhance her privacy by giving her meaningful control.
Phil predicts that OAuth and UMA will be useful to the enterprise community, and I absolutely agree. Though the UMA group has taken on an explicitly non-enterprise scope for its initial work, large-enterprise and small-business use cases keep coming up, and cloud computing models keep, uh, fogging up all these distinctions. (Imagine Alice as a software developer who needs to hook up the OAuth-protected APIs of seven or eight SaaS offerings in a complex pattern…) Next week at the Cloud Identity Summit I’m looking forward to further exploring the consumer-enterprise nexus of federated access authorization.
At the European Identity Conference a little while back, Andre Durand gave a downright spiritual keynote on Identity in the Cloud. His advice for dealing with the angst of moving highly sensitive identity information into the cloud? Ancient Buddhist wisdom.
All experiences are marked by suffering, disharmony, and frustration.
Suffering and frustration come from desire and clinging.
To achieve an end to disharmony, stop clinging.
Chris’s message could be described as “Stop clinging to global PKI for browser security because it is disharmonious.” He reviewed the perverse incentives that fill the certificate ecosystem, and demonstrated that browsers therefore act in the way that will help ordinary users least.
Why, he asked, can’t we convey more usable security statements to users along the lines of:
“This is almost certainly the same server you connected with yesterday.”
“You’ve almost certainly been connecting to the same server all month.”
“This is probably the same server you connected with yesterday.”
“Something seems fishy; this is probably not the same server you connected with yesterday. You should call or visit your bank/whatever to be sure nothing bad has happened.”
Perhaps I was the only one not already familiar with his names for the theory that can make these statements possible: TOFU/POP, for Trust On First Use/Persistence of Pseudonym. Neither of these phrases gets any serious Google search love, at least not yet. But I love TOFU, and you should too. (N.B.: I’m not a big fan of lowercase tofu.) The basic idea is that you can figure out whether to trust the first connection with a nominally untrusted entity by means of out-of-band cues or other met expectations — and then you can just work on keeping track of whether it’s really them the next time.
The neat thing is, we do this all the time already. When you meet someone face-to-face and they say their Skype handle is KoolDood, and later a KoolDood asks to connect with you on Skype and describes the circumstances of your meeting, you have a reasonable expectation it’s the right guy ever after. And it’s precisely the way persistent pseudonyms work in federated identity: as I’ve pointed out before, a relying-party website might not know you’re a dog, but it usually needs to know you’re the same dog as last time.
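The TOFU/POP mechanic is simple enough to sketch: pin whatever key material a server presents on first contact, then compare on every later contact. This is a minimal illustration, assuming a hypothetical local JSON pin store (real implementations, like SSH's known_hosts handling, are more careful about storage and key rotation).

```python
import hashlib
import json
import os

STORE = "known_servers.json"  # hypothetical local pin store

def fingerprint(public_key_bytes: bytes) -> str:
    """Stable fingerprint of whatever key material the server presents."""
    return hashlib.sha256(public_key_bytes).hexdigest()

def check_server(name: str, public_key_bytes: bytes) -> str:
    """Trust On First Use: pin the key the first time we see this server,
    then insist on Persistence of Pseudonym ever after."""
    pins = {}
    if os.path.exists(STORE):
        with open(STORE) as f:
            pins = json.load(f)
    fp = fingerprint(public_key_bytes)
    if name not in pins:
        pins[name] = fp  # first use: remember this key
        with open(STORE, "w") as f:
            json.dump(pins, f)
        return "first use: pinned; verify out of band if you can"
    if pins[name] == fp:
        return "almost certainly the same server as last time"
    return "key changed: probably NOT the same server; investigate"
```

Note how the three possible return values map directly onto the usable security statements above: nothing here claims to know *who* the server is, only whether it's the same pseudonymous party as before.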
Knowing of the desire to cling to global PKI in an environment where it’s simply not working for us, Chris proposes letting go of trust — and shooting for “trustiness” instead. If it successfully builds actual Internet trust relationships vs. the theoretical kind, hey, I’m listening. There’s a lot of room for use cases between perfect trust frameworks built on perfect certificate/signature mechanisms and plain old TOFU-flavored trustiness, and UMA and lots of other solutions should be able to address the whole gamut.
Surely inner peace is just around the corner.
In the absence of any other controls, relying parties for identity info would like to be handed as much user data as they can get. It can’t hurt to have a little extra, right? But as we pointed out in the UMA webinar a few weeks ago, when web apps think they’ve gotten something valuable out of us, sometimes they’re just mistaken. When a site wants too much info and makes us give it to them in a self-asserted fashion (oh, those asterisked fields!), we just…lie. In fact, you can tell the site doesn’t do anything really important with that info if you can lie and get away with it.
Case in point: The crap that fills the fields of 77% of domain name registrations. (The Register’s headline: Whois in charge? ICANN’t tell. Heh.)
This is where “attribute assurance” could come in, involving a federated identity system that arranges for the data to be supplied by trusted issuers in some fashion. Attribute assurance is akin to identity assurance (as discussed previously here), except that it’s about the quality of specific types of information and their binding to the individual in question. The world hasn’t yet come up with a generic way of handling such assurance, though it’s been a topic of serious discussion. The Tao of Attributes workshop was a great start.
In the domain name registration case, one of the big reasons why people don’t like to supply their real information is that it’s published far and wide — anyone can learn what your address is if you provide your real one. Hence the lying, at least in quite a lot of the cases. This is a real-world situation where needs for level of assurance (LOA) are in a tug-o’-war with needs for level of protection (LOP).
What’s LOP? In short, it’s the reciprocal of LOA. Whereas relying parties want to ensure that the data they’re getting is good when they get it, data subjects and their identity providers want to ensure that the data will be protected and treated with respect when it gets there.
(You can read more about LOP, and some of the elements that need to be lined up to solve it in an Internet-scale way, in The Open Identity Trust Framework (OITF) Model, a white paper I was honored to co-author along with Tony Nadalin, Drummond Reed, Don Thibeau, and our illustrious managing editor Mary Rundle. The proposed model suggests some ways to organize the Pushmi-pullyu nature of federated identity partnerships to raise the quality, and possibly tamp down the quantity, of identity attributes floating around.)
Everybody’s talking about identity assurance these days, meaning, generically, the confidence a relying party needs to have in the identity information it’s getting about someone so that it can manage its risk exposure.
A lot of the conversation to date has revolved around NIST Special Publication 800-63 (newer draft version here) and its global cousins, which boil down assurance into four levels — hence all the loose talk of LOA (for “level of assurance” or sometimes AL for “assurance level”), even when people aren’t focusing on specific levels or even systems of assurance numbering. NIST 800-63 is intended to answer the use cases defined in OMB Memo 04-04, which deals with making sure users of the U.S. Federal government’s online systems are who they purport to be. Here’s an example given in OMB M-04-04 for one particular need for level 3 assurance:
A First Responder accesses a disaster management reporting website to report an incident, share operational information, and coordinate response activities.
And here’s how NIST 800-63 defines assurance (I’m quoting the Dec 2008 draft here; strangely, the official Apr 2006 version doesn’t include a formal definition):
In the context of OMB M-04-04 and this document, assurance is defined as 1) the degree of confidence in the vetting process used to establish the identity of an individual to whom the credential was issued, and 2) the degree of confidence that the individual who uses the credential is the individual to whom the credential was issued.
So there’s an identity proofing component at registration time that nails down the precise real-world human being in question, and there’s a security/protocol soundness/authentication component at run time that establishes that the credential is being waved around legitimately. These get added up into four levels defined roughly like this (leaving aside the security and protocol soundness factors):
(Here, “same unique user” means that the same user can be correlated by the RP across sessions. And “verified name provided” means that the user’s real-world name is exposed to the RP, versus some sort of pseudonym; level 1, where no proofing is done, is implicitly pseudonymous, while level 2 offers a choice.)
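My reading of the rollup, reduced to data (this is a paraphrase of the level properties as described above and in the NIST draft, not official text, and it sets the security and protocol-soundness factors aside):

```python
# Rough sketch of the four NIST 800-63 levels, proofing and naming only.
# "rp_sees" describes what identifier style is exposed to the relying party.
LEVELS = {
    1: {"proofing": "none", "rp_sees": "pseudonym (implicitly)"},
    2: {"proofing": "some", "rp_sees": "pseudonym or verified name (RP's choice)"},
    3: {"proofing": "stronger", "rp_sees": "verified name"},
    4: {"proofing": "strongest", "rp_sees": "verified name"},
}

# All four levels support simple cross-session correlation ("same unique
# user"); only level 1 guarantees the RP never sees a verified name.
assert LEVELS[1]["proofing"] == "none"
assert "verified name" not in LEVELS[1]["rp_sees"]
```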
I don’t mean at all to criticize this rolled-up four-level approach. It seems to have met the needs set out in M-04-04, and it predated both the “user-centric” movement (Dale Olds has a nice rundown of its use cases here) and truly modern notions of online privacy.
But I think we need more clarity about assurance use cases and terminology, for two reasons: One is to help ensure that identity providers can give RPs what they need, rather than what might just be a poor approximation based on NIST 800-63’s fame. The other is to help ensure that IdPs give RPs only what they need, since more assurance is likely to involve more personal information exposure.
To that end, let me explain some assurance use case buckets I’m seeing in the wild, and their relationship to the NIST requirements and each other. First, here are some use case buckets hiding in plain sight in the NIST levels:
Simple cross-session correlation: While NIST 800-63 doesn’t formally include “same unique user” as a goal, it’s in there:
Level 1 – Although there is no identity proofing requirement at this level, the authentication mechanism provides some assurance that the same claimant is accessing the protected transaction or data.
Funnily enough, cross-session correlation (without the baggage of proofing) is a key requirement of many enterprise and Web federated identity interactions. Lots of sites don’t need or want to know you’re a dog; they just need to know you’re the same dog as last time. This way, they can authorize various kinds of ongoing access and give you something of a personalized experience across sessions. Though NIST treats this as an also-ran and couples it with weak authentication in level 1, other use cases may have reason to match up “mere correlation” with higher authentication.
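One common way federated identity systems deliver "same dog as last time" without leaking who the dog is: the IdP hands each RP a pairwise pseudonym, for example a keyed hash over the user and RP identifiers. A minimal sketch, with an obviously illustrative key:

```python
import hashlib
import hmac

# Hypothetical IdP-held secret; illustrative only, never hardcode real keys.
IDP_SECRET = b"demo-key-not-for-production"

def pairwise_pseudonym(user_id: str, rp_id: str) -> str:
    """Stable per-(user, RP) identifier: an RP can correlate the same user
    across sessions, but two RPs can't link their identifiers to each other."""
    msg = f"{user_id}|{rp_id}".encode()
    return hmac.new(IDP_SECRET, msg, hashlib.sha256).hexdigest()

# Same user, same RP: the same handle every session
assert pairwise_pseudonym("alice", "shop.example") == \
       pairwise_pseudonym("alice", "shop.example")
# Same user, different RPs: unlinkable handles
assert pairwise_pseudonym("alice", "shop.example") != \
       pairwise_pseudonym("alice", "news.example")
```

This gives an RP exactly the "mere correlation" property — enough for ongoing authorization and personalization — and nothing more, and it composes with whatever authentication strength the use case demands.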
Identity proofability: If an RP can trust that it’s dealing with a human being who has some level of serious representation in civil society, it’s a powerful kind of assurance for lots of purposes. More about this below.
Real-world identity mapping: When level 3 or 4, or verified-name level 2, is used, this means a user’s real name is used to build up the unique identifier that the RP sees, and this verified name leaks PII like crazy, even if it’s not itself unique. (As far as I know, I’m the only Eve Maler out there…) This is strong stuff, and in a modern federated identity environment, it is to be hoped that most RPs simply don’t need this information. (John Bradley — that is, the John Bradley who works with the U.S. government on its ICAM Open Identity Solutions program — tells me he believes pseudonyms should be an acceptable choice all up and down the four levels, indicating that this use case bucket is fairly rare.)
Now things get really interesting, because there are other use case buckets that you can sort of see in this matrix if you squint, but really they’re just different:
Anonymous authorization/personalization: This is the flip side of cross-session correlation. OMB M-04-04 talks about “attribute authentication” and the potential for user attributes to serve as “anonymous credentials” (where an RP simply can’t know if this is the same unique user coming back but can still base its authorization decisions and personalization actions on the veracity of the attributes it’s getting). The attributes in question can range from “this user is over 18” to “this user is a student at University ABC” to “this user is of nationality XYZ”.
Ultimately M-04-04 puts the whole area of attribute authentication firmly out of scope, but lots of folks have been picking at the general problem of attribute assurance in the last several months — like Internet2 in its Tao of Attributes workshop, and the Concordia group in a forthcoming survey (stay tuned for more on that).
This bucket often requires being able to check who issued some assertion or claim, and considering whether they’re properly authoritative for that kind of info. The way I think about this is: Who has the least incentive to lie? That’s why you can be said to be truly authoritative for self-asserted preferences such as “aisle vs. window”. Any other way lies madness (“What is your favorite color?” “Blue. No yel– Auuuuuuuugh!”).
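The "least incentive to lie" check can be sketched as a toy registry mapping attribute types to the issuers considered authoritative for them. Everything here is illustrative — the registry contents and names are my invention — but it captures the shape of the decision an RP makes before trusting a claim:

```python
# Toy registry: which issuers are authoritative for which attribute types,
# per the "least incentive to lie" heuristic. Contents are illustrative.
AUTHORITATIVE_FOR = {
    "age_over_18": {"gov.example"},
    "enrolled_student": {"university-abc.example"},
    "seat_preference": {"self"},  # self-asserted preferences are fine
}

def attribute_trusted(attr_type: str, issuer: str) -> bool:
    """Accept a claim only if its issuer is authoritative for that type."""
    return issuer in AUTHORITATIVE_FOR.get(attr_type, set())

assert attribute_trusted("seat_preference", "self")       # aisle vs. window
assert attribute_trusted("age_over_18", "gov.example")
assert not attribute_trusted("age_over_18", "self")       # madness lies here
```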
Of course, there are cases where an RP really does need attribute assurance along with other kinds, like correlation or identity mapping. And don’t forget that it takes precious little in the way of personal information for an RP to figure out “who you really are” anyway. (Check out this cool Tao of Attributes diagram, which touches on all these points.)
Financial engagement: Sometimes an RP just wants some assurance they’re dealing with someone who has sufficient ties to the world’s legitimate financial systems not to screw them over entirely. It turns out that identity proofability can often be a serviceable proxy for this kind of confidence. (Financial account numbers are one kind of proofing documentation in NIST 800-63.) And the reverse is also true: financial engagement can sometimes give a modicum of confidence in identity proofability.
Interestingly, this bucket can be useful even without any of the other kinds, partly because the parties can lean on a mature parallel financial system instead of just lobbing identifiers and attributes all over the place. For example, users often “self-assert” credit card numbers (which RPs then validate out of band with the card issuer), or use third-party payment services like PayPal (where the service provider does a lot of the risk-calculation heavy lifting).
No doubt there are other assurance use cases. Understanding them more deeply can, I think, help us get better at sharing the truth and nothing but the truth about people online — without having to expose the whole truth.