This is a summary of the discussions, design decisions, goals, and direction that came out of the OpenStack Juno Design Summit in Atlanta (spring 2014) with regard to Keystone.
Consider this to be a sequel to my similar coverage of the Icehouse summit.
(This is Juno, Georgia. There's not much to see.)
PKI token compression
PKI tokens are large, base-64 encoded, signed JSON objects containing an identity, authorization attributes, and potentially a catalog of available cloud services. In fact, deployments with a large number of services and endpoints frequently push beyond 8,192 bytes, causing various issues with web servers such as exceeding compile-time header size limits in Apache httpd. By compressing the signed JSON blob before base-64 encoding it, we believe we can give ourselves additional headroom below the 8,192 byte barrier.
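As a rough illustration of the idea (this is a self-contained sketch, not Keystone's actual signing pipeline), deflating a repetitive JSON payload before base-64 encoding it recovers a lot of space, because service catalogs are highly redundant:

```python
import base64
import json
import zlib

# Hypothetical token payload; real PKI tokens carry a full, signed service
# catalog, which is similarly repetitive and compresses well.
payload = json.dumps({
    "token": {
        "user": {"id": "u1", "name": "demo"},
        "roles": [{"name": "member"}],
        "catalog": [
            {"type": "identity", "endpoints": ["http://localhost:5000/v3"] * 3}
        ] * 20,
    }
}).encode("utf-8")

plain = base64.urlsafe_b64encode(payload)
compressed = base64.urlsafe_b64encode(zlib.compress(payload, 9))

# The compressed form should come in far below the plain form.
print(len(plain), len(compressed))
assert len(compressed) < len(plain)
```

The same decompress-then-verify step happens on the validation side, which is why both the issuer and the auth_token middleware need to understand the new format before it can become the default.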
During the summit, we landed a client-side patch to introduce the ability to generate and validate compressed tokens. Next, we'll need to:
- Take advantage of the new client API to generate these tokens in Keystone.
- Release a new version of the client (likely as either 0.9.0 or 0.10.0).
- Bump the minimum required version of keystoneclient.middleware.auth_token to validate compressed tokens.
- Change Keystone's default behavior to issue compressed tokens.
Non-persistent PKI tokens
Deployers despise the SQL token table: it has unbounded growth unless you're aggressively purging it with keystone-manage token_flush and using a short token duration (like an hour or two, rather than Havana's default of 24 hours), and the performance problems are compounded by the fact that clients are allowed to generate as many tokens as they need. The memcached token backend isn't much less of a headache.
We made a step in the right direction during Icehouse by simply reducing the default token duration to an hour. This means you have fewer active tokens in rotation, as most tokens aren't used beyond the first hour of their life anyway (generally because we don't have smart enough tools to re-use existing tokens).
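The Icehouse default corresponds to this setting in keystone.conf (the value is in seconds; Havana's default was 86400):

```ini
[token]
expiration = 3600
```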
The purpose of the token table is to provide a means for validating and invalidating tokens (valid tokens exist in the table; invalid tokens are either marked as such or are simply absent). But PKI tokens are self-validating, given the corresponding signing key. Invalid tokens can be described by revocation events, as introduced in Icehouse. The last step will be internally refactoring Keystone to remove any assumption that the token table exists, and allow deployers to opt into ephemeral tokens.
Thus, Keystone can emit tokens all day long without requiring any means to store them, beyond the network itself.
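A minimal sketch of the resulting model, using HMAC as a stand-in for CMS signatures purely to stay self-contained (Keystone actually uses a signing certificate and CA): validity becomes a signature check plus a scan of revocation events, with no table lookup anywhere.

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"server-side-secret"  # stand-in for the real signing key material

def issue(payload: dict) -> dict:
    """Sign a payload; note that nothing is persisted server-side."""
    body = json.dumps(payload, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

# Revocation events (as introduced in Icehouse) describe the *invalid* tokens.
revocation_events = [{"user_id": "u2"}]

def validate(token: dict) -> bool:
    expected = hmac.new(SIGNING_KEY, token["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # the signature check replaces the "token exists" lookup
    payload = json.loads(token["body"])
    if payload["expires_at"] < time.time():
        return False
    # matching a revocation event replaces the "token marked invalid" lookup
    return not any(ev["user_id"] == payload["user_id"]
                   for ev in revocation_events)

good = issue({"user_id": "u1", "expires_at": time.time() + 3600})
revoked = issue({"user_id": "u2", "expires_at": time.time() + 3600})
print(validate(good), validate(revoked))  # True False
```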
Composite tokens

Imagine that I have a key with special properties, you have a key with special properties, and just like nuclear submarines in the movies, we both need to turn our keys at once to accomplish a task. The twist is that there's only one lock in which to place a single key.
That's the basic use case we've been seeing in several places throughout OpenStack. A service user carries one role, an end-user carries another role, and only if both of them combine their role sets can a specific task be performed on a second service. It's up to the second service's policy enforcement to require two roles to perform the task.
Assuming a constraint wherein the second service only knows how to deal with a single X-Auth-Token to determine authorization, the burden of a solution is placed onto Keystone to produce a new token representing the composite authorization of two input tokens.
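A hypothetical sketch of that enforcement, with made-up role names and a made-up volume:attach rule: neither token's role set satisfies the policy alone, but the composite token carrying the union of both does.

```python
# Made-up role sets for the two input tokens.
service_token_roles = {"service"}
end_user_roles = {"member"}

def composite_roles(*role_sets: set) -> set:
    """The composite token's roles: the union of the input tokens' roles."""
    merged = set()
    for roles in role_sets:
        merged |= roles
    return merged

def policy_allows(action: str, roles: set) -> bool:
    # Illustrative policy: this action demands BOTH roles at once.
    required = {"volume:attach": {"service", "member"}}
    return required[action] <= roles

print(policy_allows("volume:attach", service_token_roles))   # False: one key
print(policy_allows("volume:attach",
                    composite_roles(service_token_roles,
                                    end_user_roles)))        # True: both keys
```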
Hierarchical multitenancy

Hierarchical multitenancy, the biggest, scariest term in OpenStack, colored almost every discussion around Keystone at the Juno summit and has the potential to impact every other OpenStack service. It's significant enough to warrant its own discussion.
Identity API v3 everywhere
We're on the road to deprecating, and ultimately dropping support for, Identity API v2. That means we need to work to increase adoption of API v3, and that work starts with:
- Continue improving the v3 featureset in our client.
- Replace auth code in all other clients and services by consuming python-keystoneclient, instead of re-inventing the wheel in each project. We're focusing first on the integrated programs.
- Improve documentation for python-keystoneclient so third parties can more easily leverage it.
- Document low-level API comparisons for the most common authentication-related operations, allowing third parties who are unable to integrate with python-keystoneclient to make a painless transition.
- Migrate Swift ACLs from a highly flexible Tenant ID/Name basis, which worked reasonably well against Identity API v2, to strictly be based on v3 Project IDs. The driving requirement here is that Project Names are no longer globally unique in v3, as they're only unique within a top-level domain.
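As an example of the kind of low-level comparison described above, here are the password-authentication request bodies for the two APIs, per the Identity API specifications (all names and values are placeholders). Note how v3 forces a project name to be qualified by a domain, which is exactly the uniqueness change driving the Swift ACL migration:

```python
import json

# v2: POST /v2.0/tokens -- a tenant name alone is globally unique
v2_body = {
    "auth": {
        "passwordCredentials": {"username": "demo", "password": "secret"},
        "tenantName": "demo-project",
    }
}

# v3: POST /v3/auth/tokens -- a project *name* must be qualified by a
# domain, since names are only unique within a domain in v3
v3_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"id": "default"},
                    "password": "secret",
                },
            },
        },
        "scope": {
            "project": {"name": "demo-project", "domain": {"id": "default"}},
        },
    }
}

print(json.dumps(v3_body, indent=2))
```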
Identity federation

Joe Savak, Brad Topol, Steve Martinelli and Jorge Williams gave a great talk at the conference reviewing what we accomplished with regard to federation during Icehouse, and provided a few hints as to what's next.
From the deployer's perspective, the Icehouse release of Keystone can be run in Apache httpd with Shibboleth (mod_shib) configured to trust one or more external identity providers (thankfully, most of the configuration difficulty is in mod_shib, which already has excellent documentation). You then configure Keystone to map SAML v2 assertions to Identity API v3 user groups, which have (for example) role assignments on projects.
From a federated user's perspective:
- The user attempts to reach a special deployer-defined URL on Keystone.
- The user is redirected to authenticate with the appropriate external IdP by Shibboleth.
- The user returns to Keystone with a SAML document.
- Shibboleth parses the SAML document for Keystone.
- Keystone generates an unscoped/unauthorized "federated" token.
- The user requests a list of accessible projects from Keystone.
- The user generates scoped/authorized "federated" tokens that may be used against the rest of OpenStack.
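The steps above can be sketched as follows; the function bodies and names here are illustrative stand-ins, not real Keystone interfaces. The point is the shape of the flow: an unscoped federated token, a project listing (in Icehouse, via GET /v3/OS-FEDERATION/projects), then a rescope.

```python
def federated_login(idp_assertion: dict) -> dict:
    # Steps 1-5: Shibboleth maps the IdP's SAML assertion to groups, and
    # Keystone issues an unscoped "federated" token.
    return {"methods": ["saml2"],
            "groups": idp_assertion["groups"],
            "project": None}

def list_projects(unscoped: dict) -> list:
    # Step 6: ask Keystone which projects this federated user can access
    # (illustrative response).
    return ["proj-a", "proj-b"]

def rescope(unscoped: dict, project_id: str) -> dict:
    # Step 7: exchange the unscoped token for one scoped to a project,
    # usable against the rest of OpenStack.
    return dict(unscoped, project=project_id)

unscoped = federated_login({"groups": ["g1"]})
scoped = rescope(unscoped, list_projects(unscoped)[0])
print(scoped["project"])  # proj-a
```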
With some groundwork laid for identity federation in Icehouse, support for Kerberos and OpenID Connect will both likely land in Juno. Kerberos support will also likely require deploying behind Apache, but OpenID Connect may not.
The user experience on the CLI is an interesting one if the federated user is not aware of the project, or projects, they have access to. The federation auth flow always results in an unscoped token, so we need to work on better client-side caching of the unscoped token to avoid requiring a second round of federated authentication (which may be a long process). We'll also need to present the federated user with a list of all the projects they have access to on the CLI, and properly rescope the federated token.
At the design summit, we mostly discussed the impact of introducing federation, and what we need to do next:
- Horizon needs a new interface to direct unauthenticated users to the correct identity provider, potentially replacing the current login screen.
- python-keystoneclient needs to implement a federated authentication plugin (or two), provide CRUD operations for IdP, protocol and attribute mapping configuration, and manage unscoped, federated tokens.
- python-openstackclient needs to expose a federated authentication flow on the CLI.
- We need to get a federation configuration into DevStack, so that we can gate on a federated authentication flow using Tempest.
- Given that Keystone is no longer acting as the only identity provider in an OpenStack deployment, services will now be unable to retrieve additional user data from Keystone using the identity data in federated tokens. We'll need to work to ensure no other services have made assumptions about our legacy behavior.
Locally managed identities
Keystone is not a first-class identity provider today. We don't support password policies such as complexity validation, unique history enforcement, expiry, recovery, or lockout after multiple authentication failures. We don't support expressing particularly complicated organizations of users. The community has thus far avoided re-inventing LDAP.
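For illustration only, a password complexity check of the sort Keystone lacks might look like this; the rules below are arbitrary examples, not any proposed Keystone policy:

```python
import re

def complexity_ok(password: str) -> bool:
    """Arbitrary example policy: length plus mixed character classes."""
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
    )

print(complexity_ok("Tr0ub4dor"), complexity_ok("password"))  # True False
```

Multiply this by history enforcement, expiry, recovery, and lockout, and the complexity cost to the first two use cases below becomes clear.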
Today, our primary use cases for identity, in order of significance, are:
- Leveraging existing external identity data
- Managing service identities for inter-service communication
- Providing a source of user identity in a free-standing deployment
Improving support for the third use case will also improve our support for the second use case, but will ultimately come at the expense of the first use case, due to an overall increase in complexity and user expectations.
We decided in Atlanta that it makes the most sense to split the existing identity provider API from the APIs for token generation and assignment management, prior to introducing any additional complexity that would be viewed as an API extension to the existing core API.
Reasonable configuration defaults
Keystone's upstream defaults seem to generally fit deployers' expectations for "reasonableness," defined to be, at minimum, the ability to run a default configuration out of the box without modification beyond specifying other pieces of required infrastructure that cannot be assumed ahead of time (such as a SQL server or messaging backend).
Downstream, however, there appears to be a significant amount of confusion about the defaults that Keystone actually sets, leading me to suspect that packagers are not using the latest sample configuration (at all) from Keystone for new deployments.
API conventions

This is a broad topic that includes issues like discovery, pagination, API extensions, etc. The high-level conclusion coming out of the design summit was that the OpenStack Technical Committee would support cross-project API design conventions in openstack/governance for new API development moving forward. Well-justified exceptions to the defined conventions are acceptable.
Centralized quota management

It seems desirable to centralize quota management across multiple services into a single "endpoint." Whether that endpoint lives inside Keystone or is a standalone service has been up for debate, but it seems like the best fit would be a discrete service.
Unlike an initial prototype of such a service (Boson), a centralized quota service should not require any initial registration of quota limits, nor make it a rigid goal that quotas be enforced strictly (they may be enforced strictly, but given the desired distributed nature, that would be difficult to guarantee with acceptable performance).
Although there is a desire from the community to make this happen, the easiest approach forward (given the available manpower) is to improve the existing quota infrastructure. If nothing else, we'll continue learning about our own quota needs, and be able to better define use cases for a discrete service in the future.
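A toy sketch of what non-strict enforcement means in practice: a plain check-then-act counter. Under concurrency, two callers can both pass the check and slightly overrun the limit; avoiding that would require serializing every consumer behind the quota service, which is the performance cost the design above declines to pay.

```python
class Quota:
    """Best-effort quota: no registration step, no strict guarantee."""

    def __init__(self, limit: int):
        self.limit = limit
        self.usage = 0

    def try_consume(self, amount: int) -> bool:
        # Check-then-act without a global lock: concurrent callers could
        # both pass this check, so enforcement is best-effort by design.
        if self.usage + amount > self.limit:
            return False
        self.usage += amount
        return True

q = Quota(limit=10)
print(q.try_consume(8), q.try_consume(5), q.try_consume(2))  # True False True
```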
Centralized policy management
Keystone currently provides an unused /v3/policies API that can be used to centralize policy blob management across OpenStack. Use cases discussed at the summit include:
- Centrally managed policies
- Different policies per region
- Stacking policies per project