App Security: Throw Out the Org Chart!

Traversing The Org Chart
"Only administrators can add users-- no exceptions! ...except Bob in accounting, but that's because he's covering for Sally. But only until February. And this sort of arrangement might happen again. But most of the time, it won't. I mean.. ninety-nine point nine percent of the time. But there might be exceptions... ".

Sound like a requirement you've heard before? How did you handle it?

In an earlier post, I stated that all security models are idiosyncratic, and that the way you design for security must reflect the nuances and -isms of your organization. It's easy to mistake the form used to express the model (HR records, existing databases, or some XML schema) for the security model itself, and if you do, you risk an uphill battle getting your organization (and I mean the people here, not the boxes and circles on an org chart) to accept the result.

All of this has less to do with how we design software and more to do with the way people organize into groups.

Groups and hierarchies are never fixed-- fluctuations occur, even if the rate of change varies from organization to organization. We see this all the time in our everyday interactions: associations can be temporary, and two groups may need to work together on occasion (but only, say, within a certain context). Ownership can cut across hierarchies, and influence can shift from one group to another based on other environmental factors.
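To make the "only within a certain context" idea concrete, here's a hypothetical sketch (team and project names invented) of a cross-team grant scoped to a shared context rather than granted across the board:

```python
# Hypothetical sketch: two teams may collaborate, but only within an
# explicitly shared context (e.g. a joint project), not everywhere.
shared_contexts: dict[tuple[str, str], set[str]] = {
    ("engineering", "marketing"): {"spring-launch"},  # invented example
}

def can_collaborate(team_a: str, team_b: str, context: str) -> bool:
    # Normalize the pair so the check is order-independent.
    key = tuple(sorted((team_a, team_b)))
    return context in shared_contexts.get(key, set())

print(can_collaborate("marketing", "engineering", "spring-launch"))  # True
print(can_collaborate("marketing", "engineering", "payroll"))        # False
```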

A well designed security model needs to accommodate these kinds of factors (even if that means fluidity in some areas, and stringency in others). Handling exceptions to the original security model is key, and the sooner people discuss them while building a piece of software, the better.

The bad news is that your IT department has chosen to see your company through a fixed lens, and will attempt to enforce the same level of stringency across the organization as a whole. But can you blame them? Two department heads suddenly decide to pool their resources for a period of time to make a deadline, requiring a temporary loosening of security restrictions, and suddenly the onus is on IT to work late hours dealing with the fallout. Something that might have started as a simple conversation between two managers inadvertently puts an otherwise stable security model at risk.

So why do we expect rigid specifications when solving the problem of security in the first place? After all, these specifications often reflect only a snapshot of the structure of an organization, and rarely its propensity for change.

Colleagues of mine cite external factors as the driving force behind the need for stringent security models in the enterprise (identity theft, liability, or a third-party requirement). I don't mean to discount the importance of these types of requirements, but the big risk here is mistaking contractual specifications (e.g. "PCI compliance" or "HIPAA compliance") for a security model. The two are not the same thing.

All of this leads me to the conclusion that a contractual specification is not a substitute for a security model.

The former specifies an agreement between organizations about how to handle certain business transactions, while the latter dictates user roles and access rights and, even more importantly, the process by which authorization can change based on fluctuations within the organization.
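That change process can itself be made explicit in the model. A minimal sketch (all names and fields invented) where every authorization change records who approved it and why, so a temporary arrangement like Bob's leaves a trail instead of an ad-hoc edit:

```python
from datetime import datetime, timezone

# Hypothetical sketch: authorization changes go through a recorded
# process (approver, reason, timestamp) rather than direct edits.
audit_log: list[dict] = []
roles: dict[str, set[str]] = {}

def grant_role(user: str, role: str, approver: str, reason: str) -> None:
    roles.setdefault(user, set()).add(role)
    audit_log.append({
        "user": user,
        "role": role,
        "approver": approver,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    })

grant_role("bob", "add-users", approver="head-of-accounting",
           reason="covering for Sally until February")
print("add-users" in roles["bob"])  # True
```

However it's expressed, the useful property is that "who can change access, and how" is part of the model-- not something negotiated in a hallway.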

Before you choose a security model for your application(s), you have to understand how the organization can change. It may be useful to look at an org chart to build a vocabulary, but in most cases you are just as well served by tossing it aside.