Friday, June 27, 2008

HMRC data loss findings

When the UK government, more specifically HM Revenue & Customs (HMRC), lost CDs containing 25 million child benefit records last year, it was front-page news for a whole week. I wrote about it at the time and gave my usual two cents.

They've been back in the news this week, as the findings from the independent investigations into the incident have just been released. I've been watching Prime Minister Gordon Brown and Chancellor Alistair Darling back-pedal in parliament all week under attack from the opposition over the issue.

It's not too difficult to find the story online, but here (link 1, link 2, link 3) are a few to get you started.

One of the findings stated that:
"Staff found themselves working on a day-to-day basis without adequate support, training or guidance about how to handle sensitive personal data"

While another report stated that there was
"no visible management of data security at any level".
The reports also concluded that no single person working for HMRC was to blame.

If you don't feel like reading my lengthy post on the incident last year, I'll highlight a section from that post here:
The chances of something like this occurring would have been far less if HMRC had properly implemented the following (in order of importance):
  1. Decent security awareness training and education - User awareness will drastically reduce bad practices. People don't want to do the wrong thing. They just don't know when they are doing the wrong things.
  2. More security training and education - Keep it fresh and up-to-date. Things change VERY quickly in the IT security game. It also helps to remind people from time to time that security is important. It NEEDS to be part of corporate culture because otherwise, things just fall in a heap.
  3. Properly defined identity and resource/data access policies - Know what systems, applications, resources and data you need to protect and who should have access to them. Without this, all the technology in the world will not help.
  4. Properly implemented policies supported by relevant technology solutions - Policies alone will not protect you against the bad guys and the "idiot" (too stupid to understand the security policies) or "lazy" (can't be bothered reading the security policies) user. There are also many of us who fall into the "I know I shouldn't be doing this but I'm not doing this as a bad guy - I just want to make my job easier" category.

Looks like I wasn't too far off the mark. At the core of the failure was indeed the age-old problem of minimal (or no) security awareness training and education, coupled with a lack of management and controls.

Apparently the government have been doing something about it and have in fact finished implementing some of the initiatives. It's not surprising given the amount of publicity the incident generated and the beating they got over it. Hopefully they keep at it and also apply the same controls across other departments (unlikely to happen anytime soon, but one can hope).

Now, if only they could figure out how to stop employing senior officials who are idiots and leave top secret documents on trains (see here and here).

Wednesday, June 25, 2008

Why does your organisation buy enterprise security software?

The answer might seem obvious, but it's not.

I had a very interesting chat with Eurekify founder Dr. Ron Rymon the other day about a multitude of things, including the GRC market at large, as a follow-up to my post last week regarding CA. One thing I didn't mention in that post was CA's agreement with Eurekify to resell their Enterprise Role Management product, but you'll find it in the comments in response to the post. Ron also reminded me that Eurekify has a GRC solution offering of their own; a lot of people think of Eurekify as just a role management company because that's what they're best known for. Eurekify-specific discussions aside, one of the things we spoke about was the various reasons organisations buy GRC software. This got me thinking a little more, which brings me to the point I'm trying to make.

I went into some detail about approaches and drivers for GRC, but one thing I didn't talk much about was the reality of the situation in some organisations and their attitudes towards security-related activities (I'm including compliance here). I've found in my experience that it is attitude and corporate culture that ultimately determine whether a particular piece of security software is the right solution for an organisation from a decision-making and purchasing standpoint. If you are the sales guy, you need to qualify the opportunity very quickly as follows:
  1. Is the organisation interested in implementing a security solution to solve real business and IT problems or do they want to "tick check boxes" within a form so they can satisfy specific audit requirements?
  2. Is the software product you are selling a tactical or strategic one?
Most sales guys will answer question 2 with "of course my solution is a strategic one!" Don't kid yourself. You know what it really is, so you need to sell it accordingly. I should probably explain the difference between a strategic and a tactical software product:
  • Tactical - typically a point solution that does one or two things very well. e.g. an encryption product.
  • Strategic - a platform or infrastructure solution that solves a larger, high level issue that is of business significance and affects more people (or parts of the organisation). e.g. an Identity & Access Management suite.
Taking a very simplistic view in this case, your qualification matrix looks like this:




                       Tactical          Strategic
  Solution approach    need partners     in
  Tick check boxes     in                out


If you are selling a tactical solution, your best bet is to go for organisations trying to tick check boxes that the auditors said to tick. e.g. an auditor might say "if you encrypt all your disks and superglue all your USB ports rendering them unusable, I'll tick the PCI-compliance box for your organisation". If you are selling a strategic solution, go for the organisations that actually want to address an issue properly and more holistically. In other words, they are more interested in proper security. That's not to say you can't go for the strategic sale if you have a tactical solution. You just need to partner with the right vendors to give the organisation a "best of breed" solution.

The thing we need to examine is why so many organisations think ticking check boxes equals good security. I'll talk about that another day - I've already written one long essay this week.

For now, ask yourself:
  1. If you're in sales - are these guys ticking boxes or addressing security?
  2. If you work in an environment that buys software - do we tick boxes or do we address security issues?
It'll save everyone a lot of wasted time and money.

Tuesday, June 24, 2008

Some answers on Identity enablement

James McGovern has a habit of posing questions and then naming the people he would like to answer them. This time around, I had the privilege of having my name read out on the roll call. Actually he posted it just before I went on my week-long holiday hence I'm only getting around to it now.

Here are his questions (in blue) and my attempts at some answers (hopefully I don't sound like a complete fool - I'll settle for mildly foolish):

[JM] Protocols: Nowadays, the folks over at the Burton Group such as Bob Blakely, Dan Blum and Gerry Gebel have put together the most wonderful XACML interoperability events. The question that isn't addressed is: if I am building an enterprise application from scratch, should I XACML-enable it, think about integrating with an STS, stick to traditional LDAP invocation, or something else?

[IY] I'm not 100% sure what James is asking and my answer will probably be different if James actually means something other than what I've interpreted it as. I read the question as being once the decision's been made to use XACML, how should one be dealing with authentication? Talking about an STS (I assume James means Security Token Service) vs LDAP refers to 2 different "layers". In reality nowadays, you're ultimately authenticating to a directory of some sort. Usually you do this using the LDAP protocol under the covers. Whether you know this or not depends on the overall design.

Note that using XACML means you are pushing policy definitions around to ultimately get your Policy Enforcement Point to be able to come up with an authorisation decision. When you talk about STS or LDAP, you're typically referring to authentication which ultimately produces some sort of credential for the user within their session. Authentication and authorisation (or entitlements as people seem to like calling it nowadays) are related, but separate things that can be implemented differently as long as there is a point where they interoperate. Usually the most important part is where the authentication mechanism passes the security principal onto the authorisation mechanism so it can make an identity based, access control decision (that is also hopefully fine grained and context aware).
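To make the authentication/authorisation split concrete, here is a minimal sketch of the pattern described above: an authenticated principal is handed to a Policy Decision Point (PDP), and a Policy Enforcement Point (PEP) gates access to the resource. This is an illustration only - the `Principal`, `PolicyDecisionPoint` and `PolicyEnforcementPoint` classes and the tuple-based policy format are hypothetical, not real XACML.

```python
# Hypothetical sketch of the PEP/PDP split described above (not real XACML).
from dataclasses import dataclass


@dataclass
class Principal:
    # Produced by the authentication layer (e.g. after an LDAP bind)
    user_id: str
    roles: tuple


class PolicyDecisionPoint:
    """Evaluates policies; returns Permit/Deny for (principal, resource, action)."""

    def __init__(self, policies):
        # Each policy is a (role, resource, action) permit - a toy format.
        self.policies = policies

    def decide(self, principal, resource, action):
        for role in principal.roles:
            if (role, resource, action) in self.policies:
                return "Permit"
        return "Deny"


class PolicyEnforcementPoint:
    """Sits in front of the resource; asks the PDP before allowing access."""

    def __init__(self, pdp):
        self.pdp = pdp

    def access(self, principal, resource, action):
        if self.pdp.decide(principal, resource, action) != "Permit":
            raise PermissionError(f"{principal.user_id} denied {action} on {resource}")
        return f"{action} on {resource} allowed for {principal.user_id}"


pdp = PolicyDecisionPoint([("payroll_admin", "payroll", "read")])
pep = PolicyEnforcementPoint(pdp)
alice = Principal("alice", ("payroll_admin",))
print(pep.access(alice, "payroll", "read"))
```

Note how authentication (producing `alice`) and authorisation (the PDP's decision) stay separate, interoperating only at the point where the principal is passed in - which is exactly the handover described above.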

That said, I'm going to ignore the XACML consideration for a moment...

There are a few common approaches to take when writing authentication for applications:
1) Take a service-oriented approach (e.g. SOA or web services)
2) Leverage an API or application programming standard (e.g. JNDI, JAAS)
3) Use the native protocol of the authentication store (e.g. LDAP, SQL)

Option 1 is the most difficult to set up because this type of infrastructure is typically not built in organisations today. But it IS the most extensible option and hides the implementation specifics from every application that needs to perform authentication. Whether the underlying store is a directory, database, file system or even a mainframe does not matter. You also get the benefits served up by leveraging a web service. For example, you just need to know how to format your message (this is actually not a problem if you use a standard as it is usually taken care of by some other service) and where to send your request (via the URI). You obviously also need network connectivity to that location. There are no headaches around what libraries to import, whether your authentication services are co-located with your application or whether you need to go screw around with configuration settings and files because you are actually calling a service that is not located on the same application server as you. Keep in mind that you don't necessarily have to use an STS, although it is probably the best approach today if you're going with a service-oriented design.

Option 2 still leverages standards of sorts. JNDI and JAAS, for example, are both standards in their own right. They just happen to be tied to a programming language and platform. This option is probably still easier than option 1 because organisations will more than likely have this infrastructure more or less set up. It's a matter of getting the programmers to code to these standards/APIs and then setting up the application infrastructure to hook into the relevant authentication stores. Again, this is probably already done if you're using a standards-based Application Server (e.g. J2EE). You will, however, have configuration files to screw around with, and changes to any of the infrastructure will potentially require applications to be modified. You're also tied into the platform somewhat. For example, if you use JNDI you're stuck with Java and the directory under the covers. With a web service, it doesn't really matter what programming language is used or what the authentication store is. You can swap components out without modifying the other loosely coupled pieces. You're supposedly able to do this easily if you code to API standards, but that's rarely true. There's usually re-configuration and some re-writing required.

Option 3 requires intimate knowledge of the protocol or tools required to access the authentication store. If using LDAP, you need to know LDAP query syntax. If using SQL, you have to know how to write SQL queries or stored procedures. It also means you cannot change your authentication stores and schemas without re-writing your applications.
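The coupling trade-off between these options can be sketched in a few lines of code. In this toy illustration (all interfaces hypothetical), the application codes against an abstract authentication service in the spirit of option 1, so the backing store - a stand-in for an LDAP directory or a SQL database - can be swapped without touching the caller:

```python
# Toy illustration of loose vs tight coupling in authentication.
# All class and function names here are hypothetical, for illustration only.
from abc import ABC, abstractmethod


class AuthenticationService(ABC):
    """Option 1 style: callers see only this interface (e.g. behind a web service)."""

    @abstractmethod
    def authenticate(self, username: str, password: str) -> bool: ...


class DirectoryBackend(AuthenticationService):
    """Stands in for an LDAP directory bind (option 3 would call LDAP directly)."""

    def __init__(self, entries):
        self.entries = entries  # {username: password}

    def authenticate(self, username, password):
        return self.entries.get(username) == password


class DatabaseBackend(AuthenticationService):
    """Stands in for a SQL credential lookup."""

    def __init__(self, rows):
        self.rows = rows  # list of (username, password) tuples

    def authenticate(self, username, password):
        return (username, password) in self.rows


def login(service: AuthenticationService, username, password):
    # The caller never knows which store answered; swapping DirectoryBackend
    # for DatabaseBackend requires no change to this function.
    return "session-granted" if service.authenticate(username, password) else "denied"


ldap_like = DirectoryBackend({"alice": "s3cret"})
sql_like = DatabaseBackend([("alice", "s3cret")])
print(login(ldap_like, "alice", "s3cret"))  # session-granted
print(login(sql_like, "alice", "wrong"))    # denied
```

An option 3 design would instead have `login` issuing LDAP queries or SQL directly, so a schema or store change forces a rewrite of every caller - the tight coupling discussed above.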

They each have their benefits, but it boils down to the age old discussion around tight coupling vs loose coupling. The looser the coupling of your infrastructure components, the more extensible they are. You lose performance however and it usually takes more effort and time to build a loosely coupled system because of all the design considerations to take into account. (Note: The performance issue is usually one that can be designed around if you have the budget. You put enough redundancy, load balancers and hardware in place and you can usually get something to perform within your service level agreements (SLAs). If your network hops are large, just do a better job of negotiating your SLAs - easier said than done I know).

Option 1 is the loosely coupled approach which should "identity proof" your environment for a longer period of time (notice I didn't say forever). Option 3 is the tightly coupled approach. Option 2 is something in between. The one to pick depends on requirements, time and cost constraints. The choice will be determined by how much of the authentication infrastructure specifics you want to burden your developers with.

Bringing the XACML considerations back into the picture, I would go for option 1. If you're going to take the effort and XACML-enable your applications, why would you "get cheap" and not go with the use of an STS? It also makes everything nice and clean from an application programming standpoint. Developers will only have to worry about using security services. They won't need to remember that for certain security related things, they need to use the web service while for others they have to use the API or go direct to the source over LDAP or SQL.

[JM] Virtual Directories: What role should a virtual directory play in an Identity metasystem? Should virtual directory be a standalone product in the new world and simply be a feature of an STS? If an enterprise were savage in consolidating all directory information into Active Directory, why would I still need virtualization?

[IY] I don't come across too many organisations that use a virtual directory. Maybe it says something about the maturity of their Identity Management infrastructure. Or maybe it means that they don't see the need and are quite happy replicating things using meta-directories and synchronisation.

Leveraging a virtual directory is very much a technology choice and dependent on each organisational environment. If they have lots of identity stores and have a nightmarish amount of information that would take a long time to integrate and synchronise, then a virtual directory makes sense. If an organisation has a fairly small number of stores or their strategy is to leverage a specific central store like Active Directory, then they probably don't. On the flip side, one could argue that the virtual directory could become this "central store".

I'm not for or against virtual directories. I just think there's a time, environment and place for the choice between a virtual directory or a synchronisation solution like a meta-directory. As for using a virtual directory as part of an STS, the argument is very similar to what I've just outlined.

[JM] Entitlements: One missing component of the discussion is authorization, and there is somewhat too much focus on identity. Consider the scenario where if you were to ask my boss if I am still an employee, he would say yes as he hasn't fired me yet. Likewise, if you ask him what are all of the wonderful things I can access within the enterprise, he would say that he has no freakin clue, but as soon as you figure it out, please let him know. Honestly, even in my role, there are probably things that I can do but shouldn't otherwise have access to. So, the question becomes how come the identity conversation hasn't talked about any constructs around attestation and authorization?

[IY] People are too busy crapping on about OpenID and CardSpace all the time :-)

Seriously, I think it's got to do with the way things in the identity space evolve. Before you can deal with entitlements and attestation, you need to know who people are and what they can use (i.e. what they can "log into"). It's a very high level approach sure. But that's how organisations typically start looking at the whole issue.

Also, Identity Management is not an easy issue to solve. As a result, it's taken a long time to get around to thinking about authorisation properly. It doesn't mean we will never get there. We just have to be a little more patient. All the GRC initiatives going on around the place are certainly helping. Attestation is usually one of the first things that get addressed in any GRC initiative because it needs to be done to satisfy the regulators and auditors.

Once organisations realise that having a proper authorisation infrastructure in place makes their lives a lot easier from a management and audit perspective AND it'll save them money (instead of having to re-invent the authorisation wheel for every application), they'll start to do something about it.

It's also about evangelism, education and focus. The focus isn't there yet so there's no education. Evangelism comes from the industry as a whole. The large vendors are usually the ones with the marketing clout to get the messages out in a meaningful way. Unfortunately, they chase the dollars. Until recently, there hasn't been sufficient sales revenue or qualified opportunities to justify evangelising authorisation. It's the next cab off the rank I think. We might have to wait a year or 2 for organisations to get through the early phases of their GRC initiatives before authorisation gains real traction.

[JM] Workflow: Have you ever attempted to leave a comment on Kim Cameron's blog? You will be annoyed with the registration/workflow aspects. The question this raises in my mind is what identity standards should exist for workflow? There are merits in this scenario for integrating with the OASIS SPML standard, but I can equally see value in considering BPEL as well.

[IY] I don't think there needs to be identity standards for workflow in the traditional sense of technical standards. A generic workflow standard (e.g. BPEL) across all disciplines is good enough and will be better in the long run. Workflows are very dynamic, can get very complex and will be different for each organisation even if they are trying to do similar things. This makes standards tricky. Then there is also workflow from a business standpoint. When technical folk talk standards, they usually mean nuts and bolts (e.g. XML specifications). Those in the business world don't care that a workflow is written in BPEL. They care that they have to write a frigging workflow procedure from scratch. They would probably find some standards around business process definitions useful. It might be prudent of us to define easily extensible procedural workflow templates for Identity business transactions perhaps? Now that would REALLY be something.

[JM] Education: Right now the conversation regarding identity is in the land of geeks and those who are motivated to read specifications. There is a crowd of folks who need things distilled, the readers digest version if you will. Traditionally, this role is served by industry analysts such as Gartner and Forrester. What would it take for these guys to get off their butts and start publishing more thoughtful information in this space?

[IY] I don't think industry analysts are ever going to write a "readers digest" version of anything related to Identity. It doesn't help them make more money from selling reports and holding conferences for specialised groups of people while charging them thousands of dollars each. Unfortunately, I think the fastest way to get simplified Identity education out there is through the marketing dollars of the large vendors. Consulting organisations (including analysts like Gartner and Forrester) will never simplify anything because they need for things to remain (or sound) complex so they can keep charging for their "expertise".

[JM] Conferences: When do folks think that the conversation about identity will occur somewhere other than identity/security conferences? For example, wouldn't it have been wonderful if Billy Cripe, Craig Randall and Laurence Hart were all talking about the identity metasystem in the context of ECM?

[IY] Ummm how about never...which is fine by the way. The Identity industry needs to make things so dead simple and ubiquitous that there is no need to talk about it. It should just be there. Therein lies the challenge facing us all today.

Saturday, June 21, 2008

So you want to remove the Verdasys agent

This post isn't going to interest many people (so I'll keep it short), but I'm considering it a "community service announcement".

Since I wrote my "Back to Identity" post, people have been landing here via search engines using variants of the phrase "how to remove Verdasys". I assume these are employees of organisations that have the Verdasys Digital Guardian (DG) agent installed on their machines and are trying to get around the system.

I have news for you. If DG is deployed within your environment the way it is in most organisations (with tamper-proof and invisible features turned on), you can't. That's not to say it absolutely cannot be removed. The system or security administrators (whoever is responsible for the DG environment) within your organisation will know how to because they have access to the administrative console and other related details (passwords and so on). Why don't you go ask them? :-)

Tuesday, June 17, 2008

CA positioning itself to be a GRC vendor that matters

I've been away for the past week on a short break (Athens and Santorini - if you haven't been to Santorini, you MUST add it to your to-do list). Naturally, that means that I've missed a whole bunch of news and have to catch up.

CA made a bunch of announcements last week regarding their line of security-related products: the first about a new release of CA Identity Manager, another regarding CA Access Control, a third referring to CA GRC Manager, and the last in relation to a brand-new product called CA Security Compliance Manager.

I found the Identity Manager and Access Control announcements boring because they are just upgrades to existing products which almost all their competitors have. Upgrade announcements are boring because they are about new features which no one will really understand until they see them in action...and even then the competitors will all say "oh yeah we can do that too" even if they can't and just get the sales engineer to hack something together for the demo or POC that looks like it's out of the box.

I found the other 2 announcements much more interesting from a strategic standpoint...

The first thing I noticed was that they are sticking to the industry norm of using a completely boring name for new products while at the same time managing to use the same name as another vendor (e.g. all the major software vendors have a product called Identity Manager which does all the provisioning, de-provisioning, password management and account workflow related activities). In this case however, they have managed to use the same name as a product IBM has (Tivoli Security Compliance Manager) but for a completely different purpose.

The second thing I noticed was that they suddenly have 2 GRC centric products. If you are a regular reader, you'll know that I'm now doing some work in the GRC area after my year long sojourn into DLP so anything GRC related gets my attention.

Like many industry terms floating around (especially newer ones), GRC means different things to different people. It also means there are many software vendors out there claiming to solve all your GRC problems. What people don't necessarily always understand is that there are many different approaches (and drivers) for a GRC program within an organisation. Most commonly, a GRC initiative is viewed from one of the following angles:
  • Risk Management
  • Finance/Audit
  • IT Security
  • Business Controls/Operations
This is not an exhaustive list and most of the time there's a fuzzy line between each. In other words, there's always going to be overlap. I should also point out that an approach is not necessarily the same as a driver, but they can be the same thing. For example, the driver might be that the organisation needs to meet regulatory requirements. The combination of the regulatory requirements and the business areas affected will determine what approaches need to be taken. Or the driver might simply be that business controls need to be better monitored, controlled and audited to improve the bottom line. In this case, the approach is the same as the business driver.

As a result, there are a lot of GRC software vendors that don't necessarily compete with each other, even though at first glance you might think they do (usually because they stick the term GRC in their descriptions). In fact, it makes sense for a lot of vendors to partner with each other to provide a more complete solution for organisations, because none of them (including the large vendors) actually covers the entire GRC problem space. Whether an organisation needs the complete solution is an issue for another day.

Here's why the CA announcement is interesting. CA GRC Manager looks to be enterprise risk management focused. They've now added CA Security Compliance Manager, which looks to be IT Security focused. It's starting to look like organisations have decided that IT Security Governance needs to be centred around identities, which is why IT Security GRC is sometimes referred to as Identity Centric GRC. In my opinion, this means CA Security Compliance Manager competes directly with the likes of Sailpoint and Aveksa. Notice that I haven't mentioned any of the large vendors (e.g. IBM, Oracle, Sun, Novell, SAP) in this space. This is because they don't have anything that competes. Here's why:
  • IBM don't have a GRC product. They use a combination of IBM Tivoli Identity Manager (ITIM) and IBM Tivoli Compliance Insight Manager to do GRC-like tasks. I'm afraid, IBM, that nice-looking reports and some ugly ITIM screens that do some level of attestation (but that business users can't figure out how to use) don't cut it.
  • Oracle have a product, but it hooks into all their Finance, ERP and CRM applications. This means it's very focused on business controls.
  • Sun thinks Role Management = GRC. I have news for you Sun. Even in combination with some of the things Sun Identity Manager does, you're still not there.
  • SAP have solutions, but they all hook into...well, SAP! Just like Oracle, it's focused on business controls.
  • Novell are even worse off than the others because they are still peddling their provisioning and access control solutions as being able to solve all your governance and compliance problems.
It looks like CA is ahead of the curve in this case. Keep in mind I'm talking strategy and ability to execute and bring something to market. I'm sure all the other large vendors I've mentioned have some sort of plan. A lot of the discussion behind closed doors is probably around who they should acquire to fill the gaps. That said, the new CA Security Compliance Manager doesn't look to have some of the functionality that Sailpoint and Aveksa have, but they've essentially just released a version 1.0 of a product and they have the marketing dollars to make up for it in the initial stages. They can also claim to have integration into their GRC Manager product and their Identity and Access Management suite so that's also a leg up on Sailpoint and Aveksa because they can sell the "suite concept" instead of convincing organisations to go with a "best of breed" approach that smaller vendors have to preach.

Thinking out loud, it might make sense for CA to partner with SAP on a joint GRC marketing campaign. I seriously doubt Oracle (or CA) will agree to such a concept. Or maybe SAP should just buy CA and be done with it.

Tuesday, June 03, 2008

Paranoid about your passwords?

I came across this today. It's pretty handy if you are sick of trying to think of (and remember) complex passwords.

For those familiar with authentication schemes, it essentially blends the concept of challenge/response questions with passwords. The way I like to think of it is by using the following example...

The challenge (called "phrase" on the site) could be something like "what is my password for site x" and the response (called "password" on the site) is just a word you use as a "password key" so to speak. The site will spit out an actual password, which you can use as your real password for whatever site you use it for.

Metaphorically, it's like going to a locker (perhaps at an airport or a train station) to get your house keys so you can go home. Thankfully, you only have to go to a website instead of navigating through traffic or the public transport system to get your house keys.

Apparently there's no "magic" behind how it generates your password for you. Read about it here.
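The linked article explains the site's actual mechanics, but the general idea behind such phrase/password generators can be sketched simply: derive a site password deterministically from the challenge phrase and a master password, so nothing ever needs to be stored. The sketch below uses HMAC-SHA256 as the derivation function - this is my own hypothetical illustration, not the site's actual algorithm.

```python
# Hypothetical sketch of a phrase+password generator: derive a deterministic,
# site-specific password from a challenge phrase and a master password.
# This illustrates the general concept, not the linked site's actual algorithm.
import base64
import hashlib
import hmac


def derive_password(phrase: str, master: str, length: int = 12) -> str:
    # Keyed hash of the phrase under the master password.
    digest = hmac.new(master.encode(), phrase.encode(), hashlib.sha256).digest()
    # Base64 keeps letters, digits and a couple of symbols; trim to length.
    return base64.b64encode(digest).decode()[:length]


# The same inputs always produce the same password, so nothing is stored,
# yet each (phrase, master) pair yields a different strong-looking password.
pw = derive_password("what is my password for site x", "my-master-key")
print(pw)
```

Because the derivation is deterministic, you only ever remember the master password and the phrases; the generated passwords themselves never need to be written down.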

It's not foolproof, but it's certainly better than using a weak, simple password because it's better than single factor authentication. I wouldn't go as far as calling it a 2 factor authentication mechanism because using the traditional definition requires that the authenticating site handle the authentication factors directly and that the authentication factors are two out of the following three: something you know (e.g. password), something you have (e.g. smartcard), something you are (e.g. fingerprint).

But if you think about it, handling your password this way requires that you know something (what challenge/response combination to use) and also generates you a "token" of sorts (usually this comes from something you have - in this case, you could argue that this is the website). I know it's the same token each time so it's effectively a password but it's not something that's easy to guess. One could view it as a "pseudo token" that is generated based on knowledge that you have. Call it 1.5 factor authentication if you like. Or maybe "obscured/indirect authentication". Can anyone think of a better name? In any case, I'm really stretching the multi-factor definition here. But hopefully you know what I'm getting at. In short, it's better.

It's a very simple concept (now we can all say "crap why didn't I think of that") and a good one I think if you want to add that extra little bit of security to your everyday passwords.