Next Generation Security

By Scott Alldridge, President, IP Services

Cybersecurity breaches seem to occupy too many headlines these days.  There are so many attention-grabbing examples of how inadequate information, application, and IT security can impact our businesses. There is documented evidence that security breaches can affect brand, marketplace trust, customer privacy and identity, not to mention the bottom line. The proliferation of security laws and regulations demands an ever-increasing share of our attention and effort, with escalating consequences for noncompliance.  In spite of this, vendors continue to flood the cybersecurity space with yet another “point-based” solution, promoting a better firewall or some angle on threat intelligence, even as security breaches and incidents outpace spending on security almost 2 to 1.  Certainly we need best-of-breed, proven “point-based” solutions such as identity management, firewalls, and security training, but we also need to look deeper and focus on the practices proven to keep breaches from causing catastrophic consequences.  Too often, though, the glut of information and the sheer number of solutions to implement leave us spinning in a quandary.

Answering the question “How much security is enough?” is a tough proposition. Security is hard to put your finger on. It does not reside in a particular location and is accomplished through a diverse combination of people, process, and technology controls. Adequate security for any given product, service, or organization is determined based on tolerance for risk – easy to say, hard to quantify, and constantly changing.

While we’re trying to get our heads around these complex issues and make sure we’re not the next press release (or court case!), there is a set of proven, sound practices that allows enterprise IT operations and security teams to effectively operate and maintain production systems and meet security-based compliance requirements while delivering new business-driven services.

Visible Ops Security derives from years of operational experience, customer engagements, and rigorous research and benchmarking performed by the IT Process Institute. Working with top-performing organizations to tease out what differentiates them from medium and low performers, the authors found that high-performing security teams have unique cultural characteristics and employ key foundational processes to drive highly secure postures within their organizations.

Based on this research, Visible Ops Security identifies four phases for integrating information security into development and operations so that it becomes business as usual. The steps for each phase offer a prescriptive sequence of measurable actions, supported by real-life examples that readers can easily identify with and use to help build momentum and support. By working together, development, security, and IT are in a better position to achieve common objectives and demonstrate business value.

I would propose the following working thesis around security: no breach happens without a change or a need for a change.  The somewhat obvious solution, then, is managing change.  This raises the question: how do you effectively and appropriately apply change management to your IT assets?  Our research, conducted in partnership with the IT Process Institute, used quantitative data analysis to show that three processes, Configuration Management, Change Management, and Release Management, used together in the appropriate way, create a “closed-loop” process for effective change management.
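To make the closed loop concrete, here is a toy sketch of the reconciliation step at its heart. The asset names and record shapes are invented for illustration, not drawn from any particular tool: every change detected in the environment must map back to an approved change record, and anything unmatched is flagged for investigation.

```python
# Toy closed-loop reconciliation: a detected change with no matching
# approved change request is unauthorized drift to investigate.

def reconcile(detected_changes, approved_requests):
    """Return detected (asset, setting) changes that lack an approval."""
    return [change for change in detected_changes
            if change not in approved_requests]

# Approvals come from change management; detections from config monitoring.
approved = {("web01", "tls_min_version"), ("db01", "max_connections")}
detected = [
    ("web01", "tls_min_version"),   # matches an approved request: fine
    ("web01", "admin_user_added"),  # no approval on record: flag it
]

unauthorized = reconcile(detected, approved)  # [("web01", "admin_user_added")]
```

In a real environment the detections would come from configuration and integrity monitoring and the approvals from the change management system; the point is simply that the two data sets close the loop on each other.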

Finally, coupling these foundational process controls (Change-Config-Release) with proper tooling, such as Security Information and Event Management (SIEM) and Integrity Management (IM), can provide an organization with the ultimate “back-stop” in security.   But remember, “a fool with a tool is still a fool,” so having the right expertise, tools, and experience is vital.  Delivering meaningful monitoring at this level is always challenging, so looking to a security-as-a-service model can be the most cost-effective and efficient decision you will ever make.   To learn more, visit:



Why your Cybersecurity strategy needs DevOps

By Ryan Riggs, Vice President of Operations, IP Services

A chain is only as strong as its weakest link

In the past few years, leaders have accepted that human error is the biggest security risk to an organization, and organizations have responded with valuable policies and programs such as security awareness training and multi-factor authentication.

Necessary, but not sufficient. Organizations must continue implementing more robust security measures, expanding focus to include automated detection and rollback.

IT staff occupy a special role in an organization: they can make changes to infrastructure that potentially have grave security consequences, so further audit and verification that these changes are performed correctly are necessary. For years, organizations have done this within the ITIL framework using manual or partially automated verification. This process has serious drawbacks: it’s expensive, inefficient, and doesn’t always provide oversight commensurate with the risk of the change.

Auditing and SIEM tools provide valuable protection and insight, but when a human must review the results, consider the best path, and apply the appropriate configuration changes, then you aren’t reacting fast enough in today’s threat environment.

Enter DevOps

Orchestration and integrity systems provide centralized management for detecting and verifying system configurations, along with audit and automated rollback features, across heterogeneous environments.

Implementing an automated approach to rolling back unauthorized changes minimizes exposure both to unauthorized changes by authorized staff and to simple mistakes.
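As a rough illustration of what automated rollback can look like, here is a minimal Python sketch. The directory layout and file names are hypothetical, and a production tool would add scheduling, alerting, and an audit trail; the idea is simply to compare live files against a known-good baseline by hash and restore anything that has drifted.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file's contents so drift can be detected by comparison."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def rollback_drift(live_dir: Path, baseline_dir: Path) -> list:
    """Restore any live file whose hash differs from its known-good baseline.

    Returns the names of the files that were rolled back.
    """
    restored = []
    for baseline_file in sorted(baseline_dir.iterdir()):
        live_file = live_dir / baseline_file.name
        if not live_file.exists() or sha256(live_file) != sha256(baseline_file):
            shutil.copy2(baseline_file, live_file)  # the automated rollback
            restored.append(baseline_file.name)
    return restored
```

Run on a schedule, a check like this bounds how long an unauthorized change, whether malicious or a slip by authorized staff, can survive in the environment.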

For both ITSM and ITSD, this gives me a far greater degree of confidence in the integrity of the systems we manage, and in knowing that the risk of human error is minimized as necessary changes are made to systems.

What Do Castles and Internet Security Have in Common?

By Mark Allers


For the past two decades, a handful of others and I have been preaching that organizations must look at security through a different set of glasses.  For too many years we’ve been enamored with the bright shiny objects created by Symantec, IBM, Check Point, and other technology companies that want to preserve the illusion that information security is a complicated, ambiguous thing that will always exist as long as there are bad guys with bad intentions.  That complexity and ambiguity is what drives ungodly revenues on the back of fear, uncertainty, and doubt.

So what does a castle have to do with internet security?  For years I’ve made the analogy that castles and their walls (also called “curtains”) are similar to today’s network security, and that we still live in medieval times based on our approach to information security.  Around the 11th century, castles were built with high, thick stone walls as a means to thwart the bad guys.  When those bad guys learned to scale the walls, builders added moats to put another level of complexity in the way.  But then the bad guys learned to swim, crossed the moats, and scaled the walls into the castles.  So alligators went into the water to add yet another level of complexity to entering the domain.  But the bad guys learned to kill the alligators, swim the moat, and scale the wall to get into the castle.  The simple fact of the matter is that the bad guys will get in.

When the Drawbridge Is Down

In today’s world of information security, the analogy above represents less than 20% of security breaches, AND it is where most of today’s security spending is focused.  What about the other 80%?  Those incidents come when the castle’s drawbridge is down and the perceived threat does not look dangerous.  The problem is that the bad guys paint their coat of arms in the colors of the castle’s residents and simply walk across the bridge without any conflict.  Once inside the castle, they do as they wish.  Today, this is what we call social engineering: opening a malicious email attachment or plugging in that USB thumb drive that was given to us at a recent conference.

Knowing that the bad guys will always get in, the real question is: how can we discriminate between coats of arms, find the needle(s) in the haystack (the DNA, if you will), and remove them from the castle before they lower the drawbridge in the middle of the night and let all the bad guys simply walk on in?

Well, the solution is not sexy, nor is it a bright shiny object.  Using an analogy from football, it requires basic blocking and tackling, not running trick plays or throwing a Hail Mary every down to score.  It requires the adoption of best practices and an IT management methodology that instills process and detective controls to ensure service quality and the mitigation of risk.  A decade and a half of research and benchmarking with over 800 IT executives within 300 different organizations and industries has resulted in our Visible Ops methodology.  That methodology of IT management stands on three pillars of ITIL: configuration management, change management, and release management.


So what does Visible Ops have to do with security?  “All security events or breaches start with a change or a need for change.”  A change can be anything added, modified, or deleted.  A need for change can be as simple as the need to apply a vulnerability patch, and apply it correctly.  In order to maintain a high degree of security in your IT infrastructure, you must ensure that there is no “integrity drift.”  Essentially, this means that what is running in your IT environment is known and trusted, and that when new changes or configurations are applied, they are authorized and expected.  Don’t get me wrong, perimeter security has its place and value, BUT it should not be allocated the majority of the IT security budget to address only 20% of the problem.
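One way to operationalize “known and trusted” is an allowlist of cryptographic hashes: anything observed running that is not on the list is, by definition, integrity drift. A toy Python sketch, where the binary contents and resulting hashes are invented stand-ins for real files:

```python
import hashlib

# Allowlist: hashes of binaries that are known, trusted, and authorized.
# (Contents here are stand-ins; a real list would hold hashes of real files.)
TRUSTED = {
    hashlib.sha256(b"approved-binary-v1").hexdigest(),
    hashlib.sha256(b"approved-binary-v2").hexdigest(),
}

def integrity_drift(observed_hashes):
    """Return observed hashes that are not on the trusted allowlist."""
    return sorted(set(observed_hashes) - TRUSTED)

observed = [
    hashlib.sha256(b"approved-binary-v1").hexdigest(),  # known and trusted
    hashlib.sha256(b"dropped-malware").hexdigest(),     # unknown: drift
]
drift = integrity_drift(observed)  # flags only the unknown hash
```

This is the “DNA-level” discrimination in miniature: instead of asking whether something looks dangerous at the perimeter, it asks whether the thing is on the list of what is authorized to run at all.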

We need to get out of the medieval times of constructing new perimeter defenses, where only a fraction of all security events and breaches occur.  We don’t need to build bigger walls or add more water to the moat.  We need to enter the 21st century, where process and methodology utilize tools that can discriminate at a DNA level what is known and authorized to run and operate in an IT environment.

Time to draw back the “Curtains” and understand the next evolution of IT management and security.

The Ultimate Backstop for Hospital Cybersecurity

“Healthcare entities that want to be well positioned against cybersecurity threats must know what resources they have, how those are configured, and tightly control any changes,” IT Process Institute chief executive Scott Alldridge said.

IT Process Institute CEO Scott Alldridge has cybersecurity advice for healthcare executives: Consider ITIL, the framework formerly known as the Information Technology Infrastructure Library.

“Following ITIL best practices becomes the ultimate backstop for your security posture,” said Scott Alldridge, CEO of the IT Process Institute, a research firm that studies top-performing organizations and best practices.

Despite all the money healthcare organizations spend on security tools, such as firewalls, intrusion detection and prevention systems, and email security, it is becoming painfully clear — especially in light of the ongoing ransomware attacks — that executives and employees are the biggest threat.

“People are being phished and enable viruses and encryption piracy tactics,” Alldridge said. “As a result, we have to go deeper than technology solutions and have great detective-based and best practices-based controls, and better social engineering around awareness, because if a hacker can phish a person or become a person’s connection, then a threat becomes very difficult to detect.”

Controls that detect when an employee is circumventing a policy or procedure, whether knowingly or unknowingly, are lacking in healthcare, Alldridge said.

“Good security becomes about being able to track and monitor their behavior and have the proper controls in place so they are not able to circumvent policy and procedures and security practices,” Alldridge said. “That is tricky.”

Alldridge added that the IT Process Institute believes ITIL offers the best descriptive framework for developing best practices in IT, including security practices. ITIL is owned by AXELOS, a joint venture by the U.K. and a company called Capita.

“Our research and other research has proven that through the implementation of various best practices there are benefits to the business and the IT organization,” he said.

When it comes to security, ITIL encompasses best practices for improved mean time to detection (MTTD), longer mean time between failures (MTBF) and better mean time to repair (MTTR).
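Each of these metrics is just an average over incident records. A small Python illustration, with timestamps invented for the example:

```python
from datetime import datetime as dt

# Each incident record: (occurred, detected, repaired). Timestamps invented.
incidents = [
    (dt(2016, 1, 1, 0, 0),  dt(2016, 1, 1, 1, 0),  dt(2016, 1, 1, 4, 0)),
    (dt(2016, 1, 11, 0, 0), dt(2016, 1, 11, 3, 0), dt(2016, 1, 11, 5, 0)),
]

def hours(delta):
    return delta.total_seconds() / 3600

# MTTD: average time from occurrence to detection.
mttd = sum(hours(d - o) for o, d, _ in incidents) / len(incidents)   # 2.0 h
# MTTR: average time from occurrence to repair.
mttr = sum(hours(r - o) for o, _, r in incidents) / len(incidents)   # 4.5 h
# MTBF: average time between successive failures.
gaps = [hours(b[0] - a[0]) for a, b in zip(incidents, incidents[1:])]
mtbf = sum(gaps) / len(gaps)                                         # 240.0 h
```

Better practices should push MTTD and MTTR down and MTBF up; tracking all three over time is what makes the improvement visible.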

ITIL also takes into consideration configuration management, change management and release management as key processes healthcare organizations can master to bolster cybersecurity.

“Change management is the golden achievement, but you cannot do effective change management if you do not know what you have, so you thus have to be able to manage configuration,” Alldridge said. “And if you are going to be developing things to release that will lead to a change, there should be a go-live release practice that feeds into good change practices. It becomes a closed loop process.”

If a healthcare organization knows the IT resources it has, knows those resources are configured well, only allows changes if changes are approved, and does not develop or implement new resources unless they are tested, that organization will be positioned well to deal with cybersecurity threats, Alldridge said.

“While it is fairly simple to describe it is not necessarily so easy to do,” he said. “It is a challenge to figure out where you begin to implement or bootstrap proven best practices into your IT organization.”

By Bill Siwicki | April 15, 2016 

Essential Elements of a Cybersecurity Program

Cybersecurity is getting a lot more attention these days, even if it is not all the attention it deserves.  (Did you know that breaches are still increasing faster than spending on cybersecurity?  Some of the best data in this space comes from the federal government – here’s a good synopsis:  Federal Cybersecurity Breaches Mount Despite Increased Spending.)

Because of this increased focus, I find myself spending a lot more time these days working with business leaders on companies’ cybersecurity programs.  As I study the ways businesses are addressing cybersecurity, the essential components of a meaningful program have really crystallized for me.  It seems timely to share some of what I have observed.

I have identified several common characteristics among mature cybersecurity organizations, and those characteristics are also notably absent in less mature groups.  I plan to spend a few posts looking at what seems to be working in Cybersecurity.

Cybersecurity must be Strategic

We all know those IT Professionals who spend their days immersed in technology and rarely come up for air. They are the people we want on the job when detailed understanding of complex systems is needed.  But we also know how hard it can be for those technologists to communicate the business value of the initiatives they work on and we certainly cannot put these folks in the boardroom to help leadership understand why investment in cybersecurity is the right thing for our businesses to do.

In larger and more mature companies, this problem is solved by having a Chief Information Security Officer (CISO).  A good CISO is a senior leader who has a clear understanding of a company’s business and how to view technology risks through a business leader’s lens, but also understands how to effectively secure technology so they can hold their team accountable for managing those technology risks.

As an organization’s cybersecurity strategist, the CISO is responsible for identifying risks and prioritizing mitigation efforts, as well as winning sponsorship from the business for making resources available to keep the business safe.  It can sometimes be difficult for CISOs to garner the needed sponsorship among company leaders.  When this is the case, an ineffective CISO usually translates to an ineffective Cybersecurity Program.

It is also noteworthy that many mature organizations elevate the cybersecurity leader to be a peer to the CIO, reporting to the CEO.  This is a good indication that senior management understands the importance of security and is committed to protection from breaches.  It especially seems to help in avoiding the budgetary conflicts of interest that sometimes lead to corner-cutting on cybersecurity.  It also creates the opportunity for consensus through conflict when IT and IS are forced to find common ground, which almost always seems to strengthen an organization’s security posture.

My Company is too small for a CISO

I work with companies of just about every size.  Leaders in small and mid-size businesses wear a lot of hats to make their businesses work.  However, being effective as a CISO requires specialized training and experience that would be difficult or impossible to master with only part-time attention to the craft.  So what should SMBs do?  Outsource, of course.

There are generally two types of Cybersecurity experts for hire… consultants and managed security providers.  They both have their places – I see companies forge successful partnerships with consultants when they have projects with a tightly defined scope.  On the other hand, for shaping an effective Cybersecurity program and continuously assessing it and aligning it with business priorities, people often sign contracts for recurring effort from a managed security provider.

Building a Cybersecurity program with the part-time help of full-time experts is a great way to put an appropriate emphasis on managing risk without hiring, retaining, and training a very costly employee.  Contact me if I can help you select a partner (or partners) for building a cybersecurity strategy at your company.

My conclusion is that in order to be successful managing technology risk, a company must see cybersecurity as a strategic part of doing business.  Is cybersecurity strategic for your business?  If not, why not?

Next time…  Cybersecurity must be Comprehensive.

There are a lot of layers in the cybersecurity onion.  It is imperative to cover all the bases.  How are people doing that?

Depending on Open Source software is Risky Business (and Heartbleed proves it!)


By all accounts, Heartbleed is the worst security flaw in the history of the Internet.  What it is and why it is so bad has been talked about ad nauseam, so I won’t bore you with those kinds of details, but here are a couple of places to learn more about it if you are not already up to speed.  Randall Munroe of xkcd painted the picture in layman’s terms.  Matt Smith describes here how a bug this nasty could live unnoticed in the wild for a couple of years.

The thing is… I want to talk about why those of us who are responsible for bridging the gap between technology and business are crazy to recommend that our companies use Open Source software for critical functions without a thorough risk assessment.  “How is Heartbleed an indictment of Open Source software?” you may ask.

Well, I think there are two key reasons that it’s too risky to depend on Open applications in the business where I work.  You get to decide if you are comfortable with the risks for your business.

Lack of Accountability

A popular phrase around our office is “One Throat to Choke”.  We use it in reference to businesses that have too many vendors providing related services – when there is a problem, the vendors point fingers at each other and say “it’s their fault.”  Businesses with too many vendors often want to replace the multiple vendors with one, and that gives them “one throat to choke”.

When you select an Open Source application to provide critical functionality and a problem comes up, you have “no throat to choke”.  Because you have paid no one for their work, there is no one to hold accountable for solving a problem.

The OpenSSL folks did a great job validating and fixing Heartbleed in only a few days.  But what if they had not?  What if it had taken a week… or two… or a month?  Or what if they had decided it was too much work and did not want to fix it?   How much havoc could have occurred if the team didn’t have the time or inclination to resolve Heartbleed quickly?  None of the millions of non-contributing OpenSSL users would have a reasonable complaint against OpenSSL, because they paid for nothing and made no agreement with OpenSSL that the product would function securely or correctly.

Lack of Resources

As I read Matt Smith’s account of how Heartbleed could happen, I was pretty surprised to learn how scant both the financial and human resources for the OpenSSL project are.  OpenSSL has never grossed more than $1 million in a year.  It has only one full-time dedicated worker on the team; the rest are part-timers, mostly volunteers.  In spite of the fact that most of the servers hosting sensitive content on the internet depend on OpenSSL, virtually no one has supported the project either financially or as a developer.

With a team that small, it is not surprising that a nasty bug like this got past code review.  I believe the risk of that kind of bug getting through code review in a larger, well-funded development team is much smaller.

I would expect more bugs like this one in the future, except there may be a little good news for OpenSSL: some big players are stepping up to help.

This is great for OpenSSL, but I wonder… what other critical and widely used Open Source applications have yet-undiscovered bugs due to lack of funding and manpower?  And which of your critical systems depend on those Open Source applications with bugs?


I worked for a large corporation for much of my career.  The company had (and still has) a vendor management team whose job it was to perform a risk analysis prior to doing business with any particular vendor.  There is no way I would have been able to convince the vendor management team that it would be an acceptable risk to use a vendor so under-resourced as OpenSSL.

And yet, because it is “free,” OpenSSL and other Open applications like it have permeated our corporate environments without the risk assessment that would come with commercial software. Now you know why I think long and hard (and perform a careful risk assessment) before recommending Open Source applications to the business I work for or to our clients.


On a side note, people who are part of the Open development community deserve our gratitude for their innovation and desire to collaborate with their fellow humans by making their wares available without high costs.

Is DevOps really the next Great Thing in IT?

The Good

Many of the DevOps movement’s founders come from the IT Ops world, and are asking the right question: What is important to the Business, and how do we, in IT Ops, align our efforts with what the Business cares about?

The Dev guys already figured this out, quite a while ago. The first principle of the Agile Manifesto, written more than a decade ago, is “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”

It’s about time Ops figured this out. There’s a reason that in so many companies Ops has a reputation for being the department of “No!” We serve our systems and processes and SLAs. But we have forgotten (or never knew) that our job is actually to serve the Business.

DevOps is largely Ops bridging the gap between who we have been, and who the Devs already are – teams who deliver value to the business.

It’s also about collaboration between the Dev and Ops silos. Teams are building expertise that crosses over between these disciplines. Responsibility for success or failure is being shared between Dev and Ops. (I’ve even heard of Devs carrying pagers and taking their turn in the on call rotation!)

The Bad

I have heard DevOps proponents tear down best practices methodologies like Problem, Incident and Change Management, for example. Some say these processes are never anything but red tape, getting in the way of delivering meaningful work for the business.

But there’s hard science showing appropriately implemented controls around processes like these increase MTBF and reduce MTTR. I think the operative word there is “appropriately” because I have also seen some of those implementations of “best practices” where the processes do, in fact, get in the way of productivity and effectiveness.

So… throwing the best practices “baby” out with the bathwater is Bad. Folks who can sort best practices out and apply them appropriately are Good.

The Ugly

I have been around a handful of Devs who have used Agile or DevOps to justify being granted the “keys” to the operational “kingdom,” while refusing to accept the responsibility that comes with that power. These people convince the business that availability problems and release delays are all about Ops and their arcane processes. I read here that the IT Skeptic calls this kind of behavior SmashOps. When DevOps is used as a stick to beat up Ops, things get Ugly.

So then…  is DevOps really the next Great Thing in IT?

It’s too soon to tell how important DevOps will be. As it has been with ITIL or other best practices, no matter how brilliant a set of principles may be, when applied inappropriately, those principles will hinder productivity and business alignment.

Like any other movement, DevOps should be judged by how well the people following its approach use it to deliver business value. If the movement helps teams be more effective, efficient, and most importantly, strategic and revenue enhancing partners to their businesses, then DevOps may very well become the next Great Thing.

Is ITIL doing you any good?

I have been thinking lately about success and failure in the IT Department, particularly as it relates to ITIL.

Lots of IT organizations implemented ITSM systems built around the ITIL framework… but the magical results many dreamed of are nowhere to be found.  Folks are still struggling to find the promised productivity and availability gains, and they are frustrated with the additional overhead the new processes introduced.

IT Departments struggle to deliver required availability levels, which erodes the perceived value of the IT Department.  IT projects are completed late and over budget (if they are finished at all), only adding fuel to the fire.

So, that means “ITIL is dead” (or dying), right?

Not so fast….  I do not believe ITIL is the problem.  There are plenty of mature IT organizations making great use of ITIL for improving availability and productivity, as well as any other metrics their Businesses care about.  I have worked regularly with an organization that studies high performing IT shops, the IT Process Institute.  ITPI studies show that effective IT organizations all have in common that they use ITIL or similar processes to manage the flow of work through their teams.

If ITIL is not the problem, what is?

I talk to a lot of business and IT leaders, and if their organizations are struggling to make use of ITIL, the conversation inevitably includes these problems.


Skeptics

Do you remember “The Sixth Sense” and the famous phrase “I see dead people”?  Well, I see “dead people”: people who believe that ITIL cannot help them.  To these skeptics, the best that can be said of ITIL is that it is a bunch of meaningless busywork, time spent filling out paperwork instead of getting work done.  At its worst, the skeptic believes, ITIL is a way for management to show IT’s ineffectiveness and justify layoffs or the off-shoring of IT jobs.

Skeptics are so convinced of ITIL’s uselessness that they often actively sabotage the efforts of their organizations to improve by using ITIL.

Inappropriate and Ineffective Application of Processes

I cannot say I understand exactly how or why this has happened… but many people seem to believe ITIL is the complete, all-inclusive, prescriptive authority on how to manage IT.  When people see ITIL this way, it is very easy to create a web of ineffective processes.  IT workers can become so entangled in these processes that they accomplish less and less meaningful work.

This is easiest to describe by an example:  Not long ago, I helped a large IT organization with their ITIL implementation.  Their first attempt at Change Management got completely mired in bureaucracy.  A Change Request to any major system would get so bogged down that it could literally take six months to be approved.

For workers to get work done, they had to game the system.  To ensure they looked like they were following the process, people would write “token” Change Requests for trivial changes that would be likely to be approved with little scrutiny.  Then they would make the real (major) changes to critical systems without any formal plan or approval.

So, the result of this organization’s new Change Management process was that actual performance was reduced, because people skipped meaningful planning and coordination of changes to stay out of the red tape.  …not to mention, the skeptics were dancing around gloating about how right they were about ITIL’s uselessness.

The Solution

Believers instead of Skeptics

We need to find the “dead people” and bring them back to life.  If we have a vision for effective IT Management, we should be able to show skeptics how and why ITIL and other best practices will make them more productive and improve their quality of life.

Regretfully, when someone’s skepticism cannot be replaced by belief, they are toxic and must be removed from the organization so they do not get in the way of their teammates’ success.

Appropriate and Practical Processes

Implementing the right processes in the right order can yield profound results.  If one starts with the right processes, improvement should be nearly immediate.  It is also important to understand that every organization needs to create its own appropriate ITIL-based system for managing work.  You should judge the success of your ITIL implementation by measurably increased throughput with less human-caused downtime.  Anything less means your implementation (not ITIL) has come up short.

There’s also a new IT Management kid in town, the DevOps movement.  My assessment is that DevOps is another refinement of best practices, largely aimed at teams driven by the requirement to release applications from development to production rapidly and reliably.  DevOps, like Visible Ops, seems to be another practical approach to implementing a best practices framework.  Both systems seem very congruent with ITIL to me.

Where to learn more

If you are struggling to make sense of how to implement ITIL processes in an appropriate and Practical way, The Visible Ops Handbook is a great place to start, with its four practical and auditable steps for implementing ITIL.

If you want to learn more about DevOps, have a look at The Phoenix Project:  A Novel About IT, DevOps, and Helping Your Business Win.  I promise it will make you think about how work flows through your team in a way you never have before.