 |
Brian Shaw, DASN C4I/IO/Space, Director of Cyber Warfare asks: "How could a government system be more resilient to attack if hosted on a public cloud computing model vice a private one and what are the added vulnerabilities the government would need to consider?"
|
- Gregg (Skip) Bailey, Ph.D., Deloitte Consulting LLP
- Nicklous Combs, Chief Technology Officer, EMC Federal
- Ron Knode, Director, GSS, LEF Research Associate, CSC
- Rick McEachern, EVP of Business Development, LongJump
- Jeff Bergeron, Chief Technologist, U.S. Public Sector, HP
- Peter Coffee, Director of Platform Research, salesforce.com inc.
- Simon Crosby, CTO, Data Center and Cloud Division, Citrix
- Jim Young, DoD Manager, Google
- Teresa Carlson, Vice President, Microsoft Federal
- Emily Hawthorn, Principal Infosec Engineer/Scientist, MITRE
|
Gregg (Skip) Bailey, Ph.D.
Director
Deloitte Consulting LLP
The public versus private cloud approach will be debated for some time. A key to understanding this debate is to distinguish between vulnerability and resiliency. With any public option that we are aware of, there are some inherent vulnerabilities that must be addressed. These vulnerabilities are beyond the security measures that take place in a private cloud. There are at least two big differences between public and private cloud offerings with regard to vulnerabilities.
First, consider the number of external groups you are sharing space with and the level of trust you have in those groups. This is why the community cloud model is so popular: in a community cloud you gain some economies of scale, yet limit access to external groups with whom you have a working relationship and a level of trust. In a public cloud offering, the whole Internet sits in the transport portion of the cloud offering, and your provider's entire client base sits in the compute and storage part of the transaction. Both of these are areas of concern that need to be addressed. In addition, you have to be concerned about insider threats from the provider itself.
The second big difference in the vulnerability between public and private is that some of the security measures in your system are provided by someone else, namely the cloud provider. This means you have to trust your provider to do what they say they will in terms of security (although there are ways to verify it). You also have to understand what steps they will take to maintain security and make sure your security efforts mesh nicely with the provider's efforts. Recently, a number of cloud providers have been working to get their offerings Certified and Accredited (C&A) by the Federal Government. This effort should help alleviate some of this concern.
Interestingly, many of the situations that create extra risk in terms of vulnerability are the very things that can provide better resiliency. For example, many content management systems can distribute your content across the Internet, making a Denial of Service (DoS) attack much more difficult. When you have large systems that are virtualized, risk is reduced (if things are done right). In most public cloud offerings the scale is much greater than in private cloud offerings, and that scale can provide resiliency that would be difficult to achieve in a smaller private cloud. As a general rule (all other things being equal), as the size of the cloud increases, the vulnerability and the resiliency both increase; in this case, size is a function of how many players are involved in the cloud. It is our belief that the vulnerabilities will decrease in public offerings as the vendors become certified and accredited and improve their security profiles. We also believe that public resiliency will continue to increase as economies of scale continue to grow and effective practices are implemented.
For further information, please contact Gregg (Skip) Bailey at: gbailey@deloitte.com
Posted: May 12, 2010
|
Nicklous Combs
Chief Technology Officer
EMC Federal
Good question. Although a private cloud, if built correctly, will always provide a more resilient environment to attack than a public cloud, there are a few reasons that public clouds can be more resilient in practice. The main reason lies in that qualifier: "if built correctly." Public cloud providers will normally be subject matter experts in delivering resilient cloud solutions and can therefore provide high-availability environments at a great price point. As many organizations start to build private clouds, they may not have the expertise to build them correctly, and some will no doubt create environments that are less than adequate to meet their security needs. This is why choosing a partner with experience is critical in moving toward a cloud environment.
Another reason public clouds could be more resilient is that most public cloud environments will be limited in the types of services they can provide, thereby decreasing the attack surface of the IT environment. Private clouds will no doubt have to support a much wider range of services, which will make their attack surface much larger than that of public cloud offerings. As for added vulnerabilities, there are many that need to be considered – far too many to cover in this short blog. Customers that need to provide non-repudiation, data erasure, and cleansing procedures will be challenged when working with a vendor who provides multi-tenant environments through public clouds. Who is going to do the auditing in a public cloud? Do you really think companies like Coke and Pepsi will want to put their data in the same multi-tenant cloud controlled by someone else?
For more information: http://www.emc.com/?fromGlobalSiteSelect
Posted: May 14, 2010
|
Ron Knode
Director, GSS, LEF Research Associate
CSC
Fight Fire with Fire? ... in Clouds?
One of the most frequently used tools to fight forest fires is ... more fire! At first blush, this approach is counter-intuitive. But, the use of "back burns" to reduce the amount of flammable material and (ultimately) control the fire itself is a well-known and effective technique.
The irony of "fighting fire with fire" lies at the heart of this month's question. And, since the issue is equally relevant for both government and industry, let’s restate the question as, "Can we use cloud processing to help solve the security and availability problems normally aggravated by cloud processing?"
Once again, at first blush the answer would be "No." The security issues of cloud processing are well-advertised, and those issues continue to be the number one stumbling block to greater industry and government use of cloud processing of all types. The lack of transparency (especially in public clouds) is the root of most anxiety about cloud usage, and thus represents the biggest restraint on enterprise use of (public) cloud processing. Even if the contradiction of "fighting fire with fire" in the cloud can be successfully applied, this lack of transparency will still need to be overcome before industry and government are liberated to use (public) cloud processing for important mission functions.
Yet, there are some features of cloud processing that do suggest we can "fight cloud insecurity with cloud characteristics"! Consider, for example, the superior scalability, flexibility, adaptability, and redundancy of the public cloud. Then, imagine using those characteristics to deploy threat and vulnerability countermeasures in thousands of locations, many of which are (dynamically) placed closer to the threat source than any conventional static system. Such a dynamic operating characteristic would provide a new dimension for a classic "defense in depth" architecture, and could result in greatly improved resilience to attack. In particular, resistance to Distributed Denial-of-Service (DDoS) attacks could be enhanced with use of a public cloud. This very same architectural model for public cloud usage has already demonstrated its effectiveness against one of the largest DDoS attacks against the U.S. government. The use of Akamai's EdgePlatform (public cloud) prevented a huge DDoS attack in July 2009 from disrupting operations of protected locations.
No doubt, we could imagine other examples of how certain cloud characteristics can be used to improve some security capabilities in some circumstances. Certainly, protection against DDoS is one good example. But, not every security need can be improved by such an ironic application of the cloud. (After all, we don't save drowning people by pouring more water on them!) And, every use of a cloud brings with it the issues of lost transparency for the cloud consumer (e.g., configurations unseen, vulnerabilities unmeasured, accesses unreported, data and processing unanchored, ...).
So, the cloud can improve security in certain important ways. But, all fire, no matter how ironically used, is hot and dangerous. The cloud is no different.
See the full blog response at www.trustedcloudservices.com.
For further information, please contact Ron Knode at: rknode@csc.com
Posted: May 17, 2010
|
Rick McEachern
EVP of Business Development
LongJump
Besides being affordable, cloud computing offers the opportunity to run within multiple distributed and replicated systems. For example, through Amazon EC2 and other IaaS (Infrastructure-as-a-Service) offerings, your application servers are virtualized and replicated as you need them. This is likely the best approach to dealing with a physical attack because there is no physical box an attacker can locate. Would-be attackers are instead forced to focus on cyber warfare in the form of denial of service (DoS) and intrusion.
The primary DoS attack happens through overloading existing services. Through an IaaS, and in most hosting environments, when spikes happen the IaaS provider generally increases capacity “elastically.” You can then determine whether you need to block someone's access. Without an IaaS, monitors would indicate traffic increases and it is up to staff to respond by replicating servers to support the increased traffic or simply holding back access. Keep in mind that DoS attacks can happen in any system where public networks are involved, not just in the cloud.
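The monitor-then-decide loop described above can be sketched as a simple sliding-window traffic monitor. The thresholds, client names, and decision rules below are invented for illustration; a real IaaS deployment would rely on the provider's own auto-scaling and firewall controls rather than this kind of hand-rolled logic:

```python
from collections import defaultdict, deque

class TrafficMonitor:
    """Toy sliding-window monitor illustrating the elastic-vs-block decision.
    All thresholds are hypothetical, not taken from any real IaaS provider."""

    def __init__(self, window_secs=60, scale_up_total=1000, block_per_client=500):
        self.window = window_secs
        self.scale_up_total = scale_up_total      # aggregate rate -> add capacity
        self.block_per_client = block_per_client  # per-client rate -> likely abuse
        self.hits = defaultdict(deque)            # client -> recent request times

    def record(self, client, now):
        q = self.hits[client]
        q.append(now)
        # Drop requests that have fallen out of the sliding window.
        while q and q[0] <= now - self.window:
            q.popleft()

    def decide(self, now):
        total = sum(len(q) for q in self.hits.values())
        blocked = [c for c, q in self.hits.items() if len(q) > self.block_per_client]
        # Legitimate broad spike -> scale elastically; abusive client -> block it.
        action = "scale_up" if total > self.scale_up_total and not blocked else "steady"
        return action, blocked
```

The key distinction the sketch makes is the one in the paragraph above: a broad spike from many clients is absorbed by adding capacity, while a spike concentrated on one source is treated as a candidate for blocking.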
For an intrusion attack where the goal is to access, steal, modify or delete the data itself, then it is all about the software and the software service provider.
Organizationally speaking, all the core system administration best practices must be adhered to. Have we set up proper roles, password policies, and limited IP addresses – particularly for administrator access? Does stored data have the option of being encrypted? Are there features to monitor changes in the data or to maintain access logs? Have we limited views of sensitive data to only the select few? How secure are the APIs into the system? Does the system allow me to limit API calls?
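A minimal sketch of a few items on this checklist – deny-by-default role checks, an administrator IP allowlist, and salted password hashing – might look like the following. All role names, addresses, and limits are hypothetical, invented purely to illustrate the questions above:

```python
import hashlib
import os

# Hypothetical policy data illustrating the checklist; real systems would
# load this from the provider's identity and access management service.
ROLES = {"admin": {"read", "write", "configure"}, "analyst": {"read"}}
ADMIN_IP_ALLOWLIST = {"10.0.0.5"}   # limit administrator access by source IP

def authorize(user_role, action, source_ip):
    """Deny by default: the action must be granted to the role, and
    privileged actions must originate from an allow-listed address."""
    if action not in ROLES.get(user_role, set()):
        return False
    if action == "configure" and source_ip not in ADMIN_IP_ALLOWLIST:
        return False
    return True

def hash_password(password, salt=None):
    """Store salted PBKDF2 hashes rather than plaintext passwords."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest
```

Each function answers one of the questions in the checklist; encryption of stored data, audit logging, and API call limits would be additional layers in the same deny-by-default spirit.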
And the service provider must be geared to handle vulnerabilities from their internal systems into their clients’ systems. This means having an established set of policies in place and documented trust in their employees. In addition, the hosting facilities must be rock solid and trustworthy.
Even then, the one true option to circumvent many of these challenges is to consider working in a private cloud, which can offer the best of both elasticity and control. A growing number of PaaS (Platform-as-a-Service) vendors provide scalable, multi-tenant application platforms that work exactly like their public cloud offerings with the added flexibility to host all or some of the tenants in private environments, including behind the firewall or on an IaaS with a protected URL. PaaS within a private cloud can be an ideal fit for data-sensitive, mission-critical public sector computing, where you need to maintain public use of web-centric applications while maintaining controlled access to a secure backend. With a multi-tenant PaaS, government agencies can in a sense become cloud-based application vendors themselves, creating, managing, and charging back for web applications to their clients with complete control just like a SaaS provider.
For further information, please visit http://www.longjump.com
Posted: May 17, 2010
|
Jeff Bergeron
Chief Technologist, U.S. Public Sector
HP
Resilience is, at least in part, a function of the number of events necessary to take a system offline. A system housed on a single server instance in a single location on a single network – no matter how secure the facility – is susceptible to any number of events that could take that server offline. Conversely, a system that can be rapidly instantiated on virtual servers across many providers, physical locations, and networks is able to withstand events that would normally cause a system outage. In a distributed model, servers, facilities, and networks are highly resilient, and all cloud service providers would have to be compromised to fully disable the entire system.
The use of public cloud providers to increase resilience does introduce a number of potential new vulnerabilities, including the need to improve management and automation of provisioning in response to outage events or attacks, the protection of data and software spread across many locations that are not under the control of the government, the possibility of a Distributed Denial-of-Service (DDoS) attack due to an oversensitive migration response, and other attacks made possible when infrastructure control is delegated in a distributed fashion. The addition of smart, seamless cloud management services capable of sensing vulnerabilities within the cloud and dynamically reallocating resources based on these threats could provide additional safeguards. Cloud resilience could be obtained through the use of geographically dispersed cloud service providers operating in a secure inter-cloud service model for transparent service and data movement between trusted public cloud providers.
Posted: May 21, 2010
|
Peter Coffee
Director of Platform Research
salesforce.com inc.
There are compelling reasons for government IT to adopt the public cloud. The GSA has estimated that Web site upgrades formerly requiring six months are done in the cloud in a day. The U.S. Census Bureau used a cloud platform to achieve 12-week deployment of a system to manage its nationwide temporary labor force. The Family Service Agency of San Francisco estimates 50% reduction of administrative time, combined with improved outcomes tracking, thanks to cloud-based re-engineering of mental health case management.
Public clouds can readily handle sudden bursts of activity. The government must be the responder of first resort to massive but infrequent workloads associated with natural or man-made disaster, but it's wasteful to provision static resources – regardless of their mode of operation – that will be idle for most of their useful life. Public clouds provide scalable capacity on demand.
For all these reasons, agencies at every level of government are getting a green light from Washington D.C. to pursue cloud options. Casey Coleman, GSA's CIO, stated last year that "We will...work with industry to ensure cloud-based solutions are secure and compliant." Coleman's statement both acknowledges, and vows to address, the appropriate commitment of those who hold public trusts.
Public cloud providers must make the case for their reliability, security, and governability – but they are already doing this in financial services, health care, education, and any number of other domains in which cloud services already enjoy wide acceptance. Public clouds are forced to provide "sum of all fears" protection that addresses the demands of the most demanding customer – to the benefit of all customers.
The magnitude of threats to government IT will rise as agencies expand their use of public-facing Web sites to make services more available to citizens. Dean Turner, director of Symantec’s Global Intelligence Network, puts it simply when he says that "[Attackers] aren't breaking into your network. They don't have to. You are going to them." Governments will be targets for a growing range of increasingly sophisticated attacks – and these will arise, not only from the connected outside world, but also from within.
Every subscribing organization, therefore, will still need to assign privileges appropriately, audit actions effectively, and control access to information on its way in and out of the system. This is not a new problem arising from the cloud, but the agency using a public cloud can focus its resources on its own specific mission – while common concerns are addressed by the public cloud service provider, with attendant massive economies of scale.
The issues for discussion in the public cloud are not qualitatively different from those already encountered as governments continue their turn toward use of public networks and Web resources, for all the reasons discussed above. What's different in the enterprise public cloud is the far more affordable cost of providing the rigor, accountability and transparency that the market demands to meet the needs of serious customers – whether in the private sector, or in the pursuit of the people's business.
For a full-length version of this write-up, please see salesforce.com's CloudBlog
For further information, please contact Peter Coffee at: pcoffee@salesforce.com
Posted: May 23, 2010
|
Simon Crosby
CTO, Data Center and Cloud Division
Citrix
Picking a cloud is like picking pizza. Give it a try.
There is a misconception that public clouds are risky, but private clouds offer benefits with no downside. While there are legitimate concerns about the maturity of public clouds, where their service abstractions match application needs they can offer a superior service. For example:
- You can encrypt your data. Combined with secure access control and opaque object namespaces, you can make the likelihood of data leakage effectively zero. Fewer humans, simpler infrastructure services, and secure isolation at multiple layers can offer better security.
- By virtue of scale and geographical distribution, clouds can make applications available under conditions that would render your private cloud useless, including attacks and failures.
- Finally, because of their rich connectivity, they are far better placed to deliver applications to end users who are geographically dispersed.
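The first point – that encrypted data is opaque to the provider – can be illustrated with a one-time pad, the simplest cipher that is provably secure when the key is random, as long as the message, and never reused. This is a teaching sketch only; a production system would use an authenticated cipher such as AES-GCM from a vetted cryptography library:

```python
import os

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """One-time-pad XOR: secure provided the key is random, at least as
    long as the message, and used exactly once."""
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

# The key stays on-premise; only ciphertext ever reaches the public cloud.
key = os.urandom(32)
ciphertext = encrypt(b"payroll record", key)
assert ciphertext != b"payroll record"                 # opaque to the provider
assert decrypt(ciphertext, key) == b"payroll record"   # recoverable on-premise
```

The point of the sketch is the trust boundary: whoever holds the key controls access, regardless of where the ciphertext is stored.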
Consider three types of public clouds using an analogy of ordering pizza: "Margherita (with add-ons)", "Build your Own", and "Ready to eat". Compare each to the private cloud equivalent: build the oven, chop wood, light the fire, prepare dough and toppings and then create your dream pizza.
"Margherita (with add-ons)" public clouds
This kind of cloud offers a set of simple, powerful service abstractions that are easy to understand, with additional services that can be added à la carte. None of the services can be changed, but you can combine them. A good example is Amazon Web Services (AWS).
The chief drawback is the relative simplicity of the service abstractions, which limits their applicability. Moreover you can't install an IDS, firewall or router in your virtual network, and they lack audit, role-based access controls and other management abstractions. However, if the service model fits your app well, you benefit from lowest possible cost, and maximum scale. "Margherita" clouds are well suited to web apps, even if you retain the database tier in the enterprise, and they are increasingly useful for other enterprise apps.
"Build your Own" public clouds
These providers allow customers to build rich, isolated, private enterprise infrastructures that are verifiably secure. They are typified by hosters and carriers that add elastic resource consumption. Example: Carpathia, which serves many agencies of the Federal Government today.
Such clouds let you choose your core infrastructure, layering on top security, compliance testing, auditing, granular access controls and rich management capabilities. They are fully certified to host highly regulated apps/data and use hardened facilities, with redundancy, replication, and high availability. To this they add elastic compute, network and storage, with granular access control. You can also dynamically instantiate network capabilities such as IDS, firewalling, load balancing and PCI compliance. In summary, there are no enterprise apps that cannot run in this type of cloud, and all the benefits of cloud apply.
"Ready to Eat"
These are managed service providers who host their own services. Examples: hosted virtual desktops, Disaster Recovery, or managed email. An organization with specific competence in the service also runs it, combining economies of scale and automation with their service-specific operations skills to deliver a lower cost service with strong SLAs. Dedicated or elastic services are available, making these useful wherever it makes sense to outsource a traditional enterprise function.
For more information, please see Simon's blog.
Posted: May 27, 2010
|
Jim Young
DoD Manager
Google
Thank you, Brian, for your question; it raises many issues and questions that should be addressed by providers.
Skilled administrators can run Internet-based services in a highly controlled traditional environment in which certain security controls are assumed, but flexibility and innovation on the system are likely to be negatively impacted. Organizations responsible for Internet-facing networks can offer much more flexible services that dynamically scale more elastically, but they also must be particularly vigilant about ensuring security because the networks are exposed beyond the specific organization.
Even so, cloud computing can in many cases be as secure, if not more secure, than traditional on premise environments. To understand how, it's important to consider how most agencies and departments, and their respective networks, function today. Often, those that run client applications and have to manage a large heterogeneous environment encompassing different operating systems, platforms, and devices running multiple versions of applications, deal constantly with security patching issues and challenges that sometimes disrupt operations in the process. It is this variation that introduces complexity, increases attack avenues, opens larger windows of exposure, and leads to more security vulnerabilities in traditional networks.
NIST has done an excellent job of establishing cloud computing definitions so that we have a common vocabulary, and it has identified key security areas that need to be discussed with any provider. To help simplify matters for organizations like yours, the Federal Risk and Authorization Management Program (FedRAMP) is a unified, government-wide risk management program focused on large outsourced and multi-agency systems. The program will initially focus on cloud computing but will expand to other domains as it matures. FedRAMP provides security authorizations and continuous monitoring of shared systems that agencies can leverage both to reduce their security compliance burden and to obtain highly effective security services.
FISMA security controls and follow-on, continuous monitoring requirements can be directly applied to public cloud computing models often more effectively than for current on premise systems. In addition, providers can offer community clouds that only host US government data. For more information on this, see http://googleenterprise.blogspot.com/2009/09/google-apps-and-government.html
Posted: May 28, 2010
|
Teresa Carlson
Vice President
Microsoft Federal
The great part about cloud computing is that government organizations have choice. Some data makes sense in the cloud and some data may not. It's not an all or nothing discussion. Security and privacy are rightly the top concerns for most government leaders, and some are far more comfortable housing sensitive information on-premise. That's OK. Agencies should move to the cloud as they're ready, and when they do, they have both public and private options to choose from.
In terms of resiliency, public clouds are extremely robust. They offer scale in terms of underlying architecture that private cloud infrastructures typically don’t provide. The ability to move traffic and data throughout large (or multiple) data centers is a major advantage when it comes to performance, but it's also an advantage in terms of resiliency to attack. Public cloud providers understand the inner workings of large data center environments – what data should be flowing, how it should be flowing, and when something doesn’t seem quite right. This experience and knowledge enables public cloud administrators to spot anomalies quickly, and take action against a threat when it's required. Large public cloud offerings also allow applications and data to be replicated and stored in other data center locations, ensuring critical information isn’t lost during a targeted attack.
The risk lies less in the public vs. private argument than in the fact that it's a whole new approach to computing. Virtualized computing requires a different mindset, and it brings the potential for a whole new class of vulnerabilities. As cloud adoption increases, additional threats may emerge, which is why government agencies need to be thoughtful about where their data is hosted. Hosting data on-premise doesn't necessarily guarantee that it's more secure, which is why rigorous security methodologies need to be implemented at the outset of development in any cloud environment. Adhering to the best IT security standards that exist today - like ISO 27001, FISMA, SAS 70 Type 1 and HIPAA - not only ensures the highest levels of data protection, but also increases transparency by providing government leaders with specifics on how data is being protected.
Cloud is a major paradigm shift that government leaders are still wrapping their heads around. Data centers aren't just a room with a bunch of servers anymore. Computing has become a utility - a virtualized infrastructure that scales in accordance with need. The balance lies in maximizing these incredible efficiencies with robust privacy and security controls in place.
For more information, please see Teresa Carlson's FutureFed blog at: http://blogs.msdn.com/USPUBLICSector/
Posted: June 14, 2010
|
Emily Hawthorn
Principal Infosec Engineer/Scientist
MITRE
Thanks to our respondents for their very thoughtful remarks!
The European Network and Information Security Agency (ENISA) lists resiliency as a market differentiator that will drive cloud service providers. ENISA states, "Security is a priority concern for many cloud customers; many of them will make buying choices on the basis of the reputation for confidentiality, integrity and resilience of, and the security services offered by, a provider." Public clouds offer the potential of resiliency through a number of means, including the transparent use of multiple physical sites, redundant networking paths, and the automation of many administration tasks to back up data across physical boundaries. Private clouds can be engineered to provide the same benefits, though rather than leveraging the potentially large capital investment of a multi-tenant service provider, the private cloud provider must fund, secure, and manage the capabilities internally.
The following use cases illustrate the interplay of resiliency and vulnerability in cloud computing:
First, consider moving user desktops to a cloud service: when users log into the network, they get a virtual desktop with the expected look and feel, but a fresh virtual machine image can be downloaded each time the user logs in. The first benefit is that no matter what malware a user encounters, it is flushed from the virtual desktop when the user logs out; the next time the user logs in, they get another clean machine image. Additionally, when patches are required, the "gold disk image" need only be patched once, and the new image is delivered consistently at each user login.
For a second use case, please consider moving an enterprise service such as email into the cloud. The user's email client connects to one of several email servers that may be hosted in a virtual environment. If one virtual machine (VM) or server fails, the user session can automatically migrate to another VM. As defined by SLAs, service to the user could continue uninterrupted, as the provider manages the resiliency of their offering – including security, configuration, performance, and patching.
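That failover behavior can be sketched as a client that simply tries replicated endpoints in order. The server names and the connect function below are simulated stand-ins, not a real mail protocol:

```python
# Simulated endpoints: in a real deployment these would be replicated
# mail servers behind the provider's load balancer.
SERVERS = ["vm-east", "vm-west", "vm-central"]
FAILED = {"vm-east"}  # pretend one VM has gone down

def connect(server):
    """Stand-in for a real network connection attempt."""
    if server in FAILED:
        raise ConnectionError(f"{server} unreachable")
    return f"session on {server}"

def connect_with_failover(servers):
    """Try each replica in turn; the user session lands on the first
    healthy one, so a single VM failure never interrupts service."""
    last_error = None
    for server in servers:
        try:
            return connect(server)
        except ConnectionError as err:
            last_error = err
    raise RuntimeError("all replicas down") from last_error

print(connect_with_failover(SERVERS))  # session migrates past the failed VM
```

In practice the provider performs this migration transparently behind its load balancer, which is exactly why the SLA, not the user, carries the burden of resiliency.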
Potential vulnerabilities of public cloud services can come in many forms. For example, multi-tenancy brings with it the risks of attack from within the infrastructure, by another customer of the same service, and the virtualization mechanisms can also expose an attack surface. However, as recently stated on the Navy CIO's website, "There is […] some good news. Since most of the underlying building blocks (e.g., servers, network and storage devices, and software — operating systems and applications) of cloud computing remain the same as those used in traditional information technology systems, much of the existing security policies, practices and solutions can be readily repurposed to fit the new cloud computing paradigm." Still, given the number of newly initiated and planned projects employing cloud computing, many organizations (e.g., University of California, San Diego, Massachusetts Institute of Technology) are researching new attack vectors for a better understanding of cloud implementation vulnerabilities and mitigations.
For further information, please contact Emily Hawthorn at: cloudbloggers-list@lists.mitre.org
Posted: May 28, 2010
|
|
|
Welcome
"Ahead in the Clouds" is a public forum to provide federal government agencies with meaningful answers to common cloud computing questions, drawing from leading thinkers in the field. Each month we pose a new question, then post both summary and detailed responses.