The "Ahead in the Clouds" forum ran from January 2010 through January 2011. This material is being offered in archive format, and no updates are planned.
The Office of Management and Budget’s 25 point plan describes a “cloud first” policy for the Federal Government. Is the approach described in Part I, Achieving Operational Efficiency, sufficient to deliver more value to the American taxpayer? What are the strengths or gaps in the plan regarding the use of cloud computing and what types of capabilities should be moved to the cloud first (e.g., within the first 12 months)?
|
- Harry J Foxwell, PhD, Principal Consultant, Oracle Public Sector
- Ron Knode, Director, GSS, LEF Research Associate, CSC
- Peter Coffee, Head of Platform Research, salesforce.com inc.
- Kevin Paschuck, VP, Public Sector, RightNow
|
Harry J Foxwell, PhD
Principal Consultant
Oracle Public Sector
US CIO Vivek Kundra's "25 Point Implementation Plan to Reform Federal Information Technology Management" is ambitious in its scope and timeline. Although cloud technologies are maturing rapidly, understanding of the benefits, risks, and costs of this approach to IT is evolving slowly. Clearly there are cost-saving efficiencies already being delivered through data center consolidation, virtualization, and massively parallel, energy-efficient, multi-core servers and integrated systems. Further exploiting these technologies to fully implement the NIST model of public and private cloud infrastructures will require not only significant technology changes but acquisition and management policy changes as well. The 25-point plan's focus on identifying and developing government expertise and on building industry partnerships is an essential first step.
Efforts are currently underway within multiple government agencies to fulfill some of the 25-point goals related to "commodity IT services," such as government-wide "cloud email." While even this project is a major undertaking, it is among the most feasible of the many candidates for initial cloud deployment. Converting special-purpose and highly customized agency software to the cloud model will be much harder and will take significantly more time.
Cloud computing as a technology delivery model and as a business model does have the potential to provide significant cost savings and taxpayer value. But, as attractive as this may seem, its benefits should not be oversold nor its costs and risks underestimated. Additionally, although the "cloud first" policy is well intentioned, other data center consolidation technologies should not be overlooked.
For more information: http://blogs.oracle.com/drcloud/
Posted: January 28, 2011
|
Ron Knode
Director, GSS, LEF Research Associate
CSC
New Wine in Old Wineskins?
The "cloud first" policy declaration in OMB's 25-point plan of 9 December 2010 is aggressive thinking and terrific branding. The triple-play promises of economy, flexibility, and speed are precisely the kind of IT payoffs that any enterprise would want.
However, these promises are themselves based on another promise in the same plan, i.e., the promise of a cloud strategy that can deliver safe and secure cloud adoption across the U.S. government. While there is much to like about the ambitious vision and the no-nonsense "let's get going now" message for cloud processing in the plan, real success hinges on making the underlying promise of a practical cloud strategy come true. That promise is the more difficult one. It must respond not only to the needs and realities expressed by (government) cloud consumers, but also to the needs and realities of cloud service providers who can actually deliver these payoffs. Only when both constituencies are accommodated in strategy and mechanics can we move from a hit or miss "Ready, Fire, Aim" process to a reliable "Ready, Aim, Fire" process for cloud adoption and payoff.
And, there's the rub. According to the OMB plan, the promise of a practical cloud strategy is rooted in the development of standards for cloud service security, interoperability, and portability. The initial public draft of the Proposed Security Assessment & Authorization for U.S. Government Cloud Computing took a healthy first swing at such standards, but it does not yet tend to the needs of all the constituencies involved. Continuing ambiguity about overall risk governance and accountability, a monitoring framework that excludes the cloud consumer, and a complicated scheme for trying to shape Special Publication (SP) 800-53 for cloud services all present high hurdles to overcome.
One cannot help but wonder if the biblical admonition against "pouring new wine into old wineskins" must be observed here. Trying to bend the conventional machinery for C&A into a community process for "A&A" without clarifying who is accountable for risk acceptance in cloud services only slows cloud adoption. The attempt to fashion existing SP 800-53 controls into a set of requirements suitable for cloud processing is laudable, but it does not suit the consumption model of the cloud. In other words, the old wineskins of traditional C&A models and SP 800-53 cannot yet handle the new wine of cloud processing.
Until we fulfill the promises made in the OMB plan, we will be constrained to applications that satisfy the compensating techniques first introduced in "Digital Trust in the Cloud" and subsequently amplified in other places. We can gain some benefit from "safe" applications like non-sensitive email, development and test, backup and restore, and even a bit of collaboration and social networking. But, such applications do not deliver the kinds of payoff we need and expect from cloud processing.
In his earlier blog on this matter, Chris Hoff declared "we're gonna need a bigger boat." Simply enlarging the vessel may not be enough. The biblical warning declares that "both the wine and the skins will be ruined" if we try to pour new wine into old wineskins. The new wine of cloud processing may well need completely new wineskins (standards and practices) for us to enjoy the rich bouquet of enterprise payoffs.
See the full blog response at http://www.csc.com/cloud/blog.
For further information, please contact Ron Knode at: rknode@csc.com
Posted: February 1, 2011
|
Peter Coffee
Head of Platform Research
salesforce.com inc.
When we answer the call for greater operational efficiency in IT operations, we should heed the warning ascribed to Peter Drucker: "There is nothing so useless as doing efficiently that which should not be done at all." Improved execution of current task portfolios is not enough: we should further strive to eliminate, or at a minimum delegate, any activity that does not directly contribute to mission performance. Tens of thousands of organizations use massively scalable multi-tenant services ("public clouds") to pursue that course successfully today.
U.S. CIO Vivek Kundra quickens the pace with his vigorous mandate to consolidate at least 800 data centers by 2015. This goal has the crucial merits of being countable, achievable, and uncomfortable. It will not be achieved merely by picking the low-hanging fruit of redundant or obsolete systems that are readily and painlessly retired as soon as someone decides to do so. Meeting Kundra's challenge will require fresh thinking about who performs what functions, and who needs to own what capabilities – but it will not require lowering our standards for what constitutes satisfactory performance.
Indeed, the National Institute of Standards and Technology has urged us all to treat the move to the cloud as an opportunity for substantial improvements in IT reliability and governance. In its newly released draft document, "Guidelines on Security and Privacy in Public Cloud Computing," NIST correctly asserts that:
Potential areas of improvement where organizations may derive security benefits from transitioning to a public cloud computing environment include the following:
- Staff Specialization: Opportunity for staff to specialize in security, privacy, and other areas
- Platform Strength: Greater uniformity and homogeneity facilitate platform hardening and enable better automation of security management
- Resource Availability: Redundancy and disaster recovery capabilities are built into cloud computing environments
- Backup and Recovery: Data maintained within a cloud can be more available, faster to restore, and more reliable
- Mobile Endpoints: Clients are generally lightweight computationally and easily supported
- Data Concentration: Less of a risk than having data dispersed on portable computers or removable media
This list can aid us in choosing our targets for rapid cloud adoption. We should look for tasks requiring maximum speed and flexibility in deployment to mobile personnel or to frequently relocated sites. We should look for tasks requiring access to large collections of data, but using focused subsets of that data in typical situations. We should look for tasks requiring precise grants of privilege, and rigorous accountability for who has done what with sensitive information. All of these are criteria for which the cloud does not merely meet expectations, but rather elevates the standard of practice, as widely demonstrated by enterprise customers today.
CIO Kundra's challenge comes at a time when technical transformation coincides with cultural readiness to consider dramatic change. Tightening resource constraints, combined with broad and growing public adoption of cloud services in both workplace and personal activities, create a powerful push-pull incentive to act – and a basis for confidence in the outcome.
For further information, please contact Peter Coffee at pcoffee@salesforce.com or see his blog at http://cloudblog.salesforce.com/
Posted: January 4, 2011
|
Kevin Paschuck
VP, Public Sector
RightNow
'Cloud First'—An Important Move in the Right Direction
Federal CIO Vivek Kundra's 25-Point Implementation Plan to Reform Federal IT Management is an important move in the right direction. With cloud technology positioned prominently at the center of the initiative, we are beginning to see a real shift toward recognizing the major benefits, including significant cost savings and decreased implementation times, that government can realize from cloud-based solutions.
The plan outlines a 'Cloud First' policy, which mandates that each agency identify, within three months, three 'must move' IT services and move one of those services to the cloud within 12 months. The remaining services should transfer to the cloud within the next 18 months.
Additionally, approval is reserved for major IT programs that utilize a modular approach, with new customer-facing functionality provided every 6 months.
This is an important component and also addresses President Obama's Memorandum on Transparency and Open Government, issued on January 21, 2009. In this memo, the President outlined the Administration's commitment to creating an unprecedented level of openness in Government and instructed the heads of executive departments and agencies to work together to ensure the public trust and establish a system of transparency, public participation, and collaboration. Cloud technology can help federal agencies comply with this mandate.
To deliver on the promise of open government and the plan to reform federal IT, agencies must identify services to transfer to the cloud. Specifically, Web self-service applications and pilot programs are a good starting point for identifying the best solutions for specific agency needs.
Coupling cloud solutions with Web self-service applications is an effective means to simultaneously improve constituent services and reduce overhead costs. With Web self-service, constituents can find information that they need on an agency website quickly, without having to contact a live person. Additionally, the cloud provides Federal agencies with several benefits:
- Lower total cost of ownership
- Benefits from frequent solution innovation
- Increased reliability
- Speedy, measurable results on open government initiatives
Whether in the public or the private sector, identifying the appropriate IT solutions can be a daunting task. For this reason, working with vendors that provide pilot programs is a critical component in the decision making process. One of the unique things about cloud computing is the ability to test the solution first—before signing a contract. Identifying proof points and results up front, prior to making a large investment, is critical to ensuring success.
Cloud solutions provide the scalability that government agencies require to meet constituent needs—eliminating worries about digital capacity limitations. By transitioning to the cloud, agencies tap into an infrastructure that is as flexible as their needs are varied. Undoubtedly, these are some of the primary reasons why cloud is positioned as the cornerstone of the Administration's plan.
Posted: February 10, 2011
|
Given the rapid expansion of mobile computing devices such as tablets and smart phones, how do you see cloud computing technology enabling capabilities, such as location independent access for users, on these devices? Please identify the best uses for this technology and approaches for the government, taking into consideration security and privacy concerns.
|
- Srinivas Krishnamurti, Senior Director, Mobile Solutions, CTO Office, VMware
- Peter Coffee, Head of Platform Research, salesforce.com inc.
- Gregg (Skip) Bailey, Ph.D., Director, Deloitte Consulting LLP
- Seth Landsman, Ph.D., Lead Software Systems Engineer, MITRE
|
Srinivas Krishnamurti
Senior Director, Mobile Solutions, CTO Office
VMware
Enterprises have traditionally favored homogeneity since it enabled them to easily manage a huge fleet of devices deployed to their users. I'll call this stage Client Computing 1.0. This meant that enterprises typically standardized on as many aspects of their client strategy as possible, including the hardware, OS, applications, application development frameworks, and management frameworks. The management paradigm centered on managing the device and its contents. Unfortunately, the homogeneity that enterprises crave is slowly but surely disappearing.
Enter Client Computing 2.0 where almost every single vector in client computing is changing. Gone are the days of enterprises building Windows applications in Visual Basic or .Net. Many enterprises are embracing web applications, either hosted internally or in a SaaS delivery model. This trend is expected to continue at the expense of local thick-client applications. However, for the foreseeable future, Windows applications will coexist with web applications in many enterprises, so the underlying infrastructure needs to be flexible enough to support both traditional as well as emerging devices.
On the mobile phone, application use is changing dramatically. In the past, corporate mobile phones were synonymous with accessing email and calendar anytime, anywhere. While this paradigm led to productivity increases, enterprises are realizing that more applications could, and must, be mobilized to give employees the freedom to realize the true potential of mobile devices. Employees are buying cool and capable PCs, phones and other emerging devices for personal use and actually preferring to use them instead of the corporate-issued device. Consequently, Macs, iPhones, Android phones and iPads are now entering the enterprise and the demand to support them is increasing.
As multiple devices enter the enterprise, each one brings yet another operating system with it. In the Client Computing 1.0 era, most employees were allocated just one device - either a desktop or a laptop. Today many employees are also given a mobile phone, and tomorrow employees may also carry a tablet. The expectation is that they will use these devices interchangeably to access the applications they need at any given point in the day. They may use the desktop at work, the cell phone during long, boring meetings, and the laptop or tablet during the train ride to and from work.
Consequently, CIOs are faced with managing a very complex and heterogeneous environment that includes different device types, operating systems, processor architectures, sizes and shapes. As if that matrix were not complicated enough, there is the additional dimension of device ownership - corporate-owned vs. employee-owned. The one thing that has not changed is the need to comply with various government requirements (FISMA, SOX, HIPAA, etc.). The old paradigms of managing the device just do not work in such a diverse environment. Going forward, solutions must offer Application Access - providing the right set of applications to users irrespective of the device; Security - securing enterprise applications, services and data; and Scalability - heterogeneous management of many different endpoints.
The challenges are many, but so is the potential for gains in efficiency. The opportunity lies in merging the security and reliability of the 1.0 generation with the flexibility, mobility and choice of the 2.0 generation. You can read more about VMware's approach to these emerging devices at my blog.
Posted: December 14, 2010
|
Peter Coffee
Head of Platform Research
salesforce.com inc.
As 2010 draws to a close, it's being suggested that the "Wintel axis" of the 1990s and 2000s (Windows operating system, Intel processors) is being overtaken by the "Quadroid" alliance (Qualcomm chip sets in Android-based devices: smartphones, tablets and other form factors yet to emerge). The dominant means of information delivery and of business and personal interaction will be, not the thick-client desktop or laptop PC, but a panoply of network-edge devices that rely primarily on the cloud to store data and run applications -- as dramatized by a Google video showing repeated (and perhaps improbably catastrophic) destruction of a series of Chrome OS netbooks, with no impact on a user's work in progress.
Going into 2011, it's clear that two of the most important use cases in public-sector computing are ideally served by the strengths of the cloud. First, there are tasks that fall to the government requiring rapid deployment in response to legislative initiatives. Economic stimulus programs, for example in the area of health information technology, have demanded implementation schedules far more rapid than those traditionally associated with massive government programs -- and the cloud has delivered. Second, there are scenarios such as disaster relief in which the government must be among (or even in charge of) first responders, with unpredictable but absolutely urgent timing of service delivery into inconvenient locations -- and again, the cloud has delivered, with multi-agency coordination in the field and with peak-load capacity for financial and logistic support in situations such as the devastating earthquake in Haiti.
Regardless of urgency, whether driven by man or by nature, there can be no slackening of attention to security or robustness in such situations: indeed, it is unfortunately true that low barriers to entry into the cloud have facilitated fraud as well as enabling real aid. Fortunately, the apparatus of the cloud is increasingly being recognized as enabling rigorous security practices, and affording access to top-tier security and governance tools, that once were beyond the reach of resource-starved agencies in public-sector and non-profit domains.
With recent high-profile commitments to a "cloud first" strategy in the U.S. federal sector, we may hope that 2011 will bring increasingly confident use of the power of the cloud to serve the public interest.
For further information, please contact Peter Coffee at pcoffee@salesforce.com or see his blog at http://cloudblog.salesforce.com/
Posted: December 17, 2010
|
Gregg (Skip) Bailey, Ph.D.
Director
Deloitte Consulting LLP
Mobile Computing and the Cloud
Mobile computing and the Cloud can form the perfect storm for revolutionary change in the way that agencies do business. Individually, each of these technology advancements shows great potential, but together they create possibilities for tremendous opportunity. The benefits are limited only by what we can imagine.
Now, we don't want to get ahead of ourselves and must recognize that there are important and significant problems to be solved. Privacy and security are certainly among them; security is at the top of everyone's list for both of these technologies. But in one way, using the Cloud can help a mobile workforce with its security profile. For example, sensitive information can be stored in the Cloud rather than on the device, reducing the risk of data loss. Such an approach adds protection even if the mobile device has a remote "kill pill" capability, because there may be a time lag before one discovers that a mobile device is missing.
Think of the ability to have any information you need, at any time, in any place. Examples of the potential start with the more mundane, like being able to read reports on the go in a secure, ubiquitous and fast manner, or having your favorite music list follow you around as you change devices from home to the car to your office.
At the other end of the spectrum are examples of law enforcement situational awareness. It is possible to provide details that can help tactical operations come to a safe and successful conclusion. The stuff you see on the television series 24 is beginning to be possible. We can now use smart phones to collect and/or show surveillance video in a very unobtrusive way. Such uses do not require specialized equipment, but can take advantage of relatively inexpensive off-the-shelf mobile devices.
Think of the situation where an undercover operation is taking place and the undercover operative is dealing with some unknown people. Video or still photographs can be taken unobtrusively and this information can be sent to the office for analysis against a known database, or even better, can be used with crawlers to check social media sites in order to identify the individuals. This gathered information can then be sent back to the operative. Again, the device in the hands of the operative is just an off-the-shelf mobile device.
Mobility and the Cloud go nicely together. They can very much complement each other and extend each other's capabilities. Many of the examples I have described could be achieved without the Cloud or off-the-shelf mobile devices; however, these two technologies make the possibilities even greater and should be explored.
For further information, please contact Gregg (Skip) Bailey at: gbailey@deloitte.com
Posted: December 22, 2010
|
Seth Landsman, Ph.D.
Lead Software Systems Engineer
MITRE
As mobile networks approach desktop-quality network speeds, there will be a compelling argument for a marriage between cloud computing and mobile computing. Cloud computing can be an enabler for mobile devices to have access to vast amounts of storage, processing capacity, and information when and where needed, enabling the truly mobile, always-available promise for government users, as well as citizen consumers of government services.
Cloud computing is not new to mobile. Aspects of it have existed in the mobile world for many years, to delegate processing or storage to backend servers that are more capable than the mobile device. As an example, Research in Motion (RIM), makers of the Blackberry® series of devices, delegates the management and control of devices to data center servers. Opera Software, the web browser company, provides a similar delegation example for web requests.
The integration of new cloud-based mobile capabilities will explode in the next several years. As mobile devices – both smart phones and tablets – replace the laptop as the tool of choice, the need for government to leverage the cloud for applications, storage and processing is going to become essential. As an example, Google Apps and Microsoft Office® 365 (Beta) provide cloud-based office applications that can be accessed from a browser over a network. But this is just a start. "In the past, corporate mobile phones were synonymous with accessing email and calendar anytime, anywhere," writes Srinivas Krishnamurti. He adds, "While this paradigm led to productivity increases, enterprises are realizing that more applications could, and must, be mobilized to give employees the freedom to realize the true potential of mobile devices."
Mobile applications can enable significant operational and business process advantages if the new capabilities are aligned with the goals of the organization and mission assurance is considered. Before government can fully realize the value of cloud-based mobile computing, network availability, service reliability and security are non-trivial factors that must be addressed. As Peter Coffee cautions, "There can be no slackening of attention to security or robustness…" Further, as these devices are expected to be more capable, their complexity will continue to increase. The reliability and availability of the mobile platform, coupled with the network and cloud services, will determine the ability of citizen or government users to interact with government systems, especially when the need is time critical. If the network, device, or cloud service is unavailable, due to lack of security, robustness, or other factors, the user may not be able to access essential capabilities. Given these risks, Federal IT leadership should consider all of these implications in planning their enterprise, and develop an appropriate IT architecture, policies and procedures to mitigate these factors, depending on the intended use of these technologies.
By the end of 2011, it is projected that one in two Americans will have a smartphone. As these devices become pervasive and more applications become embedded within the cloud, more government business and more capabilities will be accomplished through the combination of these technologies. As such, cloud computing and mobile technologies, coupled with appropriate risk mitigations, will provide government and citizen users with more capability and increased mobility and, ultimately, enable them to leverage government services more effectively, regardless of where they are and what means of access they may have at their disposal.
Posted: January 7, 2011
|
For Federal IT leaders considering building a business case for a cloud computing investment, please identify the general cost categories/drivers to include in a business case, and if possible, suggestions on approaches for attributing value to new cloud features.
|
- Douglas Bourgeois, VP, Federal Chief Cloud Executive, VMware
- Nathanial Rushfinn, VP, Certified Enterprise Architect, CA Technologies
- Peter Coffee, Head of Platform Research, salesforce.com inc.
- Teresa Carlson, Vice President, Microsoft Federal
- David Mihalchik, Jim Young, Google
- Larry Pizette, Principal Software Systems Engineer, MITRE
|
Douglas Bourgeois
VP, Federal Chief Cloud Executive
VMware
This is a really good question because it considers the overall value of the cloud beyond simply cost efficiency – which is an important part of the value equation. As most are now aware, virtualization has become widely accepted as a key enabler for cloud computing. Infrastructure virtualization provides a significant means of achieving cost efficiency through increased asset utilization, so the key driver there is the consolidation ratio. In my experience, another key driver of the business case is VM density: as you know, not all servers are created equal, and it follows that not all virtualized servers are created equal either. From a financial modeling perspective, VM density can be a major variable in a cloud cost model. The license cost of the software included within the cloud service offering can be another major driver. Some software products are more affordable than others, and some software licensing models are more compatible with cloud computing than others. These structures can make it very difficult to get started in the cloud, especially if software acquisition costs are allocated over a small, initial cloud customer base. In effect, the cloud economies of scale can work against you until sufficient scale is achieved.
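To make these drivers concrete, here is a minimal sketch in Python, with entirely hypothetical figures (not any vendor's or agency's actual model), of how VM density amortizes hardware cost and how license allocation burdens a small initial customer base:

    # Hypothetical per-VM cost model: hardware is amortized across the VMs
    # hosted on a server (VM density), while the software license cost is
    # spread over the current customer base. All numbers are placeholders.

    def monthly_cost_per_vm(server_monthly_cost, vm_density,
                            license_monthly_cost, active_customers):
        hardware_share = server_monthly_cost / vm_density
        license_share = license_monthly_cost / max(active_customers, 1)
        return hardware_share + license_share

    # Early on, a handful of customers carry the entire license cost...
    print(monthly_cost_per_vm(900.0, 20, 10_000.0, 5))    # prints 2045.0
    # ...but the same offering becomes far cheaper once adoption scales.
    print(monthly_cost_per_vm(900.0, 20, 10_000.0, 500))  # prints 65.0

The point of the sketch is simply that VM density and license allocation dominate the early economics, which is why economies of scale can work against you at first.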
The broader question of value to be derived from the cloud, of course, includes cost efficiency but does not stop there. In addition, the cloud service offering should be carefully considered and specifically selected to provide the most value to end users. One of the challenges is that the services of most value to an organization will vary depending upon the mission and capabilities of that organization. The best practice is to identify those widely utilized and common services that would be good candidates for migration to a cloud model and would therefore draw high usage throughout the organization. This widespread potential for adoption will accelerate the efficiencies as usage increases. The final "piece" of the broader value proposition from the cloud can be associated with service levels - perhaps the most important of these is speed. One of the key features behind end user interest in the cloud is customer self-service. This capability, in itself, is not the appealing factor. Rather, it is the underlying use of standards and technology to automate the processes for service provisioning that is appealing. The considerable potential to radically reduce cycle times for the provisioning of services is a major component of the overall cloud value proposition.
Since cost efficiency is the easiest to measure and budgets are tight and getting tighter, considerable attention is given to this key driver. Don't lose sight of the fact that there is also considerable value to be derived from the selection of cloud services as well as the speed with which cloud services are delivered. This latter component – speed – is the one that will "wow" your end users the most and perhaps have the biggest impact on changing the perception of the IT organization on its journey to becoming a service provider.
You can read more at http://www.vmware.com/cloud.
Posted: October 14, 2010
|
Nathanial Rushfinn
Certified Enterprise Architect
CA Technologies
The promises of cloud computing can be nebulous. To build a business case, federal IT leaders need to balance costs of new capital expenditures with reduced operating expenses. They must also be able to measure the success of cloud computing from the viewpoint of the customer.
To realize the benefits of cloud computing, the cost of capital expenditures should be offset by reduced operating expenditures over time. Cost categories for capital expenses should include all of the hardware, software and installation costs to implement new cloud technologies.
Cloud computing will drive the adoption of open source software, reducing costs for operating systems, software development stacks, and applications like Appistry and CA 3Tera AppLogic. IT leaders should also carefully track capital expenditures for systems integration costs related to installation, configuration, and training.
The best way to track operating expenses is to use project portfolio management (PPM) software and track all expenses as services. Projects should be clearly defined so that cost codes can be assigned and broken out by specific tasks in the work breakdown structure (WBS). Labor costs must be tracked for both employees and contractors and broken out for each FTE (full-time equivalent). Operating expense categories should be tracked by service and should include time-to-deliver, support costs, infrastructure, and electricity. While some operating expenses like electricity can be tracked against specific servers, many expenses like HVAC and floor space will have to be calculated.
When building a business case for cloud computing, it is especially important to quantify success from the customer's perspective. A short survey taking no more than two minutes can accomplish this. For example, a customer might be asked to rate a statement such as "I find it much easier to order IT services through the new self-service cloud computing portal" using a five-point Likert scale consisting of strongly agree, agree, neutral, disagree, and strongly disagree. Results can be interpreted with confidence when response rates reach 50% or more. Follow-up reminders and incentives, such as a random drawing for a gift certificate, are good ways to increase response rates. Sample categories to include in any survey on cloud computing are: reduction in time-to-deliver services; ease of ordering; improved confidence in IT; and reliability of delivered services.
There are many drivers for implementing cloud computing, and while initiatives or mandates do not require an ROI, a business case does. By clearly defining cost categories for both capital and operating expenses, and by using well-defined customer surveys, federal IT leaders can estimate the success of cloud computing projects from both an ROI perspective and from a customer's vantage point.
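As a minimal illustration of this capital-versus-operating tradeoff, the following Python sketch (with made-up figures, not drawn from any agency's actuals) reduces a first-pass business case to payback period and simple ROI:

    # Hypothetical first-pass business case: an upfront capital expense is
    # recovered through monthly operating savings. Figures are illustrative.

    def payback_months(capex, monthly_opex_savings):
        # Months until cumulative operating savings cover the investment.
        if monthly_opex_savings <= 0:
            raise ValueError("no positive savings; the case fails on cost alone")
        return capex / monthly_opex_savings

    def simple_roi(capex, monthly_opex_savings, horizon_months):
        # Net return over the horizon, as a fraction of the investment.
        return (monthly_opex_savings * horizon_months - capex) / capex

    print(payback_months(240_000, 10_000))  # 24.0 months to break even
    print(simple_roi(240_000, 10_000, 36))  # 0.5, i.e., 50% over three years

A real case would discount future savings and fold in the survey-derived, customer-facing measures described above; the sketch only frames the core arithmetic.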
For more information, please see www.ca.com/cloud.
Posted: October 25, 2010
|
Peter Coffee
Head of Platform Research
salesforce.com inc.
There's no question that cloud computing can be amply justified on grounds of reduced IT cost. That doesn't mean that cost-based justification is the best way to drive a cloud initiative.
Cloud computing both reduces and re-allocates the cost of managing data and supporting processes. In one widely cited study, Bechtel Corporation benchmarked its internal costs against its best estimates of the costs of cloud service providers. Across the board—storage, network, server administration, and application portfolio maintenance—the Bechtel estimates favored large-scale cloud providers by ratios on the order of 40 to 1. Economies on this scale are not merely attractive, but compelling.
Other IT costs are less readily measured, but perhaps even more vital to contain. Workers in many organizations report continual distraction and loss of time because every individual user of a thick-client PC must serve, in varying degrees, as his or her own system administrator. Tasks of accepting and activating software updates, managing mailbox quotas, and protecting thick-client systems from increasingly aggressive security threats demand effort from individual users but do not serve organizational missions.
Further: the acquisition and deployment costs of conventional IT will almost always precede, often by months or years, the realization of value from those investments. Hardware purchase, facility construction, software licensing, and labor-intensive custom application development divert present resources to deliver future value – that is, in the best-case scenario that a project achieves its goals on specification, on schedule, and on budget. Many projects are placed at grave risk by the growing complexity of the technology and the dynamic nature of the problems being solved: a recent federal analysis found that 72% of major projects were considered at serious risk.
Cloud systems align the cost incurred with the value received, sometimes on the scale of yearly or monthly subscriptions; sometimes at the scale of hours (or fractions of hours) of service received, or bytes of data transferred or stored. Services that are evaluated on a discount or free trial basis are the services that will be used in production, not approximations of a future on-premise configuration. Cloud-delivered applications, including custom applications as well as configured versions of packaged applications, are frequently developed and deployed in days or weeks.
But even this is an argument based on costs, when often the far more powerful justification is the value to be gained by pursuing projects that today are deferred due to excessive cost or the delay of any realistic availability date. The Bureau of the Census, for example, did not use a cloud database to save money, but to meet a constitutionally mandated deadline that on-premise technology was not on track to meet.
Justification of cloud projects should therefore begin with expected improvements in cost, reductions of risk, and accelerations of service availability, but should not stop there: it should also make reasonable projections, based on growing collections of relevant examples, of the value of improved mission performance.
For further information, please contact Peter Coffee at pcoffee@salesforce.com or see his blog at http://cloudblog.salesforce.com/
Posted: October 29, 2010
|
Teresa Carlson
Vice President
Microsoft Federal
This is a question that every government technology leader must deal with when evaluating cloud computing options. What's the ROI? Is this going to save us money? The short answer is unfortunately – "maybe". In general, cloud computing offers cost benefits through increased efficiencies, pooled IT resources and "pay-as-you-go" models. But when making the business case it's important to distinguish between different types of cloud offerings, because matching the unique needs of an organization to the right type of solution is the best way to maximize ROI.
The first step is identifying the right level at which to implement the cloud – the infrastructure level, the platform level or the software/application level. For example, the GSA recently announced that government agencies would be able to access Infrastructure-as-a-Service (IaaS) offerings through Apps.gov. IaaS options are great for agencies that want to get out of the business of buying servers, data center space or network equipment. It's an entire IT infrastructure in a pay-as-you-go model, but it still requires general administration and maintenance.
For agencies that want to remove IT maintenance completely, Software-as-a-Service (SaaS) is the way to go. SaaS allows organizations to consume finished applications on demand, and is typically far less expensive than software that entails a licensed application fee, installation and upgrade costs. If an organization has internal developers with the skills to build customized applications, Platform-as-a-Service (PaaS) becomes the best option. Government is seeing an explosion of Gov 2.0 application development for improving citizen services, and PaaS provides developers with the tools they need to test, deploy, host and maintain applications in the same environment.
Organizations have options, and each model follows the same basic ROI principle – you only pay for what you use. A pay-as-you-go model combined with very limited upfront costs creates a low-risk environment in which organizations have the freedom to innovate. If an application or program is successful, the cloud offers the scalability and elasticity to grow incrementally as needed. If a program or application doesn't catch on, the upfront investment was already extremely low. For example, it's interesting to consider how a program like Cash for Clunkers might have played out in a cloud-based model.
Every organization has to crunch its own numbers to evaluate the cloud solution that makes the most business sense, but the number of cloud options and reduced implementation risk make the current IT environment ripe for innovation. That freedom should be factored into any ROI discussion.
For more information, please see Teresa Carlson's FutureFed blog at: http://blogs.msdn.com/USPUBLICSector/
Posted: November 1, 2010
|
David Mihalchik, Jim Young
Google
Why the Cloud Makes Good Business Sense
Cloud computing offers the federal government an unprecedented opportunity to access more powerful, modern technology with constant innovation at a substantially lower cost. Just as many businesses and government agencies already outsource functions like payroll, shipping, and helpdesk support, it makes good business sense to use a cloud provider who offers better applications with FISMA-compliant security at a lower cost than an organization can achieve on its own.
By taking advantage of the scale at which cloud providers operate, organizations using cloud-based applications drive down their own costs substantially. In fact, a recent Brookings Institution study found that agencies moving to the cloud can cut costs by as much as 50%. The three main areas in which the cloud offers cost savings are labor, hardware and software.
The primary driver of cost savings is the reduced amount of employee time spent patching and maintaining servers and software applications. This labor can instead be applied to the government's more mission-critical systems. By using systems operated by cloud providers, agencies can decrease hardware costs and the associated costs of real estate, electricity, and other expenses required to operate servers in an organization's own data centers. Additionally, instead of the traditional model of an upfront software licensing cost plus a recurring annual maintenance fee, cloud computing applications are paid for via an annual subscription fee. In addition to providing cost savings, this model offers both predictability and flexibility as organizations evolve or change in size.
Harder to measure are the soft cost savings associated with cloud computing. Ubiquitous access, increased productivity and better security are all worth something to cloud users, but are not always easy to value. With cloud computing, employees can access their information anywhere they have an Internet connection, whether at work, at home, in the field, or on travel. The cloud also makes people more productive by making it easier to collaborate with fellow employees, to locate an organization's historical information and lessons learned, and to improve organizational knowledge management. And if users ever lose a laptop or mobile device, their data is stored in the cloud, so they can be back up and running in no time; the organization's risk from the lost device is limited as well. On the security front, in many cases cloud providers offer security capabilities -- such as redundant data centers for failover -- that would be prohibitively expensive for organizations to build on their own. All of this must be considered when building a business case for moving to the cloud.
Government agencies are already benefiting from moving to the cloud. Take, for example, Lawrence Berkeley National Laboratory. By moving to Google's cloud-based email and collaboration tools, Berkeley Lab expects to save on hardware, software and labor costs, while increasing email storage and improving collaboration tools. (See the Government Computer News article for details.) With these results in view, agencies should take a serious look and independently assess the business case for moving some of their applications to the cloud, including mission, operational, and financial factors, plus workforce trends in user expectations in the workplace.
For more business case ROI information, see http://googleenterprise.blogspot.com/2010/11/how-much-is-faster-collaboration-worth.html.
Posted: November 7, 2010
|
Larry Pizette
Principal Engineer
MITRE
The value that an organization obtains from well-publicized cloud computing benefits such as increased utilization of hardware, location independent access for users, and scalable computing environments will vary based upon its unique goals and circumstances. "Every organization has to crunch its own numbers to evaluate the cloud solution that makes the most business sense, but the number of cloud options and reduced implementation risk make the current IT environment ripe for innovation," writes Teresa Carlson.
Government is both providing cloud environments and using them. In order to establish a business case for being a cloud provider, whether private or community, cloud-specific benefits and costs need to be estimated and analyzed. The owning organization invests its resources in its own hardware and software and operates and controls its own infrastructure. Through more efficient use of physical servers, reductions in cost categories such as capital investment and ongoing operating expense can be realized. The value of new capabilities for users and the costs for delivering those capabilities should be included, along with the costs for meeting rigorous requirements for continuity of operations (COOP), location independent access for users, security, "up time" and help desk support. Nathanial Rushfinn notes, "By clearly defining cost categories for both capital and operating expenses, and by using well-defined customer surveys, federal IT leaders can estimate the success of cloud computing projects from both an ROI perspective and from a customer's vantage point."
When using a public or community cloud service, the acquiring government organization no longer needs to invest significant capital for building their own data center capability, which can include cost drivers such as buildings, storage hardware, servers, and HVAC systems. Associated cost drivers such as electricity, maintenance contracts, software license costs and support personnel for data center infrastructure are reduced. In addition to the cost reductions, there can be value from increased agility. Douglas Bourgeois states: "Don't lose sight of the fact that there is also considerable value to be derived from the selection of cloud services as well as the speed in which cloud services are delivered." These cost reductions driven by using public and community clouds need to be compared against the cost areas that will increase. In addition to monthly usage costs, there are on-going costs to manage the relationship with the provider. These cost categories can include porting, integration, data migration, testing, security analysis and certification and accreditation (C&A) costs that impact the business case.
In addition to the above factors, there are many considerations relevant to organization-specific business cases that can drive costs, such as schedule demands, network dependency, security requirements, and risk analysis. The value can be more than cost savings, notes Peter Coffee: "Justification of cloud projects should therefore begin with expected improvements in cost, reductions of risk, and accelerations of service availability, but should not stop there: it should also make reasonable projections, based on growing collections of relevant examples, of the value of improved mission performance."
Posted: November 12, 2010
|
Often service level agreements (SLAs), contracts, or memorandums of understanding (MOUs) are used between organizations to define the relationship between the service provider and consumer. For a Federal Government or DoD context, please describe or suggest important attributes of SLAs, contracts, MOUs, or other status information that are needed to enable successful operational cloud deployments.
|
- Gregg (Skip) Bailey, Ph.D., Director, Deloitte Consulting LLP
- Erik Hille, Director, Cloud Business, CA Technologies
- Ron Knode, Director, GSS, LEF Research Associate, CSC
- Peter Coffee, Head of Platform Research, salesforce.com inc.
- Teresa Carlson, Vice President, Microsoft Federal
- Lynn McPherson, Lead Software Systems Engineer, MITRE
|
Gregg (Skip) Bailey, Ph.D.
Director
Deloitte Consulting LLP
The relationship between the provider and the consumer (or subscriber) is critical to success with Cloud Computing, as it is with any service. One piece of the relationship is fully understanding what you are buying. For an Internal Cloud, the provider and consumer may be in the same organization. In the case of a Public Cloud or Virtual Private Cloud, the need for a good relationship cannot be overstressed. It has been said that good fences make good neighbors; creating and maintaining a good set of SLAs provides those fences. Accordingly, a clear and healthy relationship of mutual understanding and alignment is a critical success factor. For the IT shop providing or brokering Cloud Services to internal clients, getting the right SLAs is critical, as the shop ultimately remains responsible to the client regardless of the downstream agreements.
First, you should make sure that you are clear about what is most important to the consumer in terms of performance. For some it may be availability or system responsiveness; for others, a timely backup schedule. I recommend listing the attributes of the service that are most important, and then figuring out how to measure those attributes in the most systemic and meaningful way possible. I had one agency tell me that a major problem came up because two vendors were using different sources of time. So you may even want SLAs on how time is used and what source it comes from.
Next, as it turns out, coming up with good metrics is one of the most difficult steps in establishing an SLA. Let's take something as seemingly straightforward as availability. How do you measure it? Is it the availability of the infrastructure and network, or the availability of the application? If it is the application, which applications are critical to the end users? How do you handle scheduled downtime? (The sketch below shows how much the answer can move the number.) There is also a twist to choosing metrics: in some cases metrics can drive unintended behavior. For example, we used to measure programmers by lines of code delivered (or bugs fixed). The natural result: very long, monolithic, inefficient code. Many of these unintended behaviors can be hard to predict. Make sure you have a way to adjust the SLA if it is creating unintended behavior or just not working for you.
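To see how much the definition matters, here is a small Python sketch (with hypothetical numbers) that computes the same month of service two ways, depending on whether scheduled maintenance counts against the provider:

    # One reporting month: 720 hours, 2 hours of unplanned outage,
    # 6 hours of scheduled maintenance. Values are illustrative only.

    def availability_pct(period_hours, outage_hours, scheduled_hours,
                         exclude_scheduled=True):
        if exclude_scheduled:
            measured = period_hours - scheduled_hours  # maintenance not counted
            down = outage_hours
        else:
            measured = period_hours                    # maintenance counts as downtime
            down = outage_hours + scheduled_hours
        return 100.0 * (measured - down) / measured

    print(round(availability_pct(720, 2, 6, exclude_scheduled=True), 2))   # 99.72
    print(round(availability_pct(720, 2, 6, exclude_scheduled=False), 2))  # 98.89

Nearly a full "nine" of difference from the same underlying events, which is exactly why the SLA must pin the definition down.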
Finally, I would recommend that your SLAs have real teeth, aligning risk and reward appropriately. If the risks are outside of your immediate control, the SLA should still address the risk and its consequences. If the SLA does make the provider a little uncomfortable, they may be more responsive and deliver solutions more quickly. In either case, both parties must be involved in creating, monitoring, and fixing SLAs.
In summary: first, focus on the attributes most important to you, remembering that the relationship itself matters. Next, build good metrics for those attributes. Monitor the SLAs to make sure they are working for you, and change them if necessary. Finally, make the consequences of failed SLAs painful enough to promote a quick response.
For further information, please contact Gregg (Skip) Bailey at: gbailey@deloitte.com
Posted: September 23, 2010
|
Erik Hille
Director, Cloud Business
CA Technologies
Pressured to improve operational performance and accountability, many federal agencies have increased scrutiny of their outsourcing strategies. Ironically, as the outsourcing market has evolved to include cloud-based services, this level of scrutiny has not been applied to these emerging delivery methods. Cloud providers excel at communicating the business benefits of their services, but from an accountability perspective, many could stand to take a more proactive stance. Here are five things you should think about when establishing service level agreements (SLAs), memoranda of understanding (MOUs), and performance measures with cloud providers:
1) An MOU won't cut it: Although it can express an obligation between the service provider and the agency, an MOU is not a strong enough document to govern the relationship. MOUs fall short of being an enforceable contract. Because the outsourced services may be mission critical and involve fundamental building blocks such as infrastructure or platforms, it is far more effective to leverage a contract that describes what services are outsourced, what the responsibilities of both parties are and what the performance characteristics will be.
2) SLAs and contracts are the same thing: Outsourcers have used a specialized version of an SLA called an underpinning contract (UPC) for years. This contract outlines the services the provider will deliver, the penalties and credits associated with under-and over-performance, and the metrics that will be used to describe how the contract will be delivered.
3) Measure performance, not just provisioning: Note that the SLA is a contract, not an operational performance characteristic such as "% uptime." Such measures are not SLAs or UPCs; they are "metrics," indicators of the cloud service's operational parameters. Because much of the external cloud market grew out of the virtualization space, many providers offer provisioning metrics, but steadfastly avoid performance metrics (% uptime, throughput, capacity, etc.). These cloud services are still fundamental building blocks for running your agency and must be protected operationally.
4) It is your data, and you are not doing the provider a favor: It's not uncommon for providers to keep performance metrics close to the vest. Many are trying to avoid obvious penalties, and some go to great lengths to report performance only to the minimum required level. If a provider claims the performance data is proprietary, they are wrong. The agency needs to ensure that it has access to these metrics as assurance that the obligations agreed upon in the contract are being met.
5) Active -- not passive -- monitoring is key: Another way some cloud service providers fail to report performance is to expect the agency to do it. They leave it up to the customer to find the outage, report it, and collect the penalty. Instead, agencies need to be proactive, either requiring the provider to implement a service level management solution that actively monitors the agreement, or monitoring it remotely themselves (see the sketch below). In this way, both parties are able to agree on the performance of the contract.
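As a sketch of what agency-side active monitoring might look like in Python (the endpoint URL and polling interval are hypothetical placeholders; a production monitor would persist results and alert on threshold breaches):

    # Poll the provider's status endpoint on a schedule and compute measured
    # availability, rather than waiting for the provider to self-report.
    import time
    import urllib.request

    ENDPOINT = "https://provider.example.gov/health"  # hypothetical status URL
    INTERVAL_SECONDS = 60

    def probe(url, timeout=10.0):
        # True if the service answered with an HTTP 2xx status.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 300
        except Exception:
            return False

    def measure(samples):
        # Take periodic probes; return availability over the window, in percent.
        up = 0
        for _ in range(samples):
            up += probe(ENDPOINT)
            time.sleep(INTERVAL_SECONDS)
        return 100.0 * up / samples

    print("Measured availability: %.2f%%" % measure(60))  # one hour of samples

Both parties can then reconcile the agency's measured numbers against the provider's reported ones.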
For further information, please contact Erik Hille at: Erik.Hille@ca.com
Posted: September 24, 2010
|
Ron Knode
Director, GSS, LEF Research Associate
CSC
In the Cloud, Security Begins with a 'T'
We've all seen clouds work. We've all read case studies of productive use of the cloud in both government and industry. We've all been inundated with a seemingly endless cascade of cloud technology announcements, offerings and alternatives. And, we're probably all near to some cloud technology testbed of one variety or another. In the face of such single-minded devotion to the "technology of cloud" we might conclude that all we need for a trusted cloud operation is the right technology arranged and configured in the right way. Clouds are technology, right?!
Wrong! As much as we are (rightfully) intrigued by the technology of cloud, and as much as we are impressed by the snappy and snazzy way cloud technology seems to respond to our needs, the real power of cloud is what happens around the technology. Clouds need technology, for sure. But trusted clouds need people, and process, and rules for operation, and governance, and accountability for outcomes even more. The technology of clouds is evolutionary. The consumption model for clouds is revolutionary. When the people and process and rules and accountabilities that are needed span organizations (internally or externally), then we inevitably must include some sort of agreed mechanisms for cloud service delivery, e.g., Service Level Agreements (SLAs), Memoranda of Understanding (MOUs), or contract terms and conditions (T&Cs).
Okay, we're making progress. We know we need the important (revolutionary) mechanisms around sound (evolutionary) technology in order to generate and sustain trust in cloud operations and reap the payoffs that are promised to us. But, what should those mechanisms emphasize in order to capture the best payoff situation?
That’s the thrust of the question for September. And, as usual, the Government's desire to find the important characteristics of such mechanisms is not much different from that of industry.
The answer to that question lies in the recognition that:
In the cloud, 'security' begins with a 'T'.
Transparency is the single most important characteristic required to generate trust and capture payoffs. Beyond the standard characteristics of availability and incident response timeliness (for which SLAs are well known), additional SLAs, MOUs and/or T&Cs should reinforce the characteristic of transparency in cloud service delivery. While the technology must support the delivery of transparency of service, it is the accompanying mechanisms of service definition that provide the real payoffs.
The Precis for the CloudTrust Protocol includes a list of the elements of transparency that can be the basis for an SLA that requires the measurement and reporting of such metrics. The CloudTrust Protocol is intended to provide a vehicle for such reporting. In addition, that same reference also describes a recommended SLA for self-reporting by cloud service providers (to reduce the chances of 'gaming' results). Whether in SLAs or MOUs or T&Cs, or even in standards and 'best practices' themselves, attention to the transparency of service is essential.
Just remember your spelling lesson for the cloud and payoffs can come your way.
See the full blog response at www.trustedcloudservices.com.
For further information, please contact Ron Knode at: rknode@csc.com
Posted: September 27, 2010
|
Peter Coffee
Head of Platform Research
salesforce.com inc.
IT-using organizations want service today, not a credit for service tomorrow -- or any other compensation for a service provider's failure to provide what was promised. Proven cloud providers like salesforce.com, Amazon Web Services, and Google are meeting the need for true service by giving customers prompt and detailed information -- via Web sites like trust.salesforce.com, status.aws.amazon.com, and www.google.com/appsstatus -- to provide the record of reliability, and disclosure of even slight departures from normal operation, that let customers plan with confidence.
Organizations should make a cloud comparison, not against an ideal, but against legacy IT's reality. When organizations own and operate their own IT infrastructure, they assume the entire burden of providing reserve capacity and protection against interruption of capability. Backup storage, backup power, even entire backup data centers are routine expenses in the world of traditional IT. If these protections turn out to be inadequate, the user organization bears all costs of mission failure.
In contrast, cloud providers deliver reserve capacity and redundant capabilities, such as backup power and backup connectivity, at a lower cost per customer due to massive economies of scale. Moreover, cloud providers have an enormous incentive to ensure that their customers do not experience service degradations that lead to unfavorable publicity. Further, each customer of a true multi-tenant cloud enjoys "sum of all fears" protection: the provider must satisfy all concerns of all customers, and in the process will generally address a superset of the concerns of any single customer.
If cloud providers price their services to cover worst-case consequential damages of any service limitation, as felt by their most demanding customer, the result will not be economically attractive to the vast majority of customers. Those customers will be better served when they make their own assessments of risk and cost, and mitigate those risks in a way that meets their own requirements.
Cloud data centers will be, for the first time, statistically similar enough to enable accurate pricing of risk: a vast improvement over the unpredictable risk of a myriad of individually and uniquely configured on-premises data centers. We may therefore expect to see an efficient marketplace of risk that the IT professional has previously lacked.
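A back-of-the-envelope illustration of that pricing logic, with every figure an assumption chosen only to make the arithmetic concrete:

```python
# Illustrative expected-loss arithmetic -- all figures are assumptions.
availability = 0.999                  # assumed contractual availability (99.9%)
hours_per_year = 24 * 365
expected_downtime_hours = (1 - availability) * hours_per_year  # about 8.76 hours
cost_per_downtime_hour = 5_000.0      # hypothetical cost to this customer

expected_annual_loss = expected_downtime_hours * cost_per_downtime_hour
print(f"Expected downtime:    {expected_downtime_hours:.2f} h/yr")
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
# A customer can now compare that expected loss with the price of mitigation
# (insurance, redundancy, a second provider) and buy only the protection its
# own mission actually requires.
```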
Compare the cloud's risks to the risks that are experienced by the customers of a shipping service. Those customers make their own judgment of the reliability of that service, and of the consequences of any delay or damage or loss of a shipment. They purchase insurance, or keep extra inventory on hand, or take other measures to limit their exposure. In a similar way, the most risk-sensitive customers of cloud services will choose their own measures that are suited to their own particular circumstances.
A "service level agreement" does not give the customer what's really needed -- which is reliable, secure service that gracefully handles peak loads without the customer needing to own peak-load capacity. That's what the cloud is all about, and that's what cloud service customers are quickly learning to expect.
For further information, please contact Peter Coffee at: pcoffee@salesforce.com
Posted: September 29, 2010
|
Teresa Carlson
Vice President
Microsoft Federal
The same terms always pop up when discussing cloud SLAs -- uptime, availability, reliability. These words speak to the truly innovative quality of cloud computing -- how computing resources are accessed. You're not buying a product with a set of agreed upon features; you're buying a new way to house and tap into your IT assets. Customers want assurance that they will have access to their data and applications, and it's up to vendors to guarantee this access. When reliability is combined with security, cloud computing becomes a no-brainer, and SLAs are absolutely necessary to outline agreed upon service expectations that meet customer needs.
But as cloud infrastructures have improved, access seems like a pretty low bar. If I'm a Public Sector CIO evaluating cloud computing options, I'm not willing to accept a significant decrease in access (uptime, availability, reliability) in order to gain the other benefits cloud offers (efficiency, scalability, cost reductions). A large part of my decision will be based on a cloud solution's ability to be there when I need it, and it shouldn't be much different from the reliability of traditional IT infrastructures.
Federal agencies can't afford regular, unexpected service interruptions. The data and the mission are too important. This is why data portability is essential. It gives agencies the ultimate option -- to immediately relocate to another cloud provider if their service needs aren't being met. Agencies need the freedom to move their data to an environment they trust, and SLAs that include data portability language protect customers more effectively than any other metric or clause.
It's common for SLAs to include financial compensation for service outages, and that's an important start. Customers should be compensated for lost access, but if there are repeated, unscheduled breaks in service, compensation alone is failing to provide value. All enterprise organizations require consistent access to their computing resources, and when service needs aren't being met, data portability adds another layer of assurance beyond financial return.
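As a concrete (and purely hypothetical) illustration, outage credits are typically tiered against measured monthly uptime; the thresholds and percentages below are invented, not any vendor's actual terms:

```python
# Hypothetical tiered service-credit schedule -- thresholds and percentages
# are invented for illustration.
def service_credit_pct(monthly_uptime_pct: float) -> float:
    """Return the percentage of the monthly fee credited back."""
    if monthly_uptime_pct >= 99.9:
        return 0.0     # SLA met, no credit
    if monthly_uptime_pct >= 99.0:
        return 10.0
    if monthly_uptime_pct >= 95.0:
        return 25.0
    return 100.0       # severe breach: full month credited

for uptime in (99.95, 99.5, 98.0, 90.0):
    print(f"{uptime:6.2f}% uptime -> {service_credit_pct(uptime):5.1f}% credit")
```

Because the credit is capped at the monthly fee, compensation alone cannot cover mission impact -- which is the argument above for portability language.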
It's true that service interruptions often occur because of network outages rather than issues with the cloud solution itself. Unfortunately, the result is the same for customers -- lack of access. To limit these breaks in service, vendors should address minimum network connectivity requirements in the SLA. Network monitoring is a key component of a holistic cloud implementation, and vendors should continually and proactively work with network providers to ensure connectivity needs are being met. SLAs can address these issues at the outset, and can even outline network backup options like leveraging satellite connectivity.
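One way an agency might tell the two failure modes apart is to probe both the cloud service and an independent reference host; a minimal sketch, with placeholder hostnames:

```python
# Distinguish network trouble from service trouble -- hostnames are placeholders.
import socket

def tcp_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

service_up = tcp_reachable("service.example.gov")      # hypothetical cloud service
reference_up = tcp_reachable("reference.example.net")  # independent reference host

if service_up:
    print("Service reachable -- no outage from this vantage point.")
elif reference_up:
    print("Network is fine but the service is not: likely a provider-side outage.")
else:
    print("Reference host also unreachable: likely a local network problem.")
```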
Overall, SLAs are extremely important, but they are evolving as cloud offerings improve. Customers are rightly expecting more, and vendors must step up their game to deliver. Ensuring data portability in SLAs avoids vendor lock-in, promotes choice, increases competition and allows government enterprises to freely choose the best available solutions.
For more information, please see Teresa Carlson's FutureFed blog at: http://blogs.msdn.com/USPUBLICSector/
Posted: October 4, 2010
|
Lynn McPherson
Lead Software Systems Engineer
MITRE
An SLA is an agreement between two parties, the service provider and the service consumer, that defines a contractual relationship. As Skip Bailey stated in his October response above, "The relationship between the provider and the consumer (or subscriber) is critical to success with Cloud Computing, as it is with any service." As is true in any successful relationship, both parties must understand and accept certain responsibilities -- successful relationships are rarely one-sided. Among other things, the responsibilities of the service provider include providing the described service within defined constraints, collecting agreed upon metrics, producing predefined reports on time, and adhering to an agreed upon incident management and resolution process. Likewise, the consumer bears certain responsibilities, which include, but are not limited to, ensuring that it does not exceed the agreed upon workload and validating, through a quality assurance surveillance plan, that the provider is collecting and reporting metrics properly.
Other necessary and fundamental aspects of the relationship include:
- Control: Delineates the aspects of the service which are and are not under the control of the provider, and is critical to writing an effective, enforceable SLA. The provider is held accountable for delivering the level of service agreed upon in the SLA; however, the provider should not be held accountable for failures that occur outside its control. The complexity of today's computing environments necessitates that the SLA clearly describe those aspects of the service which are and are not under the provider's control. Descriptions such as this generally require an architecture diagram to supplement the written terms.
- Measurement: Ensures and demonstrates that the agreed upon level of service is delivered. Measurement encompasses measures and metrics. A measure is a value recorded as the result of a physical observation, such as a single instance of a response time. A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute. Metrics are the foundation of a well-defined SLA; they must be objectively measurable or calculated from objectively defined measures (a short worked example appears below). The lack of objectively measurable metrics may result in an SLA that is unenforceable.
- Transparency: Implies openness, communication, and accountability. A successful relationship is always based, in part, on trust, and transparency is fundamental to trust. Transparency applies to many aspects of the SLA, including the definition of unambiguous responsibilities and metrics as well as a clear understanding of the provider's span of control. In addition, it is extremely important that the reporting process, scheduled reviews, and methods for computing incentives and penalties be completely transparent to all involved parties.
Taken together, responsibilities, control, measurement and transparency in an SLA can help to establish trust and facilitate a successful cloud computing relationship.
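To make the measure/metric distinction concrete, here is a small worked example; the sample response times are invented:

```python
# Deriving metrics from raw measures -- the sample data is invented.
import math

# Each measure is one recorded response time, in milliseconds.
measures_ms = [120, 95, 310, 102, 88, 2400, 130, 99, 115, 101]

# Metric 1: mean response time, computed from the raw measures.
mean_ms = sum(measures_ms) / len(measures_ms)

# Metric 2: 95th-percentile response time (nearest-rank method),
# which surfaces the 2400 ms outlier that the mean partially hides.
ranked = sorted(measures_ms)
rank = math.ceil(0.95 * len(ranked))
p95_ms = ranked[rank - 1]

print(f"Mean response time: {mean_ms:.1f} ms")   # 356.0 ms
print(f"95th percentile:    {p95_ms} ms")        # 2400 ms
# An SLA written against objectively computed metrics like these is
# enforceable; a subjective target such as "the service shall feel fast" is not.
```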
For further information please contact Lynn McPherson at cloudbloggers-list@lists.mitre.org
Posted: October 5, 2010