The CenturyLink Technology Solutions Blog - Trends in IT Infrastructure

Results tagged “cloud computing” from The CenturyLink Technology Solutions Blog


If you want to get Freudian about Big Data, though, you might notice that the word "oops" is almost right there in the name of the platform. The truth is that Hadoop in its pure form can be quite challenging to get up and running, especially in a public cloud environment. While the cloud offers many advantages of cost and flexibility for Big Data, doing Hadoop in the cloud unassisted is asking for a hassle unless you're a serious open source expert. This is a problem we're solving.

On June 17, 2014, CenturyLink Technology Solutions will hold the event "Enterprise Cloud and the New Rules of IT" during London Technology Week. The event will look at how enterprises can deploy cloud solutions that are both IT ready and developer friendly.

David Shacochis, vice president, cloud platform, at CenturyLink Technology Solutions, recently spoke with William Fellows, research vice president at 451 Research, who will be a guest speaker during the event. In the video interview below, William and David discuss the rise of shadow IT, what's in store for the cloud market and why London needs a technology week.

Digital Customer Experience

Marketing has taken a dramatic turn in the last few years. The traditional "make and sell" model - where, after researching customer reaction to new products or advertising concepts, you create and push a series of campaigns - doesn't work anymore.

 

Now that consumers are continually connected via smartphones and constantly influencing others via social networks, that one-directional, linear approach is quickly giving way to interactive "sense and respond" digital marketing. In this real-time world, an organization must operate at the speed of consumers. To pull that off, you need to master two key emerging technologies: cloud and big data.


Hybrid IT solutions - particularly those involving data center colocation, managed services and cloud - are thriving.

 

Some prognosticators believe that cloud is the future of everything in enterprise IT. While certainly bold and attention-grabbing, forecasting a future without colocation and managed services is extreme. Cloud certainly is at the forefront of our industry, but hybrid IT solutions will continue to flourish.

Having spent more than a decade as an executive in the corporate technology field, I sometimes find myself a bit mystified by the prognostications about who is winning and losing in the race for dominance in any given category.

 

The enterprise cloud falls into this pattern, with observers discussing the industry as if it were a football game at halftime -- the winners clearly delineated and the competition all but over. I beg to differ. If enterprise cloud were a football game on TV, we would barely be at the "brought to you by" commercial in the pre-game show. Clearly the players are on the field, and today we are going to show that CenturyLink is most definitely in the game.

Greetings from 34,000 feet!

 

As I fly through the clouds to VMworld 2012 Barcelona this week, something's put me in a nostalgic mood. It could be inspired by the number of times I've heard "welcome back" after recently rejoining our Cloud Business Unit and re-engaging with the field. It could also be that the amazing rate of evolution in our enterprise cloud market has me looking back at how far we've come. It could just be the lack of oxygen on this Lufthansa transatlantic. Or the lyrical cabin crew.

 

In any case, I can't help but think about our strategic relationship with VMware. Savvis has a long history of virtualization and automation, dating back to the Utility Computing platform we launched in 2004 around cutting-edge technologies, like stateless blades and boot-from-SAN. I remember the point when we realized hypervisor technology had caught up with service provider requirements. I still remember that first teleconference with our VMware account team, where we had to introduce the idea (undoubtedly echoed by many others) of a service provider program for ESX licensing. We launched our first VMware cloud product in 2005 and have been growing our Savvis Symphony cloud ever since.

 

We've long shared VMware's vision for the software-defined datacenter, and it has been amazing to watch the maturation of the vCloud Director technology. I remember our team sitting in meetings with VMware in Los Altos, Calif., as far back as 2008, describing the ideal middleware platform for enterprise cloud orchestration. With the latest version of vCloud Director, VMware completes that journey, and we're excited to roll out this great new technology across the Savvis Symphony product line. Our customers will gain even greater flexibility in how they manage their trusted, secure hybrid cloud solutions at Savvis.

 

As we head into VMworld 2012 Barcelona, we're excited about what this strategic relationship means for the growth of our cloud and the future of our product line. We've recently announced VMware vCloud Powered status across our flagship public enterprise cloud, and we've entered the VMware Service Provider Program at the Premier level. This expanded relationship with VMware has brought additional momentum around our recently-announced Savvis Enterprise Cloud Ecosystem Program, which allows our customers to access leading orchestration, brokerage, migration and management solutions that complement their installed solutions in our cloud.

 

It should be an exciting week with our EMEA team, technology partners and ecosystem members as we speak with global enterprises about these exciting new developments and our global cloud strategy. It's fun to look back sometimes, but it's so much more exciting to look forward!

 

David Shacochis is vice president of cloud platforms at Savvis, a CenturyLink company.

 

Cloud Icon"Will migrating to the cloud save me money?" This is a question that comes up fairly often in my discussions with customers. The reality is that there is no clear yes-or-no answer. It depends on a number of factors.

 

If you're currently looking at cloud adoption for your enterprise and are approaching it from the viewpoint of saving money, that is a valid business driver. That being said, a complete replication of your existing data center in a public or private service provider cloud is not guaranteed to save money and, from Savvis' perspective, isn't the right approach. In a future post I'll talk about some of the common business drivers for cloud adoption that we are seeing.

 

It's understandable that IT executives are looking at cloud in this way. After all, in the traditional model of IT, as outlined in Figure 1, the business need drives the application and the application drives the infrastructure. So the thought is that, regardless of what the infrastructure looks like, as long as it meets the needs of the application, it should be OK. This, together with the idea that public cloud is a multi-tenant environment in which costs are shared across multiple customers, leads to the perception that public cloud is cheaper. This isn't always the case.

 

Figure 1: Infrastructure at the bottom of a waterfall of requirements

In an IaaS cloud, that paradigm is turned around, as depicted in Figure 2. The business need still drives the application, which still drives the infrastructure, but now the infrastructure has the capability and the expectation to meet the business need as well. And it doesn't only have to meet the business need today; it has to meet that need at every point in the future. As we all know, the only thing that is clear about the future is that it's unclear ... cloudy, perhaps.

 

Figure 2: Infrastructure meets business needs

A better way to approach the adoption of cloud is to first understand the different types of clouds that are available and what types of workloads would be suitable for each. I plan to write more about this in a future post, but to be more specific, the types of questions to consider are:

- Where should you use a private cloud?

- Where should you use a public cloud?

- Where shouldn't you use cloud at all?

- And most importantly, how do you tie all of these different pieces together to form a cohesive solution?

 

By effectively answering the above questions you will be able to optimize your infrastructure to meet the needs of the application and the business. Rather than simply saving money, you will enable your company to be more financially efficient. If correctly planned and implemented, lower costs will be a byproduct.

 

Cloud isn't a one-size-fits-all proposition. Your cloud provider should know this.

 

Jeff Katzen is senior manager, cloud business solutions, at Savvis, a CenturyLink company.

Compensating controls in the cloud

About a month or so back, I was attending a tradeshow where I happened to overhear a passionate argument between sessions about the impact of cloud on risk management. It was one of those times when I was trying my best not to eavesdrop, but these two gentlemen were so vocal about their various opinions that it was hard not to hear.

 

The crux of the argument had to do with whether cloud made risk assessment easier or harder to accomplish. On the "easier" side was the argument that reviewing a cloud services provider once and using contractual language to "lock in" operational controls took several review steps out of scope. On the "harder" side, the argument was that the risk assessment process had to be done for each type of business process that intersected the provider since no one audit could account for every way that the provider would be used (i.e., "Today we use the CSP for public data and we audit their controls for that case, but the business could move private data there tomorrow once the vendor is approved").

 

It was an interesting discussion and, as you can tell, it stuck with me. I'm still not sure who was "right" in this particular discussion - they both made valid points. But it seems to me that there was something bigger left out of the discussion: namely, the impact of cloud on mitigating control selection.

 

Here's what I mean: No matter whose model of risk management you're using (ISO, NIST, Octave, etc.), there's more than just the assessment phase. After assessment, there comes risk treatment. In most cases, that means control selection.

 

Cloud changes this, it seems to me, quite drastically. Specifically, when you engage a cloud provider - whether IaaS, PaaS or SaaS - you are drawing a line in the sand. In effect you say, "Everything below X level of the application stack will be a black box." You are deliberately abstracting yourself away from some portion of the technical substrate. In an IaaS context, it could be that portions of the network leave your scope of control while the OS and platform stay in it. For the PaaS, you retain control over the app but you give up control over the platform ... and everything below that (OS, network, etc.). For SaaS, the whole potato is a black box (the application and everything below).

 

For some purposes, this is a good thing. The less that's in your scope of control, the less you have to deploy custom security controls to address particular issues. However, it's also important to remember that once a particular level of the stack goes from being "something you can manipulate" to "something you can't," you also lose the ability to deploy a compensating control at that level. This impacts (or at least should impact) your control selection.

 

As an example, say you have an application that's historically been hosted within your organization's infrastructure. If you discover an issue at the application layer (say the application is vulnerable to SQL injection), you have a number of options across every level of the application stack. You could, for example, update the app. Alternatively, you could implement monitoring in the database or middleware, or you could implement host-level controls or network-level monitoring. All these options are open to you because you control every layer of the stack.
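To make the "update the app" option concrete, here is a minimal sketch of the application-layer fix for a SQL injection flaw. It uses Python's built-in sqlite3 module purely for illustration; the table and column names are invented, and the same parameterized-query idea applies to whatever database driver your application actually uses.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated straight into the SQL text,
    # so input like "' OR '1'='1" changes the meaning of the query.
    cursor = conn.execute(
        "SELECT id, email FROM users WHERE name = '" + username + "'"
    )
    return cursor.fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # Application-layer fix: a parameterized query keeps the input as data,
    # which is the "update the app" option described above.
    cursor = conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    )
    return cursor.fetchall()
```

The other options in the paragraph above (database monitoring, host controls, network monitoring) are the compensating controls you lose access to once those layers move behind a provider's black box.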

 

In a cloud context, options are more limited. If you use a PaaS, you can't deploy an OS-level control because you don't control the OS. Is this an issue? Maybe not ... at least, not if you're planning for it. But the bigger issue is what happens when you move existing applications and business processes to the cloud. In that case, compensating controls can "fall on the floor" unless you've either A.) kept detailed records of compensating controls you've historically put in place mapped to the original risks so that you can gauge their efficacy in the new environment or B.) systematically re-evaluated each application to determine what compensating controls will need to be re-implemented. And, not to be a pessimist, but most firms aren't doing either of those things.

 

Now, I'm not going to say that every firm out there should start from scratch in their risk mitigation strategy when they move to cloud. But I will say that a move to the cloud - at least for firms that are serious about security - could be a useful time to evaluate risks in the applications and processes that they plan to move.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

 

********

 

Savvis Security Webinar:

"Eight Steps to a Secure Cloud Infrastructure"

June 6, 2012

Presented by Chris Richter, vice president of security products and services at Savvis
http://ht.ly/aXlSC

 

I am astonished by how little practical or empirical data exists on the topic of cloud bursting. A quick Google search on "cloud burst" or "cloud bursting" yields, well, not much - that is, short of "Men Who Stare at Goats" references, questionable YouTube clips and a campy '80s "art rock" video. To further mystify the topic, all of these data points really revolve around the dubious and Fringe-esque claims of "cloud busting" (notice the missing "R") - or making rain by tampering with clouds.

 

Yet, the concept of cloud bursting (with the "R"), or horizontal application scaling into the cloud (i.e. moving compute workloads into an on-demand resource pool to access additional capacity), has come up in just about every one of my conversations with enterprise clients. Why? Because this could be one of the fastest and most impactful ways for customers to harness the power of cloud computing to grow applications and respond to seasonal, cyclical, or ramping demands ...  and it's really pretty straightforward if you have selected the right cloud provider.

 

I guess I shouldn't be too surprised about limited information on the topic, considering that cloud capabilities vary greatly from provider to provider (see our evangelism efforts on Not All Clouds are Created Equal). Therefore, there is not one easy three-step guide on how to cloud burst. But, hypothetically, if one were to exist it might look something like this - assuming you have your workloads already virtualized:

 

1. Define what you will be bursting. Of course, this relates to applications - but the important question is which layer within the application architecture is ideally suited to a cloud bursting use case? Is it the web, application or database layer within a traditional three-tier relational database-driven application stack, or are we talking about a flat file, NoSQL or big data bursting scenario?

 

2. Select your target cloud. This decision is tightly correlated to how you responded to the first step, since cloud service providers each handle tiered models and distributed data models differently. Most enterprises tend to prefer a highly secure, high-performance cloud that makes it easy to bring in workloads.

 

3. Convert your source images and upload. Based on the provider you have selected, it's time to bring in your image. Is it a VMDK, OVF, XenVM or something else? Even VMs in Open Virtualization Format commonly need to adhere to some service provider-specific configurations. Tools like VMware Studio, PlateSpin and others can be used to convert workloads.

 

Now that you have identified your applications, chosen a provider and converted your image to interoperate with your cloud, as well as uploaded it, you are practically there! However, there are still several factors to consider, and cloud vendors handle these very differently:

 

- How will you launch your workloads? From a template, from a clone or from a dormant VM/instance?

 

- How will you connect, and how much data do you intend to push over this network connection? Is it a point-to-point network, MPLS, EVPL or VPN, and is it production data, metadata, sensitive data or management traffic?

 

- How automated should this solution be? An API can provide full automation, but will require coding and additional business logic in your applications (a minimal sketch follows this list). Is the cloud portal you have chosen easy enough to operate to take advantage of cloud bursting?

 

- How will the cloud handle your security policies? Does the cloud you have chosen have the governance and maturity you would expect for your data? Can you even bring your own policies into the cloud? After all, if the cloud holds your data, shouldn't it be able to support your existing IT policies?

 

- How will you handle load balancing? Will you need local and possibly global load balancing that can be dynamically updated to include the new workloads you have bursted into the cloud?

 

- How will you charge back? Does your cloud bursting solution make it easy to charge back internal and external customers and set spend limits, controlling cloud sprawl and avoiding the auto-ballooning of cloud costs?
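As promised above, here is a minimal sketch of what API-driven burst automation can look like. Every endpoint, field name, token and threshold below is hypothetical - providers expose bursting very differently - so treat this as an illustration of the extra business logic your application would need, not as any particular vendor's API.

```python
import requests

# All endpoints, field names and the API token below are hypothetical.
CLOUD_API = "https://api.example-cloud.com/v1"
API_TOKEN = "REPLACE_WITH_YOUR_TOKEN"
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

CPU_BURST_THRESHOLD = 80.0  # percent utilization that triggers a burst


def current_cpu_utilization() -> float:
    """Placeholder for whatever monitoring feed you already have."""
    resp = requests.get(f"{CLOUD_API}/metrics/web-tier/cpu", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["average_percent"]


def burst_from_template(template_id: str, count: int = 1):
    """Launch new web-tier instances from a pre-converted template image."""
    resp = requests.post(
        f"{CLOUD_API}/instances",
        headers=HEADERS,
        json={"template": template_id, "count": count},
    )
    resp.raise_for_status()
    return [inst["id"] for inst in resp.json()["instances"]]


if __name__ == "__main__":
    if current_cpu_utilization() > CPU_BURST_THRESHOLD:
        new_ids = burst_from_template("web-tier-template", count=2)
        print("Burst instances launched:", new_ids)
        # A real solution would also register these instances with the
        # load balancer and tag them for charge-back, per the list above.
```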

 

Whether you are cloud bursting or busting, as the great Lil' Wayne eloquently put it, "Make it Rain." Optimize your existing workloads and select the right provider - one that can not only help burst your workloads onto enterprise-class cloud platforms, but also help you develop the IT strategies you need to grow your business.

 

Aditya Joglekar is director of cloud business solutions for Savvis, a CenturyLink company.

A recent article appearing in Computerworld titled "Bandwidth Bottlenecks Loom Large in the Cloud" brings attention to a critical, yet often overlooked, element of every cloud implementation: the network that is used to access those state-of-the-art virtualized services.

 

As companies add new users and move additional applications to the cloud, it is easy to assume that planning for cloud access boils down to nothing more than adding a few Megs of bandwidth to an existing Internet connection. While that may eliminate some congestion, it does little to address the other issues, such as latency, which can affect application performance and the overall experience of end users. To put it another way, adding raw Internet bandwidth is analogous to attempting to eliminate a traffic jam by building an extra lane on a busy stretch of the highway without knowing diurnal traffic patterns or the impact of nearby on-ramps.

 

To successfully leverage off-site virtualized services, IT managers should consider a more proactive approach to their cloud connectivity. There are numerous tools and services available for both public and private networks designed to help maximize the efficiency of a network and provide visibility into network performance. These tools include WAN optimization, content caching and proactive network monitoring.

 

WAN optimization and content caching, including Content Delivery Network (CDN) services, can help improve application performance and maximize the use of existing bandwidth. These offerings use various compression, replication and caching techniques to increase data-transfer efficiencies across a network. In terms of our analogy, it's a way to raise the speed limit and fit more cars on an existing highway, rather than making the capital investment to build new lanes.

 

Another important part of a proactive approach is to develop a detailed understanding of the traffic on a network. Network monitoring software offers enterprise-wide reporting on the composition of traffic on every link of a network. This allows network administrators to monitor jitter and packet loss across all network connections, and also helps to identify which applications, locations and even individual users are consuming the most bandwidth. This knowledge can help to refine MPLS Quality of Service (QoS) settings and improve bandwidth management. So, to return to our analogy once more, it's like having a continual aerial view of the highway - a nonstop "shadow traffic" report - providing a comprehensive view of the traffic flow on all the highways, allowing for better signal light coordination.
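As a rough illustration of the kind of per-link statistics such monitoring tools report, here is a small sketch that derives packet loss and jitter from a list of round-trip-time probes; the sample values and the function name are made up for the example.

```python
def link_stats(rtt_samples_ms):
    """Summarize one link's probe results.

    Each entry is a round-trip time in milliseconds, or None for a probe
    that never came back (a lost packet).
    """
    received = [s for s in rtt_samples_ms if s is not None]
    loss_pct = 100.0 * (len(rtt_samples_ms) - len(received)) / len(rtt_samples_ms)

    # Jitter as the mean absolute difference between consecutive RTTs.
    diffs = [abs(a - b) for a, b in zip(received, received[1:])]
    jitter_ms = sum(diffs) / len(diffs) if diffs else 0.0

    avg_rtt = sum(received) / len(received) if received else float("nan")
    return {"avg_rtt_ms": avg_rtt, "jitter_ms": jitter_ms, "loss_pct": loss_pct}


# Example: ten probes across a branch-office link, two of them lost.
samples = [24.1, 25.3, None, 23.8, 30.2, 24.9, None, 26.0, 25.1, 24.7]
print(link_stats(samples))
```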

 

The ideal cloud implementation provides improved compute efficiencies, flexibility and cost savings, but it must also appear seamless to all end users. If end users can't access applications and databases with at least the same speed that they did when they were in house, then the cloud is inefficient. And what's the sense of building a highway if people don't want to drive on it?

 

Dennis Brouwer is general manager of converged cloud solutions at Savvis.

This month, I had the pleasure of speaking with Sramana Mitra, a strategy consultant and entrepreneur based in Silicon Valley. Sramana and I spoke about cloud computing and its impact on the data center. We spent a lot of time speaking about key trends in cloud computing. During the conversation I underscored that cloud use (not to mention IT choices in general) must be driven by a business need.

 

Until recently, we have seen a number of enterprises that "got in front of their headlights" around cloud, trying to adapt it to all use cases. Organizations need to measure and monitor the impact of cloud technology on addressing business need so they have full visibility into whether or not their IT organization is executing at high levels. IT organizations that have a strong link to the lines of business and are tracked against business objectives are more successful. With cloud, many organizations got away from the business driver and were lured by the technology, letting the features of the technical solution lead the conversation.

 

Other key topics we discussed included:

 

  • Differences between cloud computing and the data center
  • Mitigating risk in the cloud
  • How cloud is face-lifting commercial models

 

To read the full article go here. Note that this is a six-part series.

 

Steve Garrou is vice president of global solutions management at Savvis, a CenturyLink company.

We all know it: economic factors drive cloud. As I outlined on this blog last month, that sometimes means it's hard to add unanticipated security controls to a new cloud deployment (since costs of controls eat into savings projections).

 

We talked about some tools that can be used to limp along until funding can be secured to meet the security requirements and deploy appropriate controls (it's January now, so maybe FY'12 dollars are already in effect, taking that pressure off). What we didn't talk about, though, is the inverse: the budgetary expectation that the legacy environment will shrink. It seems like a given - and maybe not such a big deal at first blush - but it has consequences. And it means security organizations need to start planning now so as to not get blindsided when this happens.

 

Budgetary Changes and Economic Drivers

Think about it this way: for a deployment like a virtualized data center, the expectation is that costs will decrease over the long term, right? That's a self-evident statement, given that the goal of cloud is to reduce - or at least make more efficient - overall technology spending in the organization. However, what is the specific trajectory of that long-term reduction? The way this plays out can have an impact.

 

It usually consists of a "balloon" expense immediately followed by a long tail of spending drop-off. Why the immediate increase in spending? Keep in mind that many virtualization projects mean maintaining two environments in parallel: spinning up the new virtualized DC and at the same time decommissioning the legacy physical DC. So costs might be immediately up, but then ultimately fall off.

 

For security organizations, this is important to understand. Why? Because if the organizational long-term roadmap contains decreased investment in IT overall, that means reductions in security controls as well. The same forces that make cloud more cost effective (economies of scale) make it harder to maintain certain security controls in the legacy context. That's because at the same time that cloud is successful due to economies of scale, shrinkage of the legacy environment means decreases in economies of scale in that environment.

 

What Does that Mean for Security?

This means that funding for existing security controls will ultimately shrink, impacting what we can keep deployed, what we can spend on personnel to maintain controls, and so forth. But this reduction is deceptively slow. Why? Because of that spending "bubble" we talked about - it can take one to two years for the first reduction in spending to occur. And because budgetary changes are "stepped" (i.e., occurring in year-by-year increments), it might be three years before the first real constrictions are felt. But when they hit, they hit hard.

 

So it doesn't take a fortune teller to see what's coming down the pike. If you're a security pro in an organization that has a multi-year plan for reduced technology spending, it's only a matter of time before you get hit - hard - by a cut budget. In other words, start planning now.

 

One exercise I find helpful is to divide security controls up into groups along economic lines. Meaning, take the existing controls and processes we have now and categorize them according to what they protect (data center, workstations, network, etc.), annualized hard-dollar cost and annualized soft-dollar cost. Having this data can help you decide which controls will naturally erode as environments shrink (i.e., data center controls) vs. those that are going to stay relatively constant regardless of environment (e.g., user provisioning).
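A minimal sketch of that inventory exercise might look like the following; the control names and dollar figures are invented placeholders, and the grouping key is simply what each control protects.

```python
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    protects: str          # "data center", "workstations", "network", "users", ...
    hard_cost_annual: int  # licenses, hardware, subscriptions (illustrative figures)
    soft_cost_annual: int  # staff time to operate, also illustrative

controls = [
    Control("DC intrusion detection", "data center", 40_000, 25_000),
    Control("Endpoint AV", "workstations", 18_000, 10_000),
    Control("User provisioning / IAM", "users", 30_000, 45_000),
]

# Group spend by what each control protects; categories tied to the shrinking
# legacy data center are the ones most likely to erode as that environment shrinks.
by_scope = {}
for c in controls:
    by_scope[c.protects] = by_scope.get(c.protects, 0) + c.hard_cost_annual + c.soft_cost_annual

for scope, total in sorted(by_scope.items(), key=lambda kv: -kv[1]):
    print(f"{scope:15s} ${total:,}/yr")
```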

 

Obviously the specifics of the controls will vary according to environment so I won't go too far down that path other than to point out that planning here is required. The temptation is to ignore this situation and leave planning for down the road. Don't do it. Because the controls that you can quickly cut when blindsided by a huge budget reduction aren't the ones that you necessarily would choose to cut if given some time to prepare and think about it.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

When it comes to cloud, planning is everything. This is the case for every aspect of a cloud migration, and it includes in no small measure security as well. However (surprisingly, given the importance of security in a cloud migration), sometimes security and economic goals clash in a cloud deployment.

 

This happens because many cloud migration efforts are economically driven - and security isn't free: either from a planning standpoint or from a control deployment standpoint. So the addition of controls can eat away at projected cost savings - especially when security parameters are not understood fully at the project outset. Because of this, security teams sometimes find themselves in a situation where they need to add controls to meet regulatory requirements or address risk areas, but because a migration is already "in flight," those controls aren't budgeted. Oops.

 

This leaves security organizations with two alternatives: 1) Do nothing and drop the control on the ground, or 2) Do something at minimal cost.

 

Doing nothing isn't usually a recipe for success, so option 2 - doing something on the cheap - can be a lifesaver. Fortunately, there are a plethora of free tools - software and resources - that organizations can look to in a pinch to fill in gaps. Note that I'm not addressing soft costs here - staff time is staff time ... and that's never free (well, unless you have interns, I guess). I'm just talking about what you can do to meet controls without having to go back to the budgetary well.

 

I've tried to outline a few - that you can get up and running quickly - to address particular situations as they arise. These aren't the only ones by any means. I've tried to pick out short-term "gap fillers" for this list. There are literally hundreds (if not thousands) of excellent free tools out there that let you do everything from log correlation to asset management to monitoring in the cloud (and out of it for that matter). The difference is that not all of them are "spin up/spin down." For example, you can use a tool like GroundWork (monitoring) or Snort (IDS) that is every bit as feature-rich as its commercial counterparts - but once you have it up and running, are you going to want to spin it down again in three months? Probably not. So while those tools are great (can't stress this enough), I didn't include them on the list.

 

What I did include were tools that you can get up and running quickly, that fill an immediate need, and that don't commit you long term. Meaning, you don't lose (much) data or have to retool the environment (much) should you decide to stop using them later.

 

Free Data Discovery

Finding out where your confidential and/or regulated data is prior to (and let's not forget during and after) a cloud move is always useful. You'd be surprised what data is located where in a large or even medium-size enterprise. There are a number of free tools out there that help you search assets and locate certain types of (usually regulated) data. MyDLP, OpenDLP and the cardholder-data-focused ccsrch can help find that data in automated fashion. All of these tools have merit, although I personally found the step-by-step installation instructions for MyDLP to be particularly helpful in getting up and running quickly - and the ccsrch tool's simplicity and efficiency make it a good choice if you want to focus just on credit cards.
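For a sense of what a cardholder-data scanner such as ccsrch is doing under the hood, here is a simplified sketch that walks a directory tree and flags digit runs that pass the Luhn checksum. It is only an illustration - the real tools are faster, smarter about file types and far better tested - and the path in the commented-out call is a placeholder.

```python
import os
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum used to filter out random digit strings."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

CARD_RE = re.compile(r"\b\d{13,16}\b")

def scan_tree(root: str) -> None:
    """Walk a directory and flag files containing card-like numbers."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue
            hits = [m for m in CARD_RE.findall(text) if luhn_ok(m)]
            if hits:
                print(f"{path}: {len(hits)} possible card number(s)")

# scan_tree("/srv/file-shares")  # point at the shares you plan to migrate
```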

 

Free Compliance Toolkits

Evaluating a vendor's security posture and control deployment sometimes gets done prior to picking a vendor; but sometimes (like when security or IT isn't consulted in that process), it doesn't. Yet many regulations require specific validation of vendors. In that case, it's on us to do that after the fact. Now sure, general-purpose information-gathering materials like the Shared Assessments (formerly FISAP) Standardized Information Gathering questionnaire are great, but let's face it, they're cumbersome when applied to a hosting provider. That's why the Cloud Security Alliance's GRC Stack - specifically the Cloud Controls Matrix (CCM) and the Consensus Assessment Initiative (CAI) - can help. Why redo the work when you can reuse what's already been done for you?

 

Free Two-Factor

Many organizations require two-factor access as part of remote access policy, although it's one of those things that organizations often overlook in the planning process. WiKID - an open source two-factor authentication platform - might be something you can look to for meeting the requirement short term. It's easy to set up, and doesn't require per-user hardware to provision in order to get up and running.

 

Free Network Analysis

Most folks probably already know about wireshark ... you knew it was coming, right? Sometimes you just have to know what's going on over the wire.

 

Free AV

Fungible as many organizations perceive it to be, AV can still catch people by surprise during a move. Why? Because many commercial AV platforms are licensed per client. A physical-to-virtual move may not result in a one-to-one mapping between existing physical hosts and virtual images, particularly in the interim period while you stand up the virtual infrastructure. This sometimes means that you need more AV licenses - depending on your licensing arrangements with your current vendor.

 

What happens when you discover this mid-effort? Going off to secure funding for more AV licenses in the middle of a move isn't a fun conversation - and because it's a regulatory requirement (for example under the PCI DSS), just making do without isn't a good idea. One solution is to leverage free AV tools like ClamAV in the interim. Yes, long-term management is an issue in supporting another product over/above commercial tools you might be using on-prem. But to fill a short-term need while you sort out the licensing? Why not?

 

Maybe some of these might be helpful - particularly in Q4 when budgets are frozen anyway.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

How can I transform my enterprise to become cloud-centric? There is no right answer to that question. But there is an answer to the question "How can cloud serve my business needs?" There IS a way to harness the power of cloud to drive your business agenda rather than thinking the other way around.

 

Lots of times I hear my clients asking me "What should my enterprise cloud strategy be?" and "How can you help me accelerate into the cloud?" In my opinion, those are not the right questions to worry about. The concern shouldn't be how to become cloud-centric. Cloud is just one way to service your IT needs.

 

Instead the question should be "How can my infrastructure be more business-centric?"

 

We should first try to understand what the needs or challenges are of your business - is it time to market, resiliency or having to align IT spend with business outcomes? Then we should see what kind of enterprise IT architecture (that includes infrastructure and operations architecture) you need to adopt in order to meet those needs and challenges. In that quest for target state architecture, I'm sure cloud can play a pivotal role.

 

Having said that, there are some simple considerations that can simplify your approach/thinking around making cloud work for your business. They are: Workload, Technology, Efficiencies, Security and Business Case.

 

I plan to tackle each of these considerations one at a time on this blog, starting here with the most important consideration: Workload.

 

What does your workload look like? If you were to map the workload demand would it look like a human heartbeat - with ups and downs in very short intervals? Or is it much more seasonal - where it lies low most of the time and spikes up periodically? The distance between peaks is a very important factor in deciding whether or not something should be moved into the cloud.

 

While on one hand cloud is very well-equipped to handle sudden spikes in workload, there is a "cost" or overhead to RAPID provisioning and decommissioning. In a completely variable cloud commercial model, the unit cost of a resource (like compute) is naturally higher than in a fixed-term model.

 

Oftentimes, we use the "pay by the drink" analogy when we talk about the commercial model of cloud. Well, it is very true - when you order drinks by the glass versus buying a bottle, which is more expensive? Obviously, by the glass. So, since the variable unit rate is much higher than a fixed-term unit rate, unless there is a substantial amount of "rest" period in the workload, it doesn't make economic sense to leverage cloud for your infrastructure needs.

 

Now, that doesn't mean you SHOULDN'T use cloud in all such situations - you might have another compelling reason why you should. All these considerations are independent of one another. Even though one of them might stop you from thinking about cloud, the others might outweigh the negatives and still justify the usage. So, I hate to sound like a consultant, but it DEPENDS on what your BUSINESS needs and priorities are ... that's what will drive your decision.

 

So, what kind of workload IS suitable for the cloud? A workload that is seasonal - retail applications that typically spike during holidays, financial workloads that peak during period-endings, educational applications that peak during admission season, or non-production environments of otherwise stable and static production applications that might undergo patches a couple of times a year - these are just some prime examples.

 

In all these situations, the amount of time spent at peak is much less than the "off-peak" time, and the peak loads are somewhat predictable. So, even though you are paying a much higher unit rate when you are using the cloud resource (such as compute), you pay much less than you would have if you had procured all of the infrastructure you need at peak load and let it idle for most of the year.
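As a back-of-the-envelope illustration of that trade-off, here is a small worked example; all of the rates, server counts and peak durations are made-up numbers chosen only to show the shape of the comparison.

```python
# Illustrative, made-up rates: the point is the shape of the comparison,
# not the specific numbers.
HOURS_PER_YEAR = 8760

fixed_rate_per_server_hour = 0.10   # owning/leasing capacity year-round
cloud_rate_per_server_hour = 0.35   # higher "by the glass" on-demand rate

baseline_servers = 4                # needed all year
peak_extra_servers = 16             # only needed during seasonal peaks
peak_hours = 6 * 7 * 24             # say six weeks of peak per year

# Option A: buy for the peak and let the extra capacity idle most of the year.
own_everything = (baseline_servers + peak_extra_servers) * fixed_rate_per_server_hour * HOURS_PER_YEAR

# Option B: own the steady baseline, burst the peak into the cloud.
own_base_burst_peak = (
    baseline_servers * fixed_rate_per_server_hour * HOURS_PER_YEAR
    + peak_extra_servers * cloud_rate_per_server_hour * peak_hours
)

print(f"Provision for peak year-round: ${own_everything:,.0f}")
print(f"Baseline + cloud burst:        ${own_base_burst_peak:,.0f}")
```

With these illustrative numbers the burst approach comes out cheaper despite the higher unit rate; shrink the "rest" period enough and the comparison flips, which is exactly the point above.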

 

So, hopefully, based on the above discussion you have a better idea now how to assess your workload for suitability in the cloud. In my next blog entry, I'll talk about Efficiencies in the cloud.

 

Kaushik Ray is practice head, integrated technology solutions consulting (iTSC), at Savvis.

As they move through different points in their lifecycle, it is common to see companies change their mentality around colocation. The overhead of managing their own increasing colocation equipment rises in parallel with the complexity and size of their business. This steers them to start planning a move to managed services or the cloud because they realize that it is a better use of resources to leverage the expertise of their service provider's technology specialists to manage infrastructure, freeing them to deliver and migrate apps and features rather than maintaining their own IT infrastructure.

 

This type of a scenario has led to an increase in demand for service providers that offer a full portfolio of services ranging from colocation to managed hosting to public, private and hybrid cloud services. However, developing the facilities and the capability to integrate this full range of technologies has been a major challenge for many colocation providers.

 

The biggest of these challenges is effectively managing the data center. Running a data center is similar to attempting to keep a vehicle on the road 24 hours a day, 365 days a year without stopping - yet driving as efficiently as possible. Even if you started with the best equipment in the world, planning and then implementing the necessary rolling maintenance is critical if single points of failure and outages are to be avoided. The evolution of technology is helping, bringing cheaper UPS, generator and cooling technologies together with planning, automation and monitoring tools. But as yet, one of the most valuable assets in colocation provision continues to be experience.

 

The desire for fine-tuned control over systems has been one of the primary needs that colocation has satisfied. For most clients, a sufficient level of control is currently available in the cloud, which eliminates the burden of configuring and maintaining equipment. Therefore, to maintain relevance in the future, colocation providers need to evolve and become a bridge to a wider range of managed services. This approach will provide a base for effectively connecting an organisation's unique IT configurations and intellectual property costs to the wider range of services required to support that technology. The parallel provision of colocation as host for, and part of, the full spectrum of cloud options is where the future lies for the industry.

 

Drew Leonard is vice president, colocation product management, at Savvis.

What is enterprise cloud?

You may be thinking, "What is enterprise cloud?" As you know, not all cloud infrastructures or providers are the same, and not all methods offer the full value IT requires. Within the cloud arena, public and private clouds are well established. A new model, enterprise cloud, is emerging.

 

Enterprise clouds offer the same benefits as private and public clouds, including flexibility, quick provisioning of compute power, and a virtualized and scalable environment. Similar to private clouds, enterprise clouds provide "private access" and are controlled by either a single organization or consortium of businesses; services are delivered over the Internet, removing the requirement to purchase hardware. Commercial-grade components provide the usability, features and uptime required.

 

Enterprise cloud not only delivers cost savings, but, more importantly, provides a range of security options and unprecedented speed-to-market, with vastly improved collaboration among business partners and customers. Enterprises realize tremendous value in this approach because of its ability to allow them to innovate as well. For businesses that want to make IT faster, better, cheaper and more agile, enterprise cloud will likely be the solution of choice. Corporations and government agencies that are reluctant to outsource their information services are likely to embrace this model as well.

 

For example, enterprise clouds are ideal for organizations that want to minimize the risk and expense of trialing new service and application options. There are no upfront capital expenses, and new projects can be brought to market instantly - or shut down just as quickly if they fail - giving corporations a new sandbox in which to pilot offerings. Enterprise clouds also allow organizations to create secure workspaces that give partners and customers a superior forum for collaboration.

 

Savvis' enterprise cloud is a VMware-based service differentiated by an array of built-in security features, as well as many optional managed security capabilities. Savvis built its cloud solutions using the same trusted suppliers - including Cisco, HP and VMware - used by enterprise customers in their own data centers. The cloud services are divided into "tiers," providing different levels of performance and availability for different types of application needs. These services are delivered in a multitenant way and can also be delivered as a single tenant.

 

For customers with complex IT needs, Savvis offers multiple solutions, including colocation, managed services and networking solutions. These solutions, when deployed, are fully integrated for customers and supported by robust infrastructure SLAs.

 

Find more information about Savvis cloud services here.

 

Steve Garrou is vice president, outsourcing and cloud services, at Savvis, a CenturyLink company.

We spend a lot of time discussing production-based cloud environments. But what about all the hoops that software development teams jump through to build development environments? This is an area that we seem to overlook all the time.

 

Every software development team develops, integrates, delivers and manages environments. I am not here to argue the merits of the various software development methodologies, but I do challenge the old way of thinking when it comes to software development.

 

I see companies taking old servers out of production and placing them under a desk in some cubicle and labeling them "Development Servers." This method is very cost-effective. They are using assets that are now off the company's financial books because they've used them through their useful production lives.

 

The developer's next step is to reformat these servers with the latest operating systems and development stacks. Following the completion of this massive accomplishment, a few users are added to the systems and they are off to the races. The developer has just configured a web server and database server. What else could they need?

 

Don't laugh here - while my scenario above is very simplistic, many of you have done exactly what I have just described.

 

When delivering a Software-as-a-Service (SaaS) solution, there are many complex pieces to the architecture development that get overlooked and are typically the "gotchas" during a SaaS implementation. These gotchas include load balancers, firewalls, monitoring, identity and access management, routers, network connectivity and storage.

 

So, let's spend a few moments here talking about cloud-based development. First, using a cloud environment, let's create a reusable container for things that can be shared by a software company's SaaS deployment and should be managed. We do this almost instantly and via a point-and-click web portal. We can put routers, load balancers and firewalls into this container. These pieces of the SaaS-based architecture are typically static in nature; developers typically close down most protocols from the outside world and only open those up for specific communication services. Next, let's do almost the exact same thing with our firewall configuration as well as our load-balancing rules. We should be able to consume these resources based upon usage and need.

 

Now, instead of using that old production server, let's configure a couple server instances with our favorite operating systems and development stack as well as the latest tools. Again, we do this almost instantly and via a point-and-click web portal.

 

Developers now have an environment they can begin developing upon ... but there are some very big differences with what we have just accomplished versus the "old" way. We have the entire environment up and running, which is very similar to a non-cloud production environment. Yet, with our cloud deployment, we now have a tremendous amount of flexibility.

 

How many times have we heard a developer say, "We need to take down the server to perform some maintenance?" Don't smile - we have all heard that before. In a typical non-cloud environment, "performing maintenance" is the same as "no one can use the system."

 

With a cloud deployment, the developer can copy an image and create a clone of the system. This way, that "all important" demo to the CEO can go on as planned or the quality assurance team can continue unit testing.

 

The bottom line is: Cloud changes how we look at everything. Employment of a cloud solution changes developers' processes, procedures and management of development stages. Cloud brings more agility, more flexibility and better cost management structures to our fingertips than we've ever had before.

 

Larry Steele is technical vice president, Software-as-a-Service, at Savvis, a CenturyLink company.

An application without connectivity is like a train attempting to run without tracks. Enterprises are moving more and more of their mission-critical applications to enterprise cloud environments every day. But are they ensuring that the train tracks ahead can safely and efficiently handle the ever-increasing load?

 

Reliable, redundant connectivity is central to the value proposition of cloud-based computing. The underlying network infrastructure must be robust and flexible enough to support the demands of the applications running on it. Applications like Oracle, SQL and SAP require predictable performance. Video and voice transmissions are sensitive to network stability issues such as packet loss and jitter. ERP and CRM systems are essential to business operations, but also require a high-performance systems environment no matter where employees are worldwide.

 

Once these types of applications are moved into the cloud, the challenge becomes ensuring that they all are globally available with the same levels of security, consistency and control associated with local environments. Private connectivity into a cloud environment via a high-performance, high-capacity network ensures that end users can securely access these applications, and any related data, as quickly as possible. Utilizing Ethernet as a global access method for better scale and accessibility, network technologies such as multiprotocol label switching (MPLS) and virtual private LAN service (VPLS) make it possible to provide high levels of performance, minimal downtime and end-to-end Quality-of-Service (QoS) prioritization for essential applications such as mission-critical financial or e-commerce systems. Value-added services, such as network-based storage, optimization, security -- including firewalls, virtual private network (VPN) access and denial-of-service protection -- and load balancing, offer IT managers additional built-in capabilities.

 

Enterprises must also focus on the interaction between and among their applications. For certain applications, such as stock trading, shaving a few milliseconds off network latency may provide a key competitive advantage. The positioning of multiple, interdependent applications in network-adjacent locations creates a highly efficient community of interest, enabling application owners to optimize application interaction, utilize off-line storage and minimize connectivity costs. Each application cluster provides strong economic synergies that drive the further growth of communities of interest in markets as diverse as media, gaming, voice, and video services.

 

Moving applications into the cloud is a big step for an enterprise. But it is not the only step. Choosing the right connectivity to support these hosted applications is just as important. It must be substantially more robust and secure -- and possess more value-added data delivery and bandwidth management capabilities -- than what is typically built using traditional IT models. If your cloud connectivity is not sufficient, the flow of information is interrupted, and the train, and your business, will grind to a halt.

 

Dennis Brouwer is general manager of converged cloud solutions at Savvis.

What is enterprise security?


While I know that some practitioners are going to scoff when I ask the question "What is enterprise security?," I'm going to ask it anyway.

 

You see, great leaps forward very often start with questioned assumptions. Ptolemy assumed (based on a set of perfectly logical assumptions) that the sun rotated around the earth. It was only when subsequent thinkers questioned his universally held theory (in many cases at great personal cost to themselves) that a cataclysmic advance in humankind's understanding of the solar system became possible.

 

The point is, if we don't stop every once in a while to question what we believe, we can hold on to outmoded assumptions way past their "sell by" date. And when it comes to the security of the information we steward in our organizations, outmoded assumptions create risk. In other words, if you assume things about your environment that (maybe) were true once - but aren't now - you put yourself in a situation where conclusions you base on those assumptions may very well be false.

 

Take an assumption like this one: "Two devices on the same isolated network segment communicate more-or-less privately." Maybe that's true. But if you're wrong - like if the segment doesn't stay isolated or someone moves one of the devices off that segment? Risk.

 

The answer to the question "What is enterprise security?" is neither static nor a given. And while many organizations on the edge of change are rethinking and embracing what "enterprise security" means and adjusting accordingly, just as many are clinging to outmoded definitions about what's "inside" vs. "outside" the enterprise and what's "security's job" vs. not. These boundaries just aren't as meaningful as they used to be.

 

"Enterprise" and "security" are borderless

First, it's important for security practitioners in today's IT shops to realize that the definition of "enterprise" is changing. A few years ago we in security talked casually about the "disappearing perimeter" (remember that?), but for today's security practitioner an appropriate question might be, "What perimeter?"

 

If it wasn't true before, it's certainly true now: Enterprise security and location of resources are unrelated. From a location-of-access standpoint, take the trend of mobility to its ultimate conclusion: Users employ an array of mobile platforms to send email, modify documents and close deals - or they access critical applications from home machines not provisioned by the organization. But the data we hold needs to be protected just the same. Just because devices accessing critical resources aren't coming from some arbitrarily drawn geographical border doesn't mean that the security of those resources is any less relevant.

 

On the other hand, "enterprise" isn't defined by location of computing resources either. This time, take cloud to its conclusion: Critical business applications sit on dormant virtual machine images in redundant, geographically distributed data centers. These images are spun up on demand in response to user requests, live just long enough to service the request, and then are spun down to conserve energy, bandwidth and CPU cycles. Enterprises reallocate storage and processor resources on the fly across the globe in response to user demand, business volume, time of day or any number of other factors specific to their business. Are you free from the need to care about security because your data is hosted outside your data centers? No.

 

In both cases, security is still a critical factor of supporting the organizational mission. But the temptation - particularly when we're strapped for resources or under the gun to deliver a critical task - can be to draw a line in the sand and decide that certain technologies are outside the boundary of our security plan because they're implemented by a vendor or because they leverage devices we didn't provision. But nothing could be further from the truth. In fact, this just makes security more important rather than less.

 

"Enterprise" is defined by data; "security" by relationship

So if geographic location doesn't define what's in the enterprise, what does? In my opinion, it has to be the data. When geographical boundaries no longer define what's "inside" vs. "outside" and security isn't tethered to particular systems or applications, the answer has to be to focus on what we're ultimately trying to protect: the mission of the organization. And the embodiment of the organizational mission is the data the organization creates, processes and stores.

 

Said another way, information systems used by an organization process and store data for a particular purpose; the data those systems operate on is the raw material that the organization uses to accomplish that purpose. Everything that goes into the processing and storage of that data - no matter where it's located or at what third party - is in scope from a security standpoint and therefore must be included in "enterprise security."

 

This is true even when the data is outside of your organization's direct control. Say for example your hospital outsources storage of your medical records. If your medical records get exposed inappropriately, do you honestly care whether it was the hospital that accidentally lost them or whether it was a service provider? I don't. I have a relationship with the entity that I trusted with my data. And I trust them to only share that data with trustworthy organizations. So when someone violates that trust and puts users at risk, users are going to hold accountable the entity they trusted in the first place.

 

Just as the data defines what the enterprise is, so also is "security" defined by the chain of relationships along which that data travels. If the data is compromised, the responsibility for failure to protect that data rests with the organization with the relationship to the data owner. If confidentiality, integrity or availability of that data are keys to supporting the organizational mission, the organization is the one that takes the hit. If the organization is acting as a steward of that data on behalf of someone else, it is the one with the relationship to the data owner and therefore the one to take the hit when security fails to protect it.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.


Over a three-month span earlier this year, I had the opportunity to play a leadership role in the TechAmerica Foundation's Commission on the Leadership Opportunity in U.S. Deployment of the Cloud (CLOUD2). This industry panel, organized at the request of the Obama administration, published our report, titled "Cloud First, Cloud Fast: Recommendations for Innovation, Leadership and Job Creation," on July 26. Our leadership team recently had the opportunity to share this report with multiple committees on Capitol Hill.

 

This report, specifically encouraged by the Federal CIO and the Department of Commerce, is focused on the future of cloud services in the American economy. Our report establishes a forward-looking roadmap of maturity measures that will enable the United States economy to maintain its leadership position in the cloud services marketplace worldwide.

 

One of the things that inspires me every day is the fact that our industry plays such a key role in the stability and recovery of the American economy. Technology sector leaders are sought out as key political supporters by both the left and the right. Technology sector job creation is one of the few bright indicators in the current landscape. Many view government IT reform as a critical step in achieving increased government productivity and budget control.

 

Our recently released report makes 14 recommendations across four categories, which I like to call the "Four T's of Cloud":

 

  1. Trust - We recommend participation in international certification frameworks, investment in identity management ecosystems, standardization of national breach law, and further academic cloud research.
  2. Transnational Data Flow - Our commission recognizes the need for digital due process that allows cloud providers and cloud customers to clearly understand their obligations and protections under the law. The report also recommends that the U.S. align with international privacy frameworks, and show leadership by allowing appropriate government workloads to operate in transnational cloud environments.
  3. Transparency - With the cloud market clearly concerned about vendor lock-in and interoperability, the report calls for cloud providers to develop disclosure frameworks around the operational status of their environments, and to offer data portability tools that enable public and private customers to access their data freely.
  4. Transformation - Recognizing that cloud computing is in many ways a business model as opposed to a new technology, government procurement experts on our commission made specific recommendations around federal budgeting, regulations and incentives, which could spur market adoption and maturity. We also call for continued investment in broadband infrastructure and ICT education, to ensure the supply of key materials this industry requires.

 

As we developed these recommendations for maintaining leadership in the cloud market, many members of our CLOUD2 commission drew positive and negative comparisons to other technical markets such as the financial services industry and the Internet protocol backbone. Many look for parallels to these modern technical markets for examples of what works and what doesn't when trying to show leadership in a global market.

 

The primary contrast that jumps out at me is the method for evaluating services. Older, more mature technical markets tend to be evaluated by simpler criteria. An investment vehicle is given a risk rating and is either profitable or not profitable. An Internet link is given a quality-of-service rating and is either up or down. When evaluating the cloud services market, the criteria for a "successful" offering are far more subjective and diverse. Buyer criteria range widely across service plans, computing models, operating systems, application support, provisioning time, automation capabilities, security tools, transparency measures, portability factors - the list goes on and on.

 

These "Four T's of Cloud" and their corresponding recommendations serve to highlight just how complex our cloud services market can be. The cloud market is as challenging to regulate as the telecommunications or financial services industries, and it is further complicated by the rate of technological change inherent in the industry. New software, hardware and business models can all change the face of our entire industry in a matter of months, making it hard to set any type of long-lasting regulatory policy.

 

These globally aware, progressive recommendations made by the CLOUD2 report are a good set of guidelines against which cloud services can be developed and improved. Here at Savvis, everything we do in our cloud services roadmap will be examined against the Four T's, and evaluated, in part, by how well we are advancing their objectives in the years to come.

 

David Shacochis is vice president, global public sector, at Savvis.

Moving to cloud is a big decision, but the transition to the cloud alone will not be the panacea for all your infrastructure woes, as the hype may lead you to believe. A few months ago, I compared how cloud is a lot like relocating or buying a home. I posed many considerations and questions that showed a stark similarity between them - and how much thought and consideration needs to go into each transition for it to be successful.

 

I recently sat down with Savvis' consulting team. These experts have spent thousands of hours helping customers prepare and transition to enterprise cloud. During our conversation, the team outlined the top considerations that organizations need to address to position themselves to select the best cloud type for their enterprise, and to achieve a successful transformation.

 

Answering - or not answering - the following five questions can have a significant impact on whether or not the organization realizes the promise of cloud infrastructure.

 

Decide whether you are going to maintain two infrastructures or consolidate.

Different requirements determine whether the organization is going to augment its existing infrastructure with cloud or use cloud to consolidate. Knowing the business and technical drivers that are moving the organization toward cloud will determine which path to take. Most organizations we work with implement a hybrid approach, using cloud to achieve specific levels of flexibility and value, not just cost savings.

 

Understand what applications are currently running in the existing environment and expectations for moving certain solutions to the cloud.

Mobility and growing data needs are placing new requirements on applications and services. It is important to analyze the applications in your environment and understand who is using them, how they are being used and what applications can be eliminated. Understanding the applications and the workload parameters will help to best distribute your assets and prep your user communities for the move.

 

Analyze the architecture of the application environments.

Virtualization has helped organizations lower storage and data center costs. Virtualization creates a pool of manageable, flexible capacity. Automation and orchestration take that pool of resources and enhance its manageability based on business policies and service-level requirements. The decoupling created by virtualization, combined with defined service offerings and automation, greatly enables cloud computing. In addition, companies that have virtualized their applications have already gone through a segmentation process and have the foundation for understanding what bridges are needed between the different infrastructure components. Applications that are on horizontally scalable systems and configured in clusters streamline the transformation and reduce upfront work as well.

 

Determine how much capacity you need to run the applications; are the capacity requirements seasonal or variable?

Knowing your application capacity requirements will ensure your investment pays off. While cloud allows per-unit pricing, this approach is still more expensive than purchasing capacity in bulk. Based on our experience, most organizations can predict 70 percent of their capacity requirements. Cloud is a superior infrastructure for applications and user communities that have variable or seasonal capacity requirements.
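To see why the bulk-versus-per-unit distinction matters, here is a minimal sketch with entirely hypothetical unit prices and utilization figures; it compares running everything on demand against buying the predictable portion of capacity in bulk and bursting to cloud only for the peaks:

  # Hypothetical illustration: bulk (reserved) capacity vs. pure on-demand pricing.
  # All rates and utilization figures below are invented for the example.
  HOURS_PER_MONTH = 730

  on_demand_rate = 0.12   # $ per instance-hour (hypothetical)
  bulk_rate = 0.07        # $ per instance-hour when bought in bulk (hypothetical)

  baseline_instances = 70   # the predictable ~70 percent of capacity
  peak_instances = 100      # seasonal peak
  peak_hours = 150          # hours per month actually spent at peak

  # Option A: run everything on demand.
  option_a = (baseline_instances * HOURS_PER_MONTH
              + (peak_instances - baseline_instances) * peak_hours) * on_demand_rate

  # Option B: buy the predictable baseline in bulk, burst to cloud only at peak.
  option_b = (baseline_instances * HOURS_PER_MONTH * bulk_rate
              + (peak_instances - baseline_instances) * peak_hours * on_demand_rate)

  print(f"All on-demand:      ${option_a:,.0f} per month")
  print(f"Bulk + cloud burst: ${option_b:,.0f} per month")

Under these made-up numbers the blended approach comes out meaningfully cheaper; the broader point is simply that the split between predictable and variable demand drives the economics.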

 

Assess compliance and security requirements.

To move to the cloud, organizations must identify which applications are PCI compliant and define clear application security requirements. Some applications may never move, but knowing those services and solutions that require higher levels of security will help define if a dedicated cloud approach is better than an open one. Regulatory compliance policies and other internal procedures will inform what needs to be enforced on the cloud.

 

Steve Garrou is vice president, outsourcing and cloud services, at Savvis, A CenturyLink Company.

It's a fact of life in today's world: More and more organizations are adopting cloud and cloud technologies. Recent surveys from IDG, for example, suggest that 57 percent of firms surveyed are already in the cloud, with another 31 percent planning to move to the cloud in the next year. Putting those figures together, this means that 88 percent of firms are either in the cloud already - or will be there imminently (i.e., 12 months or less).

So the end-state for these firms is pretty well-known - they're going to migrate some portion of their environment to the cloud. However, the mechanics of how they get there - in other words the path that each individual organization follows on the way to that defined end-state - varies tremendously. This includes different cloud models, different architectures and different types of service providers. 

And while some of those organizations have their IT department riding shotgun (i.e., acting as technical "navigator" and adviser) during this transition, it's not always the case. In fact, evidence suggests that many organizations leave IT out of the loop entirely in some cloud planning scenarios. Forrester, for example, cites statistics that suggest upwards of one-half of cloud buyers may be outside of IT.  

Leaving IT out of cloud planning is a reality in many cases - and while there are of course "many paths to the Buddha," it's also important to recognize that some paths (like leaving IT out of the discussion in cloud planning) are harder to follow than others in a few different respects. So while lack of IT involvement in a cloud deployment is something that we know is happening, it also can (and does) have some unintended consequences from an information security standpoint. 

Why Leave IT Out at All?
Folks who work in IT probably have one question upon hearing that; namely, "Why leave IT out of a cloud deployment at all?" There are a few reasons why this can happen in practice.  

First, there is a perception in some firms that IT is a "stumbling block" to forward progress. In many cases, this perception is baseless; still, it is understandable why a business partner might feel that way. For example, a business partner might not understand the need for adequate preparation or technical planning. In other cases, the perception could be legitimate (let's face it, there are some IT shops that culturally have a high amount of inertia).

But true or not, the fact is that IT can sometimes be perceived by the business side of the house as something that would slow down a deployment. And in a world where some vendors advertise their ability to circumvent IT participation (seriously, they do), no wonder some firms feel this way. 

In addition to perception-related reasons, don't discount the fact that in some cases cloud migration carries with it some reduction in IT budget and/or staffing. This is obviously not going to be welcome news to folks actually in the impacted department. In those cases, IT may be purposefully left out of the discussion until the full impact to the IT organization can be determined and quantified. In some cases, this means leaving IT out of the discussion entirely until a migration is well under way.  

Lastly, don't forget ignorance. Not everyone will realize that IT should be involved.  

Security Impact
So what does it matter if IT is left out of a cloud transition? From a security standpoint, there can be a few impacts. It's important that firms recognize this. To name a few possible areas of concern: 

  • Technical impact - Certain types of deployments (e.g., IaaS, PaaS) can shift how and where applications and critical services are located. This can introduce new data pathways that didn't exist before or invalidate security assumptions made under the old model. For example, an assumption along the lines of "We don't need to encrypt this traffic because the database server is on the same VLAN as the app server" makes less sense once that assumption (that they're on the same VLAN) ceases to be true (see the sketch after this list).
  • Regulatory impact - In some cases, there may be regulatory drivers that impact the data. Your IT department may be working closely with the compliance office to track and manage something like PCI DSS compliance (credit card data) or HIPAA compliance (medical records). If you start replicating certain types of data outside of your data center to an environment that may or may not have been certified to implement the security controls you need, you introduce risk.
  • Operational impact - Certain controls, such as those for security, may operate only within a particular context. Changing the context (for example, by moving to the cloud) may change how that security control functions.
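To make the first bullet concrete, here is a minimal sketch, assuming a PostgreSQL database reached through the psycopg2 driver, of what enforcing encryption on a database connection might look like once the app and database no longer share a VLAN. The host, credentials and certificate path are placeholders, not a real configuration:

  # Minimal sketch: require TLS on a database connection that now crosses
  # networks we don't control. All connection details are placeholders.
  import psycopg2

  conn = psycopg2.connect(
      host="db.cloud-provider.example",      # no longer a neighbor on our VLAN
      dbname="orders",
      user="app_user",
      password="***",
      sslmode="verify-full",                 # encrypt and verify the server's identity
      sslrootcert="/etc/app/ca-bundle.pem",  # CA bundle used to validate the server cert
  )

The specific driver isn't the point; the point is that an assumption the old architecture made silently now has to be made explicit, and enforced, in the new one.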

There are other possible impact areas, of course; those listed above are only a few examples of where impact could occur. The point is that IT is typically chartered with overseeing the technical landscape and overall environment from an information security standpoint - by going "around IT's back" to the cloud, it follows that information security can be impacted. This creates complexity - and puts the firm as a whole in a position of increased risk.

Now risk isn't always bad ... but unless you're Evel Knievel, it pays to think through the relative merits of risks before you take them on. Meaning, it's important that organizations think through the level of IT involvement in a cloud deployment to determine whether that level of interaction is appropriate given the possible security impact. Business partners should be thinking about this when IT isn't involved, and IT should be thinking about it when they learn of a cloud migration they're not fully engaged in.

This isn't to say that IT should be involved in every deployment (good idea though it is, situations do not always permit the optimal case to play out), but organizations that purposefully leave IT out of the conversation should be even more careful about how they approach the technical deployment. "More careful" means that they plan carefully, that they reach out to all stakeholders, and that they make full use of the service provider's technical expertise. After all, in many cases they're operating without a net.

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

Technology - as we all know - changes quickly. What sometimes changes even faster are the buzzwords. And the newest one is "big data."

 

It's a cutesy name for a powerful concept: that data has utility, and as datasets grow, the utility of that data grows nonlinearly. In other words, opportunities to make productive use of the data compound along with the size of the dataset, as does the complexity of managing it.

 

This should be of particular interest to organizations going through a cloud transition. Why? Because efforts to virtualize, centralize and standardize inevitably lead to centralization and aggregation of data. As that centralization occurs, data that may have been dispersed and diluted throughout the enterprise under the old model becomes concentrated in the new.

 

Dilute data (where data is spread over the entire enterprise and stored/maintained only at tremendous expense) becomes "data as singularity." Like a black hole, our data becomes extremely powerful (though difficult to harness) due in part to its density.

 

While this is extremely powerful for IT generally, for those of us who are chartered with maintaining the security of that data, it's a mixed blessing - there's an upside as well as a pretty clear downside. Let's take a look at both at a very high level.

 

Security Downsides

It goes without saying that centralized, extremely large volumes of data carry a significant security impact. First of all, such a store makes a heck of a target for a crook. Can you think of a more appealing target for someone who wants to get their hands on your organization's crown jewels? I can't.

 

Just as this data is potentially valuable to you, so also is it valuable to an attacker. Not to mention that the separations that once limited the damage when a portion of the data was compromised go away as centralization occurs. In other words, because the data is centralized, any exposure is total exposure.

 

However, it's not just the "target-worthiness" of the data that constitutes a risk. The size of the dataset also makes implementing security controls unwieldy. Can you imagine, for example, the engineering challenge associated with encrypting an exabyte of data? Consider the tried-and-true tool in security -- linear search (i.e., how many AV, DLP and IDS solutions work). "Big O of n" becomes "Big OMG" (sorry, couldn't help it).
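To put a rough number on the joke, here is a back-of-the-envelope sketch; the scan throughput is an assumption, and real AV, DLP and IDS pipelines vary widely:

  # Back-of-the-envelope: one linear pass over an exabyte of data.
  data_bytes = 10**18                      # one exabyte
  throughput_bytes_per_sec = 10 * 10**9    # assume 10 GB/s of sustained scanning

  seconds = data_bytes / throughput_bytes_per_sec
  years = seconds / (60 * 60 * 24 * 365)

  print(f"{seconds:,.0f} seconds, roughly {years:.1f} years, for a single pass")

At an assumed 10 GB/s, a single pass takes on the order of three years - which is why "scan everything linearly" stops being a workable control at this scale.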

 

So not only does the data have a huge bull's-eye on it, but the tools required to implement technical security controls at this scale are complicated to deploy. This is one of the reasons it pays to think through (and set up) security controls before the dataset grows too large.

 

Security Upside

But it's not all downside. There are a few security advantages that follow as a consequence of centralizing and expanding the dataset in this way. First of all, in a distributed-data model, understanding the universe of locations within the enterprise (and outside of it) where data lives can prove extremely daunting: to the point that asking the seemingly simple question of "where does the data live" may simply be unanswerable to many organizations.

 

As data becomes more centralized, the specifics of data storage at the central location become more complicated, but the "sprawl" of data within the enterprise can be reduced. Note that this is highly dependent on individual circumstances - so your mileage may vary. Getting away from this sprawl has a tremendous benefit, as we can centralize - and in so doing improve - security controls.

 

Secondly, the dataset itself can be analyzed to find fraud. Keep in mind that much of the data in the set will be security relevant (security logs, etc.). We're already seeing efforts by the Department of Homeland Security to analyze datasets to combat real-world security threats in certain situations. So too can your organization mine the data for information about attack conditions and fraud. Depending on the nature of the data in scope, there can be opportunities here, though obviously the specifics are up to you and take planning to implement.

 

Lastly, it's an opportunity to revisit the legacy environment and apply financial resources to bring security to the data. Anything that loosens the pocketbooks and allows investment in IT is a way for the savvy security practitioner to capitalize. Security is obviously a huge part of the data strategy for any organization, so getting out in front of the "big data" movement can be a huge win for security.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

Moving to a managed cloud model for Software-as-a-Service (SaaS) delivery makes a lot of sense for independent software vendors (ISVs).

 

However, it's key to first conduct research and ask the right questions before outsourcing to cloud. ISVs should know what to look for in a SaaS infrastructure services provider and what types of questions to ask.

 

Security

When it comes to cloud, most of the questions I receive are around security. In short, cloud can be as safe as any other form of IT infrastructure: It's as safe as the security measures you have in place.

 

Ask potential service providers whether they can filter out threats at the network level - it's a much more powerful method of protecting your IT infrastructure than doing it on site. Ask how they minimize exposure to common threats. Ask how they identify and assess system and application vulnerabilities. Do they offer 24/7 monitoring, management and response?

 

Service Levels

Single service-level clouds may not fit all applications. As an ISV, you either offer a standard service level to customers or have varying service levels based on your software tiers and other factors.

 

Be sure to review your potential cloud provider's capabilities carefully. Remember: SLAs you offer cannot exceed what your service provider is capable of providing.

 

Explore the service provider's standard and emergency change windows and procedures. When does their SLA "clock" start ticking? Things do go wrong from time to time, and how your service provider responds to those issues will affect your SLAs to your customers.

 

Lastly, how redundant is the service provider's cloud environment? It doesn't start and stop at the hardware, network and storage layers but also continues into the facilities (i.e., power, battery backup, redundant and varied paths for network into the building). There's nothing wrong with asking for a data center tour.

 

Hybrid and Flexible Solutions

ISVs running in the cloud may want to tap into their legacy IT environment to get to market faster.

 

The availability of hybrid cloud solutions - the tying of private and public clouds to each other and to legacy IT systems - is important to solve IT issues related to temporary capacity needs (i.e., bursting) and to address periodic, seasonal or unpredicted spikes in demand.

 

Ask if the potential vendor's assets work together to fully embrace the cloud model and deliver a combination of colocation, managed services and network that best suits your immediate and future needs. This capability enables the flexibility you need to both maintain your traditional licensing business and transition into SaaS. The vendor you choose should help you navigate the transition, no matter what your scenario entails.

 

Pricing

Vendors tend to price their clouds differently. Make sure you compare "apples to apples" and not just what vendors market; an instance of computing in the cloud may mean different things across vendors. To get the full picture, compare and contrast solution pricing versus individual element pricing.

 

Ask what is included in data center services (e.g., storage fees). Are backup, security and support services included? What are the costs to add network connectivity options?

 

SaaS Expertise

In the end, the ultimate factor - in some instances even a deal-breaker - should be SaaS expertise. Look for a service provider with experience building solutions specifically for ISVs. Ultimately, the vendor should be able to help you figure out the right solution and roadmap to meet your business needs. If they don't specialize in offerings for SaaS companies, look elsewhere.

 

Cloud enables ISVs to implement their offerings in any market in record time. However, true cloud computing for ISVs needs to go beyond just an array of flexible storage and processing capacity. Be sure to conduct research, ask questions and find a solution that meets your needs.

 

Larry Steele is technical vice president, Software-as-a-Service, at Savvis, A CenturyLink Company.

In the next few weeks, I will be packing up my life in Philadelphia and moving to Chicago. I am, in fact, writing this blog on one of my many trips between the two cities to ready my new house and family for the move. As I think about all that goes into a move to a new city, I can't help but see the similarities between transitioning the data center to the cloud and buying a house and moving a family.

 

To go smoothly, the logistics for my move, just like a transition to the cloud, must be well prepped and nicely staged. I have had to weigh different priorities and answer many questions - frankly, many of the same questions IT and business executives face when considering cloud technology.


The Considerations

Location, Location, Location
Relocation: Schools, public services, ease of transportation, social life.
Moving to Cloud: Regulatory guidelines, latency, security, additional services.

Budget
Relocation: How much house can we afford? What are the incremental expenses we will need to consider, such as taxes, utilities and other variable costs? How will these items impact the overall budget we must allocate for running our house?
Moving to Cloud: How much will the cloud cost? Have I considered all requirements, such as network, security and the number of applications that need to migrate? What ongoing expenses will I need to consider?

Services and Partners to Help With the Move
Relocation: What resources do I need to pack, ship and unpack? What will I outsource and what will I do myself?
Moving to Cloud: Which cloud provider do I want to partner with? Will I use its resources or in-house assets, a combination of providers or just one?

Logistics
Relocation: When do I turn off my old utilities and when do I start new ones? Who in my family is coordinating the process? How do I inform friends I am moving and where can they reach me?
Moving to Cloud: How do I handle data migration and security? What do I tell users? How do I prep users to access and locate applications and services that have moved?

Moving Day
Relocation: Who will wait at the new house? Who has to take care of closing up the old house? What preparation do I need to make for my children?
Moving to Cloud: What preparation do I need to make for alerting users about the move? When we make the switch, how long will old services be available?

Ongoing Maintenance
Relocation: How do I take care of what was not done prior to moving? How do I fix leaking faucets and other items we discover as we live in the new house?
Moving to Cloud: What happens when applications don't function? What happens if I want to move more services into the cloud or move some out?

 

Stay tuned for more information on how to answer these important transformation questions. In the meantime, tell me what key considerations you have as you think through whether a move to cloud is right for your organization, and let me know of any restaurant recommendations you have in Chicago.

 

Steve Garrou is vice president, outsourcing and cloud services, at Savvis, A CenturyLink Company.

Every single one of us has been on the wrong end of a purchasing decision at one point in our lives. For me, one case of that was the Xbox. Everyone was talking about how great the Xbox was, the commercials looked awesome, and the reviews seemed overwhelmingly positive. But then I tried it out -- and it turns out it wasn't my thing.

 

Now sure, I know about "caveat emptor." I realize now - just as I realized then - that paying attention to what you buy is a top priority as a consumer. But sometimes the market creates conditions in which fully evaluating a purchase is discouraged: when something's really new, when everyone is saying how great that thing is, when everyone else seems to be buying, or when we only have limited time to act. In those situations, sometimes we get caught up in the frenzy.

 

And while "caveat emptor" is easy during a steady, clear-headed purchasing decision (i.e., one based on reasoned and careful analysis), it's harder to be careful in a purchasing decision made under pressure.

 

This is happening to companies right now with cloud. Almost everyone is moving to the cloud; I've seen statistics suggesting that upwards of 70 percent of firms are already in the cloud, and other sources suggesting that 80 percent of new applications will be developed for the cloud going forward. There's quite a bit of transitioning going on.

 

And in the rush to get their own efforts under way, organizations are moving all sorts of services to the cloud, and some are making moves that might not be the most appropriate from a security perspective. Here are a few easy-to-ask questions that can help make sure you and your service provider are on the same page when moving resources to the cloud:

 

Question 1: What level of service am I buying?

Remember, service providers sell many different kinds of services to different customers. They might have an environment appropriate for federal customers built around NIST 800-53 controls; they might have a healthcare environment built around HIPAA security; they might have a retail environment built around PCI. They might have a low-security environment with very few protections at all. It's very important that a customer's security organization understands what is being bought - particularly if the security organization is looped in after a purchase is in progress.

 

Question 2: Is your environment certified?

One of the key benefits to security from an outsourcing relationship has to do with streamlining the audit process. Ideally, you should be able to just hand an auditor a list of the controls employed by that cloud provider and let them go to town. But without certification (i.e., unless someone has actually gone out and validated that environment), the assurances you can have are slim. Ask your provider for proof. Whether it's PCI DSS certification, a SAS 70 audit or other certifications, ask them to give you that ammunition in a format your auditors can easily use and consume.

 

Question 3: What can you offer in writing about security controls you provide?

It's never good to assume. Ask for statements about control deployments in writing ahead of a purchasing decision. If need be, work that response into the contract so that cloud providers are contractually obligated to meet the bar you have defined.

 

Question 4: What happens if SLAs get missed?

Missing an SLA - particularly in a security context - can be a big problem. Say your service provider fails to notify the right people of a breach until eight days after it occurred. If you're talking about California, where failing to report within the time constraints of the state's breach disclosure law is a violation, there could be serious ramifications - potentially stiff fines or other regulatory action. Define from the get-go whether - and how - your service provider will be held accountable.

 

Question 5: Who's doing what? Put that in writing too.

Some security controls come standard with different service levels and types of services purchased. It's important to understand what your vendor will be doing to support you from a control deployment and operations perspective and what you will have to do yourself. Remember, personnel change - so it's important to get these facts in writing as well.

 

Ed Moyle is senior security strategist at Savvis.

CIOs have seen their roles shift from technical to strategic planning with a focus on the latest technology and trends while also looking at innovative ways that IT can help achieve business objectives. With the markets jittering about a double-dip recession, infrastructure utility approaches such as cloud are likely to get an even greater boost.

 

The consumerization of IT plays into this need for innovative and new delivery models for IT. Employees demand increased mobility and businesses scramble to comply and empower their employees to work any place at any time.

 

However, I want to remind you again that one should not go blindly into cloud thinking it's all about cost savings or that it will be the panacea to all headaches relating to IT. Rather it is a "tool" to help optimize spending amidst shrinking budgets while continuing to accelerate growth and productivity. To use the tool effectively, organizations will need to transform their thinking about the role of IT and revamp their IT departments to best understand when to leverage a more standardized, cloud-based model and when to retain assets and expertise in-house.

 

In its May 2011 report "IT Infrastructure and Operations: The Next Five Years," Forrester Research, Inc. emphasizes that the next five years are about economics. Based on my tour of customers, yes, economics is important, but with an increased focus on improving competitiveness and organizational agility rather than merely driving down costs.

 

Forrester seems to agree and emphasizes in its May 25 report "I&O Execs Must Determine Which Applications Should Move to Cloud" that to contain costs and increase productivity, IT organizations, in general and infrastructure and operations (I&O) in particular, have started thinking in terms of "IT industrialization": a rationalization of IT processes and tools that would lead to more flexible, predictable and reliable services.

 

Key to realizing these benefits is not just using IT to automate processes and tools, but also becoming expert at finding the right service delivery platform for each task. Business processes, from purchasing products to customer service to payroll, are all accelerated through automation provided by IT services. Coupled with the right quality of service, they improve productivity and make the enterprise more competitive.

 

Forrester highlights "two technological changes [that] have the potential to effectively offer a solution to solve IT's future productivity issues: automation takes care of diversity [and] ... cloud computing shows potential economies of scale." These two concepts have and will continue to change the face of how services are sourced and how they are deployed.

 

Forrester emphasizes the balance between traditional IT and new delivery methods and is spot on when it says the traditional approach to "throw more people at the problem" is no longer efficient: Staff augmentation is subject to the law of diminishing returns, which can turn counterproductive and quickly encounter financial and operational limits. IT must overcome these limits by improving productivity by an order of magnitude over the next five years.

 

Forrester reinforces, "Cloud computing is not replacing traditional outsourcing. It simply adds some new outsourcing options, giving I&O teams greater choice, which ultimately leads to greater value. But you have to understand the breadth of options and what makes them different to gain the most benefit."

 

Forrester's diagram [see graphic] illustrates that companies need to understand the value each delivery approach can provide and which is best suited for their unique organization and needs.

 

Forrester - Rightsourcing.jpg 

As you've read from me many times, cloud is only a piece of the IT puzzle (a corner piece at that) and the applications, benefits and ramifications need to be considered and understood in advance. Don't underestimate the impact of your infrastructure choices on the rest of your IT environment. The worst decision is to go blindly into a single model and think it will be the solution to all woes.

 

Steve Garrou is vice president, outsourcing and cloud services, at Savvis.

"Situational awareness" is a term we hear often enough (particularly in security) but one that isn't always fully appreciated; put simply, it's the art and science of paying attention to the world around you and responding appropriately to situations as they change.

 

Believe it or not, this is a critical skill - one that can quite literally mean the difference between success and failure in a business context. When something changes about the business environment, not noticing the change creates risk - noticing the change creates opportunity. And both risk and opportunity abound in looking at the current environment within the healthcare sector.

 

Risk and opportunity: HITECH and business associates

In healthcare, HIPAA is obviously a very big deal. For the decade and a half since the law went into effect, organizations in the healthcare community have been struggling to come to grips with a set of federally imposed mandates governing the security and privacy of electronic health data within their organizations. Historically, though, the situation has been quite different for "business associates" - those firms that provide IT or other support to hospitals, insurance companies or clinics, but that are not covered entities themselves.

 

In most cases, business associates have access to the same data, the same systems, and the same documents and artifacts of patient care as covered entities. And yet, they were not required to implement the same physical, technical and administrative security controls as covered entities were. They were required to sign agreements with covered entities stating their intent to protect data, but they were not on the hook to implement any specific security technology or controls to actively defend the data in question. At least not until recently.

 

As of the Health Information Technology for Economic and Clinical Health (HITECH) Act, business associates are in a different boat. Now they are required to adhere to the same security standards as covered entities. What's more, they're on the hook from an enforcement standpoint as well. For business associates not paying attention, this introduces both risk and opportunity: risk that they will not get into compliance with the law and will be subject to enforcement action, and opportunity to differentiate themselves to their customer base through their understanding of the requirements and their ability to implement secure practices that safeguard patient data.

 

Meeting the requirements: Cloud strategies

For business associates going through this, it's important to realize that getting where they need to be from a HIPAA security standpoint could be facilitated through a (seemingly) unlikely source: their cloud migration efforts. In other words, a migration to cloud already in progress may (under certain circumstances) be one potential avenue to meet some of the required HIPAA security controls head on. Why, you ask? There are two reasons:

 

1.) Many firms that provide cloud services have already implemented the specific controls required by HIPAA security in the course of servicing covered entities; and

2.) Overlapping controls (such as those required to support other requirements such as payment or banking regulatory requirements) may potentially be used in support of HIPAA.

 

In other words, the promise of cloud is leveraging economies of scale for security as well as other desirable technical outcomes; so rather than each firm having to implement security and other controls themselves, they "consolidate" that effort and implement it once in an environment that can be shared among consumers.

 

So for business associates looking to rapidly meet the specific controls required by HIPAA security, looking at environments that are already servicing covered entities could be a good bet. Since these environments are on the hook as business associates (just like you are), they are required to meet the same bar as you; so by leveraging their service you ostensibly leverage the effort they've put into implementing the technical, physical and administrative security controls as well.

 

Of course, this is by no means a substitute for an internal compliance effort. You'll still need to make sure that you're doing the right thing throughout every place that you interact with, handle or access patient health information, but it can certainly be a head start for areas that you're looking to migrate to the cloud anyway. By selecting an environment that will implement the same controls you are required to, and by getting in writing that your service provider will implement the appropriate controls, you just might put yourself farther down the road than you'd otherwise be.

 

Ed Moyle is senior security strategist at Savvis.

Study after study has shown that if you are a Web-based business and your landing pages are slow to load you will lose business. You will also pay a second penalty, losing search rank, making it harder to recover after fixing site problems. Likewise, an overloaded site can quickly turn a marketing success into a PR problem as clothing store Reiss found out last week.

 

Most companies know this, so they ensure that they have a solid SLA in place with their data centre provider covering the performance and availability of their cloud or colocation space so that their apps stay up and are on a platform that should deliver the designed responsiveness.

 

Unfortunately, guaranteeing that the lights are on, the platters are spinning and the bandwidth is in place is not enough to ensure success for a Web-based business; your customers do not connect at the cage or even at the edge of your provider's WAN. Instead, your Web apps must traverse thousands of miles of fibre over multiple networks before reaching their destination. The variables these routes impose play a key role in the overall delivered responsiveness of your applications and need to be monitored and reported on so that action can be taken to ensure that each end-user's quality of experience (QoE) remains high.

 

Savvis' End User Experience Monitoring (EUEM) service, which is powered by Gomez, can analyse performance from the internet backbone in over 150 major cities, for a broad overview, or drill down to customer level via tests run on a network of more than 150,000 end-user desktop computers located in multiple countries. We usually recommend that alerts from the network are copied to our own systems management teams so that we can start investigating issues and recommending ways to resolve them as soon as they arise.

 

The payback from end-user monitoring can be almost instantaneous. We recently set up an initial monitoring profile as a test for a client for just 24 hours. The tests highlighted that a particular code block was causing loading to appear to pause. The diagnostic information we provided enabled the customer's development team to modify the application so that the page now performs better.

 

As you run EUEM analysis over longer time scales you can establish trend data that informs capacity planning and allows for exception monitoring that can aid early fault detection. You can also use EUEM for strategic planning.
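As a simple illustration of the idea (this is a generic sketch, not the Gomez tooling itself), the following derives a baseline from a set of response-time samples and flags measurements that fall well outside it; the sample data is invented:

  # Minimal sketch: flag response-time samples that deviate from the baseline.
  # A real EUEM feed would supply these measurements continuously.
  from statistics import mean, stdev

  response_times_ms = [420, 435, 410, 450, 428, 441, 1290, 433, 425, 447]

  baseline = mean(response_times_ms)
  spread = stdev(response_times_ms)

  for i, sample in enumerate(response_times_ms):
      if sample > baseline + 2 * spread:
          print(f"sample {i}: {sample} ms looks like an exception (baseline {baseline:.0f} ms)")

Run over weeks rather than ten samples, the same principle yields the trend lines for capacity planning and the exception alerts for early fault detection.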

 

A great example of this is when planning to roll out a service to a new market. If, prior to roll-out, you use the EUEM network to run test transactions against the application from your target market, you can compare the results to your performance norms and identify any local bottlenecks that need to be addressed - for example, by moving the load for that market to a more local data centre or modifying the application to split transactions into smaller parts.

 

With this sort of flexible capability, I believe EUEM should be considered as part of every Web service infrastructure contract as a complement to standard SLAs. Used fully, EUEM will help ensure that not only is the site up, but that it is delivering. Without EUEM, can you honestly say you know how your customers see your apps?

 

Steve Falkus is product marketing manager, EMEA, at Savvis.

As the pace of cloud adoption quickens, many organisations are belatedly realising they need to extend IT best practice and governance to cloud and hybrid systems.

 

The journey to cloud infrastructure often starts with a small project that uses cloud to avoid prohibitive upfront costs. Occasionally this expedient sidestepping of the upfront cost of procuring hardware means that cloud capacity has simply been purchased on a corporate credit card or under an operational budget. However, this pragmatic approach, which has helped many IT projects get off the ground, is risky from a governance and manageability perspective.

 

When a small test project becomes an overnight success, the cloud infrastructure system comes into its own, delivering capacity on demand so the user base can grow. But not all clouds are created equal, and as the user base grows the system's importance and the business risks both tend to rise. Supporting the system becomes more vital, as does managing the capacity purchased on demand to control the cost - particularly if working within the margins of a budget not defined for the purpose.

 

Rapid development, test and deployment can be seductively simple on consumer-grade clouds. Delivering a guaranteed level of availability, providing accountability for data and guaranteeing enterprise-class security is more complex, however, if not impossible in a typical mass market, or so-called "credit card," cloud. Effectively integrating this type of simple cloud into an enterprise environment, so it does not become a management or financial burden, is likely to be a significant project in itself.

 

When embarking on a cloud-based project, engineers need to think beyond the ability to scale capacity. They need to design systems from the outset that can self-manage as much as possible. This means setting policies for resources, applications and their operation, then automating them so that as demand varies the service remains stable and requires little or no input to manage its resources.

 

There is little point, after all, in saving the purchase cost of servers and storage if you allow operational costs to escape control by not tying them to demand. Design to scale down as well as up; even correctly managing lunchtime demand cycles can increase cost efficiency. Similarly, taking additional time to manually manage cloud utilisation hour by hour is unlikely to be cost effective.

 

Automation has to be central to the design of an effective cloud-based system. At Savvis, we have put automation at the core of our own cloud systems. We extend this automation to our clients and partners via a RESTful API as an add-on to our drag-and-drop topology designer. The API provides programmatic control of resources and configuration, enabling clients to monitor the performance of their virtual data centre and deploy resources dynamically in response to system status and activity trends.
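As an illustration of the pattern (the endpoints, field names and thresholds below are hypothetical and are not the actual Savvis API), a client-side policy loop might look something like this:

  # Hypothetical sketch of policy-driven scaling against a RESTful cloud API.
  # The base URL, endpoints and payloads are invented for illustration only.
  import requests

  API = "https://api.cloud-provider.example/v1"    # placeholder base URL
  HEADERS = {"Authorization": "Bearer <token>"}     # placeholder credentials

  # Read current utilisation of a virtual data centre (hypothetical endpoint).
  status = requests.get(f"{API}/vdc/my-vdc/metrics", headers=HEADERS).json()
  cpu = status["cpu_utilisation"]        # assumed field name
  instances = status["instance_count"]   # assumed field name

  # Simple policy: keep CPU utilisation between 30 and 70 percent.
  if cpu > 0.70:
      requests.post(f"{API}/vdc/my-vdc/scale", headers=HEADERS,
                    json={"instance_count": instances + 1})
  elif cpu < 0.30 and instances > 1:
      requests.post(f"{API}/vdc/my-vdc/scale", headers=HEADERS,
                    json={"instance_count": instances - 1})

Scheduled against real metrics, a policy like this is what lets resource cost track demand without anyone managing utilisation hour by hour.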

 

It is relatively early days for the use of the API, but many of our clients are already moving beyond simple automated deployment of resources into true intelligent management. For most, the goal is simply to achieve the full potential of a correctly "tuned" self-managing and automated cloud-based system. For others, though, the choice of an enterprise-grade cloud was in part due to the need to meet corporate governance requirements. For these clients, the ability to establish rules in the API to aid the management of, for example, the geographic locations of data storage or to automate ILM processes is central to their cloud strategy.

 

Whatever your cloud infrastructure plans are, look before you leap. Ensure that your organisation's accepted IT best practice and governance can be delivered and automated on the cloud platform you need. Only then will you be able to guarantee that your business can reap the full on-going benefits of cloud infrastructure.

 

Steve Falkus is product marketing manager, EMEA, at Savvis.

There is no doubt in my mind that one of the biggest underlying concerns that IT managers have when moving to a cloud computing model is a fear of losing control of their infrastructure.

 

Think about it: Corporate local and wide area networks have truly become a company's lifeline, delivering an increasing number of critical business applications to users who are both across the hallway and across the globe. These users typically don't care about terms like "cloud" or "virtualization," or how many different things are simultaneously running over the network, or what this year's IT budget looks like. They just want quick, easy access to all the systems and services they need to do their specific jobs.

 

When a company experiences any type of delay or outage, the IT manager immediately becomes the "bad guy." To avoid this designation, IT managers must have their finger on the pulse of their virtualized infrastructure, as much as - or even more than - they did in pre-cloud days.

 

They must ensure that the LAN is supporting all business applications while optimizing the end-user experience over the WAN. They require the tools to monitor the day-to-day and long-term usage and performance of these networks, and the ability to determine which apps and end-users are the highest bandwidth consumers.

 

They must learn exactly what and where the risks are, and understand precisely when and how other business-critical applications will be impacted so they can mitigate the risks from both planned changes and unexpected events. And they need to know how to accurately prioritize all resources based on a real understanding of the impact to critical services.

 

While it seems logical to keep visibility and management in mind when initially deploying new technologies like cloud, unfortunately many IT managers tend to focus on monitoring their advanced environments and troubleshooting performance problems only after experiencing issues in production.

 

Numerous network service providers, Savvis included, offer Web-based tools providing network visibility and reporting capabilities of varying degrees. These services are often positioned as non-standard, optional features that can be activated by customers at any time. But I truly believe that for today's IT manager, there is no type of network visibility or monitoring tool that should ever be considered optional.

 

Gene Rogers is director, network product marketing, at Savvis.

Cloud is much more than just a passing trend in the public sector, as government IT strategies have evolved and no longer allow for siloed, monolithic systems that lack interoperability and flexibility.

 

But why has cloud become such a major factor in the evolution of IT infrastructure across the global public sector? Many industry analysts attempt to answer this question by looking to the past.

 

Some point out that traditional IT projects typically struggle with agility and discipline, leading to unmet expectations, timeline slippage, project delays and budget overruns. Others focus on pure financials, pointing out that IT budgets are increasingly unable to support inefficient procurement models that build for peak conditions and suffer idle capacity.

 

However, it is every bit as important to consider future trends. Cloud models are clearly more agile and flexible and, if done right, allow IT managers to implement capacity on-demand and better match supply with future demand.

 

In the future, government policy around large data sets will influence tax revenue and public safety. Infrastructures will become smarter, with embedded sensors delivering tremendous streams of data for analysis. Regulatory code will need to adapt to the complex legal environment in which we live, requiring sophisticated frameworks for analysis and early warning.

 

Cloud allows government agencies to realize cost savings, efficiencies and modernization and expand their existing infrastructures without having to rely on capital resources. Governments want shared services, automation and standardization and are increasingly issuing mandates that make cloud models the preferred model of implementation.

 

The key to government cloud adoption is the risk classification process and the assignment of workloads to the appropriate type of cloud deployment model - public, private or hybrid. For example, an ingress point for tax returns would have a much different risk profile than an interactive map of a public transport system. This risk classification process can be a challenging area for government agencies, which for years have had direct, hands-on access to their server farms.

 

As governments expand their use of cloud models for appropriately classified workloads, Savvis finds itself involved in a number of opportunities at the heart of major government cloud - or G-cloud - initiatives around the world. For example, our contract with the U.S. General Services Administration allows us to provide cloud to federal, state and local government organizations. And in the United Kingdom, we have made our Government Wide Services platform available to all of the country's government departments and third-party suppliers.

 

Savvis continues its cloud deployments in countries such as Singapore and India, where governments have similarly aggressive strategies. IT leaders there are looking for providers that offer the most effective and secure cloud computing and shared service models to help transform their IT strategies.

 

Clearly, every sovereign nation has different approaches to cloud computing models. Countries like the U.K. and Singapore, which place a higher focus on government-provided public services, have strong IT infrastructure demand that is compatible with cloud. The U.S. and Singapore, as early cloud adopters, have done extensive research and experimenting, but I am struck by how many similarities we see across these regions of the world.

 

Governments all over the world are utilizing cloud to help transform the supply chain, improve government service and revolutionize public sector IT.

 

David Shacochis is vice president, global public sector, at Savvis.

Cost optimization remains the top driver for cloud, and infrastructure utility models in general, with a majority of our clients. When costs are optimized a company can perform better, fund new markets and innovation, become more competitive and accelerate growth.

 

Cloud and infrastructure utility models offer an almost immediate fix to some of the most significant hurdles that drive escalating IT costs:

 

  • Keeping IT capability ahead of the competition in a world of rapid technological innovation
  • The administrative burden of procuring and tracking assets, which diverts focus from your core business
  • Inefficient use of capital investment and capacity due to management inefficiencies and demand fluctuations
  • Rising support and operational costs to address end-user needs
  • An increasing need for flexibility and reliability in services delivered to an increasingly diverse user base

 

Regardless of what drives an organization to seek a better cost structure, cloud clearly delivers value across multiple dimensions. Cloud provides both a game-changing technology and a sustainable commercial model.

 

IT decision makers who think most creatively about how to leverage cloud currently are examining how their cost, control and end-user experience metrics will benefit from various types of cloud offerings -- often in combination with traditional managed services -- and are starting to experiment with these options.

 

On a recent client visit, a CIO team shared with me that they needed to decrease their operating costs and improve the service levels and reliability of their internal IT systems for their employees. They were comfortable with their capital spending levels, but felt cloud would be able to assist them in improving their operating costs and offering better services for their internal customers (their employees). The improved service levels will be derived from the reliability of a professionally outsourced IT infrastructure and the investment (a shift in spending) in strategic planning and support services.

 

Clearly, by viewing IT as a strategic tool and a driver for optimizing cost, executives recognize that to maneuver for competitive advantage in today's tough economic climate, testing the cloud waters is a necessity rather than a luxury.

 

What business opportunities are you pursuing based on your adoption of cloud and a more optimized cost structure?

 

  • Shifting IT leadership focus from technical planning to strategic planning
  • Expanding existing markets or entering new ones
  • Offering additional IT services, with improved service levels, to your employees
  • Something else?

 

Steve Garrou is vice president, outsourcing and cloud services, at Savvis.

A turning tide for Government IT

Global government demand for cloud services has never been more acute. The need for effective and efficient service delivery and faster overall deployment in challenging economic conditions has brought the benefits of the cloud to the fore.

 

According to our annual cloud research, 41 percent of public sector organisations are using or plan to use enterprise-grade cloud for applications they own or manage within the next 12 months.


Earlier this year we expanded our Savvis Symphony Virtual Private Data Centre cloud service to our data centre in Washington, D.C., to provide cloud computing capacity for the U.S. government. We have opened this to a wide range of clients, who now have the potential to cost-effectively deploy new services for the government, which pays only for the IT infrastructure it actually consumes.

 

Back here in the United Kingdom, in spite of a strong culture of outsourcing, progress in delivering the G-cloud has been hampered by the high costs and complexity around meeting government risk control and security management requirements.

 

The good news is that recently there has been a notable acceleration in the acceptance that these concerns can be easily addressed within a well-designed infrastructure. In fact, a report issued this week places shared IT infrastructure, open-source software and a stripped-back IT estate at the very heart of the U.K. government's ICT strategy.

Confirming this trend, this week we made an exciting announcement that our Government Wide Services (GWS) shared service platform is now available to all government departments and third-party government suppliers in the U.K.

 

This Infrastructure-as-a-Service platform will have positive implications for the uptake of hosted, reduced-cost IT operations and services within the U.K. government. What is most exciting about GWS going live for government as a whole, though, is its potential to help departments achieve economies of scale while allowing new suppliers to join the government software market and provide it with innovative services, without the traditional need for additional capital expenditure.

 

Government departments are now able to deploy applications onto our existing platform on a pay-per-use basis, centralising the service, enabling standardisation and encouraging broader use of IaaS, PaaS and SaaS within government. This will reduce costs and improve service, leading ultimately to a government app store and, crucially, a more agile and efficient public sector.

 

Neil Cresswell is managing director, EMEA, at Savvis.

The scary-but-true fact is that many Software-as-a-Service (SaaS) vendors do not have a true disaster recovery plan. Many software companies operate in a single facility and assume that the cost of creating a new or standby environment is too expensive. However, as SaaS vendors mature, disaster recovery plans are a must-have.

 

Luckily, the cloud enables software companies to have a fully enabled disaster recovery site. For example, vendors can now host half of their customer environments in one cloud location and half in another, and move either load between the two in case of disaster. This creates a worst-case scenario of needing to recover 50 percent of customer data at any one time. Additionally, data replication technologies are generally available and have been proven to reduce recovery times between geographically dispersed facilities in the unfortunate event of a disaster.
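
As a rough, purely illustrative sketch of that 50/50 split, the snippet below deterministically assigns each customer a primary location and a failover location. The region names, the hashing rule and the customer IDs are hypothetical placeholders, not any particular provider's API.

    # Illustrative sketch: split customer workloads across two cloud
    # locations so that, at worst, only half must be recovered after
    # a disaster. Region names and the hashing rule are hypothetical.
    import hashlib

    REGIONS = ["cloud-east", "cloud-west"]

    def home_region(customer_id: str) -> str:
        """Deterministically assign each customer a primary region."""
        digest = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
        return REGIONS[digest % len(REGIONS)]

    def recovery_region(customer_id: str) -> str:
        """The other region holds replicated data and absorbs failover."""
        primary = home_region(customer_id)
        return REGIONS[1] if primary == REGIONS[0] else REGIONS[0]

    if __name__ == "__main__":
        for cid in ["acme", "globex", "initech"]:
            print(cid, "primary:", home_region(cid),
                  "failover:", recovery_region(cid))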

 

But disaster recovery is more than just the technology you put in place to solve all your problems. As a SaaS vendor, you also need to think through how every area of your business will keep functioning.

 

Have you geographically dispersed your sales, marketing, product development, finance and operations divisions inside your company? You should. In the event of a regional disaster, you should have a subset of employees in other regions who can pick up and run the business. In the disaster region, your employees will face many challenges of their own, so it is imperative that others are available to take on more responsibility during this trying time.

 

Just as you build out a disaster recovery plan for your customers to keep them in business, also think about the SaaS applications or legacy on-premises applications that you use to run your own business. Do they have disaster recovery plans in place to keep you up and running? Can your finance team still invoice and collect funds? Can your sales team contact customers?

 

Next, do you have a defined set of processes in place? Are your processes checked and verified at least twice a year? Simple process example: Do you have a call-tree established to communicate with your employees? Who notifies whom? What do they say or what should they say? Who should they call and notify? What if the phones don't work? Where should people meet? As you can see, while this is a simple process it still has many moving parts.
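
To make the "who notifies whom" question concrete, here is a minimal, hypothetical call-tree sketch; the roles and the fan-out are invented for illustration, and a real plan would page or phone people rather than print.

    # Illustrative only: a minimal call-tree showing who notifies whom.
    # All roles are hypothetical placeholders.
    CALL_TREE = {
        "Incident Commander": ["Ops Lead", "Comms Lead"],
        "Ops Lead": ["DBA On-Call", "Network On-Call"],
        "Comms Lead": ["HR Contact", "Facilities Contact"],
    }

    def cascade(role, notify, visited=None):
        """Walk the tree from a starting role, notifying each contact once."""
        if visited is None:
            visited = set()
        for contact in CALL_TREE.get(role, []):
            if contact not in visited:
                visited.add(contact)
                notify(role, contact)
                cascade(contact, notify, visited)

    if __name__ == "__main__":
        # In a real drill the notify step would page, text or call;
        # printing makes the fan-out easy to verify.
        cascade("Incident Commander",
                lambda caller, callee: print(f"{caller} calls {callee}"))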

 

I would like to leave you with this: A disaster recovery plan is about people, process and technology. It is just a plan, and you cannot plan for every type of disaster. Decide what risks you can or should mitigate. Train your employees and practice your plans. It will never be perfect. I speak from direct experience here: It's not the plan, but how you respond!

 

Larry Steele is technical vice president, Software-as-a-Service, at Savvis.

 

  • Promising!
  • Transforming!
  • Confusing!
  • Evolving!

These are all terms I heard used to describe cloud during a recent tradeshow jaunt (Cloud Connect), briefings with industry pundits and meetings with customers. While industry excitement about cloud is now a given, use cases are evolving quickly and diversifying to meet unique industry needs.

Addressing cloud concerns in Asia

When we talk to organizations in Asia that are turning to cloud infrastructure services, their challenges and concerns typically center around the security of sensitive data, integration with existing infrastructure and how to utilise the full capacity of the cloud.

 

1. The security of sensitive data

As organisations consider moving sensitive business information outside their private IT network and into an external data centre that they access over a network, there is a fundamental shift in the security boundary. There is a need for a more pervasive approach to application, data and infrastructure security, with businesses and external service providers both taking responsibility.

If you are anything like me, work invades your personal life at the oddest moments. I experienced one of these moments just the other day when my youngest son turned 4.

 

His grandparents recently purchased a new laptop (that's a whole other story) and decided to use an Internet-based video service to deliver an "in-person" rendition of "Happy Birthday" and watch in real-time while their grandson blew out his birthday candles.

 

What followed was one of the worst song renditions I've ever heard. Even when the video was not frozen, the audio delay sounded like a mocking echo of the less-than-perfect singing on our end.

 

The moment reminded me that, not long ago, companies conducted important video conferences over networks of similar quality, with results not too different from ours.

In the technology services industry, sometimes the best thing a marketer can do is come up for air. I was reminded of this while talking to my neighbor, a 72-year-old retiree, who from my own observation is the furthest thing from a "target market" that I could imagine when I consider the demographic of online shoppers.

 

Boy was I wrong.

What exactly can independent software vendors (ISVs) do to onboard customers more quickly?

 

Automation, automation, automation. Well, that was easy!

 

This is an economic as well as a user experience question. The faster you onboard your customers, the faster you recognize revenue; and the better the user experience, the more satisfied your customers will be.
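
As a purely illustrative sketch of what "automation, automation, automation" can mean in practice, the snippet below chains a few hypothetical onboarding steps so a new customer goes from sign-up to welcome email with no manual hand-offs. Every function name is a stand-in for whatever provisioning and billing APIs an ISV actually uses.

    # Illustrative only: a bare-bones automated onboarding pipeline.
    # Each step is a hypothetical stand-in for a real provisioning,
    # configuration or billing call.
    def provision_tenant(name):
        print(f"provisioning isolated tenant for {name}")

    def load_default_config(name):
        print(f"applying default configuration for {name}")

    def start_billing(name):
        print(f"activating pay-per-use billing for {name}")

    def send_welcome(name):
        print(f"emailing credentials and getting-started guide to {name}")

    ONBOARDING_STEPS = [provision_tenant, load_default_config,
                        start_billing, send_welcome]

    def onboard(customer_name):
        """Run every step in order, with no manual hand-offs in between."""
        for step in ONBOARDING_STEPS:
            step(customer_name)

    if __name__ == "__main__":
        onboard("Example Customer Ltd")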

Washington_and_Lafayette_at_Valley_Forge.jpg

One of the most well-known narratives of the American Revolutionary War is the harsh winter suffered by Gen. Washington and his colonial soldiers at Valley Forge, Penn. Troops were under-supplied, their uniforms were threadbare, and many suffered illness and disease from the brutal conditions.

Cloud computing is clearly on the mind of most of our enterprise clients, but many IT decision-makers are beginning to wrestle with the issues of how best to integrate potent new virtualization technologies into their existing privacy, security and operational environments.

 

While Internet-connected mass market cloud services are perfectly appropriate for some applications, enterprises know that application performance and business success are often dependent upon the level of control that an enterprise can exert over network routing, throughput and visibility, tightly coupled with existing enterprise security models. As a result, enterprise buyers are increasingly aware that they need more than "best effort" from their cloud provider. They need a completely integrated cloud infrastructure - a converged cloud solution - that includes high performance network services.

My recent customer visits, coupled with our earnings announcement, signal that cloud fever is not just buzz, but a strong reality. Most organizations want to take a different approach to managing their infrastructure to reduce cost, free up management resources and alleviate development headaches.

 

Cloud is, and will continue to be, an enabler for this transformation. However, even though customers are adopting cloud solutions at an increasing rate, organizations are also trying to answer, "Where does it fit, and how can I get further leverage from this technology?"

outsourcinggraphic.gif

Looking back at our 2010 global market research, we discovered that IT directors in the United Kingdom, as compared to their counterparts in the United States and Singapore, were still treading gingerly in the cloud computing space. In the U.K., only 10 percent outsourced the majority of their IT infrastructure, compared to 18 percent in the U.S. and 38 percent in Singapore.

Cloud computing has definitely moved beyond hype in Singapore in the past 12 months and I'm excited to observe how the IT landscape develops over the course of 2011.

 

Interest in cloud is strong in Singapore, especially when compared against current usage in the United States and United Kingdom. In fact, a study commissioned by Savvis found that Singapore IT heads are leading the shift to cloud, with 76 percent of responding organisations already using cloud computing today.

Time is money, so let's keep this post short and to the point. We left off last time discussing how cloud as a tool helps software companies deliver their services to market more quickly. As I mentioned in my last post, all software companies develop, integrate, deliver and manage their software. So, let's explore how the cloud as a tool assists in getting to market more quickly.

In 1789 Benjamin Franklin wrote, "In this world nothing can be said to be certain, except death and taxes." An obvious addition to this list, particularly looking at the financial markets landscape going into 2011, is "change," as the markets and their constituent players face, and sometimes embrace, constant change:

Valogix logo

This is the first entry in what we hope is a regular series of blog posts featuring Savvis clients answering five questions about their business and IT solutions. We start with Mark Yablonski, chief technology officer at Valogix, who shares details about his company's SaaS delivery model and use of cloud computing.

As we move into a new year, I am seeing customers rethink all components of their business - people, operations and of course, IT infrastructure. Addressing today's market realities has even business veterans scratching their heads to figure out how to control costs without compromising a renewed focus on growth or innovation.
