The CenturyLink Technology Solutions Blog - Trends in IT Infrastructure

Results tagged “cloud” from The CenturyLink Technology Solutions Blog


The public cloud can present an appealing option for organizations that want to take advantage of NoSQL and Big Data but are not willing to invest in physical data center infrastructure to make it happen.

The cloud offers a higher degree of flexibility and the ability to on-ramp a NoSQL project more rapidly than can usually be done on-premises. However, astute IT managers and their Line-of-Business (LOB) counterparts should be asking, "How much will it actually cost to do in the cloud?  Is it really less expensive than doing it on-premises?" Taking it a level higher, the manager might also ask, "What is the overall financial impact of doing NoSQL and Big Data in the cloud?"  

Tap the power of the crowd

In the fourth part of our series titled 7 Secrets to Becoming a Digital Disruptor, we will talk about the power of the crowd. You can read the third part here for insights into how data-powered decision making has changed the customer experience.

Crowdsourcing is not just for open source software developers anymore. These days, crowds deliver the collective actions, shared ideas and emerging tastes of the market in real time - even when the market decides that what it wants is a $50,000 potato salad.


CenturyLink just announced the availability of our hybrid IT managed services in China.

The enormous opportunities in China have created strong demand from our multinational corporation customers to access the Chinese market, but logistical issues and concerns about security and navigating the business and regulatory climate present unique challenges.

CenturyLink Joins ODCA Steering Group


Forecast 2014, the annual conference of the Open Data Center Alliance ("ODCA"), kicks off today with the fitting theme "Where Cloud is a State of Business." The ODCA is an independent IT consortium composed of more than 300 leading global organizations from a wide array of industries that have come together to provide a unified voice for IT requirements for cloud computing adoption.

When Silicon Valley-based RMS wanted to introduce the global insurance industry to an entirely new way of analyzing real-time risk on-demand, it turned to the cloud and strategic IT services from CenturyLink Technology Solutions to create a robust infrastructure strategy that could bring its vision to market faster.

We talked with Paris Georgallis, SVP of cloud platform operations at RMS, about the technology trends and industry dynamics driving the company's decision to host its platform in the cloud:


If you're considering leveraging a solution from an Anything-as-a-Service (XaaS) managed service provider, make sure you understand what infrastructure underpins that solution.

Businesses - large and small - often turn to colocation and cloud services because they want to focus on their core competencies rather than on using valuable resources to design, build and maintain an IT infrastructure that allows their organization to run smoothly. And now there's an interesting trend in which many XaaS providers are doing something similar: They are contracting with larger IT service providers to host their equipment and manage their IT operations.

IT groups are under increasing pressure to deliver new capabilities more quickly - without a corresponding increase in budget.

 

But as they strive to increase agility, many IT groups are hobbled by the fact that they spend the majority of their limited resources maintaining their existing IT infrastructure and applications. That's why so many IT leaders are asking themselves a key question: "How do I optimize my resources in service of the company's business growth opportunity?"


Having spent more than a decade as an executive in the corporate technology field, I sometimes find myself a bit mystified by the prognostications about who is winning and losing in the race for dominance in any given category.

 

The enterprise cloud falls into this pattern, with observers discussing the industry as if it were a football game at halftime -- the winners clearly delineated and the competition all but over. I beg to differ. If enterprise cloud were a football game on TV, we would barely be at the "brought to you by" commercial in the pre-game show. Clearly the players are on the field, and today we are going to show that CenturyLink is most definitely in the game.

Every day, we're bombarded with information about cloud computing, and while it's important, it's often not the whole story.

Clearly, the traditional capital and labor-intensive model of IT infrastructure ownership has become unwieldy - its days are numbered. Over-provisioning to create headroom has become unsustainable - financially and environmentally - and unjustifiable for meeting short-term or temporary workloads.

With the end-of-year buzz around predictions, it's hard not to join in the conversation. Our CenturyLink Cloud leadership team came together for a few predictions for the year ahead - and to show we are keepin' it real, we scored last year's predictions as well.

Savvis recently released its annual Global IT Trends report, conducted by Vanson Bourne, predicting that IT leaders will make a massive migration from in-house IT environments to colocation, managed hosting and fully outsourced cloud services over the next five years.

 

The video below shows why a progression through the spectrum of IT models may make more sense for many organizations than taking a sudden "leap" to cloud. The question is: Where do you sit on this progression?

Tier 3 has joined CenturyLink. We are going to build amazing things together.

 

But let's look back before we look ahead.

 

Many people contributed to the success of Tier 3. Developers launched feature after feature, while network engineers supported customers day and night. A passion for problem solving fueled their achievements.

IDC predicts that by the end of 2013, the 'digital universe' of all digital data created will reach four zettabytes - nearly 50 percent more than 2012 volumes and almost a quadrupling of 2010 volumes.

 

Just as the proliferation of digital data drives rapid improvements in the way businesses innovate their operations and customer engagements, so too is it propelling new approaches to managing the IT that supports it. 

The next five years is set to bring a dramatic, hybrid shift in the way IT leaders approach the IT infrastructure that supports their business growth and innovation, according to a global IT outsourcing study released today by Savvis, a CenturyLink company. 

 

In fact, by 2018, this global report predicts nearly 70 percent of IT infrastructure will reside in colocation, managed hosting and outsourced cloud models - a near full reversal from the 65 percent of infrastructure living in in-house environments today.

The following is a guest post by Dekel Tankel, director of customer engagements at Cloud Foundry.

 

Open source developers have transformed the world.

 

And Cloud Foundry's new relationship with Savvis marks a momentous milestone in an emerging open source ecosystem. Welcome to the community!

Savvis will join more than 20,000 attendees at VMworld 2013 from Aug. 25-29 in San Francisco. With a platinum sponsorship and lots of activities planned, we are set for what should be a huge industry conference.

This week, Savvis will join AppFog, CenturyLink and scores of developers and innovators in Portland, Ore., for OSCON, a meeting of the minds on the latest trends and techniques in open-source technology.

While an email back-up solution offers a tangible fix for an immediate problem - such as a hard drive crash, software corruption, a computer virus or a natural disaster - an email archive solution extends beyond recovery and can be an important investment with both short- and long-term benefits.

The summer is upon us, and with it, music lovers are gearing up for another packed season of music festivals and outdoor concerts.

 

Sadly, what's becoming just as predictable as the summer line-up is the poorly performing e-commerce sites for these events - and yet, more and more, we're seeing website crashes portrayed as a sort of 'badge of honor,' the mark of a sell-out tour.

The UK government's recently announced "Cloud First" policy follows a realization long understood by organizations around the world: Consuming ready-made IT services drives cost savings and agility improvements. Coca-Cola recently announced its adoption of a cloud-first policy, formalizing a direction it has been moving in since 2009.

As we announced the Asian expansion of our Cloud Application Database today, a look at global cloud computing trends on Forbes.com caught my eye and made me think about how far this technology has come in a very short period of time.

 

Asia, in particular, is being projected by industry experts to lead the growth in cloud over the next few years. Last year, for example, Cisco's Global Cloud Index whitepaper predicted Asia Pacific to generate 1.5 zettabytes of cloud traffic annually by 2016, the highest in the world.

Last week I had the distinct privilege of representing Savvis as a participant in the Future in Review (FiRe) conference in Laguna Beach, Calif. It was two and a half days of stimulating dialogue around some of the greatest challenges facing the future of our planet - economics, energy, education, technology and governmental policy.

 

I came to talk tech, but I left the event with something much more valuable: the understanding that our company is right in the heart of it all - that data center technology, powerful networks and cloud computing are all essential fuel for creative thinkers around the world to illuminate the years ahead.

Wired.com runs a community called Innovation Insights, dedicated to "new thinking for a new era of technology."  Earlier this week, I contributed this article on cloud security, reflecting on some of the interesting trends around cloud network access and the impact on customer adoption.

 

I encourage everyone to dig deeper when colleagues reference "security concerns" as one of the reasons holding them back from public cloud adoption. What are those concerns? Are you exposed to them today? And how do you mitigate that risk?

Many cloud computing pundits state that cloud computing introduces a new method of Infrastructure-as-a-Service; however, that is just not the case.

 

Cloud computing introduces automation, orchestration, service provisioning/delivery and service management, initiated either directly or through application programming interfaces (APIs). The ability to secure these new capabilities is essential.

Consider: 5.1 billion people on the planet own a mobile phone - only 4.2 billion own a toothbrush.

 

In today's media organizations, user experience is almost as important as the content itself. Compelling content means nothing if the user experience is poor, and the 'it just works' expectation set by the likes of Facebook and Google means content must be delivered in the right way more now than ever before.

I'm just back from a trip to Europe - the perfect rally point for meeting with many of our European customers to discuss how they are using our growing London-based cloud platform and how Savvis should prioritize its development backlog.

 

We were also able to meet with a number of industry analysts at Cloud Expo Europe and update them on some exciting new developments inside the Savvis Cloud, such as our Symphony Cloud Storage offering and the savvisdirect project.

Mobile payments: The new frontier? Possibly, if you read the analyst reports. Yesterday, Forrester predicted the mobile payment market in the United States alone will reach $90 billion in transactions by 2017.

 

This week, cloud-based mobile payments provider Mozido revealed plans to leverage Savvis' cloud and IT hosting solutions for its mobile payments platform.

 

We sat down with Steve Bacastow, senior vice president of operations at the Austin-based company, to talk about how outsourcing to the cloud aligns with Mozido's tailored solutions for global and regional brands in financial services, consumer packaged goods, telecommunications and retail.

 

Goodbye 2012. Hello 2013!

 

It's another year, wide open with opportunity. Let's savor its newness for a moment and capture our surroundings as they exist right now. For in this fast, tech-driven world, we know one thing is certain: What concerns us today probably won't a year from now.

 

That may have been clear to those of you who took a look back at 2012's most-read blog posts, according to Google Analytics. So now that the slate's been wiped clean, what IT infrastructure topics take priority?

'Tis the season for year-end reviews, and at The Savvis Blog, we're taking a look at the most-read trends and themes defining 2012: Big data. Disaster recovery. Mobility. And of course, cloud.

This week, we at Savvis EMEA are locked into three days of lectures, workshops and demos with more than 5,000 other IT leaders attending the HP Discover User Conference in Frankfurt, Germany.

 

As I listened to the discussions taking place yesterday, I couldn't help but think of how much this year's event theme, "Making technology work for you," fits perfectly with our view of the German IT outsourcing market at the end of 2012.

It's Halloween: one of my favorite holidays. And between the free candy, hot cider, fresh apples and costumes always lurks a good ghost story.

 

In movies and around campfires, a ghost can be fun. But the real ghosts threatening enterprise security professionals can be anything but. Application security decisions made in the past sometimes have a way of coming back to haunt us - and our apps.

 

How do you approach disaster recovery in the cloud?   

 

Businesses that have witnessed the cost savings of the cloud now want to apply those same gains to enterprise disaster recovery. And on first glance, that works because, by definition, cloud allows enterprises to keep a small footprint in their back-up data center and scale it out when a disaster occurs.

 

But questions often linger over how to effectively handle data replication between the production and disaster-recovery sites.

 

DataGardens, a member of Savvis' new Enterprise Cloud Ecosystem Program, has developed a way for businesses to protect IT systems from disaster, regardless of whether those systems are physical, virtualized or both. I had a chance to speak with DataGardens CEO Geoff Hayward about how businesses can build cloud advantages into their disaster recovery plans.

 

Jeff Katzen: Who is DataGardens?

Geoff Hayward: DataGardens got its start in 2007, developing some remarkable software that performs live transfer of virtual machines between sites--moving both server and datastore--while consuming only minimal network bandwidth. While the software enables geographic redistribution of virtual infrastructure in response to any business need, DataGardens began by focusing primarily on the disaster recovery market. The company has since further refined its focus to address the special challenges of cloud-based disaster recovery.

 

Katzen: Given your expertise in enterprise disaster recovery, what capabilities should businesses look for when picking a disaster recovery solution?

Hayward: Of course, different companies have different priorities. Generally, companies want a disaster-recovery solution that is not limited to a specific application or to infrastructure from a particular vendor. Often the goal is to find a comprehensive solution that can protect a broad cross section of physical and virtual IT infrastructure while being easy to administer through an intuitive interface.

 

A good disaster-recovery solution should also allow users to develop and test custom recovery plans, ensure group consistency, support ordering and allow easy failback to the production site after the disruption events are resolved. Perhaps the most important element, though, is that the user must be able to benefit from the inherent cost advantages of the cloud paradigm.

 

Katzen: What gaps are you seeing in today's market for disaster recovery in the cloud? 

Hayward: We see two broad classes of cloud-based disaster-recovery solutions out on the market today.

 

On the one hand, there are the application-oriented solutions. These provide good RPOs and RTOs but are specific to a given software application. They also require active instances of the application in the protection site at all times and, hence, tend to be quite expensive to provision and operate.

 

On the other hand, there are the infrastructure-oriented solutions. These focus on failing over groups of servers and storage systems between sites and offer the potential for application-agnostic, enterprise-wide disaster recovery. Unfortunately, these solutions tend to be incompatible with multi-tenant clouds because each subscriber needs direct control over the cloud provider's IT infrastructure in order to sustain replication and achieve failover.

 

At DataGardens, we believe we have developed a third alternative that offers the best of both worlds, with protection across physical and virtual infrastructure.

 


DataGardens' SafeHaven console seamlessly integrates with Savvis Symphony VPDC to handle data replication between production and disaster recovery sites.  

 

Katzen: I understand that Savvis is the first cloud provider you have established a go-to-market strategy with. Why?

Hayward: We have had a relationship with Savvis for more than two years now. Among cloud providers, I believe most industry observers would agree that Savvis has distinguished itself as a leader in cloud security and threat management. Disaster recovery is simply part of that larger picture.

 

Savvis Symphony VPDC offers all the perimeter control pre-requisites that we rely on in order to provide secure replication from a private customer domain into a multi-tenant environment. Also, from our earliest discussions, we and Savvis have shared a common vision to provide cloud subscribers with a new class of premier data center protection service at a very attractive price point. Despite what they say, few other cloud providers really share that vision.

 

Jeff Katzen is senior manager, cloud business solutions, at Savvis, a CenturyLink company.

Living in a World of Many Clouds

The following is a guest post by Pat Adamiak, senior director of cloud solutions, at Cisco Systems Inc.

 

Here on the final day of VMworld 2012 Barcelona, I can't help but be reminded that cloud is currently one of the hottest topics in the IT industry.

 

We believe that cloud represents a fundamental shift in how IT will be delivered and consumed. In these early days of cloud, public cloud has received much of the media attention, with conversation often centered on whether massive scale is necessary to be a viable cloud provider.

 

While there is indeed a strong role both now and in the future for scale providers, we're also seeing an equally interesting trend - the emergence of more highly differentiated cloud services, focused on addressing industry-specific needs, such as application types, compliance requirements or geographical differences.

 

As we work with both cloud service providers and end customers, we've seen that a one-size-fits-all type cloud solution is not always preferred. As a result, we are seeing the rapid rise of what we refer to as 'a world of many clouds.'

 

For example, a financial institution has entirely different application and service requirements from a high-end gaming company.  Or for that matter, from those of a federal or provincial government. The result is an increasingly rich tapestry of clouds that will mark our future - some public, some private, and some hybrid.

 

Connecting many of these different elements efficiently and seamlessly - and with flawless security - requires sophisticated interplay within the datacenter and across the many flavors of networking that interconnect the datacenters, clouds and cloud service customers.

 

As part of this, the industry is already making rapid progress towards developing open, programmable networks, which feature APIs to support rich interaction between cloud software and the underlying network, as well as increasing virtualization of the network, computing and storage.

 

Leading cloud solution providers are already embracing the transition to more robust and customizable cloud offerings. A great example of this is Savvis, which has been cited by industry analysts as both a visionary and leader in the critical infrastructure as a service market.

 

By upgrading its data centers and switching to an IP Next-Generation Network last year, Savvis combined its expertise in serving vertical markets with a cloud solution that can provide on-premise levels of performance, availability, security and flexibility. It has done an excellent job of preparing for and benefiting from the world of many clouds.

 

Cloud is the future of IT services. This future will not be made of one giant monolithic cloud but, rather, a world of many clouds. These different clouds will be unique in how they are able to serve specific market segments with tailored offerings. Those providers, like Savvis, with the ability to roll out customizable, vertically focused clouds, will have a significant advantage in the race to capture market share in the growing cloud space.

When it comes to the technology behind business expansion in Asia, all roads are beginning to lead to Singapore.

 

We've seen strong interest in cloud computing fuel demand in Singapore, and now we're seeing that region take the lead in IT outsourcing as well. In fact, a global study commissioned by Savvis recently found that a minority of Singapore organizations - 42 percent - keep most of their IT infrastructure in-house, compared to 54 percent globally.

 

In five years, this study predicts 98 percent of Singapore's IT leaders will outsource most of their infrastructure and move it to the cloud.

 

Why is Singapore beating the rest of the world to the advantages of outsourcing - and ultimately - the cloud? The convergence of economic, cultural and technologic trends makes Singapore the right place at the right time for IT outsourcing.

 

Here's why - 

 

Economic expansion. Rapid business growth requires efficient infrastructure deployment and management. Outsourcing can alleviate the burdens of infrastructure management, while also lowering the total cost of ownership - essential elements for expanding businesses that need to focus on establishing a regional presence and driving revenue.

 

Strategic accountability. As multinational organizations chart new territories, CIOs must concentrate on delivering business value, not managing heavy, day-to-day infrastructure tasks. With cloud technologies evolving at an ever-faster pace, outsourcing becomes a strategic, forward-thinking decision for these leaders.

 

Fluctuating IT demand. Expanding business volume and changing end-user needs call for scalable IT solutions. In Singapore, 70 percent of Savvis survey respondents said the most important benefit of cloud computing was the ability to scale up and down to meet compute, storage and bandwidth consumption.

 

Soaring Internet user base. Skyrocketing Internet usage throughout the Pacific Rim, and in Southeast Asia particularly, is attracting the attention of major multinational technology companies, which need regional data hubs to accommodate the growing network and data consumption. Singapore, with its strong government support for technology and its business-friendly climate, is well-situated to house such hubs.

 

Fewer legacy systems. As a culture, newer Singapore organizations are less reliant on legacy systems, and that's translating into fewer security reservations. Our study found security to be the top barrier to cloud computing among IT leaders globally - except in Singapore, where it ranked seventh among executive concerns.

 

The best thing global CIOs can do for their organizations is support processes that enhance their company's competitive advantage. Those who realize that are turning to cloud computing and IT outsourcing to achieve improved efficiency, scalability and collaboration. And they're doing so in Singapore.

 

Mark Smith is managing director, Asia, at Savvis, a CenturyLink company.

Security professionals know (and have likely experienced) situations where security testing can be tricky. Obviously, to get the best and most accurate data, it's the production environment we most want to test - but because of production impacts, our ability to gather data in that environment can be limited. In other words, because it can be intrusive, sometimes there are restrictions on what, when and how we can test.

 

For example, exercises like penetration testing or vulnerability scanning may need to line up with an already-scheduled downtime or be conducted off-hours (e.g., the middle of the night) or during a major holiday. Not only does this make for dissatisfied testing staff (who wants to spend Thanksgiving doing work?), but it also can undermine the quality of the test (since a major application release may not necessarily coincide with when the testing window happens to be).

 

And penetration testing or vuln-scanning is actually relatively non-intrusive compared to other things that we might ideally do if uptime wasn't an issue. For example, we might want to install and run customized data discovery scripts to look for sensitive data like credit card numbers, SSNs, etc. How many shops allow security teams to craft, install and execute custom scripts or software arbitrarily against the production environment? Not many, that's for sure.

 

So this is the way things have been since time immemorial. But the virtual data center is changing the dynamics somewhat. Why? Because now, using virtualization technology, we have the ability not only to seamlessly and transparently "capture" a live image of a production machine to safeguard against downtime, but also to do other things: for example, make a clone that can be "re-homed" outside the production environment. Meaning, not only can we minimize downtime from things like application installs and patching using snapshots (which most of us in a virtual data center are already familiar with doing), but we can also use the features of virtualization products to make available types of security testing that might not otherwise be possible.

 

Of course, you can't just make a clone and start thrashing away willy-nilly. You still need to exercise caution, but you can open up options that just wouldn't be feasible otherwise.

 

What can this do for you?

So, it's the old Catch-22: You can't test against a development or a QA image because it doesn't give you useful results (due to, among other things, differences in configuration between those environments and production), but you also can't test in production because the risks of downtime resulting from a security test exceed the value of doing it. We've all been there. A virtual data center, though, has different properties than a legacy one - and some of those properties offer flexibility and resiliency that can help here.

 

Using these properties to your advantage to facilitate security allows you to do things that you might not otherwise be able to support. For example, do you want to search the local file system for data (for example, using a purpose-built tool like ccsrch) but don't want to suffer the performance impact or potential downtime risk associated with installing and running software in production? A cloned image could help you do this. Want to try a shiny new buffer overflow you found in Metasploit but don't want to risk bringing down the impacted service? A cloned image might support this too.
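
As a rough illustration of that kind of data-discovery pass - in the spirit of a tool like ccsrch, though not that tool itself - the short Python sketch below walks a directory tree from a cloned image and flags digit strings that look like card numbers and pass the Luhn checksum. The mount path shown is a hypothetical example; point it only at an isolated clone, never at production.

# Minimal data-discovery sketch: walk a cloned image's mounted filesystem and
# flag candidate card numbers (regex match plus Luhn checksum). Run it against
# an isolated clone of production, never against production itself.
import os
import re
import sys

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_file(path: str) -> None:
    try:
        with open(path, "rb") as handle:
            text = handle.read().decode("latin-1", errors="ignore")
    except OSError:
        return
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            # Report only the file and the last four digits, never the full number.
            print(f"{path}: possible card number ending in {digits[-4:]}")

def scan_tree(root: str) -> None:
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            scan_file(os.path.join(dirpath, name))

if __name__ == "__main__":
    # Example (hypothetical path): python scan_clone.py /mnt/cloned-image
    scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")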

 

But it's not just cloning that provides benefits. Suppose you want to install a piece of security software into that environment. Just like you might use a snapshot to help you quickly roll back patches, you can use snapshots to facilitate that install as well.

 

But you still need to be careful ...

So advantages are there ... but you do still need to be careful. Why? Because keep in mind that applications are often interdependent - and references to external hosts may exist in the clone the same way that they exist in the original VM. Meaning, just because you successfully clone a production web server so you can use it to test for cross-site scripting issues, doesn't mean you might not still impact the production database (because that web server could, for example, still point to the backend production database). In fact, depending on the application architecture, just turning on the cloned image could cause a production impact if you're not careful.

 

A safe bet to mitigate issues of this type is to isolate the cloned image so that there's no possibility of having it impact production (meaning, structure the environment you put it in to disallow outbound network traffic). You may find that - depending on what type of testing you want to do - you may also need to bring over clones of other hosts as well (for example, database or middleware tiers) in order for the application to perform well enough to test. In essence, you're creating a "mini-mirror" of the production environment for your security testing purposes.

 

Will it be perfect? Obviously not. However, you may just enable types of security testing that would be impossible otherwise.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

Are you considering adopting the public cloud for your enterprise? If so, why?

 

Over the course of the last several months, my team and I have talked to our top cloud customers to document their cloud use cases (i.e., the how?) and business drivers (i.e., the why?). We've identified five business drivers for public cloud, which are outlined below:

 

- Speed to Market

- Financial Efficiency

- Business Expansion

- Global Expansion

- Core Competency

 

What follows is a brief discussion of each.

 

Speed to Market:

Speed to market is undoubtedly one of the most attractive aspects of the public cloud. In a physical environment, building an entire data center consisting of firewalls, load balancers, networking and virtual machines would take many months. Using the Savvis public cloud, all of these capabilities can now be fully deployed and accessible within an hour or two.

 

Financial Efficiency:

This topic has started a lot of debate. Will public cloud save me money? (See here for a discussion of this topic: http://blog.savvis.com/2012/06/will-migrating-to-the-cloud-save-me-money.html). Is opex better than capex? The answer to both of these questions: It depends. What public cloud definitely does provide is the ability for your organization to be more financially efficient. As we like to say at Savvis, you have the ability for expenses to match revenue, or put another way, for capacity to match demand. This ensures the amount of money you have sitting in underutilized assets is minimized.

 

Business Expansion:

A new business or product launch is an exciting time and one that comes with both opportunities and risks. There is the risk that the business will not be successful and so there is a desire to minimize the associated upfront capital investment. Public cloud is a key enabler of this. There is no upfront capital expenditure and the environment can be turned off at any point in time. When that occurs, any opex expenses stop. On the other hand, if the business is more successful than originally anticipated, the ability to quickly expand infrastructure to support the load is of paramount importance. This ability is a key characteristic of the public cloud.

 

Global Expansion:

Global expansion and business expansion are very similar, and public cloud brings comparable benefits to the table for both of them. I'm talking to a number of customers who have a large presence in the U.S. and are looking to expand globally. With this desire to expand globally comes the initial thought to duplicate their infrastructure globally as well. It doesn't always make sense to take on the financial commitment associated with building out dedicated infrastructure when the global expansion is unproven. Savvis' public cloud allows for the deployment of data centers in a pay-per-use fashion within the North American, European and Asia Pacific markets.

 

Core Competency:

Core competency is the primary reason all of our customers turn to Savvis. IaaS is our core competency. That allows our customers to focus on theirs.

 

Public cloud adoption is first a business decision and then a technical one. When embarking on a project to embrace the cloud, having a strong business driver is the first step toward a successful outcome. So why are you looking to embrace the cloud?

 

Jeff Katzen is senior manager, cloud business solutions, at Savvis, a CenturyLink company.

The rapid growth of big data as a new category of "must have" solutions is hard to refute. Airport walls are now adorned with billboards selling the virtues of big data and almost every major IT provider has jumped onto the bandwagon to offer, or re-position, its products for this market. Case in point - I recently attended the Hadoop Summit and almost every major brand spoke about how it is leveraging and/or delivering big data solutions. Also, most of our customers are telling me that managing and harnessing their data assets are critical to the success of their companies.

 

The availability, accessibility and volume of data are growing exponentially while technical problems and business use cases are becoming more complex. Fortunately for companies, these new technology solutions make it easier to visualize and analyze all data in real time, and a paradigm for correlating and integrating information to create new insights is much more accessible.

 

Big data solutions allow customers to use their data to do even more powerful things than analyzing and modeling large volumes of information. These solutions allow all data types, including structured and unstructured, to be assessed and analyzed in new and unique ways. Equally important is the integration of legacy systems with social media data. Combining these solutions with virtualization and the cloud can empower just about everyone in the business to affordably analyze large amounts of multi-structured data. For instance, big data solutions:

 

- Enable organizations to assess structured and unstructured data in their raw form and without the need to pre-model

- Remove the boundaries on what information can be analyzed

- Shorten the time-to-insight, especially as new data sources are added

- Improve the effectiveness of marketing and promotional campaigns by helping companies refine pricing models and lower overall customer acquisition costs

- Construct a complete picture of a company's customer to ensure the right message and products are being pitched to the best audience

- Combine transactional sales data with unstructured comments (e.g., product reviews) to provide insight into purchase behavior

- And much more

 

Given our customers' needs, we too are expanding our solutions to include big data capabilities. Recently, Savvis entered a strategic agreement with Hortonworks, a leading commercial vendor providing innovation, development and support for Apache Hadoop. This agreement enables Savvis to offer cloud-based big data solutions for enterprises and provide customers with a way to use our infrastructure with Hadoop to create valuable information through the searching, indexing and analyzing of sizeable quantities of information assembled from multiple sources.
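
To make that concrete, here is a minimal sketch of the kind of job such an infrastructure runs: a term count over raw text files written for Hadoop Streaming, which lets Hadoop drive plain Python scripts as the map and reduce phases. The input and output paths and the streaming-jar invocation in the comment are illustrative assumptions, not details of any particular offering.

#!/usr/bin/env python3
# Minimal Hadoop Streaming sketch: count term occurrences across raw text files.
# Submitted as both mapper and reducer of a streaming job, for example
# (paths and jar location are hypothetical):
#   hadoop jar hadoop-streaming.jar -input /data/reviews -output /data/term_counts \
#     -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" -file wordcount.py
import sys

def map_phase() -> None:
    # Emit one "<term>\t1" line per token so Hadoop can shuffle and sort by term.
    for line in sys.stdin:
        for token in line.lower().split():
            print(f"{token}\t1")

def reduce_phase() -> None:
    # Input arrives sorted by key; sum contiguous runs of the same term.
    current, count = None, 0
    for line in sys.stdin:
        term, _, value = line.rstrip("\n").partition("\t")
        if term != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = term, 0
        count += int(value or 1)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    (map_phase if len(sys.argv) > 1 and sys.argv[1] == "map" else reduce_phase)()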

 

What are you hoping to do with big data in your company? In subsequent blog posts, we will dig deeper into how cloud solutions can help with your data requirements.

 

Steve Garrou is vice president, global solutions management, at Savvis, a CenturyLink company.

Cloud Icon"Will migrating to the cloud save me money?" This is a question that comes up fairly often in my discussions with customers. The reality is that there is no clear yes-or-no answer. It depends on a number of factors.

 

If you're currently looking at cloud adoption for your enterprise and are approaching it with a view to saving money, that is a valid business driver. That being said, a complete replication of your existing data center in a public or private service provider cloud is not guaranteed to save money and, from Savvis' perspective, isn't the right approach. In a future post, I'll talk about some of the common business drivers for cloud adoption that we are seeing.

 

It's understandable that IT executives are looking at cloud in this way. After all, in the traditional model of IT, as outlined in Figure 1, the business need drives the application and the application drives the infrastructure. So the thought is, regardless of what the infrastructure looks like, as long as it meets the needs of the application, then it should be OK. This, together with the idea that public cloud is a multi-tenant environment whose costs are shared across multiple customers, leads to the perception that public cloud is cheaper. This isn't always the case.

 

Figure 1: Infrastructure at the bottom of a waterfall of requirements

In an IaaS cloud that paradigm is changed around, as depicted in Figure 2. The business need still drives the application, which still drives the infrastructure, but now the infrastructure has the capability and expectation to meet the business need as well. But it doesn't only have to meet the business need today; it has to meet that need at every point in the future. As we all know, the only thing that is clear about the future is that it's unclear ... cloudy, perhaps.

 

Figure 2: Infrastructure meets business needs

A better way to approach the adoption of cloud is to first understand the different types of clouds that are available and what type of workloads would be suitable for each. I plan to write more about this in a future post but, to be more specific, the types of questions to consider are:

- Where should you use a private cloud?

- Where should you use a public cloud?

- Where shouldn't you use cloud at all?

- And most importantly, how do you tie all of these different pieces together to form a cohesive solution?

 

By effectively answering the above questions, you will be able to optimize your infrastructure to meet the needs of the application and business. Instead of simply saving money, you will enable your company to be more financially efficient. If correctly planned and implemented, a byproduct of this will be lower costs.

 

Cloud isn't a one-size-fits-all proposition. Your cloud provider should know this.

 

Jeff Katzen is senior manager, cloud business solutions, at Savvis, a CenturyLink company.

Compensating controls in the cloud

About a month or so back, I was attending a tradeshow where I happened to overhear a passionate argument between sessions about the impact of cloud on risk management. It was one of those times when I was trying my best not to eavesdrop, but these two gentlemen were so vocal about their various opinions that it was hard not to hear.

 

The crux of the argument had to do with whether cloud made risk assessment easier or harder to accomplish. On the "easier" side was the argument that reviewing a cloud services provider once and using contractual language to "lock in" operational controls took several review steps out of scope. On the "harder" side, the argument was that the risk assessment process had to be done for each type of business process that intersected the provider since no one audit could account for every way that the provider would be used (i.e., "Today we use the CSP for public data and we audit their controls for that case, but the business could move private data there tomorrow once the vendor is approved").

 

It was an interesting discussion and, as you can tell, it stuck with me. I'm still not sure who was "right" in this particular discussion - they both made valid points. But it seems to me that there was something bigger left out of the discussion: namely, the impact of cloud on mitigating control selection.

 

Here's what I mean: No matter whose model of risk management you're using (ISO, NIST, Octave, etc.), there's more than just the assessment phase. After assessment, there comes risk treatment. In most cases, that means control selection.

 

Cloud changes this, it seems to me, quite drastically. Specifically, when you engage a cloud provider - whether IaaS, PaaS or SaaS - you are drawing a line in the sand. In effect you say, "Everything below X level of the application stack will be a black box." You are deliberately abstracting yourself away from some portion of the technical substrate. In an IaaS context, it could be that portions of the network leave your scope of control while the OS and platform stay in it. For the PaaS, you retain control over the app but you give up control over the platform ... and everything below that (OS, network, etc.). For SaaS, the whole potato is a black box (the application and everything below).

 

For some purposes, this is a good thing. The less that's in your scope of control, the less you have to deploy custom security controls to address particular issues. However, it's also important to remember that once a particular level of the stack goes from being "something you can manipulate" to "something you can't," you also lose the ability to deploy a compensating control at that level. This impacts (or at least should impact) your control selection.

 

As an example, say you have an application that's historically been hosted within your organization's infrastructure. If you discover an issue at the application layer (say the application is vulnerable to SQL injection), you have a number of options across every level of the application stack. You could, for example, update the app. Alternatively, you could implement monitoring in the database or middleware, or you could implement host-level controls or network-level monitoring. All these options are open to you because you control every layer of the stack.
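
As a minimal sketch of the "update the app" option, the Python snippet below contrasts a query built by string concatenation with a parameterized one. It uses the standard-library sqlite3 module and made-up table and column names purely for illustration; the same idea applies to whatever driver and schema an application actually uses.

import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the application's real database
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_vulnerable(name: str):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so a value like "x' OR '1'='1" changes the meaning of the query.
    query = "SELECT email FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(name: str):
    # Application-layer remediation: the driver binds the value as data,
    # so it can never be interpreted as SQL.
    return conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_vulnerable("x' OR '1'='1"))  # returns every row: injection succeeded
print(find_user_fixed("x' OR '1'='1"))       # returns nothing: input treated as data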

 

In a cloud context, options are more limited. If you use a PaaS, you can't deploy an OS-level control because you don't control the OS. Is this an issue? Maybe not ... at least, not if you're planning for it. But the bigger issue is what happens when you move existing applications and business processes to the cloud. In that case, compensating controls can "fall on the floor" unless you've either A.) kept detailed records of compensating controls you've historically put in place mapped to the original risks so that you can gauge their efficacy in the new environment or B.) systematically re-evaluated each application to determine what compensating controls will need to be re-implemented. And, not to be a pessimist, but most firms aren't doing either of those things.

 

Now, I'm not going to say that every firm out there should start from scratch in their risk mitigation strategy when they move to cloud. But I will say that a move to the cloud - at least for firms that are serious about security - could be a useful time to evaluate risks in the applications and processes that they plan to move.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

 

********

 

Savvis Security Webinar:

"Eight Steps to a Secure Cloud Infrastructure"

June 6, 2012

Presented by Chris Richter, vice president of security products and services
http://ht.ly/aXlSC

 

I am astonished by how little practical or empirical data exists on the topic of cloud bursting. A quick Google search on "cloud burst" or "cloud bursting" yields, well, not much - that is, short of "Men Who Stare at Goats" references, questionable YouTube clips and a campy '80s "art rock" video. To further mystify the topic, all of these data points really revolve around the dubious and Fringe-esque claims of "cloud busting" (notice the missing "R") - or making rain by tampering with clouds.

 

Yet, the concept of cloud bursting (with the "R"), or horizontal application scaling into the cloud (i.e. moving compute workloads into an on-demand resource pool to access additional capacity), has come up in just about every one of my conversations with enterprise clients. Why? Because this could be one of the fastest and most impactful ways for customers to harness the power of cloud computing to grow applications and respond to seasonal, cyclical, or ramping demands ...  and it's really pretty straightforward if you have selected the right cloud provider.

 

I guess I shouldn't be too surprised about limited information on the topic, considering that cloud capabilities vary greatly from provider to provider (see our evangelism efforts on Not All Clouds are Created Equal). Therefore, there is not one easy three-step guide on how to cloud burst. But, hypothetically, if one were to exist it might look something like this - assuming you have your workloads already virtualized:

 

1. Define what you will be bursting. Of course, this relates to applications - but the important question is which layer within the application architecture is ideally suited to a cloud bursting use case? Is it the web, application or database layer within a traditional three-tier relational database-driven application stack, or are we talking about a flat file, NoSQL or big data bursting scenario?

 

2. Select your target cloud. This directive is tightly correlated to how you responded to the first step, since cloud service providers each handle tiered models and distributed data models differently. Most enterprises tend to prefer a highly secure, high-performance cloud that makes it easy to bring in workloads.

 

3. Convert your source images and upload. Based on the provider you have selected, it's time to bring in your image. Is it a VMDK, OVF, XenVM or something else? Even VMs in Open Virtualization Format commonly need to adhere to some service provider-specific configurations. Tools like VMware Studio, PlateSpin and others can be used to convert workloads.

 

Now that you have identified your applications, chosen a provider and converted your image to interoperate with your cloud, as well as uploaded it, you are practically there! However, there are still several factors to consider, and cloud vendors handle these very differently:

 

- How will you launch your workloads? From a template, from a clone or from a dormant VM/instance?

 

- How will you connect, and how much data do you intend to push over this network connection? Is it a point-to-point network, MPLS, EVPL or VPN, and is it production data, metadata, sensitive data or management traffic?

 

- How automated should this solution be? An API can provide full automation, but will require coding and additional business logic in your applications. Is the cloud portal you have chosen easy enough to operate to take advantage of cloud bursting? (A rough sketch of this kind of automation follows this list.)

 

- How will the cloud handle your security policies? Does the cloud you have chosen have the governance and maturity you would expect for your data? Can you even bring your own policies into the cloud? After all, if the cloud holds your data, shouldn't it be able to support your existing IT policies?

 

- How will you handle load balancing? Will you need local and possibly global load balancing that can be dynamically updated to include the new workloads you have bursted into the cloud?

 

- How will you charge back? Does your cloud bursting solution make it easy to charge back internal and external customers and set spend limits, controlling cloud sprawl and avoiding the auto-ballooning of cloud costs?
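
To illustrate the automation question above, here is a deliberately simplified Python sketch of a bursting controller: it watches utilization of the local pool and launches or releases cloud instances built from a pre-converted template. The CloudClient class, the template name and the utilization source are placeholders standing in for whatever provider API, image and monitoring feed you actually use; they are not any specific vendor's SDK.

# Hypothetical auto-bursting controller. CloudClient is a stand-in for a provider
# API or portal automation; local_pool_utilization() is a stand-in for your monitoring.
import random
import time

BURST_THRESHOLD = 0.80    # start bursting above 80 percent utilization
RELEASE_THRESHOLD = 0.50  # release burst capacity below 50 percent

class CloudClient:
    """Placeholder client: launching here just records and prints the request."""
    def __init__(self) -> None:
        self._next_id = 0

    def launch_instance(self, template: str) -> str:
        self._next_id += 1
        instance_id = f"burst-{self._next_id}"
        print(f"launching {instance_id} from {template}")
        return instance_id

    def terminate_instance(self, instance_id: str) -> None:
        print(f"terminating {instance_id}")

def local_pool_utilization() -> float:
    # Placeholder metric source; a real controller would query monitoring instead.
    return random.uniform(0.3, 1.0)

def run_controller(cycles: int = 10) -> None:
    cloud, burst_instances = CloudClient(), []
    for _ in range(cycles):
        load = local_pool_utilization()
        if load > BURST_THRESHOLD:
            # Burst out one instance per cycle; this is also where you would
            # register the new instance with your load balancer.
            burst_instances.append(cloud.launch_instance("web-tier-template"))
        elif load < RELEASE_THRESHOLD and burst_instances:
            # Burst back in: release the most recent instance to control spend and sprawl.
            cloud.terminate_instance(burst_instances.pop())
        time.sleep(0.1)  # a real loop would poll every minute or so

if __name__ == "__main__":
    run_controller()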

 

Whether you are cloud bursting or busting, as the great Lil' Wayne eloquently put it, "Make it Rain." Optimize your existing workloads and select the right provider - one that can not only help burst your workloads onto enterprise-class cloud platforms, but also help you develop the IT strategies you need to grow your business.

 

Aditya Joglekar is director of cloud business solutions for Savvis, a CenturyLink company.

Harnessing the Power of Big Data

While attending eTail West - an industry trade show for retail, brand manufacturer, and travel and hospitality firms - with SiteMinis, Savvis' mobile hosting solution partner, it became clear that "big data" has emerged as a hot buzzword and trend. Companies are beginning to see the challenges and opportunities in big data and are looking either to be an early adopter or, at a minimum, to pay attention to better understand this market and get ahead of the curve.

"Big data," in general, refers to data from both unstructured and structured sources, including machine-generated information (i.e., click stream activity, log data, network security alerts) and social media sources respectively. These information sources are generating data stores that are getting very large, growing and changing very quickly, and struggling to fit within traditional database architectures. Companies are realizing that the real advantage is not about just having the data, but harnessing it to gain big insights at a reasonable cost.

Luckily, today's alternative hardware delivery models, cloud architectures and open source software bring big data processing within reach. For brand manufacturers, retail and travel and hospitality firms, this is especially good news. In an increasingly competitive market, big data provides the visibility and insights to more effectively market to existing and potential customers. However, there are certain technical and policy requirements companies must consider to capture the potential of their data resources, including policies related to privacy, security, intellectual property and liability in a global environment.

Savvis is currently working with clients on big data solutions and roadmaps so they can exploit the power of valuable data assets to strategically address business requirements. Clients want to better segment customers to more precisely tailor products and services, improve the development of next-generation products and services, and improve performance and reduce variability in business operations - better forecasting with real-time data, for example. By better tapping into the data resources a company collects, these strategic benefits are well within reach.

I encourage you to stay tuned as Savvis continues to evolve its big data solutions and offerings.

Steve Garrou is vice president, global solutions management, at Savvis, a CenturyLink company.

Many organizations are understandably looking at cloud-based solutions for database applications. But did you know there's a way to secure the advantages of cloud without the associated performance penalties of virtualization?

 

IT decision-makers want these advantages - instant scalability and pay-per-use, for example - but ultimately many have found that using cloud-based resource managers is not cost efficient.

 

The upfront capital costs associated with a typical modern environment can amount to more than $100,000 for the initial database license plus $20,000 per year in ongoing maintenance costs. These expenses are big obstacles for those considering moving to an enterprise-class relational database management system (RDBMS).

 

On the front end with most virtualized environments, the database vendor does not recognize "sub-capacity" licensing. In most instances, your only option is to license your server as if it were non-virtualized. Therefore, even though you are seeking the benefits of consolidation through virtualization, you receive very little license and maintenance benefit because you are still required to pay for licenses as though the server is not virtualized.

 

When it comes to maintenance, there are other challenges. While virtualizing a server does allow you to consolidate onto a smaller number of physical servers, it does not reduce the maintenance tasks associated with the OS or RDBMS platform. In fact, it may even complicate them somewhat by introducing another layer of complexity with the hypervisor software sitting between the physical hardware and the guest OS.

 

In the past, the database world has been approached as "one database server, one app." This methodology is outdated. Hardware today is vastly superior to what it was even three years ago, and continues to improve by leaps and bounds each year. Although processor speeds have started to stagnate somewhat due to the limits of current manufacturing technology, core density and overall processing power in a server continues to rise.

 

Upon first glance it may appear that you can run more apps on a server if the OS is virtualized. However, that thinking trades server sprawl for VM sprawl and you are only addressing part of the issue.

 

My recommendation is to avoid the extra OS and the hypervisor layer and start using the resource manager (resource governor in MSSQL) to its full potential. This tool allows you to run multiple workloads on the same system and give various resource groups differing priorities. You can even change those priorities based on the time of day or system load, automatically according to rules that you define, all without rebooting your system.
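
As a minimal sketch of what using the resource governor in MSSQL can look like - assuming SQL Server's Resource Governor and a pyodbc connection to the master database, with hypothetical pool, group, login and DSN names - the following script gives a reporting workload a guaranteed floor of CPU and memory while still letting it burst into unused capacity:

# Sketch only: create a resource pool with a floor, a workload group that uses it,
# and a classifier that routes a reporting login into that group.
import pyodbc

statements = [
    # A pool with a 20 percent CPU / 10 percent memory floor that can still use up to 100 percent.
    """CREATE RESOURCE POOL ReportingPool
       WITH (MIN_CPU_PERCENT = 20, MAX_CPU_PERCENT = 100,
             MIN_MEMORY_PERCENT = 10, MAX_MEMORY_PERCENT = 100)""",
    # A workload group that places its sessions in that pool.
    "CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool",
    # Classifier: sessions from the reporting login land in ReportingGroup.
    """CREATE FUNCTION dbo.rg_classifier() RETURNS sysname WITH SCHEMABINDING AS
       BEGIN
           RETURN CASE WHEN SUSER_NAME() = N'reporting_app'
                       THEN N'ReportingGroup' ELSE N'default' END;
       END""",
    "ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier)",
    "ALTER RESOURCE GOVERNOR RECONFIGURE",
]

# The classifier function must live in master; autocommit keeps the DDL out of a transaction.
conn = pyodbc.connect("DSN=SqlServerMaster", autocommit=True)
cursor = conn.cursor()
for sql in statements:
    cursor.execute(sql)
conn.close()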

 

At Savvis, we explored many avenues to come up with a database offering that provides users with everything needed for the agility associated with cloud, wrapped up in a pay-per-use model. The result, Savvis Symphony Database, proves that virtualization is fine for many applications, but it's not the only way to deliver the benefits of cloud.

 

Our approach was to get the hypervisor out of the way. With database resource managers, you can essentially carve up the system without the hypervisor layer. In this model, Savvis shoulders all of the costs of licensing and managing the entire system, but you only pay for what you use.

 

Database resource managers - as they are used in Symphony Database - allow you to assign users to a particular resource group that has CPU, memory and I/O minimums allocated to it. All of the users assigned to that resource group share those resources. One of the best features of the resource manager is that those "restrictions" only kick in when the system is fully loaded. That is, if there are any free resources on the system, all users are free to use them on a first-come, first-served basis.

 

Following this technique allows you to assign various applications a "floor" of resources that will always be available, but still allows users to "burst" into unused capacity. This permits vastly simplified management of the database because you manage and maintain only the database and the single OS image as opposed to the multiple images and databases in a VM-based model. Following this model allows you to easily scale your database hardware horizontally over time, versus buying the largest box that you think that you may need in the future.

 

In summary, before going out and buying a huge server with lots of cores and installing virtualized servers, take a look at the true costs of maintaining that system. Then compare that with the benefits in both performance and manageability of having a single database and OS image to maintain. Once you learn about all of the features of the resource management tools already built into Oracle and MSSQL, I think that you will agree that database resource managers are worth strong consideration as an alternative.

 

Jonathan Vincenzo is technical director, product engineering, Savvis Symphony Database, at Savvis, a CenturyLink company.

What's on the horizon for CIOs?

As we head into 2012, I find myself asking: What's on the horizon for CIOs? The economic turbulence of the last 24 months has led to a period of careful planning and ensuring business stability.

 

According to the IT leaders I've spoken to recently, that focus on balancing the business to deliver potential growth, whilst maintaining flexibility, hasn't changed. However, they are telling me it's still important to find more cost-effective solutions that help drive down IT costs.

 

Our recent global survey of CIOs, IT directors and heads of IT reflects this sentiment. Budgets are less restrictive than 18 months ago, but this is still a critical time for managing IT costs and efficiency. Important lessons have been learned in recent years about how to maximise budgets and IT performance.

 

These lessons have had a profound impact on attitudes toward IT outsourcing. Organisations across the globe continue to embrace technology as a means of delivering first-class IT support. Right now, European IT leaders tend to take a more cautious view of outsourcing compared to their North American counterparts.

 

Despite this, our research indicates a positive attitude toward outsourcing IT is set to spread globally. Within the next five years, organisations in the UK expect to outsource their IT infrastructure more than their colleagues in Europe and the U.S. The vast majority of IT decision-makers forecast that within 10 years, their IT infrastructure will not be managed in-house.

 

IT leaders realise that cloud computing technology has a significant role to play either now or in the future within enterprise organisations. Eight out of 10 of the enterprises surveyed currently use cloud computing in their organisation and two-thirds of those users have adopted it during the last 12 months.

 

CIOs tell me they are rapidly adopting cloud services because the cloud offers them access to scalable computing on demand, improved reliability, reduced total cost of ownership and economies of scale.

 

Take a look at the full report at http://savvis.itleadership.info.

 

Neil Cresswell is managing director, EMEA, at Savvis. 

Why "sprawl" matters for security

Early on in my career, I recall having a conversation with the data center "guru" for a large organization (I won't tell you who it is, but you've heard of them) during the course of an audit. We were winding down the standard data center discussion topics (cooling, backup power, etc.) and I happened to ask the seemingly innocuous question, "So what does this system do?"

 

I wasn't trying to stir up trouble - I was just making conversation.

 

But lo and behold, his answer was a surprising one (at least to me, greenhorn that I was). It turns out he didn't know, but that's not the surprising part (after all, not everyone can know the details of every system in every environment). The part that surprised me is that not only did he not know the answer - nobody else did either. Recordkeeping was so disorganized, and inventories so out of date, that there was quite literally no way to know what that system did. And this was true for a large number of others as well; there was a whole "shadow class" of systems for which simple questions like "what does it do?" and "who maintains it?" were unanswerable.

 

This situation is of course a logistical and security nightmare: those systems may or may not be patched, they may or may not be able to be taken down without disruption, and they could (and did) fail unexpectedly, leaving everyone scrambling to figure out who maintained them (sometimes nobody), who could get access to them (also, many times, nobody), and who to call to confirm they were working again (usually a non-technical business user who didn't understand why anyone was asking in the first place).

 

What causes this?

You'll probably recognize this as one of the scenarios that virtualization technologies (and in particular data center consolidation efforts) promise to help prevent. But fast forward to today - after many organizations have spent significant dollars on consolidating and organizing the data center environment - and all too often, organizations find themselves in a very similar position. This time, though, instead of a shadow class of physical servers that nobody knows about, it's virtual images - just as disorganized and just as problematic, only virtual instead of physical.

 

It's called "VM Sprawl" - and it's something that every cloud user (be they a user of vendor-supplied technology or an internal broker of virtualization technology) should know about and have a proactive plan to prevent.

 

The cause of this isn't rocket science. The situation arises when "one off" VMs - or snapshots of VMs - are created informally or on an ad-hoc basis, without a specific plan ahead of time for how to decommission them later. In a large cloud deployment (or in an informal one like a private cloud) where multiple individuals or groups loosely manage the infrastructure, the temptation to just "whip up" a new VM or install a new virtual appliance is high - for example, in response to a request from a high-profile business unit or pushy internal staff. The longer those VMs persist, the more likely it is that the person who created them will forget they're there - and others may be reluctant to delete them because they're not sure if they're still being used.

 

It can also arise in situations where the process is tightly controlled but inventories are not - for example, when the inventory doesn't reflect the actual state of the environment. In that case, a new VM might get fielded but left off the official inventory. Individuals coming across it later may not be able to easily remove it because they're not sure who's using it or for what.

 

Trying to get out from under this problem once it occurs isn't easy. The most effective strategies involve prevention rather than remediation, and prevention comes down to organization in two areas: deployment and inventory.

 

For deployment, a process that enforces approval and justification for new VMs - and automates record-keeping around both - ensures that every VM deployed has documentation about why it exists, who will be using it, and when it can go away. For inventory, a process that enforces reliable, reality-based information about what VMs exist (and who created them) ensures that information about what lives where is always available. Ideally, tie these two processes together so that approval history is connected to inventory.
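As a simple illustration of the inventory side, the sketch below reconciles a list of VMs actually running (exported from whatever hypervisor or cloud API you use) against the approved inventory, flagging anything unapproved or stale. The file names and column name are hypothetical; the point is the comparison, not the format.

import csv

def load_names(path, column="vm_name"):
    # Read one column of a CSV export into a normalized set of names.
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

running = load_names("running_vms.csv")          # export from the virtualization platform
approved = load_names("approved_inventory.csv")  # maintained by the approval process

unapproved = sorted(running - approved)  # live VMs with no approval record
stale = sorted(approved - running)       # approved entries with no live VM behind them

print("VMs with no approval record:", ", ".join(unapproved) or "none")
print("Inventory entries with no live VM:", ", ".join(stale) or "none")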

 

There are remediation strategies that you can use to try to solve the problem once it occurs (the specifics of which will vary based on your organization and what model of cloud computing and virtualization you're employing), but suffice it to say that prevention tends to be cheap while remediation tends to be expensive. This is why prevention is such an important thing to plan out ahead of time.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

This month, I had the pleasure of speaking with Sramana Mitra, a strategy consultant and entrepreneur based in Silicon Valley. Sramana and I spoke about cloud computing and its impact on the data center. We spent a lot of time discussing key trends in cloud computing. During the conversation I underscored that cloud use (not to mention IT choices in general) must be driven by a business need.

 

Until recently, we have seen a number of enterprises that "got in front of their headlights" around cloud, trying to adapt it to all use cases. Organizations need to measure and monitor the impact of cloud technology on addressing business need so they have full visibility into whether their IT organization is executing at a high level. IT organizations that have a strong link to the lines of business and are tracked against business objectives are more successful. With cloud, many organizations got away from the business driver and were lured by the technology, letting the features of the technical solution lead the conversation.

 

Other key topics we discussed included:

 

  • Differences between cloud computing and the data center
  • Mitigating risk in the cloud
  • How cloud is face-lifting commercial models

 

To read the full article go here. Note that this is a six-part series.

 

Steve Garrou is vice president of global solutions management at Savvis, a CenturyLink company.

The emerging cloud-centered IT landscape has many CIOs wondering what their roles will look like in the near future. The confluence of major forces for change - including cloud migration, outsourcing, ubiquitous computing and IT-enabled corporate strategy - has led to this reflection and created uncertainty about the next step in the evolving IT leader role.

 

CIOs are resilient, and evolutionary forces in technology are nothing new. In fact, the role of the CIO has been in continual formation, evolving from "information systems manager" in the late 1970s and changing in each era of computing - Mainframe, Distributed Computing, Web and now Post-PC. At each step, an IT leader assumed new responsibilities and titles. However, despite the changes spanning these eras, my sense is that, for the most part, the CIO role was fairly consistent across companies and industries: Whatever IT a company had, the CIO or VP of IT ran it.

 

Based on numerous encounters and conversations I have been involved in over the past couple years, it's clear that there are huge variations in the current responsibilities assigned to CIOs. IT leaders are operating at varying levels of the organization, with differing, frequently changing missions, and I'm seeing increasingly divergent definitions for the role across companies. These are ominous conditions that I think could impact the CIO in existential ways. Yes, I think the entire role of CIO could become a casualty due to a general lack of consensus as to what the title actually means.

 

And while that viewpoint may seem extreme, think about it: At many large companies, non-IT executives are being empowered to make their own IT decisions and many business units are selecting their own IT solutions, merging the front and back offices in an IT-enabled business strategy. As executives get more comfortable with IT ownership, as consumerization of corporate IT gets more prevalent, and as business IT gets less and less asset-centric, IT decision making will continue to decentralize. This distribution of IT functions across the executive ranks is impacting the role and even the lifespan of the CIO now.

 

But fear not! The CIO role can endure and the road to extinction can be avoided. CIOs need to recognize that radical changes are beginning to permeate their industries and their companies. The ways people communicate, learn, work, play, organize, govern and conduct commerce are being impacted by ubiquitous computing. These changes are serving as a catalyst for exploring new opportunities and creating an opening for forward-looking IT visionaries. Call it the silver lining, if you will. CIOs cannot ignore the real opportunity they have to spearhead the introduction of entirely new business models and applications based on ubiquitous computing while radically changing the cost structures underneath their legacy systems.

 

I encourage CIOs to not only understand how ubiquitous computing will change their industry, but to be vocal about how to move the business to respond to new opportunities. CIOs have huge credibility within their businesses in matters of technology and often see opportunities that others miss. Those who can give voice to these ideas will thrive regardless of their title.

 

Finally, I wanted to note that I shared additional thoughts on the CIO role with Data Center Knowledge earlier this month. To read that content, click here.

 

Bryan Doerr is chief technology officer at Savvis, a CenturyLink company.

We all know it: economic factors drive cloud. As I outlined on this blog last month, that sometimes means it's hard to add unanticipated security controls to a new cloud deployment (since costs of controls eat into savings projections).

 

We talked about some tools that can be used to limp along until funding can be secured to meet the security requirements and deploy appropriate controls (it's January now, so maybe FY'12 dollars are already in effect, taking that pressure off). What we didn't talk about, though, is the inverse: the budgetary expectation that the legacy environment will shrink. It seems like a given - and maybe not such a big deal at first blush - but it has consequences. And it means security organizations need to start planning now so as to not get blindsided when this happens.

 

Budgetary Changes and Economic Drivers

Think about it this way: for a deployment like a virtualized data center, the expectation is that costs will decrease over the long term, right? That's a self-evident statement, given that the goal of cloud is to reduce - or at least make more efficient - overall technology spending in the organization. However, what is the specific trajectory of that long-term reduction? The way this plays out can have an impact.

 

It usually consists of a "balloon" expense immediately followed by a long tail of spending drop-off. Why the immediate increase in spending? Keep in mind that many virtualization projects mean maintaining two environments in parallel: spinning up the new virtualized DC and at the same time decommissioning the legacy physical DC. So costs might be immediately up, but then ultimately fall off.

 

For security organizations, this is important to understand. Why? Because if the organizational long-term roadmap contains decreased investment in IT overall, that means reductions in security controls as well. The same forces that make cloud more cost effective (economies of scale) make it harder to maintain certain security controls in the legacy context. That's because at the same time that cloud is successful due to economies of scale, shrinkage of the legacy environment means decreases in economies of scale in that environment.

 

What Does that Mean for Security?

This means that funding for existing security controls will ultimately shrink, impacting what we can keep deployed, what we can spend on personnel to maintain controls, and so forth. But this reduction is deceptively slow. Why? Because of that spending "bubble" we talked about - it can take one to two years for the first reduction in spending to occur. And because budgetary changes are "stepped" (i.e., occurring in year-by-year increments), it might be three years before the first real constrictions are felt. But when they hit, it's huge.

 

So it doesn't take a fortune teller to see what's coming down the pike. If you're a security pro in an organization whose multi-year technology plan includes reduced spending, it's only a matter of time before you get hit - hard - by a cut budget. In other words, start planning now.

 

One exercise I find helpful is to divide security controls into groups along economic lines. That is, take the existing controls and processes we have now and categorize them according to what they protect (data center, workstations, network, etc.), annualized hard-dollar cost and annualized soft-dollar cost. Having this data can help you decide which controls will naturally erode as environments shrink (i.e., data center controls) vs. those that are going to stay relatively constant regardless of environment (e.g., user provisioning).
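A lightweight way to start that exercise is to capture each control as a record - what it protects, annualized hard-dollar cost, annualized soft-dollar cost - and then total by category. The controls and figures below are made up purely to illustrate the bookkeeping.

from collections import defaultdict

# Hypothetical controls and costs, for illustration only.
controls = [
    {"name": "Data center firewall maintenance", "protects": "data center",  "hard": 40000, "soft": 15000},
    {"name": "Endpoint AV licensing",             "protects": "workstations", "hard": 25000, "soft": 5000},
    {"name": "User provisioning process",         "protects": "identity",     "hard": 10000, "soft": 30000},
]

totals = defaultdict(lambda: {"hard": 0, "soft": 0})
for control in controls:
    totals[control["protects"]]["hard"] += control["hard"]
    totals[control["protects"]]["soft"] += control["soft"]

for scope, cost in sorted(totals.items()):
    print(f"{scope:<14} hard ${cost['hard']:>7,}   soft ${cost['soft']:>7,}")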

 

Obviously the specifics of the controls will vary according to environment so I won't go too far down that path other than to point out that planning here is required. The temptation is to ignore this situation and leave planning for down the road. Don't do it. Because the controls that you can quickly cut when blindsided by a huge budget reduction aren't the ones that you necessarily would choose to cut if given some time to prepare and think about it.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

The Savvis blog officially relaunched exactly one year ago today. We wiped the slate clean and leaned on some of the brightest minds in the industry to share their thoughts on everything from cloud to colocation to horseless carriages (true story; click here if you don't believe me).

 

In honor of the one-year anniversary, it felt appropriate to highlight the posts that have been read the most over the past 12 months. If you've been following the blog since the start, you may want to revisit these highlights. If you're a newcomer, there are some gems here that are worth a read.

 

Thank you for reading. We look forward to continuing to serve as a source of industry news on key topics and critical issues in 2012. If you have any suggestions for topics, comments, etc., send an email to cloud@savvis.com or contact us through Twitter at http://www.twitter.com/Savvis.

 

And now (drumroll please), here are the top 10 posts of the past year, listed in chronological order:

 

Cloud computing in Singapore set to expand alongside Asia economic growth

Feb. 8, 2011

By Mark Smith, managing director, Asia

 

Public sector IT and the winter at Valley Forge

March 1, 2011

By David Shacochis, vice president, global public sector

 

Balancing latency vs. cost

April 19, 2011

Guest post by David Kelly, chief technology officer, enterprise, at Thomson Reuters

 

What is your company's mobile strategy?

May 10, 2011

By Kevin Conway, global director, consumer brands

 

Beyond the data centre SLA: The end-user view of Web applications

June 2, 2011

By Steve Falkus, product marketing director, hosting and cloud services

 

Five security questions to ask your cloud provider

June 29, 2011

By Ed Moyle, senior security strategist

 

What to look for in a SaaS infrastructure services provider

July 21, 2011

By Larry Steele, technical vice president, Software-as-a-Service

 

Big data: Information security downsides (and upsides too!)

Aug. 3, 2011

By Ed Moyle, senior security strategist

 

5 critical assessments your organizations must complete before moving to cloud

Oct. 3, 2011

By Steve Garrou, vice president, outsourcing and cloud services

 

5 free security tools every cloud user should know about

Dec. 19, 2011

By Ed Moyle, senior security strategist

When it comes to cloud, planning is everything. This is true for every aspect of a cloud migration, and security is no exception. However (surprisingly, given the importance of security in a cloud migration), sometimes security and economic goals clash in a cloud deployment.

 

This happens because many cloud migration efforts are economically driven - and security isn't free: either from a planning standpoint or from a control deployment standpoint. So the addition of controls can eat away at projected cost savings - especially when security parameters are not understood fully at the project outset. Because of this, security teams sometimes find themselves in a situation where they need to add controls to meet regulatory requirements or address risk areas, but because a migration is already "in flight," those controls aren't budgeted. Oops.

 

This leaves security organizations with two alternatives: 1) Do nothing and drop the control on the ground, or 2) Do something at minimal cost.

 

Doing nothing isn't usually a recipe for success, so option 2 - doing something on the cheap - can be a lifesaver. Fortunately, there are a plethora of free tools - software and resources - that organizations can look to in a pinch to fill in gaps. Note that I'm not addressing soft costs here - staff time is staff time ... and that's never free (well, unless you have interns, I guess). I'm just talking about what you can do to meet controls without having to go back to the budgetary well.

 

I've tried to outline a few - ones you can get up and running quickly - to address particular situations as they arise. These aren't the only options by any means; there are literally hundreds (if not thousands) of excellent free tools out there that let you do everything from log correlation to asset management to monitoring in the cloud (and out of it, for that matter). The difference is that not all of them are "spin up/spin down." For example, tools like GroundWork (monitoring) or Snort (IDS) are every bit as feature-rich as commercial counterparts - but once you have them up and running, are you going to want to spin them down again in three months? Probably not. So while those tools are great (can't stress this enough), I didn't include them on the list; I've tried to pick out short-term "gap fillers" instead.

 

What I did include were tools that you can get up and running quickly, that fill an immediate need, and that don't commit you long term - meaning you don't lose (much) data or have to retool the environment (much) should you decide to stop using them later.

 

Free Data Discovery

Finding out where your confidential and/or regulated data is prior to (and let's not forget during and after) a cloud move is always useful. You'd be surprised what data is located where in a large or even medium-size enterprise. There are a number of free tools out there that help you search assets and locate certain types of (usually regulated) data. MyDLP, OpenDLP and the cardholder-data-focused ccsrch can help find that data in automated fashion. All of these tools have merit, although I personally found the step-by-step installation instructions for MyDLP to be particularly helpful in getting up and running quickly - and the ccsrch tool's simplicity and efficiency make it a good choice if you want to focus just on credit cards.
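If all you need is a quick, rough first pass for card data, the idea behind a tool like ccsrch can be sketched in a few lines of Python: walk a directory, pull out digit runs that look like card numbers, and keep only those that pass the Luhn check. This is a toy illustration of the technique, not a replacement for the tools above.

import pathlib
import re
import sys

CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")  # loose match for 13-16 digit runs

def luhn_ok(digits):
    # Standard Luhn check: double every second digit from the right.
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

for path in pathlib.Path(sys.argv[1]).rglob("*"):
    if not path.is_file():
        continue
    for match in CANDIDATE.finditer(path.read_text(errors="ignore")):
        digits = re.sub(r"\D", "", match.group())
        if luhn_ok(digits):
            print(f"{path}: possible card number ending in {digits[-4:]}")

Run it against a directory (for example, python find_cards.py /path/to/share) and treat every hit as a lead to verify, not a confirmed finding.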

 

Free Compliance Toolkits

Evaluating a vendor's security posture and control deployment sometimes gets done prior to picking a vendor; but sometimes (like when security or IT isn't consulted in that process), it doesn't. Yet many regulations require specific validation of vendors, and in that case it's on us to do that after the fact. Now sure, general-purpose information-gathering materials like the Shared Assessments (formerly FISAP) Standardized Information Gathering questionnaire are great, but let's face it, they're cumbersome when applied to a hosting provider. That's why the Cloud Security Alliance's GRC Stack - specifically the Cloud Controls Matrix (CCM) and the Consensus Assessment Initiative (CAI) - can help. Why redo the work when you can reuse what's already been done for you?

 

Free Two-Factor

Many organizations require two-factor access as part of remote access policy, although it's one of those things that organizations often overlook in the planning process. WikID - an open source two-factor authentication platform - might be something you can look to for meeting the requirement short-term. It's easy to set up, and it doesn't require per-user hardware to provision in order to get up and running.

 

Free Network Analysis

Most folks probably already know about wireshark ... you knew it was coming, right? Sometimes you just have to know what's going on over the wire.

 

Free AV

As fungible as many organizations perceive it to be, AV sometimes surprises people during a move. Why? Because many commercial AV platforms are licensed per client. A physical-to-virtual move may not result in a one-to-one mapping between existing physical hosts and virtual images - particularly in the interim period while you stand up the virtual infrastructure. This means (sometimes) that you need more AV licenses, depending on your licensing arrangements with your current vendor.

 

What happens when you discover this mid-effort? Going off to secure funding for more AV licenses in the middle of a move isn't a fun conversation - and because it's a regulatory requirement (for example under the PCI DSS), just making do without isn't a good idea. One solution is to leverage free AV tools like ClamAV in the interim. Yes, long-term management is an issue in supporting another product over/above commercial tools you might be using on-prem. But to fill a short-term need while you sort out the licensing? Why not?
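As a sketch of how simple the interim measure can be, the snippet below shells out to clamscan (assuming ClamAV is installed on the host) and reports based on its documented exit codes; the scan path is a placeholder.

import subprocess

SCAN_PATH = "/srv/app-data"  # hypothetical directory to check

# clamscan exit codes: 0 = clean, 1 = infected files found, 2 = errors occurred.
result = subprocess.run(
    ["clamscan", "--recursive", "--infected", SCAN_PATH],
    capture_output=True,
    text=True,
)
if result.returncode == 1:
    print("Infected files detected:\n" + result.stdout)
elif result.returncode == 2:
    print("Scan errors:\n" + result.stderr)
else:
    print("Scan clean.")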

 

Some of these might be helpful - particularly in Q4, when budgets are frozen anyway.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

How can I transform my enterprise to become cloud-centric? There is no right answer to that question. But there is an answer to the question "How can cloud serve my business needs?" There IS a way to harness the power of cloud to drive your business agenda, rather than the other way around.

 

Lots of times I hear my clients asking me "What should my enterprise cloud strategy be?" and "How can you help me accelerate into the cloud?" In my opinion, those are not the right questions to worry about. The concern shouldn't be how to become cloud-centric. Cloud is just one way to service your IT needs.

 

Instead the question should be "How can my infrastructure be more business-centric?"

 

We should first try to understand what your business's needs or challenges are - is it time to market, resiliency or having to align IT spend with business outcomes? Then we should see what kind of enterprise IT architecture (including infrastructure and operations architecture) you need to adopt in order to meet those needs and challenges. In that quest for a target state architecture, I'm sure cloud can play a pivotal role.

 

Having said that, there are some simple considerations that can simplify your approach/thinking around making cloud work for your business. They are: Workload, Technology, Efficiencies, Security and Business Case.

 

I plan to tackle each of these considerations one at a time on this blog, starting here with the most important consideration: Workload.

 

What does your workload look like? If you were to map the workload demand, would it look like a human heartbeat - with ups and downs in very short intervals? Or is it much more seasonal - lying low most of the time and spiking up periodically? The distance between peaks is a very important factor in deciding whether or not something should be moved into the cloud.

 

While on one hand cloud is very well-equipped to handle sudden spikes in workload, there is a "cost" or overhead to RAPID provisioning and decommissioning. In a completely variablized cloud commercial model, the unit cost of a resource (like compute) is naturally higher than in a fixed-term model.

 

Oftentimes, we use the "pay by the drink" analogy when we talk about the commercial model of cloud. Well, it is very true - when you order drinks by the glass versus buying a bottle, which is more expensive? Obviously, by the glass. So, since the variable unit rate is much higher than a fixed-term unit rate, unless there is a substantial amount of "rest" period in the workload, it doesn't make economic sense to leverage cloud for your infrastructure needs.

 

Now, that doesn't mean you SHOULDN'T use cloud in all such situations - you might have another compelling reason why you should. These considerations are independent of one another: even though one of them might steer you away from cloud, the others might outweigh the negatives and still justify the usage. So, I hate to sound like a consultant, but it DEPENDS on what your BUSINESS needs and priorities are ... that's what will drive your decision.

 

So, what kind of workload IS suitable for the cloud? A workload that is seasonal: retail applications that typically spike during the holidays, financial workloads that peak around period-ends, educational applications that peak during admissions season, or non-production environments of otherwise very stable and static production applications that might undergo patches a couple of times a year are just some prime examples.

 

In all these situations, the amount of time when the peak is happening is much less than the "off-peak" time, and the peak loads are somewhat predictable. So, even though you are paying a much higher unit rate when you use the cloud resource (such as compute), the total is much less than what you would have paid had you procured all of the infrastructure you need at peak load and let it idle for most of the year.
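One rough way to test that intuition for your own workload is to put the two models side by side: a higher on-demand unit rate paid only for the hours you actually run, versus capacity bought up front for the peak and paid for all year. The rates and utilization figures below are invented solely to show the arithmetic.

# Hypothetical figures, for illustration only.
peak_units = 100                 # compute units needed at peak
baseline_units = 20              # units needed the rest of the year
peak_hours = 6 * 7 * 24          # roughly six weeks of peak per year
total_hours = 365 * 24
on_demand_rate = 0.12            # $ per unit-hour in the cloud (higher unit price)
fixed_rate = 0.05                # $ per unit-hour equivalent for owned/fixed-term capacity

cloud_cost = (peak_units * peak_hours
              + baseline_units * (total_hours - peak_hours)) * on_demand_rate
fixed_cost = peak_units * total_hours * fixed_rate  # sized for peak, idle most of the year

print(f"On-demand cloud for the year:  ${cloud_cost:,.0f}")
print(f"Fixed capacity sized for peak: ${fixed_cost:,.0f}")

With a steadier, heartbeat-like profile the comparison can easily flip the other way, which is exactly why the distance between peaks matters.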

 

So, hopefully, based on the above discussion you now have a better idea of how to assess your workload for suitability in the cloud. In my next blog entry, I'll talk about Efficiencies in the cloud.

 

Kaushik Ray is practice head, integrated technology solutions consulting (iTSC), at Savvis.

As they move through different points in their lifecycle, it is common to see companies change their mentality around colocation. The overhead of managing their own growing colocation footprint rises in parallel with the complexity and size of their business. This steers them to start planning a move to managed services or the cloud: they realize it is a better use of resources to leverage the expertise of their service provider's technology specialists to manage infrastructure, freeing them to deliver and migrate apps and features rather than maintain their own IT infrastructure.

 

This type of a scenario has led to an increase in demand for service providers that offer a full portfolio of services ranging from colocation to managed hosting to public, private and hybrid cloud services. However, developing the facilities and the capability to integrate this full range of technologies has been a major challenge for many colocation providers.

 

The biggest of these challenges is effectively managing the data center. Running a data center is similar to attempting to keep a vehicle on the road 24 hours a day, 365 days a year without stopping - yet driving as efficiently as possible. Even if you started with the best equipment in the world, planning and then implementing the necessary rolling maintenance is critical if single points of failure and outages are to be avoided. The evolution of technology is helping, bringing cheaper UPS, generator and cooling technologies together with planning, automation and monitoring tools. But as yet, one of the most valuable assets in colocation provision continues to be experience.

 

The desire for fine-tuned control over systems has been one of the primary needs that colocation has satisfied. For most clients, a sufficient level of control is currently available in the cloud, which eliminates the burden of configuring and maintaining equipment. Therefore, to maintain relevance in the future, colocation providers need to evolve and become a bridge to a wider range of managed services. This approach will provide a base for effectively connecting an organisation's unique IT configurations and intellectual property costs to the wider range of services required to support that technology. The parallel provision of colocation as host for, and part of, the full spectrum of cloud options is where the future lies for the industry.

 

Drew Leonard is vice president, colocation product management, at Savvis.

What key considerations should you look for when evaluating SaaS solution vendors? We obviously think of the business requirements as the starting point. However, when you are evaluating a SaaS vendor, how often do you ask about their infrastructure or cloud services? When you evaluate a SaaS solution, there are some key areas that I believe you should focus on.

 

Security

First, what are the security requirements? Can the SaaS vendor prove that they are SAS 70 compliant? SaaS vendors should use reputable colocation or hosting providers that follow strict guidelines and audits. Ask questions about who is managing and operating their network connectivity, firewalls, log file management, web application firewalls and access and identity management. If they answer "multiple providers," you should probe deeper here, because when there is a problem (and there will be a problem), how they respond to the issue depends on the number of third parties that are involved through resolution. Also ask for information about the colocation provider's facilities. How are they secured? Where are they located? ... just to name a few.

 

Flexibility

Next, how flexible is the SaaS solution? Can the SaaS vendor offer additional services like private network connections to legacy systems, shared or private compute services and various storage options? These examples seem obvious, but I'll bet you will find those who have rigid offerings because they have partnered with a service provider with limited capabilities.

 

SLAs

Another area that needs to be evaluated is the SaaS vendor's SLA policy. What is the architecture of their SaaS solution? Is the SaaS solution fault-tolerant and does it have the right redundancy in place in case of a failure? The answers to these questions usually come out during the negotiation process and while finalizing agreements. This is too late; SLAs need to be defined earlier, in the evaluation phase. Furthermore, SLAs should be aligned with those of the cloud provider the solution runs on.

 

Disaster Recovery

Disaster recovery is another area that is overlooked or treated as just a checkbox. There are many important pieces during a disaster. How often do they test their disaster recovery processes and procedures? Are their employees geographically dispersed? Is their infrastructure dispersed? When it comes to the infrastructure, some cloud providers have very different infrastructure implementations in various geographies. Make sure your SaaS vendor's solution is the same across all geographies and that they test their processes regularly.

 

Global Reach

Lastly, as I just mentioned above, is to inquire about geographies. Can the SaaS solution meet your current and expanding global needs? Has the SaaS vendor partnered with a cloud services provider that offers cloud services around the globe? This is extremely important if performance is critical or the data must be stored within a particular geography.

 

Good luck on implementing your next SaaS solution, and I hope you find these tips helpful as you evaluate vendors.

 

Larry Steele is technical vice president, Software-as-a-Service, at Savvis, a CenturyLink company.

What is enterprise cloud?

You may be thinking, "What is enterprise cloud?" As you know, not all cloud infrastructures or providers are the same, and not all methods offer the full value IT requires. Within the cloud arena, public and private clouds are well established. A new model, enterprise cloud, is emerging.

 

Enterprise clouds offer the same benefits as private and public clouds, including flexibility, quick provisioning of compute power, and a virtualized and scalable environment. Similar to private clouds, enterprise clouds provide "private access" and are controlled by either a single organization or consortium of businesses; services are delivered over the Internet, removing the requirement to purchase hardware. Commercial-grade components provide the usability, features and uptime required.

 

Enterprise cloud not only delivers cost savings but, more importantly, provides a range of security options and unprecedented speed-to-market, with vastly improved collaboration among business partners and customers. Enterprises also realize tremendous value from this approach because it allows them to innovate. For businesses that want to make IT faster, better, cheaper and more agile, enterprise cloud will likely be the solution of choice. Corporations and government agencies that are reluctant to outsource their information services are likely to embrace this model as well.

 

For example, enterprise clouds are ideal for organizations that want to minimize the risk and expense of trialing new service and application options. There are no upfront capital expenses, and new projects can be brought to market instantly or shut down just as quickly if they fail, giving corporations a new sandbox in which to pilot offerings. Enterprise clouds also allow organizations to create secure workspaces that give partners and customers a superior forum for collaboration.

 

Savvis' enterprise cloud is a VMware-based service differentiated by an array of built-in security features, as well as many optional managed security capabilities. Savvis built its cloud solutions using the same trusted suppliers - including Cisco, HP and VMware - used by enterprise customers in their own data centers. The cloud services are divided into "tiers," providing different levels of performance and availability for different types of application needs. These services are delivered in a multitenant way and can also be delivered as a single tenant.

 

For customers with complex IT needs, Savvis offers multiple solutions, including colocation, managed services and networking solutions. These solutions, when deployed, are fully integrated for customers and supported by robust infrastructure SLAs.

 

Find more information about Savvis cloud services here.

 

Steve Garrou is vice president, outsourcing and cloud services, at Savvis, a CenturyLink company.

An application without connectivity is like a train attempting to run without tracks. Enterprises are moving more and more of their mission-critical applications to enterprise cloud environments every day. But are they ensuring that the train tracks ahead can safely and efficiently handle the ever-increasing load?

 

Reliable, redundant connectivity is central to the value proposition of cloud-based computing. The underlying network infrastructure must be robust and flexible enough to support the demands of the applications running on it. Applications like Oracle, SQL and SAP require predictable performance. Video and voice transmissions are sensitive to network stability issues such as packet loss and jitter. ERP and CRM systems are essential to business operations, but also require a high-performance systems environment no matter where employees are worldwide.

 

Once these types of applications are moved into the cloud, the challenge becomes ensuring that they all are globally available with the same levels of security, consistency and control associated with local environments. Private connectivity into a cloud environment via a high-performance, high-capacity network ensures that end users can securely access these applications, and any related data, as quickly as possible. Utilizing Ethernet as a global access method for better scale and accessibility, network technologies such as multiprotocol label switching (MPLS) and virtual private LAN service (VPLS) make it possible to provide high levels of performance, minimal down-time and end-to-end Quality-of-Service (QoS) prioritization for essential applications such as mission-critical financial or e-commerce systems. Value-added services, such as network-based storage, optimization, security - including firewalls, virtual private network (VPN) access and denial of service protection - and load balancing, offer IT managers additional built-in capabilities.

 

Enterprises must also focus on the interaction between and among their applications. For certain applications, such as stock trading, shaving a few milliseconds off network latency may provide a key competitive advantage. The positioning of multiple, interdependent applications in network-adjacent locations creates a highly efficient community of interest, enabling application owners to optimize application interaction, utilize off-line storage and minimize connectivity costs. Each application cluster provides strong economic synergies that drive the further growth of communities of interest in markets as diverse as media, gaming, voice, and video services.

 

Moving applications into the cloud is a big step for an enterprise. But it is not the only step. Choosing the right connectivity to support these hosted applications is just as important. It must be substantially more robust and secure -- and possess more value-added data delivery and bandwidth management capabilities -- than what is typically built using traditional IT models. If your cloud connectivity is not sufficient, the flow of information is interrupted, and the train, and your business, will grind to a halt.

 

Dennis Brouwer is general manager of converged cloud solutions at Savvis.

What is enterprise security?


While I know that some practitioners are going to scoff when I ask the question "What is enterprise security?," I'm going to ask it anyway.

 

You see, great leaps forward very often start with questioned assumptions. Ptolemy concluded (based on a set of perfectly logical premises) that the sun revolved around the earth. It was only when subsequent thinkers questioned his universally held theory (in many cases at great personal cost to themselves) that a cataclysmic advance in humankind's understanding of the solar system became possible.

 

The point is, if we don't stop every once in a while to question what we believe, we can hold on to outmoded assumptions way past their "sell by" date. And when it comes to the security of the information we steward in our organizations, outmoded assumptions create risk. In other words, if you assume things about your environment that (maybe) were true once - but aren't now - you put yourself in a situation where conclusions you base on those assumptions may very well be false.

 

Take an assumption like this one: "Two devices on the same isolated network segment communicate more-or-less privately." Maybe that's true. But if you're wrong - like if the segment doesn't stay isolated or someone moves one of the devices off that segment? Risk.

 

The answer to the question "What is enterprise security?" is neither static nor a given. And while many organizations on the edge of change are rethinking and embracing what "enterprise security" means and adjusting accordingly, just as many are clinging to outmoded definitions about what's "inside" vs. "outside" the enterprise and what's "security's job" vs. not. These boundaries just aren't as meaningful as they used to be.

 

"Enterprise" and "security" are borderless

First, it's important for security practitioners in today's IT shops to realize that the definition of "enterprise" is changing. A few years ago we in security talked casually about the "disappearing perimeter" (remember that?), but for today's security practitioner an appropriate question might be, "What perimeter?"

 

If it wasn't true before, it's certainly true now: Enterprise security and location of resources are unrelated. From a location-of-access standpoint, take the trend of mobility to its ultimate conclusion: Users employ an array of mobile platforms to send email, modify documents and close deals - or they access critical applications from home machines not provisioned by the organization. But the data we hold needs to be protected just the same. Just because devices accessing critical resources aren't coming from some arbitrarily drawn geographical border doesn't mean that the security of those resources is any less relevant.

 

On the other hand, "enterprise" isn't defined by location of computing resources either. This time, take cloud to its conclusion: Critical business applications sit on dormant virtual machine images in redundant, geographically distributed data centers. These images and are spun-up on demand in response to user requests, live just long enough to service the request, and then are spun down to conserve energy, bandwidth and CPU cycles. Enterprises reallocate storage and processor resources on the fly across the globe in response to user demand, business volume, time of day or any number of other factors specific to their business. Are you free from the need to care about security because your data is hosted outside your data centers? No.

 

In both cases, security is still a critical factor of supporting the organizational mission. But the temptation - particularly when we're strapped for resources or under the gun to deliver a critical task - can be to draw a line in the sand and decide that certain technologies are outside the boundary of our security plan because they're implemented by a vendor or because they leverage devices we didn't provision. But nothing could be further from the truth. In fact, this just makes security more important rather than less.

 

"Enterprise" is defined by data; "security" by relationship

So if geographic location doesn't define what's in the enterprise, what does? In my opinion, it has to be the data. When geographical boundaries no longer define what's "inside" vs. "outside" and security isn't tethered to particular systems or applications, the answer has to be to focus on what we're ultimately trying to protect: the mission of the organization. And the embodiment of the organizational mission is the data the organization creates, processes and stores.

 

Said another way, information systems used by an organization process and store data for a particular purpose; the data those systems operate on is the raw material the organization uses to fulfill that purpose. Everything that goes into the processing and storage of that data - no matter where it's located or at what third party - is in scope from a security standpoint and therefore must be included in "enterprise security."

 

This is true even when the data is outside of your organization's direct control. Say for example your hospital outsources storage of your medical records. If your medical records get exposed inappropriately, do you honestly care whether it was the hospital that accidentally lost them or whether it was a service provider? I don't. I have a relationship with the entity that I trusted with my data. And I trust them to only share that data with trustworthy organizations. So when someone violates that trust and puts users at risk, users are going to hold accountable the entity they trusted in the first place.

 

Just as the data defines what the enterprise is, "security" is defined by the chain of relationships along which that data travels. If the data is compromised, responsibility for the failure to protect it rests with the organization that holds the relationship with the data owner. If confidentiality, integrity or availability of that data is key to supporting the organizational mission, that organization takes the hit; and even if it is acting only as a steward of the data on behalf of someone else, it is still the one with the relationship to the data owner and therefore the one that takes the hit when security fails.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.


Over a three-month span earlier this year, I had the opportunity to play a leadership role in the TechAmerica Foundation's Commission on the Leadership Opportunity in U.S. Deployment of the Cloud (CLOUD2). This industry panel, organized at the request of the Obama administration, on July 26 published our report titled "Cloud First, Cloud Fast: Recommendations for Innovation, Leadership and Job Creation." Our leadership team recently had the opportunity to share this report with multiple committees on Capitol Hill.

 

This report, specifically encouraged by the Federal CIO and the Department of Commerce, is focused on the future of cloud services in the American economy. Our report establishes a forward-looking roadmap of maturity measures that will enable the United States economy to maintain its leadership position in the cloud services marketplace worldwide.

 

One of the things that inspires me every day is the fact that our industry plays such a key role in the stability and recovery of the American economy. Technology sector leaders are sought out as key political supporters by both the left and the right. Technology sector job creation is one of the few bright indicators in the current landscape. Many view government IT reform as a critical step in achieving increased government productivity and budget control.

 

Our recently-released report issues 14 recommendations across four categories. These categories I like to call the "Four T's Of Cloud":

 

  1. Trust - We recommend participation in international certification frameworks, investment in identity management ecosystems, standardization of national breach law, and further academic cloud research.
  2. Transnational Data Flow - Our commission recognizes the need for digital due process that allows cloud providers and cloud customers to clearly understand their obligations and protections under the law. The report also recommends that the U.S. align with international privacy frameworks, and show leadership by allowing appropriate government workloads to operate in transnational cloud environments.
  3. Transparency - With the cloud market clearly concerned about vendor lock-in and interoperability, the report calls for cloud providers to develop disclosure frameworks around the operational status of their environments, and to offer data portability tools that enable public and private customers to access their data freely.
  4. Transformation - Recognizing that cloud computing is in many ways a business model as opposed to a new technology, government procurement experts on our commission made specific recommendations around federal budgeting, regulations and incentives, which could spur market adoption and maturity. We also call for continued investment in broadband infrastructure and ICT education, to ensure the supply of key materials this industry requires.

 

As we developed these recommendations for maintaining leadership in the cloud market, many members of our CLOUD2 commission drew positive and negative comparisons to other technical markets such as the financial services industry and the Internet protocol backbone. Many look for parallels to these modern technical markets for examples of what works and what doesn't when trying to show leadership in a global market.

 

The primary contrast that jumps out at me is the method for evaluating services. Older, more mature technical markets tend to be evaluated by simpler criteria: an investment vehicle is given a risk rating and is either profitable or not profitable; an Internet link is given a quality of service and is either up or down. When evaluating the cloud services market, the criteria for a "successful" offering are far more subjective and diverse. Buyer criteria range widely across service plans, computing models, operating systems, application support, provisioning time, automation capabilities, security tools, transparency measures, portability factors - the list goes on and on.

 

These "Four T's of Cloud" and their corresponding recommendations serve to highlight just how complex our cloud services market can be. The cloud market is as challenging to regulate as the Telecommunications Industry or the Financial Services industry, and is further complicated by the rate of technological change inherent in the industry. New software, hardware, and business models can all change the face of our entire industry in a matter of months, making it hard to set any type of long-lasting regulatory policy.

 

These globally aware, progressive recommendations made by the CLOUD2 report are a good set of guidelines against which cloud services can be developed and improved. Here at Savvis, everything we do in our cloud services roadmap will be examined against the Four T's, and evaluated, in part, by how well we are advancing their objectives in the years to come.

 

David Shacochis is vice president, global public sector, at Savvis.

Moving to cloud is a big decision, but the transition to the cloud alone will not be the panacea for all your infrastructure woes, as the hype may lead you to believe. A few months ago, I compared how cloud is a lot like relocating or buying a home. I posed many considerations and questions that showed a stark similarity between them - and how much thought and consideration needs to go into each transition for it to be successful.

 

I recently sat down with Savvis' consulting team. These experts have spent thousands of hours helping customers prepare and transition to enterprise cloud. During our conversation, the team outlined the top considerations that organizations need to address to position themselves to select the best cloud type for their enterprise, and to achieve a successful transformation.

 

Answering - or not answering - the following five questions can have a significant impact on whether or not the organization realizes the promise of cloud infrastructure.

 

Decide whether you are going to maintain two infrastructures or consolidate.

Different requirements determine whether the organization is going to augment its existing infrastructure with cloud or use cloud to consolidate. Knowing the business and technical drivers that are moving the organization toward cloud will determine which path to take. Most organizations we work with implement a hybrid approach, using cloud to achieve specific levels of flexibility and value, not just cost savings.

 

Understand what applications are currently running in the existing environment and expectations for moving certain solutions to the cloud.

Mobility and growing data needs are placing new requirements on applications and services. It is important to analyze the applications in your environment and understand who is using them, how they are being used and what applications can be eliminated. Understanding the applications and the workload parameters will help to best distribute your assets and prep your user communities for the move.

 

Analyze the architecture of the application environments.

Virtualization has helped organizations lower storage and data center costs. Virtualization creates a pool of manageable, flexible capacity. Automation and orchestration take that pool of resources and enhance its manageability based on business policies and service-level requirements. The decoupling created by virtualization, combined with defined service offerings and automation, greatly enables cloud computing. In addition, companies that have virtualized their applications have already gone through a segmentation process and have the foundation for understanding what bridges are needed between the different infrastructure components. Applications that are on horizontally scalable systems and configured in clusters streamline the transformation and reduce upfront work as well.

 

Determine how much capacity you need to run the applications; are the capacity requirements seasonal or variable?

Knowing your application capacity requirements will help ensure your investment matches actual application needs. While cloud allows per-unit pricing, this approach is still more expensive than purchasing capacity in bulk. Based on our experience, most organizations can predict 70 percent of their capacity requirements. Cloud is a superior infrastructure for applications and user communities that have variable or seasonal capacity requirements.

 

Assess compliance and security requirements.

To move to the cloud, organizations must identify which applications are PCI compliant and define clear application security requirements. Some applications may never move, but knowing those services and solutions that require higher levels of security will help define if a dedicated cloud approach is better than an open one. Regulatory compliance policies and other internal procedures will inform what needs to be enforced on the cloud.

 

Steve Garrou is vice president, outsourcing and cloud services, at Savvis, a CenturyLink company.

It's a fact of life in today's world: More and more organizations are adopting cloud and cloud technologies. Recent surveys from IDG, for example, suggest that 57 percent of firms surveyed are already in the cloud, with another 31 percent planning to move to the cloud in the next year. Putting those figures together, this means that 88 percent of firms are either in the cloud already - or will be there imminently (i.e., 12 months or less).

So the end-state for these firms is pretty well-known - they're going to migrate some portion of their environment to the cloud. However, the mechanics of how they get there - in other words the path that each individual organization follows on the way to that defined end-state - varies tremendously. This includes different cloud models, different architectures and different types of service providers. 

And while some of those organizations have their IT department riding shotgun (i.e., acting as technical "navigator" and adviser) during this transition, it's not always the case. In fact, evidence suggests that many organizations leave IT out of the loop entirely in some cloud planning scenarios. Forrester, for example, cites statistics that suggest upwards of one-half of cloud buyers may be outside of IT.  

Leaving IT out of cloud planning is a reality in many cases - and while there are of course "many paths to the Buddha," some paths are harder to follow than others. So while a lack of IT involvement in a cloud deployment is something we know is happening, it also can (and does) have unintended consequences from an information security standpoint.

Why Leave IT Out at All?
Folks who work in IT probably have one question upon hearing that; namely, "Why leave IT out of a cloud deployment at all?" There are a few reasons why this can happen in practice.  

First, there is a perception in some firms that IT is a "stumbling block" to forward progress. In many cases this perception is baseless, but it is understandable why a business partner might feel that way. For example, a business partner might not understand the need for adequate preparation or technical planning. In other cases, the perception could be legitimate (let's face it, there are some IT shops that culturally have a high amount of inertia).

But true or not, the fact is that IT can sometimes be perceived by the business side of the house as something that would slow down a deployment. And in a world where some vendors advertise their ability to circumvent IT participation (seriously, they do), no wonder some firms feel this way. 

In addition to perception-related reasons, don't discount the fact that in some cases cloud migration carries with it some reduction in IT budget and/or staffing. This is obviously not going to be welcome news to folks actually in the impacted department. In those cases, IT may be purposefully left out of the discussion until the full impact to the IT organization can be determined and quantified. In some cases, this means leaving IT out of the discussion entirely until a migration is well under way.  

Lastly, don't forget ignorance. Not everyone will realize that IT should be involved.  

Security Impact
So what does it matter if IT is left out of a cloud transition? From a security standpoint, there can be a few impacts. It's important that firms recognize this. To name a few possible areas of concern: 

  • Technical impact - Certain types of deployments (e.g., IaaS, PaaS) can shift how and where applications and critical services are located. This can introduce new data pathways that didn't exist before or invalidate security assumptions made under the old model. For example, an assumption along the lines of "We don't need to encrypt this traffic because the database server is on the same VLAN as the app server" makes less sense when that assumption ceases to be true (a brief sketch follows this list).
  • Regulatory impact - In some cases, there may be regulatory drivers that affect the data. Your IT department may be working closely with the compliance office to track and manage something like PCI DSS compliance (credit card data) or HIPAA compliance (medical records). If you start replicating certain types of data outside your data center to an environment that may or may not have been certified to implement the security controls you need, you introduce risk.
  • Operational impact - Certain controls, such as those for security, may operate only within a particular context. Changing the context (for example, by moving to the cloud) may change how that control functions - or whether it functions at all.
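
To make the first bullet concrete: once the "same VLAN" assumption no longer holds, encryption in transit has to be explicit rather than assumed. A minimal sketch, assuming a PostgreSQL database reached via the psycopg2 driver (both are assumptions for illustration; hostnames and credentials are placeholders):

```python
# Minimal sketch: require TLS on a database connection once the app and DB
# may no longer share a trusted network segment. Hostnames and credentials
# are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="db.example.internal",
    dbname="orders",
    user="app_user",
    password="change-me",
    sslmode="verify-full",                  # require TLS and verify the server certificate
    sslrootcert="/etc/ssl/certs/db-ca.pem",
)

with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```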

There are other possible impact areas, of course; those above are only a few examples. The point is that IT is typically chartered with overseeing the technical landscape and overall environment from an information security standpoint - by going "around IT's back" to the cloud, it follows that information security can be impacted. This creates complexity - and puts the firm as a whole in a position of increased risk.

Now risk isn't always bad ... but unless you're Evel Knievel, it pays to think through the relative merits of a risk before you take it on. Meaning, it's important that organizations think through the level of IT involvement in a cloud deployment and determine whether that level of involvement is appropriate given the possible security impact. Business partners should be thinking about this when IT isn't involved, and IT should be thinking about it when they learn of a cloud migration they're not fully engaged in.

This isn't to say that IT should be involved in every deployment (good idea though it is, situations do not always permit the optimal case to play out), but organizations that purposefully leave IT out of the conversation should be even more careful about how they approach the technical deployment. "More careful" means that they plan carefully, that they reach out to all stakeholders, and that they make full use of their service provider's technical expertise. After all, in many cases they're operating without a net.

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

Is multi-tenancy too difficult?

In the last several years, we have seen many independent software vendors (ISVs) transition their legacy applications to a Software-as-a-Service (SaaS) model. Some of these ISVs entered the SaaS market by deploying various forms of transitional architectures, i.e., multi-instance and multi-tenant architectures. These architectures have served ISVs well by providing a lower barrier to entry to meet their customers' requirements, as well as putting them in a position to learn how to implement, support and manage their software on behalf of their customers.

 

On the heels of these transitions we have also heard debates surrounding multi-instance and multi-tenant architectures, and which of the two is better. Pure-play SaaS companies have demonstrated tangible results with their multi-tenant architectures. As such, I think it is time for ISVs to begin evaluating or re-evaluating their overall architecture and infrastructure as the market continues to accelerate SaaS adoption.

 

Multi-tenancy can mean different things depending on your point of view. As an infrastructure provider, Savvis defines the term in the context of our public and private cloud services. On our public cloud services, our ISV customers share discrete instances of various infrastructure elements (e.g., servers, storage, network and security). We also offer private cloud services on which an ISV has dedicated infrastructure (e.g., servers, storage, network and security). The ISVs utilize our cloud solutions (public or private) to then offer their software to their end customers.

 

No matter which type of cloud (public or private), many ISVs offer discrete instances (multi-instance) of their software to each of their customers on a shared infrastructure platform.

 

If this isn't complex enough, we now need to move up the stack to look deeper into the application and examine the associated complexities of delivering a SaaS application.

 

In a multi-instance implementation, the software is implemented in a "shared nothing" application architecture. What do I mean when I say this? "Shared nothing" means that the various layers of the application stack - which include, but are not limited to, the operating system, web server, application server, database, systems management and all the application code needed to support a single customer - are dedicated to each customer. So, each new customer that the ISV adds will require their own instance of this same configuration. While these "cookie cutter" implementations look good in the beginning, as sales grow and the complexity and management of the ISV's collective base of customers increases, ISVs will see increased licensing and support costs.

 

So, what creates the complexity in a multi-instance implementation? One area is the daily operation and licensing cost of the application. Each layer of this technology stack requires patching, version upgrades and bug fixes. Many ISVs have begun to address these operational complexities by utilizing cloning capabilities that treat the entire stack as one logical unit. While this is a good start, the reality is that each customer implementation will have unique nuances. Cloning only addresses part of the problem, while shifting the complexity inside the application.

 

It is no small feat to address all of the application changes required to convert an application into a highly scalable multi-tenant application. There are way too many layers inside an application for me to address in this blog post.

 

For example ...

 

Start by analyzing the database to understand the relationships between tables and rows and how, when and where they are accessed and updated by customers. You'll also need to analyze how each individual user is impacted by these updates. Don't forget to map these impacts to the entitlements and access each user has - who can see what data, and when.
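
As a tiny illustration of where that analysis leads, the sketch below uses Python's standard-library sqlite3 module (table and column names are hypothetical) to show the basic multi-tenant pattern: a tenant identifier on every row, and every query scoped by it.

```python
# Minimal multi-tenant data-access sketch: every row carries a tenant_id and
# every query is scoped by it. Schema and data are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, invoice_no TEXT, amount REAL)")
db.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("acme", "INV-1", 120.0), ("acme", "INV-2", 80.0), ("globex", "INV-1", 300.0)],
)

def invoices_for(tenant_id):
    """All data access goes through tenant-scoped queries - never a bare SELECT."""
    return db.execute(
        "SELECT invoice_no, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(invoices_for("acme"))    # only acme's rows
print(invoices_for("globex"))  # only globex's rows
```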

 

If your application architecture leverages some form of session management to allow fast access to data that is frequently read by users and doesn't change often, you'll have yet another layer of complexity to fold into your analysis. You'll also want to include in that analysis any pricing impacts based upon user and group roles in the application and their relationship to accessing data.
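
The same discipline applies to any session or caching layer: cache keys need to carry the tenant (and, where entitlements matter, the role) so one customer's cached data is never served to another. A rough sketch, with hypothetical names:

```python
# Illustrative tenant-aware cache: keys include tenant and role so cached
# results are never shared across customers or entitlement levels.
cache = {}

def cache_key(tenant_id, role, query_name):
    return (tenant_id, role, query_name)

def get_report(tenant_id, role, query_name, compute):
    key = cache_key(tenant_id, role, query_name)
    if key not in cache:
        cache[key] = compute()   # expensive lookup, scoped to this tenant and role
    return cache[key]

# Usage: the same report name is cached separately per tenant and role.
print(get_report("acme", "admin", "monthly-sales", lambda: "acme admin data"))
print(get_report("globex", "viewer", "monthly-sales", lambda: "globex viewer data"))
```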

 

So what does an ISV do - pack it up and say it's too hard? Absolutely not! First, do the analysis and understand which deficiencies are actually preventing the move to multi-tenancy. Next, depending on your application style (e.g., .NET or Java), talk to an application infrastructure services provider like Apprenda or Corent Technology. These providers have built services specifically to address the complexities ISVs face in migrating to application multi-tenancy and cloud.

 

If you are interested, I recently presented a webinar on this topic titled "The Death of an ISV: How NOT to Succeed in your Move to SaaS." To view the webinar, click here.

 

I welcome your feedback and look forward to hearing from you on this topic.

 

Larry Steele is technical vice president, Software-as-a-Service, at Savvis, a CenturyLink company.

SiteMinis CEO Marci Troutman

This is the second entry in a series of blog posts featuring Savvis clients answering five questions about their business and IT solutions. Marci Troutman, founder and chief executive officer of SiteMinis, shares details about her company's use of cloud computing.

 

1. Can you share some background about SiteMinis and how it uses IT outsourcing?

Founded in 2004, SiteMinis offers brick-and-mortar and e-commerce companies an easy way to take their websites mobile, reach a rapidly growing customer base and generate new revenue.

 

SiteMinis leverages a unique mobile website technology platform to deliver custom mobile websites, enabling these companies to extend their brands to the growing mobile universe and provide a better mobile experience to consumers making buying decisions. Working on more than 95 percent of all legacy and smartphones, SiteMinis simplifies the move to the mobile web and delivers a fast, turnkey solution to engage mobile consumers.

 

Working with industry leaders such as Savvis, SiteMinis delivers a comprehensive offering that eliminates the need for customers to find separate resources for their hosting, site development and other needs. Customers are assured of the best component for each portion of the solution.

 

2. What led SiteMinis to cloud computing?

SiteMinis determined that cloud computing met the needs of the unique and rapidly growing mobile space. The cloud model allows SiteMinis to maintain laser focus on our core competency, the mobile web, while maintaining a best-in-class IT infrastructure with our partner Savvis.

 

3. How has cloud improved and impacted your company's mobile website technology platform?

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

 

4. What benefits are website owners and end-users seeing since you implemented cloud solutions?

Cloud computing fills a perpetual need of IT: SiteMinis clients have a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel or licensing new software.

 

Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT's existing capabilities.

 

5. What advice do you have for other companies considering cloud?

If you need to spend more time building your business and less time maintaining your IT infrastructure, cloud computing can help your company.

 

Through a flexible, easy-to-use interface, a cloud implementation delivers cost savings, high performance, scalability and security without all the headaches that normally come with a 'build it yourself' approach.

 

To read a SiteMinis news release about its partnership with Savvis, click here.

Mobile Website Photo-WEB.jpg

I've learned quite a bit while manning the booth, walking the floor and attending conference seminars this week at eTail Boston. Generally, mobile and ecommerce are the hottest topics.

 

Using this space, I wanted to highlight what I learned from the first two days of this conference.

 

Marketers Love Mobile Apps

We have a strong presence here - a joint booth with mobile cloud partner SiteMinis. We're talking about our new enterprise and template-based mobile app platforms. Really cool stuff.

 

Most of the people who stop by our booth are involved in marketing. Their biggest challenge with mobile apps is getting IT, developers and interactive agencies all in line and moving at the speed required to get campaigns and other information to market.

 

Our template-based platform with SiteMinis is truly a do-it-yourself solution. People have raved about the simplicity, and they can see how a tool like this gives marketers the ability to create their own content and sites, alleviating their need to go to IT, developers and interactive agencies. Instead of time-consuming headaches, they can create and deploy their mobile site in minutes.

 

Business Advantages Should Be Better Communicated

As I walk around the conference floor, I see a lot of booths with - quite frankly - poor messaging around their ecommerce and mobile products.

 

They're not communicating the real business advantages. Ideally, you'd like to see them showcase the business challenges their customers are facing, describe their solution and highlight the benefits of using their product.

 

I also don't see a lot of products, other than ours, that focus on the underlying platform. If information is going to be stored in the cloud, for example, customers want to know there's a trusted provider behind the product.

 

Mobile Marketing Will Explode

I've heard numerous experts from companies like Dell and Brookstone deliver presentations and participate in panel discussions, and a common theme has emerged: Despite all the hype, mobile marketing remains in the infancy stage. However, they all believe it's going to explode in the near future.

 

Experts here at eTail Boston advise marketers to go out and try as many mobile marketing measures as possible. Find out what works best. Don't just sit back and not do anything.

 

Scalability Key to Web-Based Ecommerce

During a panel discussion, I heard an interesting exchange about planning for the next holiday season. Christian Friedland, president of Build.com, really homed in on one point: whatever plans you have for website scalability, know that you'll need more.

 

The point is to make sure you have an infrastructure platform that can scale up to where it's going to be during the holiday season. And then test, test, test the system so you can see how it will react to all that traffic.

 

Another important point deals with the application side of the equation. Not only do you need scalable infrastructure, your applications also have to scale with the infrastructure. Don't forget that. Ultimately, you are supporting the client experience. It should be a good one.

 

Comparison Shopping, QR Codes and More

In addition to being used for direct shopping, mobile phones are being used for tasks such as comparison shopping and scanning QR codes.

 

Think about it: When you're at the electronics store, how many times have you looked over and seen somebody scanning a bar code to check whether he's getting the best deal? Heck, I've done it myself.

 

QR codes - which link to product information, videos and more - are also becoming popular.

 

The point here is that mobile phones are an evolving tool. Mobile not only impacts brand companies from a mobile commerce and sales perspective, it's continually impacting them in new ways.

 

While we can build solutions to meet today's needs, there always will be something new. People don't know the full impact of mobile and all the different things that are going to come to fruition.

 

That's all for now. I hope to learn more today, as well as dive into best practices and emerging strategies of mobile shopping during the Mobile Shopping Summit on Thursday.

 

Kevin Conway is global director, consumer brands, at Savvis.

Technology - as we all know - changes quickly. What sometimes changes even faster are the buzzwords. And the newest one is "big data."

 

It's a cutesy name for a powerful concept: specifically, the concept that data has utility - and as a dataset grows, the utility of that data grows nonlinearly. In other words, opportunities to make productive use of the data compound along with the size of the dataset, as does the complexity of managing it.

 

This should be of particular interest to organizations going through a cloud transition. Why? Because efforts to virtualize, centralize and standardize inevitably lead to centralization and aggregation of data. As that centralization occurs, data that may have been dispersed and diluted throughout the enterprise under the old model becomes concentrated in the new.

 

Dilute data (where data is spread over the entire enterprise and stored/maintained only at tremendous expense) becomes "data as singularity." Like a black hole, our data becomes extremely powerful (though difficult to harness) due in part to its density.

 

While this is extremely powerful for IT generally, for those of us who are chartered with maintaining the security of that data, it's a mixed blessing - there's an upside as well as a pretty clear downside. Let's take a look at both at a very high level.

 

Security Downsides

It goes without saying that centralized, extremely large volumes of data carry a significant security impact. First of all because it makes a heck of a target for a crook. Can you think of a more appealing target for someone who wants to get their hands on your organization's crown jewels? I can't.

 

Just as this data is potentially valuable to you, so also is it valuable to an attacker. Not to mention that the enforced separation that once limited exposure when a portion of the data was compromised goes away as centralization occurs. In other words, because the data is centralized, any exposure is total exposure.

 

However, it's not just the "target-worthiness" of the data alone that constitutes a risk. It's also that the size of the dataset makes implementing security controls unwieldy. Can you imagine, for example, the engineering challenge associated with encrypting an exabyte of data? Consider the tried-and-true tool in security -- linear search (i.e., how many AV, DLP and IDS solutions work). "Big O of n" becomes "Big OMG" (sorry, couldn't help it).
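
Some back-of-the-envelope arithmetic shows why. Assuming an optimistic 1 GB/s of throughput per node (an assumption, not a measurement), a single linear pass over an exabyte takes decades on one node and remains a serious undertaking even with substantial parallelism:

```python
# Back-of-the-envelope: time for a single linear pass (encrypt, scan, index)
# over one exabyte of data. The throughput figure is an assumption.
EXABYTE_BYTES = 10**18
THROUGHPUT_BYTES_PER_SEC = 10**9          # ~1 GB/s for a single node, optimistic

seconds = EXABYTE_BYTES / THROUGHPUT_BYTES_PER_SEC
years = seconds / (3600 * 24 * 365)
print(f"Single node, one pass: ~{years:,.0f} years")

for nodes in (100, 10_000):               # how much parallelism it takes to become tractable
    print(f"{nodes} nodes in parallel: ~{years / nodes * 365:,.0f} days")
```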

 

So not only does the data have a huge bull's-eye on it, but the tools required to implement technical security controls at this level are complicated to deploy. This is one of the reasons it pays to think through (and set up) security controls before the dataset grows too large.

 

Security Upside

But it's not all downside. There are a few security advantages that follow as a consequence of centralizing and expanding the dataset in this way. First of all, in a distributed-data model, understanding the universe of locations within the enterprise (and outside of it) where data lives can prove extremely daunting: to the point that asking the seemingly simple question of "where does the data live" may simply be unanswerable to many organizations.

 

As data becomes more centralized, the specifics of data storage at the central location become more complicated, but the "sprawl" of data within the enterprise can be reduced. Note that this is highly dependent on individual circumstances - so your mileage may vary. Getting away from this sprawl has a tremendous benefit, as we can centralize - and, in so doing, improve - security controls.

 

Secondly, the dataset itself can be analyzed to find fraud. Keep in mind that much of the data in the set will be security relevant (security logs, etc.). We're already seeing efforts by the Department of Homeland Security to analyze datasets to combat real-world security threats in certain situations. So too can your organization seek to mine the data for information about attack conditions and fraud. Depending on the nature of the data in scope, there can be opportunities here, though obviously the specifics are up to you and take planning to implement.
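
As a deliberately tiny example of that kind of mining, the sketch below flags sources whose failed-login counts sit far outside the norm; the log data is fabricated, and a real pipeline would of course use far richer features and tooling.

```python
# Toy example of mining centralized security logs for anomalies:
# flag sources with unusually many failed logins. All data is fabricated.
from statistics import mean, pstdev

failed_login_counts = {
    "10.0.0.5": 3, "10.0.0.9": 4, "10.0.0.7": 2, "10.0.0.8": 5,
    "10.0.0.11": 3, "10.0.0.12": 2, "10.0.0.13": 4, "10.0.0.14": 3,
    "10.0.0.15": 5, "10.0.0.16": 2, "203.0.113.50": 87,
}

mu = mean(failed_login_counts.values())
sigma = pstdev(failed_login_counts.values())

for source, n in failed_login_counts.items():
    if sigma and (n - mu) / sigma > 2:   # crude threshold: > 2 standard deviations
        print(f"Possible brute-force source: {source} ({n} failed logins)")
```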

 

Lastly, it's an opportunity to revisit the legacy environment and apply financial resources to bring security to the data. Anything that loosens the pocketbooks and allows investment in IT is a way for the savvy security practitioner to capitalize. Security is obviously a huge part of the data strategy for any organization, so getting out in front of the "big data" movement can be a huge win for security.

 

Ed Moyle is senior security strategist at Savvis, a CenturyLink company.

Moving to a managed cloud model for Software-as-a-Service (SaaS) delivery makes a lot of sense for independent software vendors (ISVs).

 

However, it's key to first conduct research and ask the right questions before outsourcing to cloud. ISVs should know what to look for in a SaaS infrastructure services provider and what types of questions to ask.

 

Security

When it comes to cloud, most of the questions I receive are around security. In short, cloud can be as safe as any other form of IT infrastructure: It's as safe as the security measures you have in place.

 

Ask potential service providers whether they can filter out threats at the network level - it's a much more powerful method of protecting your IT infrastructure than doing it on site. Ask how they minimize exposure to common threats. Ask how they identify and assess system and application vulnerabilities. Do they offer 24/7 monitoring, management and response?

 

Service Levels

Single service-level clouds may not fit all applications. As an ISV, you either offer a standard service level to customers or have varying service levels based on your software tiers and other factors.

 

Be sure to review your potential cloud provider's capabilities carefully. Remember: SLAs you offer cannot exceed what your service provider is capable of providing.
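
The arithmetic behind that rule is simple: when your service depends on the provider's platform, availabilities multiply, so the composite figure is always lower than the weakest layer. A quick sketch with hypothetical numbers:

```python
# Composite availability of serially dependent layers: your SaaS is only up
# when every layer beneath it is up, so availabilities multiply.
layers = {
    "provider infrastructure SLA": 0.9995,   # hypothetical 99.95%
    "your application layer":      0.999,    # hypothetical 99.9%
}

composite = 1.0
for name, availability in layers.items():
    composite *= availability

minutes_per_year = 365 * 24 * 60
print(f"Composite availability: {composite:.4%}")
print(f"Expected downtime: ~{(1 - composite) * minutes_per_year:,.0f} minutes/year")
```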

 

Explore the service provider's standard and emergency change windows and procedures. When does their SLA "clock" start ticking? Things do go wrong from time to time, and how your service provider responds to those issues will affect your SLAs to your customers.

 

Lastly, how redundant is the service provider's cloud environment? It doesn't start and stop at the hardware, network and storage layers but also continues into the facilities (i.e., power, battery backup, redundant and varied paths for network into the building). There's nothing wrong with asking for a data center tour.

 

Hybrid and Flexible Solutions

ISVs running in the cloud may want to tap into their legacy IT environment to get to market faster.

 

The availability of hybrid cloud solutions - the tying of private and public clouds to each other and to legacy IT systems - is important to solve IT issues related to temporary capacity needs (i.e., bursting) and to address periodic, seasonal or unpredicted spikes in demand.

 

Ask if the potential vendor's assets work together to fully embrace the cloud model and deliver a combination of colocation, managed services and network that best suits your immediate and future needs. This capability enables the flexibility you need to both maintain your traditional licensing business and transition into SaaS. The vendor you choose should help you navigate the transition, no matter what your scenario entails.

 

Pricing

Vendors tend to price their clouds differently. Make sure you compare "apples to apples" and not just what vendors market; an instance of computing in the cloud may mean different things across vendors. To get the full picture, compare and contrast solution pricing versus individual element pricing.

 

Ask which features and fees (e.g., storage) are included in data center services. Are backup, security and support services included? What are the costs to add network connectivity options?

 

SaaS Expertise

In the end, the ultimate factor - in some instances even a deal-breaker - should be SaaS expertise. Look for a service provider with experience building solutions specifically for ISVs. Ultimately, the vendor should be able to help you figure out the right solution and roadmap to meet your business needs. If they don't specialize in offerings for SaaS companies, look elsewhere.

 

Cloud enables ISVs to implement their offerings in any market in record time. However, true cloud computing for ISVs needs to go beyond just an array of flexible storage and processing capacity. Be sure to conduct research, ask questions and find a solution that meets your needs.

 

Larry Steele is technical vice president, Software-as-a-Service, at Savvis, A CenturyLink Company.

In the next few weeks, I will be packing up my life in Philadelphia and moving to Chicago. I am, in fact, writing this blog on one of my many trips between the two cities to ready my new house and family for the move. As I think about all that goes into a move to a new city, I can't help but see the similarities between transitioning the data center to the cloud and buying a house and moving a family.

 

For the move to go smoothly, the logistics - just like those of a transition to the cloud - must be well prepped and nicely staged. I have had to weigh different priorities and answer many questions - frankly, many of the same questions IT and business executives face when considering cloud technology.

                                                                                         

The Considerations

Location, Location, Location
  • Relocation: Schools, public services, ease of transportation, social life
  • Moving to Cloud: Regulatory guidelines, latency, security, additional services

Budget
  • Relocation: How much house can we afford? What incremental expenses will we need to consider, such as taxes, utilities and other variable costs? How will these items impact the overall budget we must allocate for running our house?
  • Moving to Cloud: How much will the cloud cost? Have I considered all requirements, such as network, security and the number of applications that need to migrate? What ongoing expenses will I need to consider?

Services and Partners to Help With the Move
  • Relocation: What resources do I need to pack, ship and unpack? What will I outsource and what will I do myself?
  • Moving to Cloud: Which cloud provider do I want to partner with? Will I use its resources or in-house assets, a combination of providers or just one?

Logistics
  • Relocation: When do I turn off my old utilities and when do I start new ones? Who in my family is coordinating the process? How do I inform friends I am moving and where can they reach me?
  • Moving to Cloud: How do I handle data migration and security? What do I tell users? How do I prep users to access and locate applications and services that have moved?

Moving Day
  • Relocation: Who will wait at the new house? Who will take care of closing up the old house? What preparation do I need to make for my children?
  • Moving to Cloud: What preparation do I need to make for alerting users about the move? When we make the switch, how long will old services be available?

Ongoing Maintenance
  • Relocation: How do I take care of what was not done prior to moving? How do I fix leaking faucets and other items we discover as we live in the new house?
  • Moving to Cloud: What happens when applications don't function? What happens if I want to move more services into the cloud or move some out?

 

Stay tuned for more information on how to answer these important transformation questions. In the meantime, tell me what key considerations you have as you think through whether a move to cloud is right for your organization, and let me know of any restaurant recommendations you have in Chicago.

 

Steve Garrou is vice president, outsourcing and cloud services, at Savvis, A CenturyLink Company.

Every single one of us has been on the wrong end of a purchasing decision at some point in our lives. For me, one case was the Xbox. Everyone was talking about how great the Xbox was, the commercials looked awesome and reviews seemed to be overwhelmingly positive. But then I tried it out -- and it turns out it wasn't my thing.

 

Now sure, I know about "caveat emptor." I realize now - just as I realized then - that paying attention to what you buy is a top priority as a consumer. But sometimes the market creates conditions in which fully evaluating a purchase is discouraged. When something's really new ... when everyone is saying how great that thing is ... when everyone else seems to be buying ... or when we only have limited time to act. Well, in those situations, it's easy to get caught up in the frenzy.

 

And while "caveat emptor" is easy during a steady, clear-headed purchasing decision (i.e., one based on reasoned and careful analysis), it's harder to be careful in a purchasing decision made under pressure.

 

This is happening to companies right now with cloud. Quite literally, almost everyone is moving to the cloud; I've seen statistics that suggest upwards of 70 percent are already in the cloud and other sources suggesting that 80 percent of new applications will be developed for cloud going forward. There's quite a bit of transitioning going on.

 

In the rush to get their own efforts under way, organizations are moving all sorts of services to the cloud - and some are making moves that might not be the most appropriate from a security perspective. Here are a few easy-to-ask questions that can help make sure you and your service provider are on the same page when moving resources to the cloud:

 

Question 1: What level of service am I buying?

Remember, service providers sell many different kinds of services to different customers. They might have an environment appropriate for federal customers built around NIST 800-53 controls; they might have a healthcare environment built around HIPAA security; they might have a retail environment built around PCI. They might also have a low-security environment with very few protections at all. It's very important that a customer's security organization understands what it is buying - particularly if the security organization is looped in after a purchase is in progress.

 

Question 2: Is your environment certified?

One of the key security benefits of an outsourcing relationship has to do with streamlining the audit process. Ideally, you should be able to just hand an auditor a list of the controls employed by that cloud provider and let them go to town. But without certification (i.e., unless someone has actually gone off and validated that environment), the assurances you can have are slim. Ask your provider for proof. Whether it's PCI DSS certification, a SAS 70 audit or other attestations, ask them to give you that ammunition in a format your auditors can easily use and consume in turn.

 

Question 3: What can you offer in writing about security controls you provide?

It's never good to assume. Ask for statements about control deployments in writing ahead of a purchasing decision. If need be, work those responses into the contract so the cloud provider is contractually obligated to meet the bar you have defined.

 

Question 4: What happens if SLAs get missed?

Missing an SLA - particularly in a security context - can be a big problem. Say your service provider fails to notify the right people of a breach until eight days after it occurred. If you're talking about California, where failure to report within the time constraints of its breach disclosure law is illegal, there could be serious ramifications - potentially stiff fines or other regulatory action. Define from the get-go whether - and how - your service provider will be held accountable.

 

Question 5: Who's doing what? Put that in writing too.

Some security controls come standard with different service levels and types of services purchased. It's important to understand what your vendor will be doing to support you from a control deployment and operations perspective and what you will have to do yourself. Remember, personnel change - so it's important to get these facts in writing as well.

 

Ed Moyle is senior security strategist at Savvis.

CIOs have seen their roles shift from technical planning to strategic planning, with a focus on the latest technology and trends while also looking at innovative ways IT can help achieve business objectives. With markets jittery about a double-dip recession, infrastructure utility approaches such as cloud are likely to get an even greater boost.

 

The consumerization of IT plays into this need for innovative and new delivery models for IT. Employees demand increased mobility and businesses scramble to comply and empower their employees to work any place at any time.

 

However, I want to remind you again that one should not go blindly into cloud thinking it's all about cost savings or that it will be the panacea to all headaches relating to IT. Rather it is a "tool" to help optimize spending amidst shrinking budgets while continuing to accelerate growth and productivity. To use the tool effectively, organizations will need to transform their thinking about the role of IT and revamp their IT departments to best understand when to leverage a more standardized, cloud-based model and when to retain assets and expertise in-house.

 

In its May 2011 report "IT Infrastructure and Operations: The Next Five Years," Forrester Research, Inc. emphasizes that the next five years are about economics. Based on my tour of customers, economics is indeed important, but with an increased focus on improving competitiveness and organizational agility rather than merely driving down costs.

 

Forrester seems to agree and emphasizes in its May 25 report "I&O Execs Must Determine Which Applications Should Move to Cloud" that to contain costs and increase productivity, IT organizations in general, and infrastructure and operations (I&O) teams in particular, have started thinking in terms of "IT industrialization": a rationalization of IT processes and tools that would lead to more flexible, predictable and reliable services.

 

Key to realizing these benefits is not just using IT to automate processes and tools, but becoming expert at finding the right service delivery platform for each task. Business processes, from purchasing products to customer service to payroll, are all accelerated through automation provided by IT services. Coupled with the right quality of service, they improve productivity and make the enterprise more competitive.

 

Forrester highlights "two technological changes [that] have the potential to effectively offer a solution to solve IT's future productivity issues: automation takes care of diversity [and] ... cloud computing shows potential economies of scale." These two concepts have and will continue to change the face of how services are sourced and how they are deployed.

 

Forrester emphasizes the balance between traditional IT and new delivery methods and is spot on when it says the traditional approach to "throw more people at the problem" is no longer efficient: Staff augmentation is subject to the law of diminishing returns, which can turn counterproductive and quickly encounter financial and operational limits. IT must overcome these limits by improving productivity by an order of magnitude over the next five years.

 

Forrester reinforces, "Cloud computing is not replacing traditional outsourcing. It simply adds some new outsourcing options, giving I&O teams greater choice, which ultimately leads to greater value. But you have to understand the breadth of options and what makes them different to gain the most benefit."

 

Forrester's diagram [see graphic] illustrates that companies need to understand the value each delivery approach can provide and which is best suited for their unique organization and needs.

 

Forrester - Rightsourcing.jpg 

As you've read from me many times, cloud is only a piece of the IT puzzle (a corner piece at that) and the applications, benefits and ramifications need to be considered and understood in advance. Don't underestimate the impact of your infrastructure choices on the rest of your IT environment. The worst decision is to go blindly into a single model and think it will be the solution to all woes.

 

Steve Garrou is vice president, outsourcing and cloud services, at Savvis.

Study after study has shown that if you are a Web-based business and your landing pages are slow to load, you will lose business. You will also pay a second penalty, losing search rank, which makes it harder to recover after fixing site problems. Likewise, an overloaded site can quickly turn a marketing success into a PR problem, as clothing store Reiss found out last week.

 

Most companies know this, so they ensure that they have a solid SLA in place with their data centre provider covering the performance and availability of their cloud or colocation space so that their apps stay up and are on a platform that should deliver the designed responsiveness.

 

Unfortunately, guaranteeing that the lights are on, the platters are spinning and the bandwidth is in place is not enough to ensure success for a Web-based business: your customers do not connect at the cage, or even at the edge of your provider's WAN. Instead, your Web apps must traverse thousands of miles of fibre over multiple networks before reaching their destination. The variables these routes impose play a key role in the overall delivered responsiveness of your applications, and they need to be monitored and reported on so that action can be taken to ensure each end-user's quality of experience (QoE) remains high.

 

Savvis' End User Experience Monitoring (EUEM) service, which is powered by Gomez, can analyse performance from the internet backbone in over 150 major cities for an overview of performance, or drill down to customer level via tests run on a network of more than 150,000 end-user desktop computers located in multiple countries. We usually recommend that alerts from the network are copied to our own systems management teams so that we can start investigating issues and recommending ways to resolve them as soon as they arise.

 

The payback from end-user monitoring can be almost instantaneous. We recently set up an initial monitoring profile as a test for a client for just 24 hours. The tests highlighted that a particular code block was causing loading to appear to pause. The diagnostic information we provided enabled the customer's development team to modify the application so that the page now performs better.

 

As you run EUEM analysis over longer time scales you can establish trend data that informs capacity planning and allows for exception monitoring that can aid early fault detection. You can also use EUEM for strategic planning.
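
As a sketch of what exception monitoring can look like, the snippet below baselines normal response times and alerts when a new measurement drifts well outside that baseline; the samples and threshold are fabricated for illustration.

```python
# Toy exception monitor: baseline page response times, then flag new samples
# that fall well outside the norm. All numbers are fabricated.
from statistics import mean, pstdev

baseline_ms = [420, 450, 430, 470, 440, 455, 445, 460, 435, 448]
mu, sigma = mean(baseline_ms), pstdev(baseline_ms)

new_samples_ms = [452, 447, 1310, 458]   # one suspicious spike

for sample in new_samples_ms:
    if sample > mu + 3 * sigma:          # crude rule: more than 3 standard deviations high
        print(f"ALERT: {sample} ms is well above the {mu:.0f} ms baseline")
```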

 

A great example is planning the rollout of a service to a new market. If, prior to rollout, you use the EUEM network to run test transactions against the application from your target market, you can compare the results with your performance norms and identify any local bottlenecks that need to be addressed - for example, by moving the load for that market to a more local data centre or by modifying the application to split transactions into smaller parts.
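
One rough way to frame that pre-rollout comparison is to put the 95th-percentile transaction time measured from the target market next to your established norm and flag a large gap; the latency samples and threshold below are invented for illustration.

```python
# Compare pre-rollout test-transaction timings from a target market against
# established norms. Latency samples (ms) are fabricated for illustration.
def p95(samples):
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

established_norm_ms = [510, 540, 495, 560, 530, 515, 525, 550, 505, 535]
target_market_ms    = [880, 910, 860, 1450, 905, 890, 930, 875, 940, 900]

norm_p95, target_p95 = p95(established_norm_ms), p95(target_market_ms)
print(f"Existing markets p95: {norm_p95} ms")
print(f"Target market p95:   {target_p95} ms")
if target_p95 > 1.5 * norm_p95:
    print("Gap is large: consider a more local data centre or splitting transactions")
```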

 

With this sort of flexible capability, I believe EUEM should be considered as part of every Web service infrastructure contract, as a complement to standard SLAs. Used fully, EUEM will help ensure not only that the site is up, but that it is delivering. Without EUEM, can you honestly say you know how your customers see your apps?

 

Steve Falkus is product marketing manager, EMEA, at Savvis.

Do you know why the instrument panel in a car is called the "dashboard"? It has to do with the days of horse-drawn carriages.

 

In that time, the dashboard was exactly that: a plank of wood to protect the driver from mud and spraying debris when the horse would run (i.e., "dash"). In the context of the time, it was an important safety feature - one that has long since been replaced by different safety measures in the modern automobile (e.g., seat belts, airbags, anti-lock brakes).

 

My point in bringing this up is simple - context matters, particularly when it comes to risk. Technological paradigms shift, and each new paradigm brings new/better/faster ways of doing things. But each paradigm has its own set of risks as well. A safety measure in one paradigm (for example, the dashboard in a horse-drawn carriage) is seldom directly substitutable for a safety measure in another (for example, a seatbelt in an automobile). So risk in each paradigm needs to be evaluated in light of the unique security context of the paradigm in use.

 

Cloud computing is no exception to this truism. Cloud computing changes - in some cases drastically - how organizations approach IT; it's a new paradigm. There are clear advantages to cloud computing (reduced resource overhead, ease of management, fluidity), but to achieve the greatest effect, those advantages should be part of a broader picture that also accounts for the threats, countermeasures, assumptions and security best practices that are unique to cloud computing.

 

Organizations that approach risk in a formal way - for example, quantitatively - know that the assumptions that go into a risk model can cause the outcome to vary greatly. For example, a risk analyst looking at an application might follow a process similar to this one:

 

- Enumerate threats
- Gauge vulnerability and likelihood of threat occurrence
- Determine consequence or impact
- Determine countermeasures based on those factors
- Assess residual risk

 

Risk management 101, right? But what happens when any of those values change? They could, depending on what's migrating, how and to where.

 

Take, for instance, an n-tier application whose supporting servers are moving into a multi-tenant, virtualized environment from a dedicated, on-premises one. In this case, assumptions about controls and countermeasures might have been made based on the context of the original deployment - both positively ("we need to do background checks on vendors and other visitors because they'll be in close proximity to the datacenter") and negatively ("we don't need to bother encrypting that traffic because it's all internal"). Either of these decisions - made because of assumptions about how the application will be managed and hosted - could affect the overall security of the application post-migration.
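
To show how much those shifting assumptions can move the needle, here is a deliberately simplified expected-loss sketch; the likelihoods, impact figure and control-effectiveness values are invented, and a real quantitative model would be far richer.

```python
# Deliberately simplified annualized-loss sketch: residual risk before and
# after a migration, under two sets of assumptions. All numbers are invented.
def expected_annual_loss(likelihood_per_year, impact_dollars):
    return likelihood_per_year * impact_dollars

def residual_risk(likelihood, impact, control_effectiveness):
    """Risk remaining after a countermeasure that reduces likelihood."""
    return expected_annual_loss(likelihood * (1 - control_effectiveness), impact)

# On-premises assumptions: internal-only traffic, physical access controls.
on_prem = residual_risk(likelihood=0.05, impact=2_000_000, control_effectiveness=0.6)

# Post-migration assumptions: multi-tenant environment, traffic crosses shared
# infrastructure, so both likelihood and control effectiveness may change.
cloud = residual_risk(likelihood=0.08, impact=2_000_000, control_effectiveness=0.4)

print(f"Residual expected loss, on-premises model: ${on_prem:,.0f}/year")
print(f"Residual expected loss, post-migration model: ${cloud:,.0f}/year")
```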

 

The point? Organizations that take risk seriously should think hard about how they approach risk when moving to the cloud. In other words, risk should not be assumed to be constant between pre- and post-migration - and a good time to dust off your organization's risk management plan is during a cloud transformation.

 

So if you're thinking about outsourcing (and who isn't?), engage internal risk managers - and pair them with personnel who have cloud expertise (be they internal or external) to review your security strategy and alert you to potential problem areas. Ask questions of your service providers and your internal application experts, and make sure both risk and security are on the table from day one as you plan your transformation efforts.

 

Ed Moyle is senior security strategist at Savvis.
