Conditional Access Unveils the Wolf in Sheep’s Clothing

August 23, 2016
One of the buzz phrases we hear these days is “the rise of the mobile workforce.” Well guess what? The mobile workforce isn’t on the rise—it’s here, it’s here to stay, and most organizations have caught on already. Gartner estimates that by 2017, 90 percent of organizations will support BYOD to at least some extent, and by 2018 twice as many employee-owned devices will be used for work as enterprise-owned devices. This makes a lot of sense. One, employee satisfaction is highest when workers can use the devices of their choosing whenever and wherever they want (and we’ve seen time and again that they’ll do this regardless of corporate compliance rules); and two, BYOD maximizes IT budgets by limiting cost and broadening access. In that same report, Gartner found that the cost of supporting user-owned tablets was 64 percent lower than buying and supporting enterprise-owned tablets. That’s to say nothing of the enhanced productivity that both user and organization enjoy. BYOD is a win-win.

Breach or business as usual: which is which?

Of course, security is a major concern, and one of the most common forms of data breach occurs when someone is not who they appear to be. Verizon’s 2013 Data Breach Investigations Report found that over 70% of network intrusions exploited weak or stolen credentials—meaning someone posed as an authorized user, got in, and caused untold damage. Worse yet, this wolf in sheep’s clothing could avoid detection for weeks, months, or even years because they had the proper credentials and evaded common safety nets. One of your first lines of defense is to spot fishy logins: is someone accessing data from far away, at a weird time? However, since mobility solutions allow employees to work wherever they are—anywhere in the world, at any time—how do you tell what’s a breach, and what’s just business as usual, without disrupting your business?

Conditional Access keeps your data safe—smartly

Let’s say a Sales VP is on a business trip to London, and recently accessed the corporate SharePoint from her personal iPad. An hour after that login, her account attempts to log in again, but this time from an IP address in Romania. Sound plausible? Or even possible? That looks like a case of compromised credentials, and is exactly the kind of thing you want your mobile management solution to catch. If your organization is using Microsoft Enterprise Mobility Suite (EMS) to manage devices and applications, you can already identify odd behavior signaling a possible breach with Conditional Access. Powered by EMS, Conditional Access is the industry’s most comprehensive access solution. By setting customizable requirements including location, device compliance, behavior, and risk, you get identity and threat mitigation that works for your organization and your mobile users. Conditional Access can also help with compliance, by making sure only compliant devices can access data. If you’re interested in learning more about how Conditional Access and Enterprise Mobility Suite with Long View Advantage can prevent data breaches and transform how your organization works, give us a call today.
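To make the London-to-Romania scenario concrete, the core "impossible travel" check can be sketched in a few lines. This is an illustrative sketch only, not how EMS or Conditional Access actually implements it; the `Login` class, the coordinates, and the 900 km/h speed threshold are all assumptions made up for the example.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    lat: float
    lon: float
    timestamp: float  # seconds since epoch

def distance_km(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius ~6,371 km

MAX_SPEED_KMH = 900  # roughly airliner cruising speed; assumed threshold

def is_impossible_travel(prev: Login, curr: Login) -> bool:
    """Flag a login pair that would require faster-than-plausible travel."""
    hours = (curr.timestamp - prev.timestamp) / 3600
    if hours <= 0:
        return True  # simultaneous logins from two places are suspicious too
    return distance_km(prev, curr) / hours > MAX_SPEED_KMH

# The Sales VP's two logins: London, then Bucharest one hour later
london = Login("sales.vp", 51.5, -0.13, 0)
bucharest = Login("sales.vp", 44.4, 26.1, 3600)
```

Here the two logins are well over 2,000 km apart but only an hour apart, so the implied speed blows past any plausible travel and the pair gets flagged. Real systems layer in device compliance, behavior baselines, and risk scoring on top of simple checks like this.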
Read More >

How to Prevent Absent-Minded Data Breaches

August 16, 2016
It seems like every few months, another large, prominent organization has to announce a data breach to its customers. In 2014, hackers exposed 100 TB of data from Sony, including everything from email passwords to unpublished movie scripts. Also in 2014, Home Depot was breached, exposing information for 55 million credit cards, and 145 million eBay customer records were exposed, including names, physical addresses, dates of birth, and more. Your business might not even have 100 customers, but there’s still information you want to keep safe. When confidential information is exposed, you lose clients, customers, and your competitive advantage (and put your business at risk of legal action). Even if you’ve got a good firewall in place and have taken the necessary steps to guard against hackers, sensitive data can still be exposed. And it can make its way out in the most easily preventable way imaginable: sending information to the wrong people. Sure, a message about lunch plans sent to the wrong George in finance might be harmless. But graver mistakes happen all the time. Look no further than the US presidential election, where an aide for Donald Trump emailed the campaign’s strategy against Hillary Clinton to the wrong guy with the last name Caputo—instead of a campaign adviser, it went to a reporter for a preeminent political publication. Ouch.

Keeping business and leisure separate

Email flubs are just one example of how easy it is to accidentally expose confidential data. Bring-your-own-device (BYOD) enhances productivity, but it also increases the opportunities to compromise security. Think about your average user, who reads email on a smartphone. Those emails include attachments the user may want to save for quick, offline access later. Documents, photos, screenshots—those all get saved in the same place as vacation photos. Then there’s no guarantee that this information isn’t going somewhere it’s not supposed to.

Information only in front of the right people

The goal is to ensure that the right information is only in front of the right eyes, no matter:
  • What device it’s accessed from
  • What device it’s sent from
  • What device it’s sent to
  • What application accesses it
  • Whether it’s accessed online or offline
So what’s an organization to do? The answer is to set up rights management with policy-based permissions rules. Solutions like Google Docs and Dropbox have this at a very basic level, where you have to give users permission to access a link, but once a file is saved to a device or its contents are copied and pasted elsewhere, all bets are off. Microsoft Office 365 comes with a more comprehensive solution in Microsoft Azure Rights Management Services (RMS). It protects documents as they move across SharePoint, Exchange, and OneDrive, and maintains permissions whether files are accessed online or offline. With features like blocking you from sending a sensitive file to someone who’s not authorized to receive it, RMS is something the Trump campaign wishes it had.
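The key idea above is that the policy travels with the document rather than with a share link. As a thought experiment, that can be modeled like this. This is a hypothetical sketch of the concept only, not Azure RMS’s actual API; `ProtectedDocument` and `try_send` are invented names for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ProtectedDocument:
    """A document whose permissions travel with it wherever it's saved or sent."""
    name: str
    allowed_readers: set = field(default_factory=set)

    def can_open(self, user: str) -> bool:
        # The check applies online or offline, because the policy is
        # attached to the document itself, not to a share link.
        return user in self.allowed_readers

def try_send(doc: ProtectedDocument, recipient: str) -> str:
    """Refuse to send a protected file to anyone not on its reader list."""
    if not doc.can_open(recipient):
        return f"Blocked: {recipient} is not authorized to receive {doc.name}"
    return f"Sent {doc.name} to {recipient}"

# A sensitive file that only the campaign adviser is cleared to read
strategy = ProtectedDocument("strategy.docx", allowed_readers={"campaign.adviser"})
```

With this model, sending `strategy` to the adviser succeeds, while sending it to a reporter is blocked before it ever leaves, which is exactly the failure mode the misdirected-email story describes. The contrast with link-based sharing is that copying or saving the file elsewhere changes nothing: `can_open` still runs against the document’s own reader list.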
Read More >

Secure Communication in Office 365

August 9, 2016
According to a 2014 report, more than 80% of employees use non-approved software-as-a-service (SaaS) applications at their jobs. This means that users are potentially exposing organizational networks to cyber-attacks and malware that result in identity theft and stolen IP—the kind of events that tarnish reputations and hurt business. In addition, these apps are managed by outside vendors and may experience disruptions that hurt productivity. Some of the more popular SaaS apps used on the job are for communication—namely, instant messaging. AOL Instant Messenger, Facebook Messenger, and Google Chat are just some of the popular options workers use without IT’s knowledge and outside your organization’s standards for security and compliance. When sensitive files are being discussed and shared across these applications, you’ve got a security breach waiting to happen. So what’s an organization to do? If you’re using Microsoft Office 365, all you need to do is make users aware of the following recent updates that make life easier (and more secure) for users and your business alike.

Real-time chat in Office 365

With Office 365, you already get Skype for Business for video chats, conference calls, and instant messaging. Now, O365 has deeper integration with Skype for Business, so that users can start IM chats from within the documents themselves. Users can now open chats from within Word and PowerPoint documents while using desktop Office. This is particularly useful while coauthoring a document. Simply click a person’s thumbnail to start an IM conversation in Skype for Business. If working away from their desk, users can start chats in browser-based Word, Excel, PowerPoint, and OneNote for any document stored in SharePoint or OneDrive for Business. All they have to do is click the blue Chat button to start IM’ing with everyone editing in the browser at the same time. The new real-time chat feature keeps conversations secure, protects your IP, and makes it easier to use Skype for Business than to switch apps and work through a non-approved SaaS option. You can learn more about real-time chat and coauthoring at the Office Blog.

Send attachments from within documents

Another issue with unapproved chat apps is that files can be unsafely shared through them. Office 365 lets you share your Word, PowerPoint, and Excel documents from within the documents themselves. Simply click File, Share, and select how you want to share:
  • As an email attachment
  • Through OneDrive for Business
  • By instant message through Skype for Business
Again, Skype for Business’ integration with Office 365 makes sharing easier and more secure for your users, so that it’s more of a hassle to open up Google Chat and attach a document than it is to click File, Share. Plus, if a document is encrypted or has permissions, the document maintains those settings when shared. That’s another way to make sure the right information is only in front of the right people. If your organization is not currently using Office 365 but would like to learn more, give one of our Long View experts a call today.
Read More >

Comparing the HPE Hyper Converged 250 and 380

July 4, 2016

HPE offers two hyper-converged solutions: the HPE Hyper Converged 250 and the Hyper Converged 380. Let's start by covering a few feature sets that are common to both appliances:

The HPE Hyper Converged 250 and the Hyper Converged 380 both offer:

An intuitive user experience — Management of both systems is handled through HPE OneView, which makes managing and monitoring these resources very simple.

Low-entry point — Both solutions allow you to start with as few as two nodes and build from there.

Fast scalability — Hyper Converged 250 or Hyper Converged 380 nodes can be added in as little as 15 minutes for easy, fast expansion.

Simplified lifecycle management — Both systems allow you to perform functions such as setting up new VMs or updating firmware and drivers with just a few clicks.

One-vendor support — Both solutions are supported 24x7 by HPE Technology Support Services.

High availability — Both solutions provide five-nines (99.999%) availability.

Real-time alerts — Through System Center's real-time alerts, both systems make it easy to know what's going on in your data center even when you're nowhere near it.

The list of benefits offered by both systems could go on, but let's cut to the chase. What's the difference? One of the biggest differences for most of our customers is that while the HPE Hyper Converged 250 allows you to expand up to 4 nodes within a single chassis, the Hyper Converged 380 allows for expansion up to 16 nodes. Supporting more nodes per box can lower the need for additional infrastructure and may be a benefit, especially in larger organizations.

Video: See how easy it is to set up and manage the HC380.

On the other hand, the HPE Hyper Converged 250 has a unique advantage as well. It comes in two flavors: the HPE Hyper Converged 250 for VMware and the HPE Hyper Converged 250 for Microsoft.

Video: Hyper Converged 250 Chalk Talk

There are a few other nuances. There are three ways you can learn more:

  1. Download the HPE Hyper Converged 250 solution brief here.
  2. Download the HPE Hyper Converged 380 solution brief here.
  3. Talk to an actual human by reaching out to one of our Hybrid IT consultants here.
I highly recommend #3.

Of course, you can also connect with me by adding a comment below or reaching out to me directly at

Read More >

4 Potential Pitfalls of Hyper-Convergence

June 27, 2016

In post #1 in this series, we defined hyper-convergence, then in the next post, we talked about some common use cases. Now, it's time to talk about the potential pitfalls of hyper-convergence and what you can do to avoid them.

Luckily, this will be a short post because there aren't all that many, but there are a few things that you should think about in the context of your overall IT strategy.

#1 Should you expand capacity in your on-premises data center?

Yes, hyper-convergence is easy to deploy and makes it easy to expand on-premises capacity. But should you? Before making any additional investments, take the time to consider whether the organization might benefit from housing those applications and databases in a public or private cloud. Learn more about the hybrid cloud

#2 You need to understand your data storage requirements.

Again, because hyper-convergence is so easy to set up, it may lead you to deploy it in scenarios where it may not fit. For example, because compute and storage resources are sharing the same appliance, it can limit storage capacity. If you don't understand the storage requirements of your applications, performance can suffer. If you have concerns, our hybrid IT experts can help you work through them.

#3 While technology advances, human nature stays the same.

If you have a team of IT staff focused on provisioning and maintaining data center resources, they may find themselves with a lot more time on their hands. Of course, most IT departments already have far too much to do, so that may not be an issue. And, these freed-up resources can often be better utilized in other ways, i.e., focusing on how technology can meet the needs of the business instead of on how the business can meet the needs of the technology. Nevertheless, you need to think through how the improved efficiency brought about by a hyper-converged infrastructure may be perceived, at least initially, by the human resources in your IT department.

#4 Consider the full ROI when making the business case.

Many IT departments use a continual upgrade loop for data center resources, upgrading servers in one cycle, storage resources the next, etc. Spreading the investment out like this may look cheaper on paper, but it increases costs in the long run. Plus, it decreases the IT department's ability to respond to the dynamic needs of the organization. When something unexpected crops up, IT needs to go through a round of budget approvals, procure the resources, implement them, etc. That's hardly an agile IT environment. When making the business case for hyper-convergence, make sure you're looking at the total cost of ownership and not just the initial budget requirements. In our next post, we'll take a look at two hyper-converged offerings from HPE. As always, I'd love to hear from you.  Please add your thoughts in the comments section or reach out to me directly at    
Read More >

Making the Case for Hyper-Convergence

June 9, 2016

In our last post, we talked a little bit about what hyper-convergence is and how it has taken the idea of converged infrastructure to the next level. In this post, we'll do a deeper dive into the benefits of hyper-convergence and discuss some common scenarios where hyper-convergence can play a big role.

Hybrid IT: Not Everything Needs to be in the Cloud

While it can seem like everyone is moving to the cloud, there are a number of reasons companies, large and small, choose to maintain an on-premises data center. The most common reasons cited are concerns over security* or data availability. Of course, both public and private clouds offer tremendous benefits as well, such as improved agility, faster provisioning of resources, lower investment in infrastructure, etc. Regardless of the reasons, many organizations opt for a hybrid environment where they are running compute and data storage resources across a mix of environments, e.g., an on-premises data center plus a private hosted cloud.

*Security and compliance are often not as much of an issue as they are made out to be. Learn more here.

Benefits of Hyper-Convergence

Hyper-converged appliances are made for organizations like these. To understand why, let's take another quick look at the benefits of hyper-convergence that we identified in the last post. Here's that list again:
  • Less hardware to purchase and maintain
  • Smaller footprint
  • Expand capacity faster
  • One vendor for support
  • No component compatibility issues
  • Management is simplified
For Long View clients, these benefits address two main categories of need:
  • Efficiency - This is especially important for those organizations that do not have the IT bandwidth to manage complexity, e.g., remote offices, mid-sized organizations, over-burdened IT departments, etc.
  • Agility - Once you have the box, HPE's hyper-convergence solutions can have you up and running in minutes. No need to spend months ramping up capacity to support the company's latest initiative.
Video: See how the City of Los Angeles used the HPE Hyper-Converged Systems to solve issues with space, overheating, and unplanned network downtime.

4 + 1 Common Use Cases

So, if we think about these benefits, it's pretty easy to see how hyper-convergence fits into these four common use cases:
  1. Mid-sized organizations that don't have the IT bandwidth to maintain separate appliances such as servers and storage
  2. Organizations that are growing quickly or have an unanticipated surge in demand for resources
  3. Remote offices where IT staffing may be minimal
  4. Line of business units that need fast access to resources in support of key initiatives
Plus One: VDI — Without getting too far into the weeds, hyper-convergence also removes many of the complexities for companies deploying VDI (virtual desktop infrastructure). Webinar: The Benefits of Hyper-Converged Infrastructure for VDI Deployments.

Of course, nothing is 100% perfect. Now that we've covered what hyper-convergence is and how it can benefit your organization, in our next post, we'll take a quick look at some of the challenges you should be aware of when considering hyper-converged appliances. Until then, we'd love to hear from you. Either ask your questions in the comment section below, or reach out to me directly at

Read More >

Hyper-Convergence: What Is It and Why Should You Care?

June 3, 2016

There's a relatively new term being discussed in the world of hybrid IT: hyper-convergence. If you aren't already familiar with it, you should be because it is likely to play a big role in the future of your data center.

In this first post in our multi-part series on hyper-convergence, we'll talk a bit about what it is and the benefits it offers to your IT department and your organization. In later posts, we'll dig deeper into the use cases for hyper-convergence as well as take a closer look at a couple of leading hyper-converged hardware offerings from HPE so you can see how they can be applied to common IT challenges.

What is Hyper-Convergence?

Hyperconverged integrated system (HCIS) — Tightly coupled compute, network and storage hardware that dispenses with the need for a regular storage area network (SAN). Storage management functions — plus optional capabilities like backup, recovery, replication, deduplication and compression — are delivered via the management software layer and/or hardware, together with compute provisioning. ~ Gartner Magic Quadrant for Integrated Systems, August 11, 2015

In everyday language, hyper-convergence combines resources that are typically sold separately (servers, storage, networking, etc.) into one package. For those of you managing an on-premises data center, a number of immediate benefits should jump out at you, such as:
  • Less hardware to purchase and maintain
  • Smaller footprint
  • Expand capacity faster
  • One vendor for support
  • No component compatibility issues
  • Management is simplified
We'll revisit some of these a bit more in later posts, especially when we get into HPE's hyper-converged offerings.

What's the Difference Between Convergence and Hyper-Convergence?

If it seems like "converged" infrastructure was all the talk in tech circles not too long ago, you'd be right. And to many, hyper-convergence may sound like nothing more than a "hyped-up" version of that technology. However, there are some very important differences. Converged infrastructure was a giant leap forward when it came to making it easier to procure and maintain data center resources. In the "old days," resources were purchased separately, often on different upgrade cycles and from different vendors. IT departments needed to be staffed with integration experts who could implement these systems and address the inevitable compatibility issues. With converged technology, different vendors came together to offer a preconfigured solution for the core components: compute, data storage and networking. And that certainly solved some issues, such as compatibility between components and support.
In theory at least, the vendors included in the packaged solution tested their offering ahead of time to ensure compatibility and gave customers fewer numbers to call when they needed support services.

Hyper-convergence brings all of these resources together and puts them in one appliance. So while convergence does give you some of the same benefits as hyper-convergence, such as single-vendor support and inherent compatibility, hyper-convergence provides additional benefits such as a smaller footprint and native unified management from one console. Just as importantly, data center expansion is also much simpler and faster with hyper-convergence. For example, you can add another HPE appliance with just a few clicks and in as little as 15 minutes. Imagine your CEO calls and tells you the organization is acquiring a company, and you need to expand capacity to accommodate 1,500 additional users spread across three continents. It would feel pretty good to tell that CEO she can have it within the hour, wouldn't it?

Video: See how easy it is to set up and manage the HC380.

Now that we've discussed what hyper-convergence is, it's time to talk about scenarios where it can add the most value. In our next post, we'll dig deeper into the benefits of hyper-convergence to illustrate some of the most common use cases. Until then, feel free to add your thoughts in the comments section or reach out to me directly at
Read More >

Innovative Disruption: Surviving and Thriving By Managing the ‘Cloud Handshake’

May 6, 2016
Part three in a three-part series. In part two of this series, we began discussing the shift cloud and SaaS have created in the software evolution, and their impact on a hybrid model. Given that we need to accelerate the evolution of this blended model to keep pace with competition and ever-changing customer needs, what’s the best use of your limited development and IT staff resources? You’ll pick up some bandwidth as the management of SaaS applications shifts to the SaaS provider and some infrastructure elements underneath shift to cloud platforms. But the work of integration, evolution, and security, along with the need for more agile end-user experience improvements, all remains. And whether you put your applications on hyper-scale public clouds like Azure or on more localized offerings such as those provided by most MSPs, you still have to manage the cloud handshake.

The cloud handshake

As you look at your task list and cross-correlate it with your IT staff bandwidth, you’ll likely conclude that managing the cloud handshake falls low on the priority list relative to your business needs. But this is exactly where the MSP can add the most value—and it’s exactly where MSP business models are evolving.

Managing blended IT

The future of the MSP is in managing the blended IT environment. The reality is that your deployment portfolio is evolving to a mix of in-house, hosted, SaaS and multiple cloud platforms. And managing this mix isn’t your core competency, nor need it be your priority. The companies that have begun moving to cloud, or embraced it completely, are growing at an extremely fast pace. New companies that are born in the cloud can make big waves, designing applications for the cloud that can then be run anywhere. We see manufacturing companies innovating in the cloud as they build out IoT, and independent software vendors (ISVs) gaining market momentum day by day—because today, if you don’t innovate, you fall behind.

Getting to a cloud ready state

As MSPs, we’ll help you innovate by providing the resources to move some of those apps to a cloud-ready state. We’ll also manage that mixed-environment state, so your IT team can immediately start managing their time more strategically, enabling business transformation as markets shift. In the MSP community today, leading providers are redesigning their business models around blended hosting—not only supporting the leading public cloud platforms, but helping you manage the cloud handshake with SaaS as well. As businesses and MSPs, we need to ensure we are transitioning to the right tools and services: automating commodity work to run efficiently at low cost, freeing up funds for innovation, investing more in R&D, and creating more time to drive the transformation needed to address the disruption within our industries.

Remember the movie Terminator and the term Skynet: man vs. the machine? I think that future is getting closer, but I think it’s really man with machine that will be the new mantra.

Go back to part one - Innovative Disruption: Applications and Cloud Are Creating Waves – Surf or Get Left Behind
Go back to part two - Innovative Disruption: The Software Evolution, Blended Roles and Managing Hybrid IT
Read More >

Innovative Disruption: The Software Evolution, Blended Roles and Managing Hybrid IT

May 5, 2016
Part two in a three-part series. In part one of this series, we discussed how important it is to embrace cloud and application innovations, the innovation cycle, and the changing role of IT.

In today’s business world, every enterprise has at least one relationship with a Managed Service Provider (MSP). That relationship has evolved over time. But now, it’s changing again—to the advantage of the enterprise and its customers (both internally and externally).

Looking back on the MSP relationship – resell, manage and integrate

MSPs entered the tech market when organizations asked them not only to resell, but also to integrate and manage finished solutions. Reselling began in the late 1990s, as the internet became the foundation of our lives. At that time, hosting providers typically played two roles:
  • Internet access - for individuals and companies; and
  • Renting server rack space - for corporate applications (mostly websites).
This evolved into rentable IT administrators, who took on the tasks of managing hardware, operating systems and, increasingly, the middleware and applications that ran on those servers. In this world of hardware evolution, the hosting market was a lucrative and well-protected space, with players like IBM doing very well—until cloud computing came along and started the software evolution.

SaaS enters the picture and creates a disruption in application management

With the introduction of Software as a Service (SaaS), applications were delivered and managed directly by the manufacturers. Salesforce started in this market by targeting smaller firms, with lower enterprise-grade expectations and line-of-business budgets. By the time SaaS started penetrating the enterprise market, its multi-tenant, highly scalable deployment model and its new pay-per-user business model were hard for hosted providers to match—and the fight was on. Public cloud platforms added to the competitive threat by extending the SaaS basics to hosted applications. Now application outsourcing, the core business of hosting, and even IT managing the show within an organization are all under threat. Based on this scenario, you’d think the roles of IT departments and MSPs are diminishing as application management shifts to the SaaS-based model; however, that’s not accurate. In reality, these roles are just starting to evolve in the business lifecycle.

Not a binary shift or a free management ride

What we’re seeing is the evaporation of traditional hosting/on-premises and application outsourcing/internal options, as more applications move to SaaS or cloud offerings. This isn’t a binary shift—nor are we getting a free ride from a management and monitoring perspective. Look a little deeper, and you’ll find that a large percentage of critical workloads don’t fit onto cloud platforms, nor can they be replaced by SaaS. It’s just that the definition of an application is shifting.

Managing the complexity of a highly blended mix

Let’s consider, as an example, the common business process of e-commerce. This is not one application; it’s a workflow that blends together multiple applications, including ERP, CRM, commerce, machine learning, mobile and web, content management and various other elements. Now, if that organization has been around for more than 10 years, there are likely some pretty customized elements in that mix. We’re constantly refining this type of workflow to stay competitive, improve customer experience and adapt, as end users’ experiences and needs evolve every day.

Now, add the complexity of containers such as Docker. New applications will increasingly be built around this technology; migrations or upgrades of traditional applications will be forced down this path; and some legacy apps, built in the traditional model, are out of scope but still need to be part of this framework. Hence, the whole DevOps concept becomes crucial: the development team and the IT team need to work more closely than ever as cloud, containers, bots, virtual reality, holograms and 3D printers continue to drive innovation and disruption in both technology and user experience.

So, given the changes we are seeing in applications and the shift to cloud, here’s the “best-guess” end result: a highly blended mix where certain elements are shifted to SaaS, others are moved to cloud platforms, and a third group can’t make the move but must still remain part of the picture. If we look a little further into the next evolution of the MSP, the role will evolve even more. As the management of infrastructure and applications in turn becomes commoditized, businesses will be looking for an MSP that can provide business value through insights from integrated or siloed data, to make smarter decisions. Hence, big data, IoT, analytics and machine learning are suddenly taking centre stage.

Are you set up to handle and thrive in this digital disruption?

In part three of our three-part Innovative Disruption series, we’ll discuss the future of the MSP in this blended environment—in particular, managing the cloud handshake.
Read part one - Innovative Disruption: Applications and Cloud Are Creating Waves - Surf or Get Left Behind
Read part three - Innovative Disruption: Surviving and Thriving By Managing the ‘Cloud Handshake’
Read More >

Innovative Disruption: Applications and Cloud Are Creating Waves – Surf or Get Left Behind

May 4, 2016
(Part one in a three-part series)

Call it digital disruption, call it innovative disruption, or call it an algorithmic economy—but it’s clear that innovation continues to quicken at a breathtaking clip, with technology a key accelerator in the equation. An increasing number of us find little difference between work and personal life; we’re constantly connected, and that’s where the digital world is headed. Businesses and consumers are getting closer all the time, driven in part by their use of applications and cloud.

This is creating chaos not only for technology innovators, but also for their partners that provide solutions built on their platforms, and for the end customers who use them. It’s an exciting time for everyone involved in the shift to modern applications, as cloud has drastically improved the speed to market of new applications. As this shift happens, the role of IT, or of a partner who provides or manages IT, has never been more important, pushing them to deploy applications and provide end-user access even more quickly so businesses run more efficiently.

The innovation cycle: automation of manual work

The innovation cycle tends to cause disruption within organizations, and even among technology providers. Specifically:
  • Innovation leads to the automation of what was previously manual work;
  • When innovation eases off the throttle, the technology becomes commoditized and the manual tendencies return;
  • Then the innovation cycle begins again.
The cycle also shapes the skills organizations value:
  • During a time of innovation, generalists are often preferred over specialists, because they tend to be open and to adapt better to change;
  • During lulls in the innovation cycle, you tend to rely on your specialists, while your generalists choose a niche to specialize in.
As companies move to the cloud, some people, often those less open to change, may see only jobs being lost. In reality, the move is a stark reminder to the IT industry of how innovation automates manual work. And cloud is just like any other powerful innovative technology: its success depends on who is using it. Those tasked with its rollout and implementation need to understand and be open to its advantages and strengths; otherwise, the organization won’t leverage cloud to its full potential.

Let’s take a look at Exchange as an example

Take this typical scenario. Exchange administrators have traditionally spent the lion’s share of their time managing the Exchange environment and keeping the lights on. Now, with their employer’s move to the cloud, they’re suddenly:
  • Providing insights and analytics on usage;
  • Empowering the business to make smarter decisions;
  • Improving collaboration;
  • Securing data and providing enhanced security; and
  • Ultimately improving the end-user experience.
They’re able to transform their role from Exchange administrator, focused on maintenance, to business analyst, spending more time enabling the business by using cloud-based automation for the more manual tasks. Similarly, on the infrastructure side, IT has traditionally spent a lot of time building out server, storage and compute capacity. Transformation to the cloud means that same IT professional now devotes much of his or her time to monitoring, optimizing and addressing business problems instead of building out capacity. The bottom line is that cloud is making IT more proactive in enabling the business.

Adapting and embracing the possibilities

This tighter integration between IT and business is encouraging the two groups to learn what the other is doing and to make smarter business decisions as a team. Clearly, the role of IT is not going away; instead, it’s evolving. It’s really about how you adapt to the change and embrace the possibilities that innovation brings to the business. During innovation cycles, selling in IT has shifted from traditional product sales (specialists) to solution sales (generalists), the better to understand business needs and design solutions that automate manual work and enable business agility and market innovation. Similarly, enterprise digital transformation is shifting talent and recruitment toward contemporary business cultures: fluidity, cross-functional learning, intelligent behavior and the empowerment of motivated talent.

Is your business transforming quickly enough? In a digital age, the winners take all.

Since 2000, 52 percent of Fortune 500 firms are gone (Gartner).
In part two of our three-part Innovative Disruption series, we’ll discuss the impact SaaS has on the evolving roles of the MSP and IT, and managing the complexity of a blended environment.
Read part two: Innovative Disruption: The Software Evolution, Blended Roles and Managing Hybrid IT
Read part three: Innovative Disruption: Surviving and Thriving By Managing the ‘Cloud Handshake’