Blog

Making the Case for Hyper-Convergence

June 9, 2016

In our last post, we talked a little bit about what hyper-convergence is and how it has taken the idea of converged infrastructure to the next level. In this post, we'll do a deeper dive into the benefits of hyper-convergence and discuss some common scenarios where hyper-convergence can play a big role.

Hybrid IT: Not Everything Needs to be in the Cloud

While it can seem like everyone is moving to the cloud, there are a number of reasons companies, large and small, choose to maintain an on-premises data center. The most common reasons cited are concerns over security* or data availability. Of course, both public and private clouds offer tremendous benefits as well, such as improved agility, faster provisioning of resources, and lower investment in infrastructure. Regardless of the reasons, many organizations opt for a hybrid environment where they are running compute and data storage resources across a mix of environments, e.g., an on-premises data center plus a private hosted cloud.

*Security and compliance are often not as much of an issue as they are made out to be. Learn more here.

Benefits of Hyper-Convergence

Hyper-converged appliances are made for organizations like these. To understand why, let's take another quick look at the benefits of hyper-convergence that we identified in the last post. Here's that list again:
  • Less hardware to purchase and maintain
  • Smaller footprint
  • Expand capacity faster
  • One vendor for support
  • No component compatibility issues
  • Management is simplified
For Long View clients, these benefits address two main categories of need:
  • Efficiency - This is especially important for those organizations that do not have the IT bandwidth to manage complexity, e.g., remote offices, mid-sized organizations, over-burdened IT departments, etc.
  • Agility - Once you have the box, HPE's hyper-convergence solutions can have you up and running in minutes. No need to spend months ramping up capacity to support the company's latest initiative.
Video: See how the City of Los Angeles used HPE Hyper-Converged Systems to solve issues with space, overheating, and unplanned network downtime.

4 + 1 Common Use Cases

So, if we think about these benefits, it's pretty easy to see how hyper-convergence fits into these four common use cases:
  1. Mid-sized organizations that don't have the IT bandwidth to maintain separate appliances such as servers and storage
  2. Organizations that are growing quickly or have an unanticipated surge in demand for resources
  3. Remote offices where IT staffing may be minimal
  4. Line of business units that need fast access to resources in support of key initiatives
Plus One: VDI — Without getting too far into the weeds, hyper-convergence also removes many of the complexities for companies deploying VDI (virtual desktop infrastructure). Webinar: The Benefits of Hyper-Converged Infrastructure for VDI Deployments.

Of course, nothing is 100% perfect. Now that we've covered what hyper-convergence is and how it can benefit your organization, in our next post we'll take a quick look at some of the challenges you should be aware of when considering hyper-converged appliances. Until then, we'd love to hear from you. Either ask your questions in the comment section below, or reach out to me directly at Glenn.Bontje@lvs1.com.


Hyper-Convergence: What Is It and Why Should You Care?

June 3, 2016

There's a relatively new term being discussed in the world of hybrid IT: hyper-convergence. If you aren't already familiar with it, you should be because it is likely to play a big role in the future of your data center.

In this first post in our multi-part series on hyper-convergence, we'll talk a bit about what it is and the benefits it offers to your IT department and your organization. In later posts, we'll dig deeper into the use cases for hyper-convergence as well as take a closer look at a couple of leading hyper-converged hardware offerings from HPE so you can see how they can be applied to common IT challenges.

What is Hyper-Convergence?

Hyperconverged integrated system (HCIS) — Tightly coupled compute, network and storage hardware that dispenses with the need for a regular storage area network (SAN). Storage management functions — plus optional capabilities like backup, recovery, replication, deduplication and compression — are delivered via the management software layer and/or hardware, together with compute provisioning.
~ Gartner Magic Quadrant for Integrated Systems, August 11, 2015

In everyday language, hyper-convergence combines resources that are typically sold separately: servers, storage, networking, etc., into one package. For those of you managing an on-premises data center, a number of immediate benefits should jump out at you, such as:
  • Less hardware to purchase and maintain
  • Smaller footprint
  • Expand capacity faster
  • One vendor for support
  • No component compatibility issues
  • Management is simplified
We'll revisit some of these a bit more in later posts, especially when we get into HPE's hyper-converged offerings.

What's the Difference Between Convergence and Hyper-Convergence?

If it seems like "converged" infrastructure was all the talk in tech circles not too long ago, you'd be right. And to many, hyper-convergence may sound like nothing more than a "hyped-up" version of that technology. However, there are some very important differences.

Converged infrastructure was a giant leap forward when it came to making it easier to procure and maintain data center resources. In the "old days" resources were purchased separately, often on different upgrade cycles and from different vendors. IT departments needed to be staffed with integration experts who could implement these systems and address the inevitable compatibility issues.

With converged technology, different vendors came together to offer a preconfigured solution for the core components: compute, data storage and networking. And that certainly solved some issues, such as compatibility between components and support. In theory at least, the vendors included in the packaged solution tested their offering ahead of time to ensure compatibility and gave customers fewer numbers to call when they needed support services.

Hyper-convergence brings all of these resources together and puts them in one appliance. So while convergence does give you some of the same benefits as hyper-convergence, such as single vendor support and inherent compatibility, hyper-convergence provides additional benefits such as a smaller footprint and native unified management from one console. As importantly, data center expansion is also much simpler and faster with hyper-convergence. For example, you can add another HPE appliance with just a few clicks and in as little as 15 minutes. Imagine your CEO calls and tells you the organization is acquiring a company, and you need to expand capacity to accommodate 1500 additional users spread across three continents. It would feel pretty good to tell that CEO she can have it within the hour, wouldn't it?

Video: See how easy it is to set up and manage the HC380.

Now that we've discussed what hyper-convergence is, it's time to talk about scenarios where it can add the most value. In our next post, we'll dig deeper into the benefits of hyper-convergence to illustrate some of the most common use cases. Until then, feel free to add your thoughts in the comments section or reach out to me directly at Glenn.Bontje@lvs1.com.

Innovative Disruption: Surviving and Thriving By Managing the ‘Cloud Handshake’

May 6, 2016
Part three in a three-part series. In part two of this series, we began discussing the shift cloud and SaaS have created in the software evolution, and their impact on a hybrid model. Given that we need to accelerate the evolution of this blended model to keep pace with competition and ever-changing customer needs, what’s the best use of your limited development and IT staff resources? You’ll pick up some bandwidth as the management of SaaS applications shifts to the SaaS provider, and as some infrastructure elements underneath are shifted to cloud platforms. But the integration, evolution, and security work all remain, along with the need for more agile improvements to the end-user experience. And whether you put your applications on hyper-scale public clouds like Azure, or on more localized offerings such as those provided by most MSPs, you still have to manage the cloud handshake.

The cloud handshake

As you look at your task list and cross-correlate it with your IT staff bandwidth, you’ll likely draw the conclusion that managing the cloud handshake falls low on the priority list relative to your business needs. But this is exactly where the MSP can add the most value—and it’s exactly where MSP business models are evolving.

Managing blended IT

The future of the MSP is in managing the blended IT environment. The reality is that your deployment portfolio is evolving to a mix of in-house, hosted, SaaS and multiple cloud platforms. And managing this mix isn’t your core competency, nor need it be your priority. The companies that have begun moving to cloud, or embraced it completely, are growing at an extremely fast pace. New companies that are born in the cloud can make big waves, designing applications for the cloud that can then be run anywhere. We see manufacturing companies innovating in the cloud as they build out IoT, and independent software vendors (ISVs) gaining market momentum day by day—because today, if you don’t innovate, you fall behind.

Getting to a cloud-ready state

As MSPs, we’ll help you innovate by providing the resources to move some of those apps to a cloud-ready state. We’ll also manage that mixed-environment state, so your IT team can immediately start managing their time more strategically, enabling business transformation as markets shift. In the MSP community today, leading providers are redesigning our business models around blended hosting: not only supporting the leading public cloud platforms, but helping you manage the cloud handshake with SaaS as well. As businesses and MSPs, we need to ensure we are transitioning to the right tools and services to automate commodity-type work so it runs efficiently at low cost, freeing up funds for innovation, investing more in R&D, and creating more time to drive the transformation needed to serve the disruption within our industries.

Remember the movie Terminator and the term Skynet: man vs. the machine? I think it’s getting closer, but I believe it’s really man with machine that will be the new mantra.

Go back to part one - Innovative Disruption: Applications and Cloud Are Creating Waves – Surf or Get Left Behind
Go back to part two - Innovative Disruption: The Software Evolution, Blended Roles and Managing Hybrid IT

Innovative Disruption: The Software Evolution, Blended Roles and Managing Hybrid IT

May 5, 2016
Part two in a three-part series.  In part one of this series, we discussed how important it is to embrace cloud and application innovations, the innovation cycle, and the changing role of IT.

In today’s business world, every enterprise has at least one relationship with a Managed Service Provider (MSP). That relationship has evolved over time. But now, it’s changing again—to the advantage of the enterprise and its customers (both internally and externally).

Looking back on the MSP relationship – resell, manage and integrate

MSPs entered the tech market when organizations asked them not only to resell, but also to integrate and manage finished solutions. Reselling began in the late 1990s, as the internet became the foundation of our lives. At that time, hosting providers typically played two roles:
  • Internet access - for individuals and companies; and
  • Renting server rack space - for corporate applications (mostly websites).
This evolved into rentable IT administrators, who took on the tasks of managing hardware, operating systems and, increasingly, the middleware and applications that ran on those servers. In this world of hardware evolution, the hosting market was a lucrative and well-protected space, with players like IBM doing very well—until cloud computing came along and started the software evolution.

SaaS enters the picture and creates a disruption in application management

With the introduction of Software as a Service (SaaS), applications were delivered and managed directly by the manufacturers. Salesforce started in this market by targeting smaller firms, with lower enterprise-grade expectations and line-of-business budgets. By the time SaaS started penetrating the enterprise market, its multi-tenant, highly scalable deployment model and its new pay-per-user business model were hard for hosting providers to match—and the fight was on. Public cloud platforms added to the competitive threat by extending the SaaS basics to hosted applications. Now, application outsourcing, the core business of hosting, and even IT managing the show within an organization, are all under threat. Based on this scenario, you’d think the roles of IT departments and MSPs are diminishing as application management shifts to the SaaS-based model; however, that’s not accurate. In reality, these roles are just starting to evolve in the business lifecycle.

Not a binary shift or a free management ride

What we’re seeing is the evaporation of traditional hosting/on-premises and application outsourcing/internal options, as more applications move to SaaS or cloud offerings. This isn’t a binary shift—nor are we getting a free ride from a management and monitoring perspective. Look a little deeper, and you’ll find that a large percentage of critical workloads don’t fit onto cloud platforms, nor can they be replaced by SaaS. It’s just that the definition of an application is shifting.

Managing the complexity of a highly blended mix

Let’s consider, as an example, the common business process of e-commerce. This is not one application; it’s a workflow that blends together multiple applications including ERP, CRM, commerce, machine learning, mobile and web, content management and various other elements. Now, if that organization has been around for more than 10 years, there are likely some pretty customized elements in that mix. We’re constantly refining this type of workflow to stay competitive, improve customer experience and adapt, as end users’ experiences and needs evolve every day.

Now, add the complexity of containers such as Docker. New applications will be built more and more around this technology; migrations or upgrades of traditional applications will be forced down this path; and some legacy apps, built in the traditional model, are out of scope but still need to be part of this framework. Hence, the whole DevOps concept becomes crucial: the development team and the IT team need to work more closely than ever, as cloud, containers, bots, virtual reality, holograms and 3D printers continue to drive innovation and disruption in both technology and user experience.

So, given the changes we are seeing in applications and the shift to cloud, here’s the “best-guess” end result: a highly blended mix where certain elements are shifted to SaaS, others are moved to cloud platforms, and a third group can’t make the move but must still remain part of the picture. If we look a little further into the next evolution of the MSP, it will evolve even more. As the management of infrastructure and applications in turn becomes commoditized, businesses will be looking for an MSP that can provide business value through insights from integrated or siloed data, to make smarter decisions. Hence you suddenly see big data, IoT, analytics and machine learning taking centre stage.

Are you set up to handle and thrive in this digital disruption?

In part three of our three-part Innovative Disruption series, we’ll discuss the future of the MSP in this blended environment—in particular, managing the cloud handshake.

Read part one - Innovative Disruption: Applications and Cloud Are Creating Waves - Surf or Get Left Behind
Read part three - Innovative Disruption: Surviving and Thriving By Managing the ‘Cloud Handshake’

Innovative Disruption: Applications and Cloud Are Creating Waves – Surf or Get Left Behind

May 4, 2016
(Part one in a three-part series)

Call it digital disruption, call it innovative disruption, or call it an algorithmic economy—but it’s clear that innovation continues to quicken at a breathtaking clip, with technology a key accelerator in the equation. An increasing number of us find little difference between work and personal life; we’re constantly connected, and that’s where the digital world is headed. Businesses and consumers are getting closer all the time, driven in part by their use of applications and cloud.

This is creating chaos not only for technology innovators, but also for the partners that provide solutions built on their platforms, and for the end customers who use them. It’s an exciting time for everyone involved in the shift to modern applications, as cloud has drastically improved the speed to market of new applications. As this shift happens, the role of IT, or of a partner who provides or manages IT, has never been more important, pushing them to deploy applications and provide end-user access even more quickly, so businesses run more efficiently.

The innovation cycle: automation of manual work

The innovation cycle has a tendency to cause disruption within organizations, and even among technology providers. Specifically:
  • Innovation leads to the automation of what was previously manual work;
  • When innovation eases off the throttle, the technology becomes commoditized and the manual tendencies return;
  • Then the innovation cycle begins again.
Similarly:
  • During a time of innovation, generalists are often preferred over specialists, because they tend to be open and adapt better to change;
  • During lulls in the innovation cycle, you tend to rely on your specialists, while your generalists choose a niche to specialize in.
As companies move to the cloud, some people, often those less open to change, may see only jobs being lost. In reality, it’s a stark reminder to the IT industry of how innovation starts to automate manual work. And cloud is just like any other powerful innovative technology: its success depends on who is using it. Those tasked with its rollout and implementation need to understand and be open to its advantages and strengths. Otherwise, you won’t leverage cloud to its full potential.

Let’s take a look at Exchange as an example

Take this typical scenario. Exchange administrators have traditionally spent the lion’s share of their time on managing the Exchange environment and keeping the lights on. Now, with their employer’s move to the cloud, they’re suddenly:
  • Providing insights and analytics on usage;
  • Empowering the business to make smarter decisions;
  • Improving collaboration;
  • Securing data;
  • Providing enhanced security; and
  • Ultimately improving the end-user experience.
They’re able to transform their role from an Exchange administrator focused on maintenance into a business analyst, spending more time enabling the business by using cloud-based automation for the more manual tasks. Similarly, on the infrastructure side, IT has traditionally spent a lot of time building out servers, storage and compute capabilities. Transformation to the cloud means that same IT professional now devotes much of his or her time to monitoring, optimizing, and addressing business problems, instead of building out capacity. The bottom line is that cloud is making IT more proactive in better enabling the business.

Adapting and embracing the possibilities

This tighter integration between IT and business is encouraging these groups to learn what the other is doing, and to make smarter business decisions as a team. Clearly, the role of IT is not going away; instead, it’s evolving. It’s really about how you adapt to the change and embrace the possibilities that innovation brings to the business. Selling in IT has transformed from traditional product sales (specialists) to solution sales (generalists) during innovation cycles, to better understand business needs and design solutions that automate manual work and empower business agility and market innovation. Similarly, enterprise digital transformation is shifting talent and recruitment towards contemporary business cultures, fluidity, cross-functional learning, intelligent behavior and empowering motivated talent.

Is your business transforming quickly enough?  In a digital age, the winners take all.

52% of the Fortune 500 firms from the year 2000 are gone! - Gartner

In part two of our three-part Innovative Disruption series, we’ll discuss the impact SaaS has on the evolving roles of the MSP and IT, and managing the complexity of a blended environment.

Read part two: Innovative Disruption: The Software Evolution, Blended Roles and Managing Hybrid IT
Read part three: Innovative Disruption: Surviving and Thriving By Managing the ‘Cloud Handshake’

What You Should Know If You Missed Cisco Partner Summit 2016

March 28, 2016
We’re freshly back from another year at Cisco’s annual Partner Summit, and this year did not disappoint, with lots of announcements and some fantastic direction for 2016. Here are the major announcements and directions that you should know about.

1. Digital Network Architecture (DNA) automates network operations

  • As the next level of Software Defined Networking (SDN), Cisco’s newly announced Digital Network Architecture (DNA) is an open, software-driven architecture that automates your enterprise network operations.
    • The programmable architecture frees your IT staff from time consuming, repetitive network configuration tasks.
    • Turn up network functions with a few clicks and serve customers in engaging new ways as soon as you think of them.
    • All while lowering costs and reducing risk.
    • See a live online demo of this new programmable architecture.

2. CliQr acquisition provides hybrid cloud orchestration

  • On March 1st, Cisco acquired CliQr for $260 million. As organizations move towards hybrid cloud architectures, the ability to manage software and applications across a multi-cloud environment becomes increasingly complicated, and CliQr helps manage that complexity.
  • With the acquisition, we can expect to see this application-level management complement Cisco’s existing InterCloud Fabric strategy to help clients manage their multi-cloud environments.
  • The CliQr team is expected to join the Cisco Insieme team to be integrated into Cisco’s overall SDN strategy.

3. HyperFlex for a single point of management

  • Potentially one of the biggest announcements this year was the introduction of Cisco’s hyper-converged platform, “HyperFlex”. Hyper-converged is a huge growth area where we are seeing clients take advantage of fully integrated compute, storage and networking platforms.
  • Cisco’s entry into this space builds on their already strong UCS platform and provides a single point of management across the entire hardware and software solution.

4. Security – end-to-end coverage

  • Since the acquisition of Sourcefire, we have seen Cisco put a much larger focus on end-to-end security. This year, more than ever, Cisco sees the opportunity to take advantage of their security portfolio. Although Cisco was late to the game with their Next-Gen Firewall solution, they have come a long way and are now the #1 preferred security vendor selected by CIOs.
  • Cisco’s strategy of covering the entire attack continuum (before/during/after) is a major focus for the organization and seems to be hitting home with customers who have previously struggled with too many products/vendors covering this space.

5. Spark offers both cloud and hybrid options

  • As mentioned in my Collaboration Summit blog, Spark will continue to be a major focus for the collaboration team. Spark Message, Spark Meeting and Spark Call are all available as either a cloud offering or a hybrid cloud deployment. The hybrid option allows customers to utilize their existing on-premises voice/video infrastructure to integrate fully with the Spark cloud and take advantage of all the great new features being released with Cisco Spark.
  • To review, the 3 primary solutions with Cisco Spark include:
    • Spark Messaging – this powerful collaboration tool provides persistent chat, document sharing and escalation to voice and video calls. It has been a fantastic tool for collaborating on a project, with a single place to keep all communication. For me, it has been a great way to reduce email sprawl and keep all communications under a single platform.
    • Spark Meeting – utilizing the benefits of the cloud, Spark Meeting seamlessly integrates the boardroom and your mobile device.
    • Spark Calling – Cisco Unified Communications and Hosted Collaboration Service (HCS) are a fantastic solution for our clients with 200+ users, but a hosted cloud solution for the SMB market has been challenging. Spark Calling is going to completely revitalize the SMB telephony environment. The ability to turn up a new phone quickly and easily, simply by plugging it in out-of-the-box, and immediately offer enterprise-class services at very low cost creates an excellent opportunity for our SMB clients.
  • Hybrid deployments are available now for some components, with most available sometime in CY16 for both Canada and the US.
  • Start using Spark Messaging for free today: join at https://web.ciscospark.com and invite me to a room to continue the conversation.

6. Cisco ONE Software and ELA

  • Cisco continues to revamp the way that they handle software and licensing. For all customers, the Cisco ONE licensing platform with the Smart Licensing portal can dramatically change the way you purchase and manage licenses. Software no longer needs to be tied to a specific piece of hardware and re-purchased when the device needs to be refreshed.
  • Perpetual licensing and software bundling available with Cisco ONE can dramatically reduce TCO. Cisco ONE is available for the Data Center, WAN and Access layer portfolios.
  • Cisco’s software Enterprise License Agreements (ELAs) are seeing huge success with mid- to large-size customers, with offerings for both Collaboration and Security. The Cisco ELA program provides an “all you can eat” offering which can dramatically reduce licensing costs over the course of the contract. With one customer, Long View saw an instant cost savings of 20%, with no additional costs for 3 years even with organic growth (a rough illustration with hypothetical numbers follows this list).
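
To see how that kind of agreement can pay off, here is a rough sketch of the math in Python. Every number is hypothetical, chosen only to mirror the 20% discount and three-year term mentioned above; this is not Cisco pricing.

    # Rough illustration of ELA vs. a-la-carte licensing; all figures hypothetical.
    users = 5000
    per_user_annual = 100.0                      # a-la-carte price per user per year
    ela_annual = per_user_annual * users * 0.80  # ELA quoted at an instant 20% discount
    growth = 0.10                                # 10% organic user growth per year

    # A la carte: you pay for every user you add; ELA: the price is fixed for the term.
    a_la_carte_total = sum(per_user_annual * users * (1 + growth) ** yr for yr in range(3))
    ela_total = ela_annual * 3

    print(f"A la carte over 3 years: ${a_la_carte_total:,.0f}")  # $1,655,000
    print(f"ELA over 3 years:        ${ela_total:,.0f}")         # $1,200,000

Under these made-up assumptions, the ELA comes in roughly 27% cheaper over the term, and the gap widens the faster the organization grows.
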
Another busy year for Partner Summit with lots of information. If you’d like to hear more or continue the conversation, please join me in my Spark room and I’d be happy to meet face to face or via video.

Can Cloud Clear Software Licensing Complexity?

February 18, 2016

It’s time to stop making transactional decisions around technology and start having more strategic conversations that include your cloud solution architects and enterprise technology strategists. There is a balance to strike between on-premises and cloud/SaaS models, as part of a strategic conversation around the complexity in product offerings, licensing and consumption models, and the business value that each brings. It’s really about looking at all the angles and having an open mind to new ways of doing things.

Cloud challenges the norm.

The transformation to cloud can be a challenge because it’s new, not everyone is aware of the options, and it’s a totally different way to consume. With more technology providers offering both cloud and on-premises options for their products, more due diligence is required to understand the differences between the models and their implications for the business around cost, agility, and flexibility, to name just a few. Making decisions around cloud requires understanding the business requirements and mapping those to the available solutions.

Can cloud reduce complex licensing headaches?

Vendors have come to realize that complex licensing schemes do not retain customers. More and more customers are looking for technology combined with ease of use, and if it’s not provided, they are quick to look at alternatives with less onerous investments. That is why we are seeing competitive innovation in the industry, driving toward services-oriented products that solve business problems and overcome challenges. Manufacturers are aware, and hence you see the trend of license simplification. I recommend customers look at innovation and consider cloud options strategically, especially as vendors invest in cloud and create programs to incent customers to migrate to the cloud.
  • Take Oracle as an example. Oracle’s on-premises licensing model is complicated if you don’t use their hardware or hypervisor. On the other hand, Oracle licensing in the cloud, whether on AWS or Azure, is per VM, with freedom of hypervisor and a reduction in complexity. Oracle’s savvy licensing team has adapted to let customers license in an easier way in the cloud, while paying only for what is consumed, which is great for your failover and disaster recovery instances.
  • Or Windows. Who would have thought that Microsoft’s Windows operating system would move to a user-based model, with no more Software Assurance rules like MDOP, VDA, or per-device licensing? Microsoft has innovated to stay competitive in the market, while also simplifying licensing as much as possible. We also see this with traditional Office device licenses moving to Office 365 user-based subscriptions as an option.
  • Who started the trend? Adobe moved first to a subscription-only offering, while Microsoft, Oracle and others still provide both perpetual and subscription offerings, allowing for a slower transition. Moving to a subscription model doesn’t just help customers; it also helps manufacturers by reducing the need for compliance audits that can sometimes hurt customer relations.

So what can cloud offer?

  • Know what you pay for: in a cloud model, you know exactly what you are using and what you are paying for, so it’s easier to be compliant with vendor licensing agreements. This keeps audits at bay and lets your compliance managers sleep well at night. Ask the IT operations team managing an on-premises datacenter how painful it is to manage the different licensing schemes, maintain licensing compliance, and keep up with server sprawl.
  • Reduce shadow IT: if cloud subscriptions are owned within the organization, taking charge of cloud purchases also reduces employees’ personal use of credit cards. If not, it’s too easy for individuals to go around IT with Azure and AWS when cloud can answer their needs quickly and simply.
  • Clearer measurement: with cloud purchases centralized, IT is also in a better position to measure the environment and see what’s happening in which department, versus today’s sometimes “cloudy” IT sprawl. Show me a CFO who wouldn’t love to see a pay-per-use model where there is no wastage on unused subscriptions/capacity (a simple illustration follows this list).
  • Innovation and support: by thinking big picture, you can focus on transforming internal customers smoothly to cloud when it is the right model, and provide the necessary architecture and managed services to support them, either internally or with a partner that provides managed support.
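
As a simple illustration of the pay-per-use point, consider a bursty workload sketched in Python. The rates and usage figures below are invented for illustration; they are not any vendor’s price list.

    # Hypothetical comparison: provision-for-peak vs. pay-per-use.
    monthly_usage_hours = [200, 250, 180, 720, 300, 220]  # bursty workload, one peak month
    flat_monthly_cost = 1000.0                            # pay for peak capacity every month
    hourly_rate = 1.25                                    # pay only for hours actually used

    flat_total = flat_monthly_cost * len(monthly_usage_hours)
    usage_total = sum(hours * hourly_rate for hours in monthly_usage_hours)

    print(f"Provisioned for peak: ${flat_total:,.2f}")    # $6,000.00
    print(f"Pay-per-use:          ${usage_total:,.2f}")   # $2,337.50

The $3,662.50 difference is exactly the wastage on unused capacity that the pay-per-use model eliminates.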

How to manage the speed of innovation?

Cloud does simplify some traditional IT problems, but it also adds a new set of challenges. Changing traditional mindsets and adopting new approaches is just the start. As cloud helps your technology keep pace with the speed of innovation in today’s market, how will you handle the operational changes it drives? How will you support your end users with constant updates and upgrades, instead of a more traditional long-term rollout process? Because vendors are innovating more and more quickly, you need to be ready to support your users, applications and infrastructure, whether you do it yourself or ensure your partner is capable of supporting the speed of change across all three.

In the future not too far away…

I think that in the not-too-distant future we will go to manufacturers and choose a rate card and plan, much as we buy a wireless plan from a mobile provider today, based on the services we need and not just on product licensing schemes. Technology vendors know they can no longer stick with traditional product and licensing approaches built on audits and complex licensing schemes. We are already at the point where the focus has shifted to product capabilities, efficiency and simplification so that IT can help drive business innovation. Today’s cloud marketplace accelerates innovation even faster, with integrated 3rd-party offerings built on Azure and AWS expanding the list of possibilities for customers. Make sure you’re taking advantage of the speed of innovation to reduce complexity while enabling business value!

Learn more about hybrid cloud and end user experience.


Securing the End User

January 28, 2016

What does the future of security really look like?  Here are some security facts to keep in mind throughout 2016 and beyond:

End Point Security Protection

Security breaches cost an average of $3.79 million, and in the case of enterprises the cost can be extraordinary (Target’s breach cost $1 billion), according to the Ponemon Institute, a security research center, in conjunction with IBM. Unfortunately, breaches are also increasingly common. In healthcare, for example, two recent articles claim that 1 in 3 medical records in the US will be breached this year and that 45% of those breaches will be caused by lost or stolen laptops. That works out to 48 million records breached from the end point and $7.39 billion in costs to healthcare (at a cost of $154 per record). Bring-Your-Own-Device (BYOD) initiatives (whether formal or driven by users) have increased the risk of losing data through the end point. Users commonly connect to corporate data through multiple devices, only one or two of which are managed by IT.
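
For the curious, the arithmetic behind those healthcare figures checks out. The total-records figure below is our assumption (roughly one record per US resident); the 1-in-3, 45% and $154 numbers come from the articles cited above.

    # Reproducing the healthcare breach-cost arithmetic quoted above.
    us_records = 320_000_000   # ASSUMPTION: roughly one medical record per US resident
    breached = us_records / 3  # "1 in 3 medical records will be breached"
    endpoint = breached * 0.45 # "45% caused by lost or stolen laptops"
    cost_per_record = 154      # Ponemon per-record cost estimate

    print(f"End-point records breached: {endpoint / 1e6:.0f} million")                    # 48 million
    print(f"Cost to healthcare:         ${endpoint * cost_per_record / 1e9:.2f} billion") # $7.39 billion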

Microsoft Enterprise Mobility Suite (EMS)

Microsoft has addressed the issue by enabling security on the end point, application, and content level with the launch of Enterprise Mobility Suite and its bundled products. This unified solution limits access to content in several important ways and watches for anomalous behavior that could signal a breach.

Level 1 - Securing the device

Enterprise Mobility Suite includes mobile device management through Microsoft Intune, which delivers application and device management through integration with System Center 2012 Configuration Manager, all via a single management console.

Intune also provides comprehensive settings management for mobile devices, including remote actions such as passcode reset, device lock, and data encryption. Intune can even remove corporate data and applications when a device is unenrolled, non-compliant, lost, stolen, or retired from use.

Level 2 – Securing the application

Securing not just corporate applications, but also apps provided by 3rd-party SaaS companies, is important. A recent study showed business partners were responsible for 22% of security breaches.

EMS provides unified identity with single sign-on for thousands of popular apps like box.com, mailchimp.com, and salesforce.com. The user experience improves with automatic authentication using the Active Directory password, and IT gets better control of data by enabling or disabling access.

Level 3 – Actions within an application

Microsoft EMS can also limit user access to content within an application. For example, using Azure Rights Management, companies can restrict access to documents based on their permission levels within SharePoint, regardless of where the document is stored. Without active user credentials, the document remains safely encrypted and inaccessible, even if it has been forwarded and saved on a local hard drive.

Administrators can also limit app functionality like copy, cut, paste, and save, within the managed app ecosystem. They can also prevent users from copying corporate information into their personal storage.

Microsoft’s Data Loss Prevention (DLP) solution, well known in Exchange, is now available with SharePoint too. DLP watches for sensitive information like Social Security or credit card numbers and prevents their transmission by redirecting users and notifying IT.
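
To make the idea concrete, here is a toy sketch of the kind of pattern matching a DLP engine performs. This is our illustration, not Microsoft’s implementation; real DLP adds context analysis, confidence scoring, and policy actions on top.

    import re

    # Toy DLP-style scanner: flags likely SSNs and credit card numbers in text.
    SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_ok(candidate):
        """Luhn checksum, used to weed out digit runs that aren't real card numbers."""
        digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
        total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                    for i, d in enumerate(digits))
        return total % 10 == 0

    def scan(text):
        findings = [("SSN", m.group()) for m in SSN_RE.finditer(text)]
        findings += [("Card", m.group()) for m in CARD_RE.finditer(text) if luhn_ok(m.group())]
        return findings

    print(scan("Invoice for card 4111 1111 1111 1111, employee 123-45-6789."))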

Securing the End User is an important part of any information security strategy, and it goes far beyond simply training and retraining on policies.  These Microsoft tools help IT enforce best practices and remain in control of sensitive corporate data.


Data Visualization – Power BI, $30 Oil and the Switchboard Operator

January 26, 2016

With oil at $30 a barrel, every marginal penny of cost counts. You can't work any HARDER. You simply MUST work smarter. Every oil and gas organization on the planet is in a race to explore reservoirs of data (pun intended) in search of cost savings, some for means of survival.

History repeats itself, how to scale?

At the turn of the last century, when a burgeoning industrial economy was booming, Alexander Graham Bell commercialized a new technology dubbed the telephone. Now the telephone had great promise. To talk to anyone, you simply picked up a handset and asked the switchboard operator to be connected to anyone that was connected to the same switchboard. An unknown futurist said "Someday soon, you'll be able to instantly speak with anyone in the world." The unknown pundits, who were also good at math, retorted "Are you kidding? Even if we could connect all the switchboards, do you know how many switchboard operators the world would need?"

Fast forward a hundred years or so, to infinite connections.

"Someday soon, you'll be able to instantly connect to and visualize any data source in the world." In much more learned terms, the modern-day futurists at Gartner say that we're there, that 2015 was a watershed year.

Traditional BI and analytic models are being disrupted as the balance of power shifts from IT to the business.

The rise of data discovery, access to multistructured data, data preparation tools and smart capabilities will further democratize access to analytics …

No report writer required!

To make a call, all you need is a phone, a telco account, and a phone number; no switchboard operator. To run self-service BI, all you need is data, access credentials, and a self-service BI tool; no BI report writer. There are many self-service BI platforms that proclaim dominance in the business intelligence market. If I were a betting man, my money's on Microsoft’s Power BI.

Innovate or be superseded…

Firstly, and most importantly, Microsoft is setting a new pace of innovation. Download the recently announced Power BI Desktop and you can expect updates every 30 to 45 days. Peruse the growing list of commercial data sources in the Azure Marketplace and you can expect new sources every visit.

Secondly, we can't ignore Microsoft's muscle when they flex it in the personal productivity space. The patterns are way too familiar. WordPerfect was superseded by Word. Harvard Graphics spawned the launch of PowerPoint. VisiCalc and Lotus were replaced by Excel. Tools like FoxPro and Progress were consolidated by Access. I predict the same fate for data visualization tools like Tableau, Spotfire, Cognos, and BusinessObjects, in favour of Power BI. So grab your well files, your facility construction costs, your land files, some regulatory data, some commercial oil and gas data, a forward price curve, and explore. It makes good cents!
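
If you want a feel for that kind of self-serve exploration without standing up a BI tool, the same spirit can be sketched in a few lines of Python. The file and column names below are hypothetical placeholders for your own data, not a real dataset.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Self-serve exploration in the spirit described above; the file and column
    # names are hypothetical placeholders for your own well or pricing data.
    curve = pd.read_csv("forward_price_curve.csv", parse_dates=["delivery_month"])

    curve.plot(x="delivery_month", y="price_usd", title="Forward price curve")
    plt.ylabel("USD per barrel")
    plt.tight_layout()
    plt.show()

No report writer required: point the tool at the data, pick the fields, and look.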

Why ITO Value Realization Fails

January 8, 2016

We often hear about IT Outsourcing (ITO) agreements that failed to meet the expected business outcomes. In fact, failed provider-client relationships usually tend to get more press than successful ones, and this unfortunately promotes a belief among some people that outsourcing does not work. The reality is that failure to realize anticipated value from an ITO agreement is often the result of improperly set expectations and requirements between the provider and the customer, and poor governance throughout the life of the contract. What follows are some of the common causes of ITO value realization failure and how they can be addressed.

What to outsource?

Only outsource what can justifiably and quantifiably be done better and/or at a lower cost by a service provider. The idea is not to arbitrarily get rid of all IT functions but rather, to refocus IT resources on activities that matter to the business.

Not just about cost reduction

Some customers’ ITO plans put too much emphasis on IT cost reduction and not enough on business outcomes. Cost of service is definitely an important factor in outsourcing, but when it becomes the sole focus, organizations often end up with the “same mess for less”. In addition, true IT costs are not always fully taken into consideration: they are often limited to direct operating costs, leaving items such as the cost of errors and the cost of controls unaccounted for. This skews the Total Cost of Ownership for IT, and savings fall short of expectations.
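
Here is a toy example of how that skew works. Every figure below is hypothetical; the only point is that measuring savings against direct costs alone overstates them.

    # Hypothetical TCO illustration; all figures invented.
    direct_costs = 1_000_000    # visible operating costs (staff, hardware, licenses)
    cost_of_errors = 250_000    # rework, outages, incident response
    cost_of_controls = 150_000  # audits, compliance, oversight

    provider_quote = 800_000    # provider runs the same scope "20% cheaper"

    naive_savings = direct_costs - provider_quote
    true_tco = direct_costs + cost_of_errors + cost_of_controls

    print(f"Savings vs. direct costs only: {naive_savings / direct_costs:.0%}")  # 20%
    print(f"Savings vs. true TCO:          {naive_savings / true_tco:.0%}")      # 14%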

Measurable Performance

Contracts are sometimes filled with terms or service attributes that cannot easily be measured or quantified. While terms like “best of breed”, “enterprise class”, “strategic” and “innovative” all sound good, they are also very subjective and difficult to quantify. Make sure service agreements are based on measurable performance items.

Realistic Service Levels

How do expected provider service levels compare to current service levels? Internal service level agreements (SLAs) or objectives are not always clearly defined prior to outsourcing, and are often negotiated without formal requirements and impact analysis. Negotiated service levels must be attainable and must also measure what matters to the business. Much like business continuity planning, the criticality and resumption of business processes should not be defined without a proper impact analysis.

Risk

One of the advantages of ITO is the transfer of some of the operational IT risk to a third party. However, that does not mean all risks are eliminated, and it does not relieve a company from due diligence around risk identification, assessment and mitigation. Just as car insurance is no remedy for distracted driving, organizations must keep risk management a priority. Outsourcing can bring other risks, such as loss of IP, non-performance, and provider business failure, that must still be managed and mitigated.

Governance

Although listed last, governance, or more specifically the absence of it, is the most frequent cause of ITO agreement failure. Contracts are in place for reference and should be used as a last resort for conflicts that cannot be resolved otherwise. The old adage "an ounce of prevention is worth a pound of cure" really applies here. Strong and effective governance from both the client and provider side will prevent issues from escalating to a point that can jeopardize the relationship irreparably. Governance must be applied in a spirit of partnership and in the best interest of both parties.

None of the items discussed above is overly complex, and with the right amount of planning, all of them can be addressed. If a single item should be highlighted as the most important to remember, it is that agreements cannot be successful if they lean too far in favor of one party. Both client and service provider must get value from the relationship for sustained success.

Learn more about the Economics of IT.
