Uncovering the Next Infrastructure Blind Spot: SSL

By: Ananda Rajagopal, Vice President of Product Management

Visibility: the Merriam-Webster dictionary defines it as the “capability of affording an unobstructed view”. In the world of business, visibility delivers relevant insight, which can be the difference between just-in-time action and a missed opportunity. This is why traffic-based visibility powers the business of NOW! Yet the nature of traffic visibility is such that underlying shifts in payload types and patterns require solutions that can readily adapt to these shifts and provide an unobstructed view of traffic to the administrator.

Many security and network administrators are facing up to an underlying shift in enterprise traffic: a growing portion of it is encrypted within SSL. According to an independent study by NSS Labs, anywhere from 25% to 35% of enterprise traffic is encrypted with SSL, and that share grows every month. In some verticals, the number is already higher. By itself, this statistic would not cause a flutter, but it is exacerbated by other findings on the state of today’s security and performance monitoring infrastructure:

  • Although inline devices such as ADCs and firewalls have integrated SSL support, out-of-band monitoring and security tools often lack the ability to access decrypted traffic to perform security and performance analysis. This allows SSL traffic to fly under the radar, creating a potential security loophole (a simple classification sketch follows this list).
  • Consequently, performance management tools and many out-of-band security tools are either completely blind to SSL traffic or get overloaded if they decrypt SSL themselves. Many of our customers have pointed to a performance drop of almost 80% when a tool has to decrypt SSL.
  • Many security administrators are moving to larger cipher key sizes for increased security today. A study by NSS Labs noted a performance degradation of 81% in existing SSL architectures with these larger keys.
  • Hackers and cybercriminals are increasingly using SSL sessions to dodge network security defenses. Indeed, a Dec. 9, 2013 Gartner report titled “Security Leaders Must Address Threats From Rising SSL Traffic” by Jeremy D’Hoinne and Adam Hils states: “Gartner believes that, in 2017, more than half of the network attacks targeting enterprises will use encrypted traffic to bypass controls, up from less than 5% today”.
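To make the blind spot concrete: even when a passive, out-of-band tool cannot read SSL traffic, it can at least recognize and measure it by inspecting the TLS record header. The sketch below is a generic illustration of that classification step, not anything from the NSS Labs or Gartner material; the function names and heuristics are our own.

```python
# A minimal, illustrative heuristic: classify a TCP payload as likely TLS by
# inspecting the 5-byte TLS record header. Names and thresholds are our own.

TLS_CONTENT_TYPES = {0x14, 0x15, 0x16, 0x17}  # ChangeCipherSpec..ApplicationData

def looks_like_tls(payload: bytes) -> bool:
    """Heuristic check for a TLS record at the start of a TCP payload."""
    if len(payload) < 5:
        return False
    content_type, major, minor = payload[0], payload[1], payload[2]
    record_len = int.from_bytes(payload[3:5], "big")
    return (
        content_type in TLS_CONTENT_TYPES
        and major == 0x03 and minor <= 0x03   # SSL 3.0 through TLS 1.2
        and 0 < record_len <= 16384 + 2048    # max ciphertext record size
    )

def is_client_hello(payload: bytes) -> bool:
    """True if the record is a handshake record opening with a ClientHello."""
    return (looks_like_tls(payload)
            and payload[0] == 0x16            # handshake record
            and len(payload) > 5
            and payload[5] == 0x01)           # handshake type: ClientHello
```

Classification of this kind tells an administrator how much traffic is invisible; it does nothing to restore visibility into the payload itself, which is precisely the gap addressed below.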

In short, the very technology that was supposed to ensure confidentiality is now being exploited by nefarious actors. These are precisely the reasons that have driven us at Gigamon to come up with the next innovation in visibility—the industry’s first and only visibility solution with integrated SSL support. With built-in hardware to decrypt SSL sessions at high performance, this new capability provides visibility into a critical blind spot facing administrators today. It is not without reason that analysts, customers and our technology ecosystem partners who have been privy to this development are all agog with excitement!

This new capability is yet another proof point of what GigaSMART can offer to IT and security administrators. GigaSMART is a platform that allows advanced traffic intelligence to be extracted via various applications that can be dynamically enabled and run in combination on a common underlying platform. Contrast this with other visibility products that offer point features to address point problems with point hardware—over time, both capital and operational costs of managing point products rapidly add up until they can no longer offer visibility to the next blind spot the administrator seeks to uncover. Gigamon’s GigaSMART technology solves visibility challenges holistically with a platform-based architectural approach. If you are a Gigamon customer who has already invested in GigaSMART on any of the GigaVUE-H Series platforms, you do not need any new hardware to run this new SSL application! The benefits of this platform-based approach are considerable. Here are three examples related to SSL decryption:

  • You can service chain multiple GigaSMART applications together (a conceptual sketch follows this list). Interested in sending encrypted traffic at a remote site to a centrally located data loss prevention appliance? Not a problem. You can run both the tunneling and SSL decryption applications on GigaSMART in combination. Want to monitor secure VM-VM traffic between specific enterprise applications and generate NetFlow records on that traffic? Amen! You can combine the tunneling, SSL decryption and NetFlow generation applications on GigaSMART to generate NetFlow records on encrypted traffic. Have a concern about data misuse after decryption? You can combine SSL decryption with the packet masking/slicing applications on GigaSMART to support compliance with regulatory and/or organizational policies.
  • By combining SSL decryption with clustering in a Visibility Fabric, traffic from low-cost edge ports in the visibility infrastructure is automatically routed to the node in the cluster that has SSL decryption capability. This eliminates the need for SSL decryption solutions to be distributed at multiple locations, saving cost and ensuring better security in key management.
  • By delivering ‘Decryption as a Service’ via the Gigamon Visibility Fabric implemented with GigaSMART, administrators can increase the overall performance of their tooling infrastructure. The SSL traffic is decrypted once and then delivered to every tool that needs it, such as IDS, DLP, anti-malware, and even APM and other non-security tools.
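To illustrate the service chaining described in the first bullet, here is a conceptual sketch in which each application is modeled as a transform over a packet stream and a chain is simply their composition. It is a sketch of the idea only, not Gigamon’s actual API, and every stage below is a hypothetical placeholder.

```python
# Conceptual sketch of service chaining: each GigaSMART-style application is
# modeled as a transform over a packet stream, and a chain is simply their
# composition. This models the idea only; it is not Gigamon's actual API.
from typing import Callable, Iterable, Iterator

Packet = bytes
App = Callable[[Iterable[Packet]], Iterator[Packet]]

def chain(*apps: App) -> App:
    """Compose applications so packets flow through each one in order."""
    def run(packets: Iterable[Packet]) -> Iterator[Packet]:
        for app in apps:
            packets = app(packets)
        return iter(packets)
    return run

def detunnel(packets):
    # Placeholder: strip a fixed-size outer encapsulation header.
    return (p[8:] for p in packets)

def decrypt_ssl(packets):
    # Placeholder: in hardware, decrypted payloads would be produced here.
    return (p for p in packets)

def mask_payload(packets):
    # Placeholder: slice each packet to its first 64 bytes for compliance.
    return (p[:64] for p in packets)

# Remote-site DLP example from the first bullet: tunnel + decrypt + mask.
pipeline = chain(detunnel, decrypt_ssl, mask_payload)
for out in pipeline([b"\x00" * 128]):
    pass  # deliver conditioned packets to the tool
```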

For those who think that visibility can be obtained through mere “tap aggregation”, think again. Visibility must provide insight into infrastructure blind spots. Visibility is about extracting traffic intelligence to increase the performance of security and operational tools connected to the visibility infrastructure so that administrators can get the right insight. The nature of visibility is such that new challenges will arise tomorrow that today’s visibility solution should be able to adapt to—something that a repurposed Ethernet switch is simply not designed for. After all, isn’t visibility about offering an “unobstructed view”?

For more information including example use cases, visit our webpage on SSL Visibility.

De-risk New Service Provider Technology Deployments: Addressing The Triple Challenge of Network Transformation

By: Andy Huckridge, Director of Service Provider Solutions, Gigamon

Operators are facing a slew of new technologies to roll out, but this time around there’s a difference. In the past, operators have been able to deploy new technologies in series, that is, one after another. The current crop of new technologies is interdependent and therefore linked: deploying one new technology forces the deployment of another, and so on, until all three new technologies are deployed. Gigamon has developed a strategy to explain the three technologies and the interdependencies between them, highlight why this is a problem from the operator’s perspective, and explore ways to overcome the resulting resource crunch by deploying a unified tool rail approach in parallel with the new technology rollouts.

[Figure: linear diagram of the technology rollout]

What is the Triple Challenge & Why will it occur?

The Triple Challenge describes the operator’s predicament in deploying the next generation of technologies, which are made up of:

IP Voice

  1. VoLTE, IR.92 primarily for Mobile carriers; PLMN
  2. VoWiFi, applicable to Mobile, Fixed or Cable providers; PSTN, PLMN & MSO
  3. VoIMS, as the underlying technology to all modern day IP Voice implementations

High speed transport pipes

  1. Multiple bonded 10Gb
  2. 40Gb
  3. 100Gb

Network Virtualisation

  1. Traditional server virtualisation
  2. Software Defined Networking
  3. Network Functions Virtualisation

The operator is faced with a number of decisions to make:

  • Virtualize the core first, then deploy VoLTE as a virtualized network function, or deploy VoLTE as a legacy function in their traditional network since the network is already in place?
  • Upgrade the core beforehand due to worries about DiffServ, MPLS transmission or QoS issues in general, or wait until bandwidth requirements placed upon the 4G/LTE RAN force the move of voice services from the existing circuit switched 2G RAN?
  • Upgrade core routers in anticipation of rising RAN traffic, or virtualize the core routing network elements first?

It appears there is no correct answer to whether the horse or the cart goes first. Indeed, it seems there is even a virtual horse involved. With this level of uncertainty and all-encompassing network transformation, there is only one constant: the need to monitor the new technologies, and the network changes they involve, completely and comprehensively, to make sure the newly deployed technologies work the way the network equipment manufacturer (NEM) promised during the design phase and satisfy expectations once turned up and actually deployed. It is said that the person who is wrong is the person who can’t prove they are right. Monitoring packets-in-motion greatly helps to add the legitimacy required in the conversation between the operator and the NEM when deployments of new technology don’t quite go to plan.

[Figure: circular diagram] Here we see a graphical representation of the resource hit and of how one technology causes the in-parallel rollout of the other “Triple Challenge” technologies:

This is because the three technologies are interdependent: deploying any one of them will result in either of the other two also being deployed, catching the operator out with regard to the amount of resources needed to deploy the new Triple Challenge technologies.

Monitoring can play a great part in de-risking the deployment of these three new technologies: being able not only to find the needle in the haystack, but to find the real needle, as opposed to a fake needle, in a reduced number of haystacks.

Accelerating the Deployment of the Software-Defined Data Center (SDDC) Through Active Visibility

By: Shehzad Merchant, Chief Strategy Officer at Gigamon

The software-defined data center promises to be a very dynamic environment. Micro-segmentation, network virtualization and on-demand virtual machine (VM) migration all bring with them the promise of a highly agile, yet highly optimized data center. However, the move to the SDDC will not happen overnight, and migration strategies that help IT administrators make the transition are going to be a key element in realizing its full promise.

One of the key elements of making the move to the SDDC is the ability of IT to manage, monitor and secure the SDDC while continuing to leverage its investments in existing tools, as well as its human capital. This can be challenging at times. For example, network virtualization introduces the concepts of overlay and underlay networks. Overlay networks are typically virtual networks that provide tenant isolation and service isolation, in addition to the separation of location and identity. The physical network infrastructure typically serves as the underlay network.

Virtual overlays can be instantiated, extended and removed dynamically based on tenant subscriptions, service guarantees and VM mobility, all of which makes the underlying physical infrastructure more efficient. However, overlays also make the job of troubleshooting and monitoring more complex for several reasons. The dynamic nature of the overlays, the need to correlate and track traffic between the underlay and overlays, and the existing departmental silos between the server and network teams – particularly when the overlays are instantiated in the server/hypervisor domain but are routed over a physical underlay network – can all be barriers to rapid troubleshooting, performance optimization and security. Furthermore, overlays introduce multiple planes of traffic to monitor and secure.

Similarly, VM migration can now occur over a segmented Layer 3 underlay network through the use of network overlays, thereby maintaining session continuity. This allows the underlying physical infrastructure to scale out through Layer 3 segmentation. However, it also poses a challenge from the perspective of application performance management (APM) and security monitoring. This is because tools that depend on traffic visibility for analyzing application performance, or for managing and limiting the threat envelope, can encounter blind spots when VMs move to different locations and their traffic is no longer visible to the tool at its original location.
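As a concrete illustration of the correlation problem, the sketch below recovers the VXLAN Network Identifier (VNI) from an encapsulated packet; the VNI is the key a monitoring layer needs to tie overlay traffic back to a tenant or segment in the underlay. The header layout is per RFC 7348, while the helper name is our own.

```python
# Minimal sketch: recover the VXLAN Network Identifier (VNI) from a UDP
# payload so overlay traffic can be correlated with its physical underlay.
# The header layout is per RFC 7348; the helper name is illustrative.
VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

def parse_vxlan(udp_payload: bytes):
    """Return (vni, inner_frame), or None if this is not a VXLAN packet."""
    if len(udp_payload) < 8:
        return None
    if not udp_payload[0] & 0x08:  # the 'I' flag must be set for a valid VNI
        return None
    vni = int.from_bytes(udp_payload[4:7], "big")  # 24-bit overlay segment ID
    inner_frame = udp_payload[8:]                  # encapsulated Ethernet frame
    return vni, inner_frame
```

Once the VNI and inner frame are recovered, a visibility layer can filter or tag traffic per overlay segment and keep delivering it to the right tool even as VMs move.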

In order to better address the operational aspects of managing, troubleshooting, and securing the SDDC, Gigamon and VMware have recently announced a new partnership that promises to simplify, and indeed accelerate, the migration to the SDDC through solutions that work in an NSX environment. The solutions extend the ability of IT Operations and Management (ITOM) to monitor and manage NSX environments while continuing to leverage their investment in monitoring tools as the data center evolves to a software-defined model. Gigamon’s solutions will bring active, traffic-based visibility into dynamic virtual environments enabled by NSX by automating monitoring policies to actively track VMs in an NSX environment, thereby eliminating blind spots. The solution will bring visibility into east-west as well as north-south traffic flows in an NSX environment. In addition, Gigamon’s solutions will enable active traffic visibility into VXLAN-based overlays and physical underlays in the NSX environment, thereby simplifying and indeed adapting the traffic to the needs of the monitoring tools.

The role of traffic-based visibility is only increasing as applications are virtualized and infrastructure moves to a software-defined model. Looking at actual traffic provides a true assessment of real-time conditions, both from a performance monitoring perspective and from a security perspective. Gigamon, along with VMware, is committed to bringing solutions to the market that increase traffic visibility as the data center transforms into a more agile, software-defined data center.

Putting the (Whitebox) Cart Before the (SDN) Horse?

By: Shehzad Merchant, Chief Strategy Officer at Gigamon

The network today is more critical to the success of IT than ever before. As such, any disruptive change in networking has to be one that is assimilated into the production environment using a measured and carefully phased approach.

We are early in the SDN cycle and the deployment challenges associated with making SDN mainstream, including areas such as security, resiliency and scale, are still in the process of being ironed out.

One area that is still quite nascent when it comes to SDN is the area of monitoring, troubleshooting, and instrumentation. The ability for tools to monitor and manage SDN deployments is evolving, and with it, the ability to troubleshoot, manage, and respond to network issues in real time. All of this points to the fact that the success of SDN will largely depend on the quality of the implementations, the support model behind those implementations and the commitment of vendors to invest in quality, scalable and enterprise or carrier class SDN implementations.

However, we are seeing a big push towards cheaper bare metal and whitebox types of solutions leveraging merchant silicon in parallel to the interest in SDN. In isolation, these are both powerful and empowering trends; SDN for the operational simplicity it brings to the table, whitebox technology for driving down cost and opening up an eco-system of vendors.

But this is worrisome because, if history is any indicator, the adoption and maturing of a new disruptive technology or set of technologies, such as SDN, have typically preceded the commoditization of that technology. In other words, gaining a good understanding of a new technology, securing it, scaling it, and having the ability to manage and troubleshoot it all need to happen before the technology can be successfully commoditized.

Are we putting the whitebox cart before the SDN horse?

In my blog post on SDN Central, I explore why I think adopting whitebox networking and SDN concurrently seems like taking on too much risk.

For the full blog post, visit SDN Central.

Enabling Multi-tenancy within Enterprise IT Operations

By: Shehzad Merchant, Chief Strategy Officer at Gigamon

Multi-tenancy is a well understood term in cloud and carrier environments, where multiple customers serve as tenants over a common infrastructure. However, the notion of multi-tenancy, the associated SLAs for each tenant, and the ability to virtualize the underlying infrastructure to isolate individual tenants is quickly making its way into enterprise IT operations. Today, enterprise IT organizations have multiple departments, such as security, networking and applications, among others. Each department is increasingly being held to stringent requirements for ensuring network and application availability, responsiveness, and a good user experience. This is leading to an increasing reliance on various classes of tools that provide the ability to monitor and manage the applications, network, security, and user experience. Many of these tools leverage Gigamon’s Visibility Fabric™ for optimal delivery of traffic from across physical and virtual networks to these tools.

As departments are increasingly held to their own SLAs and KPIs, they need to be able to autonomously carve out traffic delivery to the departmental tools, as well as independently configure, manage, and adapt traffic flows to the departmental tools without impacting other departmental traffic flows. And they need to be able to do all of this over a common underlying Visibility Fabric, which leads to a model where the Visibility Fabric needs to support a true multi-tenant environment.

With the GigaVUE H Series 3.1 software release, Gigamon introduces several enhancements to the Visibility Fabric that enable multi-tenancy, allowing IT departments to optimize their workflows, reduce workflow provisioning times, and provide for both privacy and collaboration among departments when it comes to their monitoring infrastructure.

There are three key aspects to these new capabilities.

  1. Enabling departments to carve out their own slice of the Visibility Fabric using an intuitive Graphical User Interface (GUI) that supports the workflow required for multi-tenancy. Empowering multiple tenants to apportion the Visibility Fabric, each with their own access rights, sharing privileges and traffic flows, through a drag-and-drop, GUI-based model is a key step towards simplifying the provisioning model in a multi-tenant environment. Moving away from a CLI-based approach to a GUI-based approach is a key step towards improving workflows across departmental silos.
  2. Advancing Gigamon’s patented Flow Mapping® technology within the Visibility Fabric Nodes to support multi-tenancy whereby each tenant can carve out their own Flow Maps, ports, and actions, without impacting the traffic flows associated with other tenants. This is a significant architectural advancement that builds on Gigamon’s existing Flow Mapping technology to provision resources within the underlying visibility nodes based on the department’s (or tenant’s) requirements.
  3. Providing role-based access control (RBAC) so that departmental users can work both collaboratively and privately over the common underlying Visibility Fabric (a conceptual sketch of this model follows this list).
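Here is the conceptual sketch referenced above: tenant-scoped flow maps guarded by role-based access control. It models the multi-tenancy idea only and is not the actual Flow Mapping implementation; all class, role, and port names are hypothetical.

```python
# Conceptual sketch of tenant-scoped flow maps guarded by role-based access
# control. This models the multi-tenancy idea only; it is not the actual
# Flow Mapping implementation, and all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class FlowMap:
    name: str
    match: dict        # e.g. {"vlan": 100, "dst_port": 443}
    tool_ports: list   # where matching traffic is delivered

@dataclass
class Tenant:
    name: str
    roles: set = field(default_factory=set)        # e.g. {"configure", "view"}
    flow_maps: dict = field(default_factory=dict)  # this tenant's slice only

class VisibilityFabric:
    def add_flow_map(self, tenant: Tenant, fmap: FlowMap) -> None:
        # RBAC: only tenants with the configure role may change their slice,
        # and a tenant can never touch another tenant's maps.
        if "configure" not in tenant.roles:
            raise PermissionError(f"{tenant.name} may not configure flow maps")
        tenant.flow_maps[fmap.name] = fmap

fabric = VisibilityFabric()
security = Tenant("security", roles={"configure", "view"})
fabric.add_flow_map(security, FlowMap("tls", {"dst_port": 443}, ["tool-1"]))
```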

These capabilities represent a significant advancement in how IT operations can take advantage of the Visibility Fabric to rapidly deploy new tools, enable real time or near real time tuning of the Visibility Fabric and better meet their individual SLAs and KPIs. Taken together, these key capabilities empower IT organizations to provide Visibility as a Service to their various departments.

For more information, please see the Visibility as a Service Solutions Overview.

Mobile World Congress 2013 Recap: Big Visibility for Big Data & Turning Big Data into Manageable Data

By: Andy Huckridge, Director of Service Provider Solutions & SME

It was quite a week at Mobile World Congress. With a record attendance of around 72,000 people, this show continues to grow and grow, which made it the perfect place to showcase Gigamon’s technology aimed at solving the issue of big data for mobile service providers.

Subscribers continue to embrace mobile lifestyles and conduct work outside of the office, while applications become increasingly mobile. At the same time, more and more video is generated and consumed, which takes up orders of magnitude more bandwidth than legacy voice traffic.

In fact, in advance of the show, Cisco released its 6th annual Visual Networking Index (VNI) Global Mobile Data Traffic Forecast, indicating that mobile data traffic is going to increase 13-fold by 2017. Whether the growth lives up to this estimate remains to be seen, but it will probably come close. That’s a potentially scary statistic for mobile carriers.

We’ve heard of the problem of “Big Data” most often applied to enterprise storage and analytics, but it is clear that this is a major issue for these carriers as well, as analyst Zeus Kerravala writes in Network World. Big Data applications are increasing the volume of data in carriers’ pipes, posing a unique, but not insurmountable challenge.

Operators need a solution that won’t significantly increase expenses from tool costs as the sizes of the pipes, and the amount of data in those pipes, increase. Carriers are looking for ways to realistically keep their business costs in line with what their subscribers are willing to pay for a service, and to provide subscribers with the quality, uptime and reliability they expect. In order to do this, carriers need to understand the nature of the traffic flowing through the pipes, its ingress and egress points, and where resources need to be placed on the network to ensure that service-level agreements are met.

The answer is to change the way Big Data is monitored. First, carriers require a solution that combines volume, port-density and scale to connect the right analytical tools to the appropriate large or bonded pipes. Second, the data must be conditioned through advanced filtering and packet manipulation, which reduces the amount of data arriving at each tool, while ensuring that the data is formatted precisely for the tool’s consumption. This way, each tool is able to process more data without needing to parse the incoming stream and steal processor cycles from the more important task of data analysis. Gigamon currently offers all of these features and announced a combined solution before the start of the show.
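As a simplified illustration of that conditioning step, the sketch below filters a packet stream down to the ports a given tool cares about and slices each packet to a fixed snap length, so the tool processes headers rather than full payloads. The port numbers and snap length are illustrative examples, not Gigamon parameters.

```python
# Simplified illustration of conditioning: filter the stream down to the
# ports a tool cares about, then slice each packet to a fixed snap length so
# the tool receives headers rather than full payloads. Port numbers and the
# snap length below are illustrative examples, not Gigamon parameters.

def condition(packets, wanted_ports, snap_len=128):
    """Yield only packets for wanted ports, truncated to snap_len bytes."""
    for raw, dst_port in packets:      # (raw packet bytes, pre-parsed port)
        if dst_port in wanted_ports:   # advanced filtering
            yield raw[:snap_len]       # packet slicing / manipulation

# For example, a probe analyzing mobile signaling might only need GTP
# control- and user-plane traffic (UDP ports 2123 and 2152):
# conditioned = condition(capture, wanted_ports={2123, 2152})
```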

However, volume, port density and scale won’t be enough for mobile carriers in the future. Effective monitoring of Big Data calls for reducing the amount of traffic in a large pipe to make it suitable for connection to an existing 1G or 10G tool. Gigamon announced the development of this concept during the opening days of the show. Using this method, the connected tools continue to see a representative view of the traffic in the larger pipe, in a session-aware and stateful manner. The solution thereby does not merely filter traffic: it reduces the volume while keeping data flows intact, delivering a lower-speed feed within a smaller pipe. The carrier can then concentrate on specific types of data, or take a look at the entire range of traffic in the larger pipe.
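One common way to achieve this kind of session-aware reduction is flow-based sampling: hash each flow’s 5-tuple and keep or drop the entire flow based on the hash, so selected sessions arrive at the tool intact. The sketch below is a generic illustration of that technique, not a description of Gigamon’s announced implementation.

```python
# Generic sketch of session-aware reduction via flow-based sampling: hash
# each flow's 5-tuple and keep or drop the whole flow based on the hash, so
# every packet of a selected session reaches the tool intact. The sampling
# ratio is illustrative; this is not Gigamon's announced implementation.
import hashlib

def keep_flow(five_tuple, ratio=0.1) -> bool:
    """Deterministically select ~ratio of flows; same flow, same answer."""
    key = "|".join(map(str, five_tuple)).encode()
    bucket = int.from_bytes(hashlib.sha1(key).digest()[:4], "big") / 2**32
    return bucket < ratio

# Every packet is mapped to its flow, and the flow decides its fate:
flow = ("10.0.0.1", "10.0.0.2", 40000, 80, "TCP")
if keep_flow(flow):
    pass  # forward this packet (and all others in its flow) to the tool
```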

This holistic network visibility solution from Gigamon will enable mobile service providers to handle the Big Data issue, maintain current business models and, more importantly, maintain existing expense structures while running the big data services of tomorrow.
