De-Risking New Service Provider Technology Deployments – Introduction to the UVFA: Features & Apps

By: Andy Huckridge, Director of Service Provider Solutions

Unified Visibility Fabric™ Solution

Gigamon has pioneered the Unified Visibility Fabric solution. The Unified Visibility Fabric sits between the production network and the monitoring or management tools. It acts as a centralized fabric that delivers the relevant data from the various networks under an administrative domain (including campus networks, branch/remote office networks, private cloud, or SDN islands that an enterprise or service provider may have) to a centralized set of tools connected to the fabric. In the process of delivering data from the production network to the tools, the Visibility Fabric provides a variety of functions, such as filtering and replication, to ensure that only the relevant data gets delivered to the tools. Traffic delivered to each tool can be tuned within the fabric independently of the traffic profile of every other tool, in order to optimize the functioning of each tool: traffic that is not relevant to a tool can be filtered out of that tool’s feed without affecting the traffic delivered to any other tool. The Unified Visibility Fabric takes care of replicating, filtering, and forwarding traffic based on each individual tool’s traffic profile. In addition to filtering and replication, the Visibility Fabric performs several other key functions in order to offload the tools. For example, some of these functions may include:

  • Packet Masking: Masking out certain sections of the packet data (social security numbers, for example) in order to ensure confidentiality of data.
  • Packet Slicing: Many tools do not need to see the entire data packet. They operate on, say, the first 128 or 256 bytes of the packet. In these cases, sending an entire 1500-byte or 9000-byte Ethernet packet to a tool only increases the burden on the tool. The Visibility Fabric can slice the packet down to only the size relevant to the tool before delivery, so as to optimize tool usage.
  • De-Duplication: In many cases, the network is tapped at multiple tiers (such as access, aggregation, and core). As a consequence, the same packet may be delivered to a tool multiple times, once for each point where the network is tapped. This can lead to significantly higher processing overhead for tools that do not need to see multiple copies of the same packet. De-duplicating the traffic and sending only a single copy of each packet to the tools is therefore highly desirable, and this function too can be performed within the Visibility Fabric.

Other functions that the Visibility Fabric may enable include time stamping, deterministic sampling and delivery of data, among others.
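
To make these offload functions concrete, here is a minimal sketch in Python of how packet slicing and masking operate on raw packet bytes. This is illustrative only, not Gigamon’s implementation; the offsets, field positions, and sizes are assumptions chosen for the example.

```python
# Illustrative sketch of packet slicing and masking on raw packet bytes.
# Not Gigamon's implementation; offsets and sizes are hypothetical examples.

def slice_packet(packet: bytes, keep: int = 128) -> bytes:
    """Truncate a packet to its first `keep` bytes (packet slicing)."""
    return packet[:keep]

def mask_packet(packet: bytes, offset: int, length: int, fill: int = 0x00) -> bytes:
    """Overwrite a sensitive field (e.g., an SSN) with a fill byte (packet masking)."""
    end = min(offset + length, len(packet))
    return packet[:offset] + bytes([fill] * (end - offset)) + packet[end:]

# Example: a ~1500-byte packet with a (hypothetical) 9-byte SSN at offset 200.
pkt = bytes(range(256)) * 6                    # 1536 bytes of dummy payload
pkt = mask_packet(pkt, offset=200, length=9)   # confidentiality
pkt = slice_packet(pkt, keep=128)              # reduce tool load
print(len(pkt))                                # 128
```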

Figure 1: An example of a network with tool proliferation

Figure 2: An example of a network with the Gigamon Visibility Fabric architecture deployed

Unified Visibility Fabric Architecture

The Unified Visibility Fabric consists of multiple components that taken together constitute the Unified Visibility Fabric architecture (UVFA).

Services
The Services layer consists of Visibility Fabric nodes that connect into the data/production network on one side and a set of tools on the other side. On the network side, the Visibility Fabric nodes provide a variety of options to connect into the network and collect data. These include TAP modules, inline bypass modules, as well as options for connecting to the mirror/SPAN ports on network devices. A variety of speeds and connectivity options are available from 1Gb all the way to 100Gb, as well as short-reach and long-haul options. On the tool side, a variety of interface speeds and options are supported. The Visibility Fabric nodes provide a set of key services for delivery of data to the tools. These services include packet filtering, packet replication, packet time-stamping, as well as packet transformation such as packet slicing and packet masking. At the heart of the Visibility Fabric nodes is a key patented technology developed by Gigamon called Flow Mapping®, which allows users to specify individual traffic delivery profiles based on the tools connected to the Visibility Fabric nodes. Where visibility is desired into virtualized environments, Visibility Fabric nodes are available as virtual machines that can be fired up on a hypervisor and tunnel VM traffic back to the Visibility Fabric and the tools connected to it. Filtering and Flow Mapping are done within the VM-based Visibility Fabric nodes, so that only relevant traffic is tunneled back to the Visibility Fabric. Nodes are also available for remote/branch offices to provide local filtering and Flow Mapping capability within the branch/remote office, tunneling just the relevant traffic to the centralized tools.
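
As a rough mental model of per-tool delivery profiles (a sketch of the concept, not the actual Flow Mapping implementation), picture each tool declaring a match rule, with the fabric replicating a packet only to the tools whose rules it matches. The tool names and rule fields below are hypothetical:

```python
# Minimal sketch of per-tool traffic delivery profiles, loosely modeled on the
# Flow Mapping idea: each tool declares which traffic it wants, and the fabric
# replicates each packet only to the tools whose rules match.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Packet:
    src_port: int
    dst_port: int
    vlan: int

@dataclass
class ToolProfile:
    name: str
    match: Callable[[Packet], bool]  # predicate: does this tool want the packet?

profiles = [
    ToolProfile("voip-analyzer", lambda p: p.dst_port == 5060),       # SIP signaling only
    ToolProfile("web-apm",       lambda p: p.dst_port in (80, 443)),  # web traffic only
    ToolProfile("ids",           lambda p: p.vlan == 10),             # one monitored VLAN
]

def deliver(packet: Packet) -> list:
    """Replicate the packet to every tool whose profile matches; others never see it."""
    return [t.name for t in profiles if t.match(packet)]

print(deliver(Packet(src_port=33000, dst_port=443, vlan=10)))
# ['web-apm', 'ids'] -- each tool receives only its relevant traffic
```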

Figure 3: The Gigamon Unified Visibility Fabric architecture

Management
The Management layer provides two key functions. First, it provides an intuitive, GUI-driven approach to manageability, along with a centralized approach to bringing multiple Visibility Fabric nodes under one management umbrella. This can greatly simplify the deployment and management of Visibility Fabric nodes across islands of topologies such as campus networks, remote/branch offices, virtualized environments, and, in the future, SDN deployments as well. Second, the management layer enables the servicing of multiple IT departments (such as security, applications, networking, etc.), which effectively function as multiple tenants of the Unified Visibility Fabric. Each tenant can specify what traffic they would like the Visibility Fabric to send to their tools, along with which operations they would like the fabric to perform on their data before delivery. In effect, the management layer provides the ability to virtualize the Visibility Fabric.

Orchestration
The Orchestration layer will consist of a set of APIs* and programmatic interfaces* that will ultimately enable the Visibility Fabric to integrate with tools, applications, and orchestration solutions. In this sense, the orchestration layer will become an enabler of orchestration of the Visibility Fabric. The APIs will be used for a variety of purposes: for example, to allow tools to integrate more tightly with the Visibility Fabric and provide just-in-time tuning of it, as illustrated in the sketch below. The APIs* may also be used to enable a set of application developers to develop visibility applications that take advantage of the Visibility Fabric.
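
Because the APIs are a future capability, any concrete interface is necessarily speculative. As a purely hypothetical illustration of just-in-time tuning, a tool nearing overload might ask the fabric to narrow its traffic map; the endpoint, payload, and names below are invented solely for the sketch:

```python
# Purely hypothetical sketch of "just-in-time tuning" through a fabric API.
# No such endpoint is documented here; the URL scheme and payload are invented
# to illustrate the concept of a tool tuning its own traffic profile.

import json
import urllib.request

def tune_map(fabric_url: str, map_id: str, new_filter: dict) -> int:
    """Ask the fabric to update a traffic map, e.g. when a tool nears overload."""
    req = urllib.request.Request(
        url=f"{fabric_url}/maps/{map_id}",
        data=json.dumps({"filter": new_filter}).encode(),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# e.g. an IDS nearing capacity narrows its profile to one subnet and protocol:
# tune_map("https://fabric.example.com/api", "ids-map-1",
#          {"src_cidr": "10.1.0.0/16", "protocol": "tcp"})
```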

Applications
The Applications layer consists of a set of applications that leverage the other components of the Visibility Fabric. The applications layer can provide a variety of enhancements and optimizations built on top of the Visibility Fabric. De-duplication is one such application: it enables tool optimization by recognizing multiple copies of packets tapped or mirrored at multiple points in the network, filtering out the extraneous copies, and delivering a single copy of each packet to the tool (see the sketch below). Various other applications are currently under development. Taken together, the various components of the Unified Visibility Fabric architecture provide a versatile and comprehensive solution to the growing challenge of visibility in the midst of industry shifts such as virtualization, cloud computing, mobility, and the consumerization of IT.
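
A minimal sketch of the de-duplication idea, assuming a digest over the packet bytes and a short time window (both are assumptions for illustration; a production implementation would also ignore mutable header fields such as TTL and checksums):

```python
# Sketch of packet de-duplication within a time window: identical packets
# tapped at multiple network tiers are recognized by a digest of their bytes,
# and only the first copy is forwarded to the tool. The 50 ms window and the
# digest choice are illustrative assumptions.

import hashlib
import time

SEEN = {}              # packet digest -> time the last copy was seen
WINDOW_SECONDS = 0.05  # copies arriving within 50 ms are treated as duplicates

def is_duplicate(packet: bytes, now: float = None) -> bool:
    """Return True if an identical packet was already seen inside the window."""
    now = time.monotonic() if now is None else now
    digest = hashlib.sha256(packet).hexdigest()
    last = SEEN.get(digest)
    SEEN[digest] = now
    return last is not None and (now - last) < WINDOW_SECONDS

pkt = b"same packet tapped at the access tier and again at the core tier"
print(is_duplicate(pkt, now=0.000))  # False -> first copy, forward to tool
print(is_duplicate(pkt, now=0.010))  # True  -> extra copy, drop it
```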

Benefits of the Unified Visibility Fabric Approach
The Unified Visibility Fabric is fundamentally changing the way data is delivered to tools. When tools are consolidated and connected into the Visibility Fabric instead of directly into the production network, several benefits are realized:

  • Less disruption to the production network—The Visibility Fabric enables a “wire once” model where the Visibility Fabric is set up once to TAP or SPAN at the relevant points in the production network. Any tools that need to be enabled can be conveniently added to the Visibility Fabric with no disruption to the production network. Traffic patterns to the tools can be changed, and tools can be upgraded or taken down, similarly without any impact to the production network.
  • Better tool accuracy and utilization—By delivering only the relevant data to the tools as well as reducing the processing the tools need to do through offload operations performed in the Unified Visibility Fabric such as packet slicing, or de-duplication, tools are better able to keep up with the traffic flow with fewer packet drops. This leads to more accurate analysis by the tools as well as better utilization of the tools.
  • Lower TCO (and therefore better ROI)—By centralizing the tools and delivering only relevant data to them, the number of tools and probes deployed can be significantly reduced. Furthermore, as the network infrastructure is upgraded, the monitoring and tool infrastructure no longer has to go through a “rip and replace” cycle: the tools continue to connect into the Visibility Fabric, and the Visibility Fabric can be tuned to manage the data streams to the tools. Finally, as the Visibility Fabric optimizes the data stream delivered to the tools, the load on the tools is reduced, resulting in more efficient utilization—which extends the longevity of the tools as well as reducing the number of tools and probes needed. All of this taken together reduces the total cost of ownership for the monitoring and management infrastructure.

Summary

The Unified Visibility Fabric architecture provides a new approach to monitoring and management of IT infrastructure. By centralizing tools and connecting them into the Visibility Fabric, significant cost savings and operational efficiencies can be realized. The Unified Visibility Fabric architecture provides pervasive visibility across campus, branch, virtualized and, ultimately, SDN islands and consists of four key components—Visibility Fabric nodes, Management, Orchestration, and Applications, which when taken together provide a scalable, flexible and centralized Visibility Fabric solution.

*Denotes future feature/capability

De-Risking New Service Provider Technology Deployments – What are the Deployment Inter-Dependencies?

By: Andy Huckridge, Director of Service Provider Solutions

Understanding the Technology Interdependencies

In the previous blog post we outlined the interdependencies between the three new technologies: IP Voice / VoLTE, high-speed transport pipes, and carrier network virtualization. These inter-relationships will make it difficult for the operator to gain confidence in rolling out each new technology, and challenging to pinpoint the source of problem areas. In this blog post we will detail a number of those interdependencies for further discussion.

What are the Interdependencies That Will Drive the Triple Challenge?

The diagram below shows the stages of a technology deployment and rollout. Whichever technology is used to start the process, resource constraints arise from the need to roll out the other two Triple Challenge technologies. Independent of the starting technology, the interdependencies and technology inter-relationships will cause the rollout of all three.

Figure: Stages of a technology deployment and rollout

Starting with the deployment of IP Voice / VoLTE
Deploying VoLTE leads to greater density 10Gb, 40Gb or 100Gb transport pipe deployments
  • More data from packetized voice causes the need for greater bandwidth in order to guarantee dual bandwidth voice-RTP QoS
  • More data is driven into the core by the 2G to 4G/LTE RAN conversion, which requires more bandwidth for enhanced LTE services
Deploying faster transmission pipes such as 40Gb or 100Gb leads to Carrier Network Virtualization
  • Removal of the final bottleneck: if the pipes are wide enough, the last bottleneck will now be the network elements (NEs) themselves
  • Do more while preserving existing CAPEX spend: Operators are able to go with “white box” and bare-aluminum solutions over single-use, non-upgradeable, dedicated, and often proprietary solutions
Deploying IP Voice / VoLTE leads to the deployment of Carrier Network Virtualization
  • New services will be brought out as virtual network functions (VNFs) on NFV-enabled networks. The ability to deploy a traditional upgrade on a legacy network may be short lived.
  • By virtualizing first, VoLTE could be deployed as part of a virtual EPC (vEPC) where a virtual IMS (vIMS) core could be deployed as a VNF.

 

Starting with the deployment of Carrier Network Virtualization
Deploying network virtualization leads to the deployment of IP Voice / VoLTE
  • Efficiencies can be experienced by collapsing multiple different service cores together
  • Carrier Network Virtualization allows cost reductions in the vEPC and new services to be deployed such as IP Voice / VoLTE
Deploying IP Voice / VoLTE leads to greater density 10Gb, 40Gb or 100Gb transport pipe deployments
  • More data from packetized voice causes the need for greater bandwidth in order to guarantee dual bandwidth voice-RTP QoS
  • More data is driven into the core by the 2G to 4G/LTE RAN conversion, which requires more bandwidth for enhanced LTE services
Deploying Carrier Network Virtualization leads to greater density 10Gb, 40Gb or 100Gb transport pipe deployments
  • Virtualization of network elements with greater processing throughput pushes the bottleneck elsewhere
  • Now that compute and storage are elastic, the pipes have to be upgraded to deliver low-latency, bandwidth-contingent transport and content-delivery assurance, and to handle RTP service QoS

 

Starting with the deployment of greater density 10Gb, 40Gb or 100Gb transport pipes
Deploying greater density 10Gb, 40Gb or 100Gb transport pipes leads to the deployment of IP Voice / VoLTE
  • If the converged core is already running at increased speeds such as 40Gb or 100Gb, then the RAN coverage can be increased. Therefore, the 2G RAN is converted to VoLTE to enable the expansion of 4G/LTE data, which produces more operator revenues
  • If the 4G/LTE RAN is already running at increased speeds such as 40Gb or 100Gb and has available bandwidth, then IP Voice / VoLTE can be deployed. This allows for the re-farming of the 2G RAN to enable expansion of 4G/LTE data, which produces more operator revenues
Deploying IP Voice / VoLTE leads to Carrier Network Virtualization
  • Reduced CAPEX spend by deploying a virtualized core, vIMS, instead of a traditional legacy IP Voice / VoLTE deployment
  • Virtualization will allow future ease of software upgrades, as well as ease of offering further enhanced or new software-defined services
Deploying greater density 10Gb, 40Gb or 100Gb transport pipe deployments leads to Carrier Network Virtualization
  • The cost of deploying wider transport pipes forces the move to virtualize to recover costs that would have been spent on single-use, often proprietary network switching or routing technologies. Use of SDN and NFV can save costs associated with routing and switching and allow the network to become more flexible
  • The ability to provide more efficient and elastic content farms and Big Data analysis is driven by high speed links
  • Elastic storage and compute are needed to power an operator’s future needs, even to offer cloud capabilities as a service to their residential and business subscribers/customers
  • In order to overcome subscriber bandwidth issues at the network edge, virtualized services are needed such as video transcoding, bandwidth treatments on-the-fly, and service treatments on-the-fly such as “throttling as a VNF”

How Will the Interdependencies Cause Network and Service-Related Issues? 

In the previous section we demonstrated the interdependencies between the three new Triple Challenge technologies. Here we will explain the unique capabilities of a Unified Visibility Fabric™ architecture (UVFA) and how deploying one can bring new insight to modern monitoring: understanding the inter-related deployment dependencies via cross-silo monitoring, allowing you to find the “needle in a haystack” faster, and oftentimes vastly reducing the size of the haystack altogether.

In Order to Correctly Monitor the New Technologies, It Is Important to Understand What Is Needed and Why

  • IP Voice (VoLTE/VoIMS/VoWiFi), being based on RTP, is a very sensitive service; complete visibility from edge to core is needed to debug complex transport/service-layer inter-related issues
  • Bonded 10Gb, 40Gb, and 100Gb transport needs advanced processing across the fabric; edge filtering and data optimization get the most out of the attached tools. Today there are no tools capable of connecting to, or monitoring, 100Gb transport pipes
  • Carrier Network Virtualization is a complex set of new technologies with no built-in monitoring capability. To deploy SDN or NFV is to remove visibility from a large part of your existing network

Specific Issues Related to IP Voice/VoLTE

  • Effects of bursty traffic types and other RTP traffic types in the same transport pipe
  • Effects of server virtualization, network function or network element virtualization on RTP-based voice traffic
  • The effects of dynamic loading on RAN backhaul and RTP traffic QoS requirements

Specific issues related to 40Gb & 100Gb transport pipes

  • Effects of virtual servers being provisioned and de-provisioned, causing unpredictable traffic bursts
  • VNF provisioning overhead and monitoring needs
  • Multiple standards and changing technology associated with 100Gb transport pipes

Specific issues related to Carrier Network Virtualization

  • IP Voice/RTP QoS requirement overhead and associated transport pipe related issues
  • Effects of huge traffic draw on services and virtualization-induced traffic burstiness
  • Effects of vMotion on other VNFs / SDN controller decisions, resulting in knock-on traffic delay / jitter / latency, or more generic throughput issues such as traffic fragmentation

Conclusion

There are clear interdependencies which will emerge when trying to deploy the Triple Challenge technologies. Monitoring can play a great part in de-risking the deployment of these three new technologies, and will allow service providers to fully understand these technology inter-relationships before deployment so that, when troubleshooting, it is easier to find the real needle in the correct haystack.

De-Risking New Service Provider Technology Deployments – The Triple Challenge of Network Transformation

By: Andy Huckridge, Director of Service Provider Solutions

Hello Telecom professional! Welcome to this extended series of blog posts that’ll take an in-depth look at the Triple Challenge of Network Transformation which operators are currently experiencing. We’ll examine how subscriber trends and market forces are pushing operators to transform their network – and the ensuing resource crunch that will occur. We’ll also take a look at how a Visibility Fabric can help to de-risk the deployment of several of these new technologies – avoiding the resource crunch and helping to de-risk the rollouts.

We started the conversation about the Triple Challenge of Network Transformation back in September; since then we’ve seen several industry news stories that validate our thought leadership & commentary – related to how the Triple Challenge is affecting service providers’ ability to deploy new technology in an agile and expeditious manner. As we look forward to 2015 and Mobile World Congress, our approach to solving this dilemma is now more relevant than ever. So, let’s get started with a video interview as an introduction that’ll quickly explain what the Triple Challenge of Network Transformation is all about.

And the first post in this series, an updated introductory blog entry…

Operators have always faced a slew of new technologies to roll out, but this time around there’s a difference. In the past, operators have been able to deploy new technologies in series, one after another. Due to the interdependency of the current new technologies, they can no longer be deployed in series. The deployment of one new technology forces the deployment of another, and so on until all three new technologies are deployed. This series of blog entries will explain the three technologies and their interdependencies – highlighting why this is bad from the operator’s perspective and exploring ways to overcome the resource crunch that will become evident.

What is the Triple Challenge and why will it occur?

The Triple Challenge defines the operator’s predicament to be able to deploy next-generation technologies, which are made up of:

  • IP Voice
    • VoLTE – IR.92 primarily for Mobile carriers; PLMN 
    • VoWiFi – applicable to Mobile, Fixed or Cable providers; PSTN, PLMN & MSO 
    • VoIMS – as the underlying technology to all modern day IP Voice implementations
  • High Speed Transport Pipes
    • Multiple bonded 10Gb
    • 40Gb
    • 100Gb
  • Carrier Network Virtualization

The operator is faced with a number of decisions to make:

  • Virtualize the core first, then deploy VoLTE as a virtualized network function OR deploy VoLTE as a function on their traditional network since the network is already in place?
  • Upgrade the core beforehand due to worries about DiffServ, MPLS transmission or QoS issues in general OR wait until the bandwidth requirements placed upon the 4G/LTE RAN force the move of voice services from the existing circuit switched 2G RAN?
  • Upgrade core routers in anticipation of rising RAN traffic OR virtualize the core routing network elements first?

It appears that there is no correct answer to whether the horse or the cart goes first. With this level of uncertainty and all-encompassing network transformation, there is only one constant – the need to be able to monitor the new technologies completely and comprehensively. The operator must be able to make sure the network changes involved are working in the way that the network equipment manufacturer (NEM) has promised during the design phase and are satisfying expectations when turned up and deployed. It is said that the person who is wrong is the person who can’t prove they are right; monitoring of packets-in-motion therefore greatly helps to add the legitimacy required in the conversation between the operator and the NEM when deployments of new technology don’t go to plan.

Here we see a graphical representation of the resource hit and how one technology causes the parallel rollout of the other “Triple Challenge” technologies:

This is due to the three technologies being interdependent; deploying any one will result in either of the other two technologies also being deployed. This often leaves the operator with too few resources to deploy the new Triple Challenge technologies. Monitoring can play a great part in de-risking the deployment of these three new technologies, and help find the correct needle in the correct haystack, whilst disqualifying many false positives.

Here is a video which accompanies this blog post.

SC14: Gigamon and High Performance Computing

By: Perry Romano, Director of Business Development – Service Providers

Glad to be part of Supercomputing 2014, where over 10,000 people attended, and happy to participate in SCinet for the fifth year, which hosted more than 1.8 terabits per second of bandwidth. With high performance computing becoming more and more important to society every day, the ability to effectively secure, monitor and manage that infrastructure is critical. You can’t secure, monitor or manage what you can’t see! Gigamon was ecstatic to provide reliable Active Visibility into both the high-speed links and specific traffic, simultaneously feeding all of the security and network monitoring tools present at the show. We also got the chance to speak with Trilogy Tech Talk about how Gigamon can help solve many of the challenges that come with high performance computing. We look forward to coming back next year!

For more information about Active Visibility and high performance computing, watch our video from the show floor.

Universities of Glasgow and Wisconsin-Madison Select Gigamon for High Volume Traffic Visibility

By: JT Eger, Manager of Corporate Communications

Organizations are constantly at risk when it comes to protecting their data against threats while maintaining performance, and higher education and academic institutions are no different. In fact, sometimes research universities have even more pressure to do more with less. I’m referring to the pressure of ensuring their network and applications are performing at world-class levels, regardless of vertical, and maintaining the highest levels of security to protect their research, data and network users all while saving as much money as possible to put back into the education and research.

Currently, Gigamon is working with a number of academic partners, including the University of Glasgow and the University of Wisconsin-Madison (both contributing research resources to the Large Hadron Collider), helping them to anticipate and mitigate potential security risks by enabling comprehensive active visibility for their networks while maximizing the ROI of their performance and security monitoring infrastructure. With thousands of students, faculty and staff using their respective networks each day, often from a variety of locations and increasingly through a multiplicity of devices, each institution needs to protect itself from the threat of malicious traffic and serious breaches without sacrificing performance, network quality or availability. Here, we take an in-depth look at how Gigamon has partnered with each university.

The University of Glasgow

Founded in 1451, the University of Glasgow is a research-led institution with campuses in Glasgow, its suburbs, and several Glasgow teaching hospitals, serving 20,000 students and 6,000 staff members. As one of the UK’s leading research centers and a member of the prestigious Russell Group of UK research universities, it contributes to research programs with a global impact, in fields that range from the rapid detection of malaria to the biggest particle physics experiment in the world: the Large Hadron Collider.

The University’s IDS (Intrusion Detection System) alerts its IT security department to potential network threats, but with 30,000 users, there’s a significant problem of scale. As Internet traffic has grown to more than ten gigabits per second, the mirrored port used on one of its Internet gateway routers was only able to monitor a fraction of the overall capacity, and it became less and less probable that the system could identify malware or cyber-attacks. The problem couldn’t be resolved simply by adding multiple mirrored router ports, so the University needed a technically viable and cost-effective way of upgrading its IDS to detect hacking attempts and identify PCs infected with malware – all at speeds of multiple tens of gigabits per second.

Gigamon developed a tailored solution to meet the University’s specific needs that included mirroring external Internet traffic using Gigamon’s G-TAP optical network TAPs, which duplicate all the traffic passing over the 10Gb links. The system then uses Gigamon’s GigaVUE-HB1 Visibility Fabric™ node with hardware-based patented Flow Mapping® technology to isolate the traffic that needs to be sent to the IDS systems, excluding irrelevant traffic. Today, the University of Glasgow has 40Gb speeds running in its core, and its network monitoring capabilities have scaled to encompass all traffic coming across its 10Gb Internet links, enabling detection of compromised machines, identification of viruses, and remedial action. In addition, the University is now able to operate its IDS systems on a cluster of commodity servers, as well as repurpose existing network monitoring and measuring equipment, both resulting in significant cost savings.

The University of Wisconsin-Madison

As one of the premier research facilities in the world, home to more than 100 research centers and programs, the University of Wisconsin-Madison processes and shares massive amounts of data with other facilities such as CERN, home of the Large Hadron Collider. Following a significant network upgrade, which adapted the existing WAN design to cater to increasing volumes of data sharing and peering arrangements with partner facilities, the University’s network monitoring platform began dropping traffic. The main challenges included the need to monitor 100 percent of traffic on a 100Gb link plus 10Gb internal network traffic, as well as the distribution of traffic to multiple network security, monitoring, and troubleshooting tools that have 10Gb network interface controllers (NICs). In addition to the ability to optically tap up to 100Gb with zero loss, UW-Madison needed the flexibility to dynamically configure the solution to send the tapped traffic to multiple departments.

Working with Gigamon, the University selected Gigamon’s Visibility Fabric to meet these challenges while ensuring ease of use and high-volume processing. Gigamon passive optical TAPs, 100Gb high-capacity line cards, and the GigaVUE-HD4 chassis-based fabric node now capture 100 percent of the monitored traffic and deliver it to security and troubleshooting tools. The result is that the University can now optically tap its two 100Gb Internet connections and 48 10Gb LAN ports to get 100 percent visibility of all north/south and east/west traffic. It is able to send traffic from any point on its network to any team within the institution that needs it, and finally has visibility into 100 percent of its traffic for security monitoring and network management.
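
One way to picture the 100Gb-to-10Gb fan-out in general terms (a sketch of the common flow-aware distribution technique, not the University’s specific configuration): hash each flow’s 5-tuple so that traffic tapped from a 100Gb link is spread across several 10Gb tool ports, while every packet of a given flow stays on the same port:

```python
# Sketch of flow-aware load distribution: traffic tapped from a 100Gb link is
# spread across several 10Gb tool ports by hashing the flow 5-tuple, so each
# flow stays on one tool port and no single tool is handed 100Gb of traffic.
# Port names are hypothetical; this illustrates the general technique only.

import zlib

TOOL_PORTS = ["tool-1", "tool-2", "tool-3", "tool-4"]  # hypothetical 10Gb ports

def pick_tool_port(src_ip: str, dst_ip: str, proto: int,
                   src_port: int, dst_port: int) -> str:
    """Deterministically map a flow's 5-tuple to one tool port."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return TOOL_PORTS[zlib.crc32(key) % len(TOOL_PORTS)]

# Every packet of this flow lands on the same tool port:
print(pick_tool_port("10.0.0.5", "192.0.2.7", 6, 44321, 443))
```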

 

Read the full case studies linked below for more!

You can also check out our recent solutions overview which discusses how security teams are turning to multi-tiered and parallel deployment approaches, leveraging the latest threat intelligence tools to protect their network: Active Visibility for Multi-tiered Security.

Uncovering the Next Infrastructure Blind Spot: SSL

By: Ananda Rajagopal, Vice President of Product Management

Visibility: the Merriam-Webster dictionary defines it as the “capability of affording an unobstructed view”. In the world of business, visibility delivers relevant insight, which can be the difference between just-in-time action and a missed opportunity. This is why traffic-based visibility powers the business of NOW! Yet, the nature of traffic visibility is such that underlying shifts in payload types and patterns require solutions that can readily adapt to these shifts and provide an unobstructed view of traffic to the administrator.

Many security and network administrators are facing up to an underlying shift in enterprise traffic: a growing portion of it is encrypted within SSL. According to an independent study by NSS Labs, anywhere from 25% to 35% of enterprise traffic is encrypted in SSL, and that share grows further every month. In some verticals, the number is already higher. By itself, this statistic would not cause a flutter, but it is exacerbated by other findings on the state of today’s security and performance monitoring infrastructure:

  • Although inline devices such as ADCs, firewalls, etc. have integrated SSL support, out-of-band monitoring and security tools often do not have the ability to access decrypted traffic to perform security and performance analysis. This allows SSL traffic to fly under the radar, creating a potential security loophole.
  • Consequently, performance management tools and many out-of-band security tools are either completely blind to SSL traffic or get overloaded if they decrypt SSL. Many of our customers have pointed out a drop in performance of almost 80% if the tool decrypts SSL.
  • Many security administrators are using larger ciphers for increased security today. A study by NSS Labs noted a performance degradation of 81% in existing SSL architectures.
  • Hackers and cybercriminals are increasingly using SSL sessions to dodge network security defenses. Indeed, a Dec. 9, 2013 Gartner report titled “Security Leaders Must Address Threats From Rising SSL Traffic” by Jeremy D’Hoinne and Adam Hils states: “Gartner believes that, in 2017, more than half of the network attacks targeting enterprises will use encrypted traffic to bypass controls, up from less than 5% today”.

In short, the very technology that was supposed to ensure confidentiality is now being exploited by nefarious actors. These are precisely the reasons that have driven us at Gigamon to come up with the next innovation in visibility—the industry’s first and only visibility solution with integrated SSL support. With built-in hardware to decrypt SSL sessions at high performance, this new capability provides visibility into a critical blind spot facing administrators today. It is not without reason that analysts, customers, and our technology ecosystem partners who have been privy to this development are all agog with excitement!

This new capability is yet another proof point of what GigaSMART can offer to IT and security administrators. GigaSMART is a platform that allows advanced traffic intelligence to be extracted via various applications that can be dynamically enabled and run in combination on a common underlying platform. Contrast this with other visibility products that offer point features to address point problems with point hardware—over time, both capital and operational costs of managing point products rapidly add up until they can no longer offer visibility to the next blind spot the administrator seeks to uncover. Gigamon’s GigaSMART technology solves visibility challenges holistically with a platform-based architectural approach. If you are a Gigamon customer who has already invested in GigaSMART on any of the GigaVUE-H Series platforms, you do not need any new hardware to run this new SSL application! The benefits of this platform-based approach are considerable. Here are three examples related to SSL decryption:

  • You can service chain multiple GigaSMART applications together. Interested in sending encrypted traffic at a remote site to a centrally located data loss prevention appliance? Not a problem. You can run both the tunneling and SSL decryption applications on GigaSMART in combination. Want to monitor secure VM-VM traffic between specific enterprise applications and generate NetFlow records on that traffic? Amen! You can combine tunneling, SSL decryption and the NetFlow generation applications on GigaSMART to generate NetFlow records on encrypted traffic. Have a concern about data misuse after decryption? You can combine SSL decryption with the packet masking/slicing applications on GigaSMART to support compliance with regulatory and/or organizational policies.
  • By combining SSL decryption with clustering in a Visibility Fabric, traffic from low-cost edge ports in the visibility infrastructure is automatically routed to the node in the cluster that has SSL decryption capability. This eliminates the need for SSL decryption solutions to be distributed at multiple locations, saving cost and ensuring better security in key management.
  • By delivering ‘Decryption as a Service’ via the Gigamon Visibility Fabric implemented with GigaSMART, administrators can increase the overall performance of their tooling infrastructure. The SSL traffic is decrypted once and then delivered to every tool that needs it, such as IDS, DLP, anti-malware, and even APM and other non-security tools.
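
Conceptually, service chaining is just composing processing stages in order and building each tool’s chain from the stages it needs. The sketch below illustrates only the chaining idea; the stages are simplified stand-ins, and real passive SSL decryption requires session keys and runs in GigaSMART hardware, not shown here:

```python
# Conceptual sketch of service chaining: stages compose left to right, and
# each tool's chain is built from the stages it needs. All three stages are
# stand-ins for the real GigaSMART applications.

from functools import reduce

def decrypt(pkt):   # stand-in for SSL decryption (real decryption needs session keys)
    return {**pkt, "payload": pkt["payload"].upper(), "encrypted": False}

def mask(pkt):      # stand-in for masking sensitive fields before delivery
    return {**pkt, "payload": pkt["payload"].replace("SSN", "***")}

def netflow(pkt):   # stand-in for NetFlow record generation
    print(f"flow record: {pkt['src']} -> {pkt['dst']}")
    return pkt

def chain(*stages):
    """Compose processing stages, in order, into one service chain."""
    return lambda pkt: reduce(lambda p, stage: stage(p), stages, pkt)

# A DLP tool's chain: decrypt, then mask for compliance, then record the flow:
dlp_chain = chain(decrypt, mask, netflow)
packet = {"src": "10.0.0.1", "dst": "10.0.0.2", "payload": "ssn data", "encrypted": True}
print(dlp_chain(packet))
```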

For those who think that visibility can be obtained through mere “tap aggregation”, think again. Visibility must provide insight into infrastructure blind spots. Visibility is about extracting traffic intelligence to increase the performance of security and operational tools connected to the visibility infrastructure so that administrators can get the right insight. The nature of visibility is such that new challenges will arise tomorrow that today’s visibility solution should be able to adapt to—something that a repurposed Ethernet switch is simply not designed for. After all, isn’t visibility about offering an “unobstructed view”?

For more information including example use cases, visit our webpage on SSL Visibility.

De-risk New Service Provider Technology Deployments: Addressing The Triple Challenge of Network Transformation

By: Andy Huckridge, Director of Service Provider Solutions, Gigamon

Operators are facing a slew of new technologies to roll out, but this time around there’s a difference. In the past, operators have been able to deploy new technologies in series, that is, one after another. The current new technologies, due to their interdependency, are linked. Therefore, instead of the new technologies being deployed in series, the deployment of one new technology forces the deployment of another, and so on until all three new technologies are deployed. Gigamon has developed a strategy to explain the three technologies and the interdependencies between them, highlight why this is bad from the operator’s perspective, and explore ways to overcome the resource crunch that will become evident, through the deployment of a unified tool rail approach in parallel with the new technology rollouts.

What is the Triple Challenge & Why will it occur?

The Triple Challenge defines the operator’s predicament to be able to deploy next generation technologies, which are made up of:

IP Voice

  1. VoLTE, IR.92 primarily for Mobile carriers; PLMN
  2. VoWiFi, applicable to Mobile, Fixed or Cable providers; PSTN, PLMN & MSO
  3. VoIMS, as the underlying technology to all modern day IP Voice implementations

High speed transport pipes

  1. Multiple bonded 10Gb
  2. 40Gb
  3. 100Gb

Network Virtualisation

  1. Traditional server virtualisation
  2. Software Defined Networking
  3. Network Functions Virtualisation

The operator is faced with a number of decisions to make:

  • Virtualize the core first, then deploy VoLTE as a virtualized network function, or deploy VoLTE as a legacy function in their traditional network since the network is already in place?
  • Upgrade the core beforehand due to worries about DiffServ, MPLS transmission or QoS issues in general, or wait until bandwidth requirements placed upon the 4G/LTE RAN force the move of voice services from the existing circuit switched 2G RAN?
  • Upgrade core routers in anticipation of rising RAN traffic, or virtualize the core routing network elements first?

It appears there is no correct answer to whether the horse or the cart goes first. Indeed – it seems there is even a virtual horse involved. So with this level of uncertainty and all-encompassing network transformation, there is only one constant – the need to be able to monitor the new technologies completely and comprehensively, and to make sure the network changes involved are working in the way the network equipment manufacturer has promised during the design phase and are satisfying expectations when turned up and actually deployed. It is said that the person who is wrong is the person who can’t prove they are right. Monitoring of packets-in-motion greatly helps to add the legitimacy required in the conversation between the operator and the NEM when deployments of new technology don’t quite go to plan.

Here we see a graphical representation of the resource hit and how one technology causes the parallel rollout of the other “Triple Challenge” technologies:

This is due to the three technologies being interdependent; deploying any one will result in either of the other two technologies also being deployed. This catches the operator out with regard to the amount of resources needed to deploy the new Triple Challenge technologies.

Monitoring can play a great part in de-risking the deployment of these three new technologies, making it possible not only to find the needle in the haystack, but to find the real needle as opposed to a fake needle, in a reduced number of haystacks.