SC14: Gigamon and High Performance Computing

By: Perry Romano, Director of Business Development – Service Providers

Glad to be part of Supercomputing 2014, which drew more than 10,000 attendees, and happy to participate in SCinet for the fifth year, which hosted more than 1.8 Terabits of bandwidth. With high performance computing becoming more important to society every day, the ability to effectively secure, monitor and manage that infrastructure is critical. You can’t secure, monitor or manage what you can’t see! Gigamon was ecstatic to provide reliable Active Visibility into both the high-speed links and specific traffic, simultaneously feeding all of the security and network monitoring tools present at the show. We also got the chance to speak with Trilogy Tech Talk about how Gigamon can help solve many of the challenges that come with high performance computing. We look forward to coming back next year!

For more information about Active Visibility and high performance computing, watch our video from the show floor.

De-risk New Service Provider Technology Deployments: Addressing The Triple Challenge of Network Transformation

By: Andy Huckridge, Director of Service Provider Solutions, Gigamon

Operators are facing a slew of new technologies to roll out, but this time around there’s a difference. In the past, operators have been able to deploy new technologies in series, that is, one after another. The current new technologies, however, are linked by their interdependencies: deploying one forces the deployment of another, and so on, until all three are deployed. Gigamon has developed a strategy to explain the three technologies and the interdependencies between them, to highlight why this is bad from the operator’s perspective, and to explore ways to overcome the resulting resource crunch through a unified tool rail approach deployed in parallel with the new technology rollouts.

Linear Diagram

What is the Triple Challenge & Why will it occur?

The Triple Challenge describes the operator’s predicament in having to deploy next-generation technologies, which are made up of:

IP Voice

  1. VoLTE, IR.92 primarily for Mobile carriers; PLMN
  2. VoWiFi, applicable to Mobile, Fixed or Cable providers; PSTN, PLMN & MSO
  3. VoIMS, as the underlying technology to all modern day IP Voice implementations

High speed transport pipes

  1. Multiple bonded 10Gb
  2. 40Gb
  3. 100Gb

Network Virtualisation

  1. Traditional server virtualisation
  2. Software Defined Networking
  3. Network Functions Virtualisation

The operator is faced with a number of decisions to make:

  • Virtualize the core first, then deploy VoLTE as a virtualized network function, or deploy VoLTE as a legacy function in their traditional network since the network is already in place?
  • Upgrade the core beforehand due to worries about DiffServ, MPLS transmission or QoS issues in general, or wait until bandwidth requirements placed upon the 4G/LTE RAN force the move of voice services from the existing circuit switched 2G RAN?
  • Upgrade core routers in anticipation of rising RAN traffic, or virtualize the core routing network elements first?

It appears there is no correct answer to whether the horse or the cart goes first. Indeed, there even seems to be a virtual horse involved. With this level of uncertainty and all-encompassing network transformation, there is only one constant: the need to monitor the new technologies, and the network changes involved, completely and comprehensively, to make sure the newly deployed technologies work the way the network equipment manufacturer (NEM) promised during the design phase and satisfy expectations once turned up and actually deployed. It is said that the person who is wrong is the person who can’t prove they are right. Monitoring packets-in-motion adds the legitimacy required in the conversation between the operator and the NEM when deployments of new technology don’t quite go to plan.

Circular Diagram

Here we see a graphical representation of the resource hit, and of how deploying one technology causes the in-parallel rollout of the other “Triple Challenge” technologies:

This is because the three technologies are interdependent: deploying any one will result in either of the other two also being deployed, catching the operator out with regard to the amount of resources needed to deploy the new Triple Challenge technologies.

Monitoring can play a great part in de-risking the deployment of these three new technologies: being able not only to find the needle in the haystack, but to find the real needle, as opposed to a fake needle, in a reduced number of haystacks.

Mobile World Congress 2013 Recap: Big Visibility for Big Data & Turning Big Data into Manageable Data

by: Andy Huckridge, Director of Service Provider Solutions & SME

It was quite a week at Mobile World Congress. With a record attendance of around 72,000 people, this show continues to grow and grow, which made it the perfect place to showcase Gigamon’s technology aimed at solving the issue of Big Data for mobile service providers.

Subscribers continue to embrace mobile lifestyles and conduct work outside of the office while applications become increasingly mobile. At the same time more and more video is generated and consumed which takes up orders of magnitude more bandwidth than legacy voice traffic.

In fact, in advance of the show, Cisco released their 6th annual Visual Networking Index (VNI) Global Mobile Data Traffic Forecast, indicating that mobile data traffic will increase 13-fold by 2017. Whether the growth lives up to this estimate remains to be seen, but it will probably come close. That’s a potentially scary statistic for mobile carriers.

We’ve heard of the problem of “Big Data” most often applied to enterprise storage and analytics, but it is clear that this is a major issue for these carriers as well, as analyst Zeus Kerravala writes in Network World. Big Data applications are increasing the volume of data in carriers’ pipes, posing a unique, but not insurmountable challenge.

Operators need a solution whose tool costs will not significantly increase as the sizes of the pipes, and the amount of data in those pipes, grow. Carriers are looking for ways to keep their business costs realistically in line with what subscribers are willing to pay for a service, and to provide subscribers with the quality, uptime and reliability they expect. To do this, carriers need to understand the nature of the traffic flowing through the pipes, its ingress and egress points, and where resources need to be placed on the network to ensure that service-level agreements are met.

The answer is to change the way Big Data is monitored. First, carriers require a solution that combines volume, port-density and scale to connect the right analytical tools to the appropriate large or bonded pipes. Second, the data must be conditioned through advanced filtering and packet manipulation, which reduces the amount of data arriving at each tool, while ensuring that the data is formatted precisely for the tool’s consumption. This way, each tool is able to process more data without needing to parse the incoming stream and steal processor cycles from the more important task of data analysis. Gigamon currently offers all of these features and announced a combined solution before the start of the show.
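As a rough illustration of the conditioning step described above, the sketch below shows per-tool filtering plus packet slicing (truncating payloads so a tool only receives the bytes it can use). The function names, packet structure, and byte counts are hypothetical examples, not Gigamon’s actual implementation.

```python
# Hypothetical sketch of traffic conditioning: filter packets for a given
# tool, then slice payloads so the tool receives only what it can consume.

def condition(packets, tool_filter, slice_bytes):
    """Yield packets matching the tool's filter, truncated to slice_bytes."""
    for pkt in packets:
        if tool_filter(pkt):
            # Packet slicing: keep the leading bytes (headers and the start
            # of the payload), dropping excess data the tool would otherwise
            # have to parse and discard itself.
            yield {**pkt, "payload": pkt["payload"][:slice_bytes]}

# Example: a hypothetical VoIP tool only needs UDP traffic, first 128 bytes.
packets = [
    {"proto": "udp", "port": 5060, "payload": b"x" * 1400},
    {"proto": "tcp", "port": 80,   "payload": b"y" * 1400},
]
voip_feed = list(condition(packets, lambda p: p["proto"] == "udp", 128))
```

Because the stream arriving at each tool is both smaller and pre-formatted, the tool spends its cycles on analysis rather than on parsing and discarding irrelevant traffic.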

However, volume, port density and scale won’t be enough for mobile carriers in the future. Effective monitoring of Big Data calls for reducing the amount of traffic in a large pipe to make it suitable for connection to an existing-speed tool, at 1G or 10G. Gigamon announced the development of this concept during the opening days of the show. Using this method, the connected tools continue to see a representative view of the traffic in the larger pipe, in a session-aware and stateful manner. The solution does not merely filter traffic: it reduces the volume while keeping data flows intact, delivering a lower-speed feed within a smaller pipe. The carrier can then concentrate on specific types of data, or look at the entire range of traffic in the larger pipe.
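One common way to reduce traffic while keeping sessions intact is flow-consistent sampling: hash each packet’s 5-tuple and keep or drop the whole flow based on the hash, so a tool never sees a partial session. The sketch below is a minimal illustration of that general technique under assumed packet fields; it is not a description of Gigamon’s actual product internals.

```python
import zlib

def keep_flow(pkt, sample_ratio=0.10):
    """Flow-aware sampling: hash the 5-tuple so that either every packet
    of a flow is kept or none is, preserving sessions intact."""
    key = f"{pkt['src']}:{pkt['sport']}-{pkt['dst']}:{pkt['dport']}/{pkt['proto']}"
    # crc32 is deterministic, so all packets of a flow share one decision.
    return (zlib.crc32(key.encode()) % 100) < sample_ratio * 100

# Two packets from the same flow always get the same keep/drop decision.
pkts = [
    {"src": "10.0.0.1", "sport": 1234, "dst": "10.0.0.2", "dport": 80, "proto": "tcp"},
    {"src": "10.0.0.1", "sport": 1234, "dst": "10.0.0.2", "dport": 80, "proto": "tcp"},
]
same_decision = len({keep_flow(p) for p in pkts}) == 1
```

Because the sampling decision is a function of the flow key rather than of individual packets, the downstream tool sees complete sessions at a fraction of the original line rate.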

This holistic network visibility solution from Gigamon will enable mobile service providers to handle the Big Data issue and maintain current business models and, more importantly, existing expense structures while running the Big Data services of tomorrow.
