De-Risking New Service Provider Technology Deployments – The Triple Challenge of Network Transformation

By: Andy Huckridge, Director of Service Provider Solutions

Hello, Telecom professional! Welcome to this extended series of blog posts that’ll take an in-depth look at the Triple Challenge of Network Transformation which operators are currently experiencing. We’ll examine how subscriber trends and market forces are pushing operators to transform their networks – and the resource crunch that will ensue. We’ll also take a look at how a Visibility Fabric can help de-risk the deployment of several of these new technologies – avoiding the resource crunch and helping the rollouts succeed.

We started the conversation about the Triple Challenge of Network Transformation back in September. Since then we’ve seen several industry news stories that validate our thought leadership & commentary on how the Triple Challenge is affecting the Service Provider’s ability to deploy new technology in an agile and expeditious manner. As we look forward to 2015 and Mobile World Congress, our approach to solving this dilemma is now more relevant than ever. So, let’s get started with a video interview as an introduction that’ll quickly explain what the Triple Challenge of Network Transformation is all about.

And the first post in this series, an updated introductory blog entry…

Operators have always faced a slew of new technologies to roll out, but this time around there’s a difference. In the past, operators have been able to deploy new technologies in series, one after another. Due to the interdependency of the current crop of new technologies, they can no longer be deployed in series: the deployment of one new technology forces the deployment of another, and so on, until all three new technologies are deployed. This series of blog entries will explain the three technologies and their interdependencies – highlighting why this is a problem from the operator’s perspective and exploring ways to overcome the resource crunch that will become evident.


What is the Triple Challenge and why will it occur?

The Triple Challenge describes the operator’s predicament in having to deploy a set of next-generation technologies, which are made up of:

  • IP Voice
    • VoLTE – IR.92 primarily for Mobile carriers; PLMN 
    • VoWiFi – applicable to Mobile, Fixed or Cable providers; PSTN, PLMN & MSO 
    • VoIMS – as the underlying technology to all modern day IP Voice implementations
  • High Speed Transport Pipes
    • Multiple bonded 10Gb
    • 40Gb
    • 100Gb
  • Carrier Network Virtualization

The operator is faced with a number of decisions to make:

  • Virtualize the core first, then deploy VoLTE as a virtualized network function OR deploy VoLTE as a function on their traditional network since the network is already in place?
  • Upgrade the core beforehand due to worries about DiffServ, MPLS transmission or QoS issues in general OR wait until the bandwidth requirements placed upon the 4G/LTE RAN force the move of voice services from the existing circuit switched 2G RAN?
  • Upgrade core routers in anticipation of rising RAN traffic OR virtualize the core routing network elements first?

It appears that there is no correct answer to whether the horse or the cart goes first. With this level of uncertainty and all-encompassing network transformation, there is only one constant – the need to monitor the new technologies completely and comprehensively. The operator must be able to make sure the network changes involved are working in the way the network equipment manufacturer promised during the design phase and are satisfying expectations once turned up and deployed. It is said that the person who is wrong is the person who can’t prove they are right; monitoring of packets-in-motion therefore adds greatly to the legitimacy required in the conversation between the operator and the NEM when deployments of new technology don’t go to plan.

Here we see a graphical representation of the resource hit and how one technology causes the parallel rollout of the other “Triple Challenge” technologies:

This is due to the three technologies being interdependent: deploying any one of them will result in the other two technologies also being deployed. This often leaves the operator with too few resources to deploy the new Triple Challenge technologies. Monitoring can play a great part in de-risking the deployment of these three new technologies, helping to find the correct needle in the correct haystack whilst disqualifying many false positives.

Here is a video which accompanies this blog post.

RSA 2014 Recap: The Year of Pervasive Security and Analytics

by: Neal Allen, Sr. Worldwide Training Engineer, Gigamon

According to ESG research and Jon Oltsik, Sr. Principal Analyst at ESG, 44% of organizations believe that their current level of security data collection and analysis could be classified as “big data,” while another 44% believe that their security data collection and analysis will be classified as “big data” within the next two years. (Note: in this case, big data security analytics is defined as ‘security data sets that grow so large that they become awkward to work with using on-hand security analytics tools.’)

This trend was highlighted at the RSA Conference the week before last, with many organizations, including Gigamon, talking about ways security professionals can sift through the noise to find “the needle in the haystack.” Large amounts of security-related data are driving the need for Big Data security analytics tools that can make sense of all this information to uncover and identify malicious and anomalous behavior.

Until a few years ago, threats came largely from script kiddies and other unsophisticated hackers looking to disrupt communications. Organized crime then discovered they could make a lot of money selling access into corporate networks – so they started hiring really smart people to hack in. Around the same time, some governments created formal, but unofficial, departments whose job it was to steal third-party intellectual property in order to advance their national interests.

Between organized crime and state-sponsored industrial espionage, the interior of the network is at as much risk as the perimeter. This is particularly true with the growth in BYOD and mobility in general. If security analytics and security tool vendors are having problems keeping up with newly upgraded 10Gb edge links, how will they deal with core networks where there are lots and lots of 10Gb, 40Gb or faster links? Not to mention, user edge traffic is often not even tapped or spanned, because of the potentially high cost of monitoring copious amounts of data across expansive networks.

The nature of security is evolving quickly, and no one technique or approach to securing the network suffices anymore. Companies focused on security are now embracing multiple approaches in parallel to address security effectively. These include solutions that are inline and out-of-band, as well as solutions that do packet-level analysis and flow-level analysis. Gigamon, together with its Ecosystem Partners, presented at RSA and highlighted the critical role Gigamon’s Visibility Fabric™ plays in enabling pervasive security for best-in-breed solutions from Sourcefire/Cisco, ForeScout, FireEye, Websense, TrendMicro, Riverbed, Narus, LogRhythm and nPulse.

An effective solution that enables pervasive security should serve up the ability to address a multitude of approaches. The Gigamon Visibility Fabric does exactly that, with highly scalable and intelligent solutions addressing inline, out-of-band, packet-based and now flow-based security tools and approaches. In addition, Gigamon’s Visibility Fabric can combine approaches effectively, including packet-based pre-filtering prior to generating NetFlow. Gigamon’s Visibility Fabric is necessary to accelerate post-analysis – through granular filtering and forwarding of packets, as well as pervasive flow-level visibility – to find that “needle in the haystack.”
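
To make the “pre-filtering before NetFlow” idea concrete, here is a minimal, illustrative sketch in Python – my own simplification, not Gigamon’s implementation. Packets that survive a filter are aggregated into NetFlow-style records keyed on the 5-tuple, so the flow-analysis tool only ever sees the traffic classes it cares about:

```python
# A minimal sketch (not Gigamon's implementation) of packet-based pre-filtering
# followed by NetFlow-style flow-record generation. Packets are represented as
# simple dicts; in a real deployment they would come from a tap or SPAN port.
from collections import defaultdict

def pre_filter(packets, allowed_ports=(80, 443)):
    """Drop packets the downstream tool doesn't care about before flow generation."""
    for pkt in packets:
        if pkt["proto"] == "TCP" and pkt["dst_port"] in allowed_ports:
            yield pkt

def generate_flow_records(packets):
    """Aggregate the surviving packets into flow records keyed by the 5-tuple."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["length"]
    return dict(flows)

# Example: only web traffic reaches the flow generator, shrinking the record
# set that the analytics tool has to ingest.
sample = [
    {"src_ip": "10.0.0.5", "dst_ip": "192.0.2.1", "src_port": 51000,
     "dst_port": 443, "proto": "TCP", "length": 1500},
    {"src_ip": "10.0.0.6", "dst_ip": "192.0.2.2", "src_port": 51001,
     "dst_port": 53, "proto": "UDP", "length": 80},
]
print(generate_flow_records(pre_filter(sample)))
```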

We’ve entered into a new world of network security and providing insightful security analytics can be just as important as the ability to detect threats from across the network in real time. Walking around the booths at RSA, it was clear that without pervasive visibility most networks will be left with limited or delayed situational awareness, security intelligence and operational responsiveness. In a rapidly moving world, this delay may be too late.

Mobile World Congress 2013 Recap: Big Visibility for Big Data & Turning Big Data into Manageable Data

by: Andy Huckridge, Director of Service Provider Solutions & SME

It was quite a week at Mobile World Congress. With a record attendance of around 72,000 people, this show continues to grow and grow – which made it the perfect place to showcase Gigamon’s technology aimed at solving the issue of big data for mobile service providers.

Subscribers continue to embrace mobile lifestyles and conduct work outside of the office, while applications become increasingly mobile. At the same time, more and more video is generated and consumed, which takes up orders of magnitude more bandwidth than legacy voice traffic.

In fact, in advance of the show, Cisco released its 6th annual Visual Networking Index (VNI) Global Mobile Data Traffic Forecast, indicating that mobile data traffic is going to increase 13-fold by 2017. Whether the growth lives up to this estimate remains to be seen, but it will probably come close. That’s a potentially scary statistic for mobile carriers.
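
For a sense of scale, a 13-fold increase by 2017 (assuming a 2012 baseline, so five years of growth) implies roughly two-thirds more traffic every single year. A quick back-of-the-envelope check – my arithmetic, not a figure from the Cisco report:

```python
# Back-of-the-envelope check: 13x growth over the five years from 2012 to 2017
# implies a compound annual growth rate of 13**(1/5) - 1, i.e. roughly 67%/year.
implied_cagr = 13 ** (1 / 5) - 1
print(f"Implied annual growth: {implied_cagr:.0%}")  # -> 67%
```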

We’ve heard of the problem of “Big Data” most often applied to enterprise storage and analytics, but it is clear that this is a major issue for these carriers as well, as analyst Zeus Kerravala writes in Network World. Big Data applications are increasing the volume of data in carriers’ pipes, posing a unique, but not insurmountable challenge.

Operators need a solution that won’t significantly increase tool expenses as the sizes of the pipes, and the amount of data in those pipes, increase. Carriers are looking for ways to realistically keep their business costs in line with what their subscribers are willing to pay for a service, and to provide subscribers with the quality, uptime and reliability they expect. In order to do this, carriers need to understand the nature of the traffic flowing through the pipes, its ingress and egress points, and where resources need to be placed on the network to ensure that service-level agreements are met.

The answer is to change the way Big Data is monitored. First, carriers require a solution that combines volume, port-density and scale to connect the right analytical tools to the appropriate large or bonded pipes. Second, the data must be conditioned through advanced filtering and packet manipulation, which reduces the amount of data arriving at each tool, while ensuring that the data is formatted precisely for the tool’s consumption. This way, each tool is able to process more data without needing to parse the incoming stream and steal processor cycles from the more important task of data analysis. Gigamon currently offers all of these features and announced a combined solution before the start of the show.
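
As an illustration of what such conditioning can look like in principle – a simplified sketch, not Gigamon’s product behavior, with the VLAN filter and snap length chosen purely for the example – the snippet below keeps only the VLANs a tool cares about and slices each packet to a fixed snap length, so the tool receives headers rather than full payloads:

```python
# A minimal sketch of two common conditioning steps: filtering and packet
# slicing. The VLAN filter and 128-byte snap length are illustrative values.
SNAP_LEN = 128  # assumed snap length; real deployments tune this per tool

def slice_packet(raw_bytes, snap_len=SNAP_LEN):
    """Keep only the first snap_len bytes (headers survive, bulk payload is
    dropped before the packet ever reaches the tool)."""
    return raw_bytes[:snap_len]

def condition(stream, vlan_filter=None, snap_len=SNAP_LEN):
    """Filter by VLAN (if requested), then slice each packet for the tool port."""
    for vlan_id, raw_bytes in stream:
        if vlan_filter is not None and vlan_id not in vlan_filter:
            continue  # advanced filtering: drop traffic the tool doesn't need
        yield slice_packet(raw_bytes, snap_len)

# Example: a 1500-byte packet on VLAN 10 is reduced to its first 128 bytes,
# while traffic on other VLANs never reaches the tool at all.
packets = [(10, bytes(1500)), (20, bytes(1500))]
print([len(p) for p in condition(packets, vlan_filter={10})])  # -> [128]
```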

However, volume, port density and scale won’t be enough for mobile carriers in the future. Effective monitoring of Big Data calls for reducing the amount of traffic in a large pipe to make it suitable to connect to an existing-speed tool, at 1G or 10G. Gigamon announced the development of this concept during the opening days of the show. Using this method, the connected tools will continue to see a representative view of the traffic in the larger pipe, in a session-aware and stateful manner. The traffic is thereby not merely being filtered; its volume is reduced while data flows are kept intact, delivered as a lower-speed feed within a smaller pipe. The carrier can then concentrate on specific types of data, or take a look at the entire range of traffic in the larger pipe.
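
One way to picture session-aware reduction is hash-based flow sampling: whole flows are either kept or dropped, so any session that is kept arrives at the 1G or 10G tool with all of its packets intact. The sketch below is a generic illustration of that mechanism (the keep ratio and hashing scheme are assumptions for the example, not a description of Gigamon’s actual implementation):

```python
# A minimal sketch of flow-aware traffic reduction: the 5-tuple is hashed and
# whole flows are deterministically kept or dropped, so a kept session is never
# split. Keep ratio and hash choice are illustrative, not a product spec.
import hashlib

def keep_flow(five_tuple, keep_ratio=0.25):
    """Deterministically keep roughly keep_ratio of flows, never splitting one."""
    digest = hashlib.sha256("|".join(map(str, five_tuple)).encode()).digest()
    return (digest[0] / 255.0) < keep_ratio

def reduce_traffic(packets, keep_ratio=0.25):
    """Forward every packet of a kept flow; drop all packets of the rest."""
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        if keep_flow(key, keep_ratio):
            yield pkt  # the downstream tool sees this flow end to end
```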

This holistic network visibility solution from Gigamon will enable mobile service providers to handle the Big Data issue and maintain current business models – and, more importantly, maintain existing expense structures while running the big data services of tomorrow.
