Accelerating the Deployment of the Software-Defined Data Center (SDDC) Through Active Visibility

By: Shehzad Merchant, Chief Strategy Officer at Gigamon

The software-defined data center promises to be a highly dynamic environment. Micro-segmentation, network virtualization and on-demand virtual machine (VM) migration all bring with them the promise of a highly agile, yet highly optimized, data center. However, the move to the SDDC will not happen overnight, and migration strategies that help IT administrators make the transition will be a key element in reaching the SDDC and realizing its full promise.

One of the key elements of making the move to the SDDC is the ability of IT to manage, monitor and secure the SDDC while continuing to leverage its investments in existing tools, as well as its human capital. This can be challenging. For example, network virtualization introduces the concepts of overlay and underlay networks. Overlay networks are typically virtual networks that provide tenant isolation and service isolation, in addition to separating location from identity, while the physical network infrastructure typically serves as the underlay. Virtual overlays can be instantiated, extended and removed dynamically based on tenant subscriptions, service guarantees and VM mobility, all of which makes the underlying physical infrastructure more efficient.

However, overlays also make troubleshooting and monitoring more complex, for several reasons. The dynamic nature of the overlays, the need to correlate and track traffic between the underlay and overlays, and the existing departmental silos between the server and network teams – particularly when overlays are instantiated in the server/hypervisor domain but routed over a physical underlay network – can all be barriers to rapid troubleshooting, performance optimization and security. Furthermore, overlays introduce multiple planes of traffic to monitor and secure. Similarly, network overlays now allow VM migration to occur over a segmented Layer 3 underlay network while maintaining session continuity, which lets the underlying physical infrastructure scale out through Layer 3 segmentation. However, this poses a challenge for application performance management (APM) and security monitoring: tools that depend on traffic visibility for analyzing application performance, or for managing and limiting the threat envelope, can encounter blind spots when VMs move to new locations and their traffic is no longer visible to a tool at the original location.
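To make the correlation problem concrete, consider VXLAN, the encapsulation such overlays commonly use: every overlay frame rides inside a UDP packet on the underlay, tagged with a 24-bit VXLAN Network Identifier (VNI). A monitoring system that wants to map overlay traffic back to its tenant must peel off that header. The sketch below is illustrative only, not any vendor's implementation; the field layout follows RFC 7348.

```python
def parse_vxlan(payload: bytes):
    """Parse an 8-byte VXLAN header (RFC 7348) from a UDP payload.

    Returns (vni, inner_frame), or None if the header is too short
    or the I flag (valid-VNI bit) is not set.
    """
    if len(payload) < 8:
        return None
    flags = payload[0]
    if not (flags & 0x08):          # I flag: the VNI field is valid
        return None
    vni = int.from_bytes(payload[4:7], "big")  # bytes 4-6; byte 7 reserved
    return vni, payload[8:]

# Tag each monitored inner frame with its overlay (VNI) so a tool can
# correlate an overlay flow with the underlay path it traversed.
header = bytes([0x08, 0, 0, 0]) + (5001).to_bytes(3, "big") + b"\x00"
vni, inner = parse_vxlan(header + b"inner-ethernet-frame")
```

A visibility layer that performs this decapsulation on mirrored underlay traffic can hand tools the inner frames keyed by VNI, which is one way the overlay/underlay correlation gap gets closed.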

In order to better address the operational aspects of managing, troubleshooting and securing the SDDC, Gigamon and VMware have recently announced a new partnership that promises to simplify, and indeed accelerate, the migration to the SDDC through solutions that work in an NSX environment. These solutions extend the ability of IT Operations and Management (ITOM) teams to monitor and manage NSX environments while continuing to leverage their investment in monitoring tools as the data center evolves to a software-defined model. Gigamon's solutions will bring active, traffic-based visibility into the dynamic virtual environments enabled by NSX by automating monitoring policies to actively track VMs, thereby eliminating blind spots. The solution will bring visibility into east-west as well as north-south traffic flows in an NSX environment. In addition, Gigamon's solutions will enable active traffic visibility into VXLAN-based overlays and physical underlays in the NSX environment, simplifying and adapting the traffic to the needs of the monitoring tools.

The role of traffic-based visibility is only increasing as applications are virtualized and infrastructure moves to a software-defined model. Looking at actual traffic provides a true assessment of real-time conditions, from both a performance-monitoring and a security perspective. Gigamon, along with VMware, is committed to bringing solutions to market that increase traffic visibility as the data center transforms into a more agile, software-defined data center.

Dawn of a New Era: Active Visibility for Multi-tiered Security

By: Ananda Rajagopal, Vice President of Product Management

What keeps the enterprise security team up at night? Fear that their enterprise will be the next target of a breach or security attack. Per Gartner, enterprises worldwide will spend an estimated $18 billion on security products and services in 2014. Yet breaches and the negative consequences of vulnerabilities continue to proliferate: self-replicating malware, denial-of-service attacks, exploits of product security vulnerabilities, cyber-espionage, theft of critical data and more. Surely this raises the question of why "secured" networks are so exposed!

The reality is that the envelope of threats has expanded significantly. No longer can one rely on reactive security; one has to be proactive. Attacks can come from multiple sources and can originate inside an enterprise or at its perimeter. A multi-tiered security approach is required to protect against different types of attacks, using intelligent real-time traffic inspection across both inline and out-of-band security appliances. These appliances and applications can include firewalls, intrusion prevention systems (IPS), malware detectors, intrusion detection systems (IDS), data loss prevention, anti-virus software and Security Information and Event Management (SIEM) platforms. In this scenario, the security team faces important questions: How does one make sure that inline tools (e.g., IPS and firewalls) do not become a single point of failure? How does the security administrator ensure that critical links with tight maintenance windows are continuously monitored? How do security and network teams cooperate to ensure that inline security tools do not become network bottlenecks? As networks and applications continue to grow, along with the volume and pace of information, these security solutions can quickly be pushed beyond their limits, eventually compromising enterprise security.

Until now! Say hello to the age of Active Visibility for Multi-tiered Security. Gigamon today announced a new approach that allows security teams to address the aforementioned challenges by combining high availability and intelligent traffic distribution across multiple inline and out-of-band security tools to ensure continuous security monitoring. The combination of high-performance compute and advanced traffic intelligence for distributing traffic across multiple security devices reduces the threat envelope, mitigates risk and maximizes asset utilization. For example, the ability to take traffic from a single network link and intelligently replicate it across multiple inline and out-of-band security tools means that all of these specialized tools can concurrently inspect the same traffic in real time. Moreover, built-in fail-safe/fail-open high-availability capabilities ensure that continuous security monitoring can finally be achieved. No critical link is now a single point of failure, and the failure of a single security tool in a chain of security devices will no longer create a domino effect that exposes the enterprise. And although inline security tools operate at very different rates than the network, the approach allows traffic to be intelligently load balanced across multiple instances of a security device, such as an IPS, so that scalable security practices can be put in place. As you will see from today's press release, this approach has been publicly endorsed by several of Gigamon's security ecosystem partners, such as FireEye and ForeScout.
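The load-balancing idea above can be illustrated with a small sketch. The key requirement when spreading traffic across stateful tools such as IPS instances is that both directions of a session land on the same instance, so the hash over the flow's 5-tuple must be symmetric. This is not Gigamon's implementation, just a minimal, hypothetical illustration of the technique:

```python
import hashlib

def pick_tool_instance(src_ip: str, dst_ip: str,
                       src_port: int, dst_port: int,
                       proto: int, n_instances: int) -> int:
    """Map a flow to one of n tool instances.

    Endpoints are sorted before hashing so that both directions of a
    session map to the same instance; a stateful tool must see the
    full bidirectional conversation to inspect it correctly.
    """
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}|{b}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_instances

# Both directions of the same TCP session land on the same IPS instance.
fwd = pick_tool_instance("10.0.0.1", "10.0.0.2", 40000, 443, 6, 4)
rev = pick_tool_instance("10.0.0.2", "10.0.0.1", 443, 40000, 6, 4)
```

Because each instance only ever sees complete sessions, instances can be added to scale inspection capacity without any tool needing to share state with its peers.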

Delivered in the form of an inline "bypass" module and an advanced traffic intelligence GigaSMART® module with embedded ports on Gigamon's GigaVUE-HC2 platform, the approach arms security teams with tremendous infrastructure insight and real-time response capabilities. The new front-facing GigaSMART® modules increase the compute power of the GigaVUE-HC2 so that a single compact 2 RU unit can process up to 200 Gbps across 64 10Gb ports, while a standard rack full of these systems can process up to a whopping 4.2 Tbps! This enormous compute power means that an unprecedented level of traffic pre-processing can be done using the various advanced GigaSMART applications to deliver only relevant data to out-of-band security tools. The GigaVUE-HC2 is part of Gigamon's Unified Visibility Fabric architecture and leads the pack in the category of mid-range network visibility products today.


Helping Find the High Value in the High Volume

By: JT Eger, Manager of Corporate Communications

Ever-larger volumes of information traversing enterprise networks are making it harder to monitor, manage and secure those networks, and make finding the high-value information you need extremely difficult. With the right information, organizations can become more responsive, capable and agile, able to quickly implement productive, cost-effective business changes.

See how a Gigamon® Visibility Fabric™ can enable IT organizations to more quickly react to changes within their network, in their business or in their market in my blog post for Global Convergence, Inc.

Putting the (Whitebox) Cart Before the (SDN) Horse?

By: Shehzad Merchant, Chief Strategy Officer at Gigamon

The network today is more critical to the success of IT than ever before. As such, any disruptive change in networking has to be one that is assimilated into the production environment using a measured and carefully phased approach.

We are early in the SDN cycle and the deployment challenges associated with making SDN mainstream, including areas such as security, resiliency and scale, are still in the process of being ironed out.

One area that is still quite nascent when it comes to SDN is monitoring, troubleshooting and instrumentation. The ability of tools to monitor and manage SDN deployments is evolving, and with it, the ability to troubleshoot, manage and respond to network issues in real time. All of this points to the fact that the success of SDN will largely depend on the quality of the implementations, the support model behind them and the commitment of vendors to invest in quality, scalable, enterprise- or carrier-class SDN implementations.

However, in parallel with the interest in SDN, we are seeing a big push towards cheaper bare-metal and whitebox solutions leveraging merchant silicon. In isolation, these are both powerful and empowering trends: SDN for the operational simplicity it brings to the table, whitebox technology for driving down cost and opening up an ecosystem of vendors.

But together they are worrisome because, if history is any indicator, the adoption and maturing of a disruptive new technology such as SDN typically precedes its commoditization. In other words, gaining a good understanding of a new technology – securing it, scaling it, and being able to manage and troubleshoot it – needs to happen before the technology can be successfully commoditized.

Are we putting the whitebox cart before the SDN horse?

In my blog post on SDN Central, I explore why I think adopting whitebox networking and SDN concurrently looks like taking on too much risk.

For the full blog post, visit SDN Central.

RSA 2014 Recap: The Year of Pervasive Security and Analytics

by: Neal Allen, Sr. Worldwide Training Engineer, Gigamon

According to ESG research and Jon Oltsik, Sr. Principal Analyst at ESG, 44% of organizations believe that their current level of security data collection and analysis could be classified as "big data," while another 44% believe that their security data collection and analysis will be classified as "big data" within the next two years. (Note: in this case, big data security analytics is defined as "security data sets that grow so large that they become awkward to work with using on-hand security analytics tools.")

This trend was highlighted at the RSA Conference the week before last, with many organizations, including Gigamon, talking about ways security professionals can sift through the noise to find "the needle in the haystack." Large amounts of security-related data are driving the need for big data security analytics tools that can make sense of all this information to uncover and identify malicious and anomalous behavior.

Until a few years ago, threats came largely from script kiddies and other unsophisticated hackers looking to disrupt communications. Then organized crime discovered it could make a lot of money selling access into corporate networks, so it started hiring very smart people to hack in. Around the same time, some governments created formal, but unofficial, departments whose job was to steal third-party intellectual property in order to advance their nations.

Between organized crime and state-sponsored industrial espionage, the interior of the network is at as much risk as the perimeter. This is particularly true with the growth in BYOD and mobility in general. If security analytics and security tool vendors are having trouble keeping up with newly upgraded 10Gb edge links, how will they deal with core networks that have many 10Gb, 40Gb or faster links? Not to mention that user edge traffic is often not even tapped or spanned, because of the potentially high cost of monitoring copious amounts of data across expansive networks.

The nature of security is evolving quickly, and no one technique or approach to securing the network suffices anymore. Companies focused on security are now embracing multiple approaches in parallel to address security effectively, including solutions that are inline and out-of-band, as well as solutions that do packet-level and flow-level analysis. Gigamon, together with its Ecosystem Partners, presented at RSA and highlighted the critical role Gigamon's Visibility Fabric™ plays in enabling pervasive security for best-in-breed solutions from Sourcefire/Cisco, ForeScout, FireEye, Websense, TrendMicro, Riverbed, Narus, LogRhythm and nPulse.

An effective solution that enables pervasive security should serve up the ability to address a multitude of approaches. The Gigamon Visibility Fabric does exactly that, with highly scalable and intelligent solutions that address inline, out-of-band, packet-based and now flow-based security tools and approaches. In addition, the Visibility Fabric can combine approaches effectively, including packet-based pre-filtering prior to generating NetFlow. Through granular filtering and forwarding of packets, as well as pervasive flow-level visibility, the Visibility Fabric accelerates post-analysis to find that "needle in the haystack."
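As a rough illustration of packet-based pre-filtering ahead of flow generation, the sketch below filters packets first and only aggregates the survivors into NetFlow-like records keyed by the usual tuple. The packet representation and field names are invented for the example; real NetFlow generation works on parsed headers and adds timestamps, TCP flags and export logic.

```python
from collections import defaultdict

def build_flow_records(packets, keep):
    """Aggregate packets that pass the `keep` pre-filter into
    per-flow packet and byte counters (a NetFlow-like record)."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        if not keep(pkt):           # pre-filter: drop irrelevant traffic early
            continue
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        rec = flows[key]
        rec["packets"] += 1
        rec["bytes"] += pkt["len"]
    return dict(flows)

# Export only HTTPS flows, say, so the analytics tool sees relevant data.
pkts = [
    {"src": "10.0.0.1", "dst": "10.0.0.9", "sport": 40000, "dport": 443, "proto": 6, "len": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.9", "sport": 40000, "dport": 443, "proto": 6, "len": 400},
    {"src": "10.0.0.1", "dst": "10.0.0.9", "sport": 53124, "dport": 53, "proto": 17, "len": 80},
]
records = build_flow_records(pkts, keep=lambda p: p["dport"] == 443)
```

The value of filtering before flow generation is that the flow engine and the downstream analytics tool both spend their capacity only on traffic that matters to the investigation at hand.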

We’ve entered into a new world of network security and providing insightful security analytics can be just as important as the ability to detect threats from across the network in real time. Walking around the booths at RSA, it was clear that without pervasive visibility most networks will be left with limited or delayed situational awareness, security intelligence and operational responsiveness. In a rapidly moving world, this delay may be too late.

Enabling Multi-tenancy within Enterprise IT Operations

by: Shehzad Merchant, Chief Strategy Officer at Gigamon

Multi-tenancy is a well-understood term in cloud and carrier environments, where multiple customers serve as tenants over a common infrastructure. However, the notion of multi-tenancy, the associated SLAs for each tenant, and the ability to virtualize the underlying infrastructure to isolate individual tenants are quickly making their way into enterprise IT operations. Today, enterprise IT organizations comprise multiple departments, such as security, networking and applications. Each department is increasingly held to stringent requirements for ensuring network and application availability, responsiveness, and a good user experience. This is leading to an increasing reliance on various classes of tools that monitor and manage applications, the network, security and user experience. Many of these tools leverage Gigamon's Visibility Fabric™ for optimal delivery of traffic from across physical and virtual networks. As departments are increasingly held to their own SLAs and KPIs, they need to be able to autonomously carve out traffic delivery to their departmental tools, and to independently configure, manage and adapt traffic flows to those tools without impacting other departments' traffic flows. And they need to be able to do all of this over a common underlying Visibility Fabric – which leads to a model where the Visibility Fabric must support a true multi-tenant environment.

With the GigaVUE H Series 3.1 software release, Gigamon introduces several enhancements to the Visibility Fabric that enable multi-tenancy, allowing IT departments to optimize their workflows, reduce workflow provisioning times, and provide for both privacy and collaboration among departments in their monitoring infrastructure.

There are three key aspects to these new capabilities.

  1. Enabling departments to carve out their own slice of the Visibility Fabric using an intuitive graphical user interface (GUI) that supports the workflow required for multi-tenancy. Empowering multiple tenants to apportion the Visibility Fabric, each with their own access rights, sharing privileges and traffic flows, through a drag-and-drop GUI model is a key step towards simplifying provisioning in a multi-tenant environment. Moving from a CLI-based approach to a GUI-based approach is also a key step towards improving workflows across departmental silos.
  2. Advancing Gigamon’s patented Flow Mapping® technology within the Visibility Fabric Nodes to support multi-tenancy whereby each tenant can carve out their own Flow Maps, ports, and actions, without impacting the traffic flows associated with other tenants. This is a significant architectural advancement that builds on Gigamon’s existing Flow Mapping technology to provision resources within the underlying visibility nodes based on the department’s (or tenant’s) requirements.
  3. Providing role-based access control (RBAC) so that departmental users can work both collaboratively and privately over the common underlying Visibility Fabric.
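The partitioning model the three points describe can be sketched abstractly: each tenant owns a slice of the fabric's ports, may explicitly share a port with other tenants, and any operation outside that slice is rejected. The class and method names below are hypothetical, purely to illustrate the idea of tenant-scoped slices over shared infrastructure:

```python
class VisibilityPartition:
    """Toy model of tenant-scoped slices over a shared visibility fabric."""

    def __init__(self):
        self._owner = {}      # port -> owning tenant
        self._shared = set()  # ports the owner has opened to all tenants

    def assign(self, tenant: str, port: str) -> None:
        """Give a tenant exclusive ownership of an unclaimed port."""
        if port in self._owner:
            raise PermissionError(f"{port} is already owned")
        self._owner[port] = tenant

    def share(self, tenant: str, port: str) -> None:
        """Only the owning tenant may open its port for collaboration."""
        if self._owner.get(port) != tenant:
            raise PermissionError("only the owner may share a port")
        self._shared.add(port)

    def can_use(self, tenant: str, port: str) -> bool:
        """A tenant may use ports it owns, plus explicitly shared ones."""
        return self._owner.get(port) == tenant or port in self._shared

fabric = VisibilityPartition()
fabric.assign("security", "1/1/x1")   # private to the security team
fabric.assign("netops", "1/1/x2")
fabric.share("netops", "1/1/x2")      # netops collaborates on this port
```

The essential property is that privacy is the default and sharing is an explicit, owner-initiated act, which is what lets departments work over one fabric without stepping on each other's traffic flows.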

These capabilities represent a significant advancement in how IT operations can take advantage of the Visibility Fabric to rapidly deploy new tools, enable real-time or near-real-time tuning of the fabric, and better meet their individual SLAs and KPIs. Taken together, they empower IT organizations to provide Visibility as a Service to their various departments.

For more information, please see the Visibility as a Service Solutions Overview.

Is OpenFlow Going Down the Path of Fibre Channel?

by: Shehzad Merchant, Chief Strategy Officer at Gigamon

The promise of OpenFlow is open, standardized networking. However, recent trends suggest that OpenFlow deployments are straying from that promise and moving towards end-to-end lock-in, much like the days of Fibre Channel.

Today, if you take an OpenFlow-enabled switch from one vendor and an OpenFlow controller from another vendor and run an application on top of them, the experience you get will vary significantly from one controller-and-switch ecosystem to another. The lack of standardized northbound APIs and the lack of consistency in OpenFlow switch implementations are some of the factors causing this end-to-end lock-in.

In a post on SDNCentral, I explore some of the reasons why I think the OpenFlow community is beginning to stray from its promise of open, interoperable and standardized networking, and suggest some key changes that could redirect and positively impact the direction of the OpenFlow initiative.

For the full blog post, visit SDN Central.