Brought to you by:
Enterprise Strategy Group  |  Getting to the Bigger Truth™

TECHNICAL VALIDATION

Gigamon Hawk Deep Observability Pipeline

Establishing Comprehensive Visibility and Security in Hybrid Cloud and Multi-cloud Environments

By Alex Arcilla, Senior Validation Analyst
JULY 2022

Introduction

This ESG Validation Report documents our evaluation of the Gigamon Hawk Deep Observability Pipeline, focusing on how it can help organizations to establish and maintain end-to-end observability and security of workloads deployed in hybrid and multi-cloud environments.

Background

Organizations continue to see their IT environments as complex. In fact, ESG research uncovered that 79% of respondents believe that their IT environments are equally, if not more complex, than they were two years ago. Given this complexity, organizations have implemented multiple tools for monitoring applications, networks, and security, hoping to achieve some level of end-to-end observability. Yet, 73% of organizations surveyed in a separate ESG research study agree or strongly agree that the number of tools that they use for monitoring/observability adds complexity to their environments (see Figure 1).
Figure 1. Top 5 Perceptions about the State of Monitoring/Observability Tools in Organizations

Please rate your level of agreement with the following statements related to the application and infrastructure monitoring/observability environment at your organization. (Percent of respondents, N=357)

Source: ESG, a division of TechTarget, Inc.

It is no surprise that using multiple vendor-specific tools contributes to complexity within an on-premises IT environment, as these tools do not share common interfaces or workflows, leading to operational overhead. With the adoption of hybrid and multi-cloud environments come additional tools offered by cloud service providers (CSPs). The complexity of using multiple and disjointed tools, all with unique workflows, simply increases the time to identify, and, subsequently, remediate issues affecting network performance, application performance, and security.
To improve observability, organizations could share data amongst these disjointed tools in order to correlate any findings. Yet, the operational overhead of mirroring traffic and transmitting copies to multiple tools consumes CPU cycles and traffic bandwidth, which could lead to degraded network and application performance.
Without a common and cohesive view of an IT environment, from a unified application, network, and security perspective, organizations cannot detect and fully understand events as they occur, nor can they quickly determine and remediate any negative impact on network performance, application performance, and security.

Gigamon Hawk Deep Observability Pipeline

The Gigamon Hawk Deep Observability Pipeline (Gigamon Hawk) has been designed to help organizations proactively identify and remediate security and performance limitations in hybrid and multi-cloud environments. Instead of replacing existing monitoring tools, this product enables the access and sharing of data collected across all applications and the underlying network. Organizations can define specific data shared with one or many tools as required by the business, without adversely impacting network and application performance.
Typically, monitoring tools deployed within an IT environment collect traffic via software agents deployed within the physical devices (e.g., servers, edge devices) or virtual hosts (e.g., VMs, containers) supporting workloads. Sharing copies of traffic between tools becomes challenging, as the tools have a one-to-one relationship with a device or host via the agent. Additional agents per tool must be deployed to other workloads in order to collect traffic, leading to agent sprawl. Operational overhead increases when multiple software agents are managed, while performance potentially degrades as network bandwidth is consumed and multiple copies are forwarded.
With Gigamon Hawk, all traffic data is collected and brokered from a single point (see Figure 2). Software agents no longer need to be deployed for multiple tools to acquire traffic packets and flows, as the Gigamon solution aggregates this data. Since network bandwidth is no longer wasted when transmitting network traffic to multiple tools, the need to optimize traffic flow decreases.
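The single-collection-point model described above can be sketched in a few lines. The class and method names below are hypothetical stand-ins for illustration only; the actual Gigamon fabric operates on mirrored packets and flows, not Python dictionaries.

```python
class TrafficBroker:
    """Minimal sketch of a single collection point that fans mirrored
    traffic out to subscribed tools (hypothetical, not Gigamon's API)."""

    def __init__(self):
        self._tools = []  # (name, filter predicate, received records)

    def attach_tool(self, name, predicate=lambda rec: True):
        """Register a tool; returns the list its traffic copies land in."""
        sink = []
        self._tools.append((name, predicate, sink))
        return sink

    def ingest(self, record):
        """Collect a mirrored record once, then copy it only to the tools
        whose filter matches -- no per-tool agent on the workload."""
        for _name, predicate, sink in self._tools:
            if predicate(record):
                sink.append(dict(record))
```

Each tool receives only the subset of traffic it subscribed to, so adding a tool means registering one more consumer at the broker rather than deploying another agent on every workload.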
Because tools can now access data from any workload, organizations can quickly correlate data from multiple hosts and devices. With this improved observability, organizations can proactively identify and better understand why performance or security issues emerge.
Figure 2. Gigamon Hawk Deep Observability Pipeline

Source: ESG, a division of TechTarget, Inc.

For organizations moving workloads to the public cloud, Gigamon Hawk can ensure that network performance and security are not sacrificed before, during, and after migration. Both north-south and east-west traffic can be tracked, regardless of the CSP used. Gigamon’s solution also removes the need for organizations to use CSP-specific tools when monitoring workloads.
Organizations can also strengthen their security postures as the Gigamon solution can uncover malware hidden in incoming SSL/TLS encrypted traffic. Instead of individual security tools decrypting traffic, Gigamon Hawk can decrypt and inspect traffic before copying and forwarding it to the intended destination. Compute cycles of security tools are not wasted on encryption/decryption functions but spent on processing and analyzing traffic for potential threats and attacks. For organizations already working with Gigamon ThreatINSIGHT Guided-SaaS NDR, using it in conjunction with Gigamon Hawk can enhance its threat detection and incident response capabilities.
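The decrypt-once, forward-many idea can be illustrated with a short sketch. To keep the example runnable, base64 stands in for SSL/TLS decryption; real TLS handling involves key management and session state that this sketch deliberately omits, and the function name is hypothetical.

```python
import base64  # stand-in for real SSL/TLS decryption in this sketch


def broker_encrypted(records, tools):
    """Decrypt each mirrored record once, then hand a plaintext copy to
    every tool. The tools spend their cycles on analysis, not on
    repeating the same decryption work per tool."""
    for rec in records:
        plaintext = base64.b64decode(rec["payload"])   # decrypted once
        for tool in tools:
            tool({**rec, "payload": plaintext})        # one copy per tool
```

With two security tools attached, each record is decoded a single time but inspected twice, which is the efficiency gain the pipeline provides over per-tool decryption.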
Organizations can also use Gigamon Hawk to leverage application data and metadata for gaining additional observability into and securing traffic. The Gigamon solution offers three capabilities—application visibility (for viewing traffic generated by applications), application filtering intelligence (for categorizing application traffic, such as by security level, and creating processing rules), and application metadata (for filtering select traffic to security information and event management (SIEM) or other observability tools). In addition to reducing the amount of traffic to be processed by security monitoring tools, these capabilities enable organizations to identify unauthorized and potentially dangerous applications that could be used for data exfiltration.

ESG Technical Validation

ESG evaluated Gigamon Hawk via remote product demonstrations conducted at Gigamon headquarters in Santa Clara, CA. Testing was designed to review how Gigamon Hawk helps organizations to preserve investments of existing toolsets, establish comprehensive visibility into workloads deployed on-premises or in hybrid and multi-cloud environments, and bolster overall security.

Establishing Hybrid and Multi-cloud Visibility

While CSPs offer tools to monitor cloud resources, they are limited in providing the end-to-end observability organizations need to monitor hybrid and multi-cloud environments. Organizations using tools for monitoring a single public cloud environment still have to switch between multiple interfaces since these tools are not integrated and do not share data with each other. The same issue is encountered when monitoring activity in multiple public clouds, given the lack of compatibility and a common interface between CSP-specific tools. Also lacking in the CSP-specific tools is the ability to observe east-west traffic between public cloud instances.
Using Gigamon Hawk, organizations no longer have to switch between CSP-specific tools to determine what is happening with their workloads. The solution enables monitoring of both north-south and east-west traffic, regardless of the CSP used.

ESG Testing

From the Inventory Map displayed in Figure 3, we selected AWS and chose a monitoring domain (a group of public cloud resources from which Gigamon will collect traffic data) named “TME_AWS.” We then examined the monitoring session named “All_Traffic_East-West_and_North-South.” Beginning with a collapsed view, we expanded the monitoring domain map to view three levels: the AWS Virtual Private Clouds (VPCs) within the domain; the monitoring tools associated with the AWS VPCs (e.g., New Relic, Splunk, Wireshark); and the AWS Elastic Compute Cloud (EC2) instances feeding traffic data into the tools, as indicated by the orange color. We also verified that the instances were up and running via the AWS console.
Figure 3. Collapsed and Expanded Views of “TME_AWS” Monitoring Domain

Source: ESG, a division of TechTarget, Inc.

While ESG viewed the AWS resources that the Gigamon solution monitored, we noted that we could use the same tools for viewing resources in other public cloud environments, as shown in the Inventory Map. End-users could also view container environments and traffic monitoring flows using these tools, as indicated by the Kubernetes option.
Viewing east-west traffic is critical to obtaining full observability of traffic within public cloud environments. A typical use case would be a workload that is spread across multiple instances, as illustrated by the monitoring session “All_Traffic_East-West_and_North-South.” The session was already linked to the specific monitoring tools shown in the map in Figure 3.
We proceeded to review how the Gigamon solution simplifies establishing east-west visibility as a workload scales. ESG first verified that only 13 AWS EC2 instances were being monitored, as shown by the column “Number of Targets” (see top of Figure 4). These instances belonged to an Auto Scaling group, an AWS construct that logically groups instances; should the workload require more compute resources, a copy of one instance would be created automatically.
To simulate Auto Scaling, we navigated to the AWS console, cloned an existing instance named “TME_Ubuntu_3,” and renamed the clone “TME_Ubuntu_13.” Once we confirmed that the instance was running, we saw that the Gigamon solution discovered the new instance and increased the number of targets to 14 (see bottom of Figure 4). East-west traffic traversing all 14 instances would now be aggregated, copied, and sent to the specified tools displayed in Figure 3.
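The automatic target discovery observed above amounts to diffing a fresh inventory poll against the known target set. The following sketch shows that assumed logic; it is not Gigamon's code, and the field names are illustrative.

```python
def discover_targets(known, inventory):
    """Return (current running targets, newly discovered ones) given a
    fresh inventory poll -- a sketch of how instances launched by an
    Auto Scaling group could be picked up for monitoring automatically."""
    running = {i["id"] for i in inventory if i["state"] == "running"}
    return running, running - known
```

Run on each poll, the function folds any newly launched instance into the monitored set without operator intervention, mirroring the 13-to-14 target change seen in the demonstration.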
Figure 4. Enabling East-west Traffic Visibility between New and Running AWS EC2 Instances

Source: ESG, a division of TechTarget, Inc.

Why This Matters

While organizations attempt to improve the observability of their IT environments, 20% of ESG survey respondents cited the lack of visibility across public and/or hybrid cloud environments as one of their top three challenges when it comes to deploying observability technology solutions. CSPs have yet to offer a tool that provides unified and comprehensive observability of traffic flow traversing their virtual compute resources in both north-south and east-west directions, as well as across multi-cloud environments.
ESG validated that Gigamon Hawk simplifies how end-users can review and establish east-west traffic visibility between public cloud instances, regardless of the public cloud used. We reviewed how the Gigamon solution can graphically represent those instances, particularly those associated with a specific workload that sends traffic data to select tools. We also were able to extend east-west visibility to new instances that are spun up when a workload scales up.

Bolstering Security Posture

Establishing overall security must consider traffic from both a network and an application perspective. However, organizations typically use separate tools for application and network visibility, and these tools were not designed to share data with each other, leaving gaps between the two views.
With Gigamon Hawk, organizations can bridge insights gained from examining both application and network-derived intelligence to reduce the time in detecting security threats. Existing monitoring tools spend less time on decryption and inspection when using the Gigamon Hawk Deep Observability Pipeline to broker data. Individual tools no longer have to decrypt network traffic to uncover potential threats, such as malware hidden in SSL/TLS traffic.
Organizations can also forward select traffic to existing monitoring tools based on chosen application metadata. Time to identification and remediation of security events can subsequently decrease since tools do not focus on irrelevant traffic.

ESG Testing

ESG reviewed how organizations can specify traffic packets or flows to be copied and decrypted once before being transmitted to their final destinations on both on-premises and public cloud environments. We first navigated to the map initially shown in Figure 3. Before sending traffic copies to existing security tools, we observed that we could insert the GigaSMART (GS) tool in any traffic flow to perform SSL/TLS decryption (see Figure 5).
Figure 5. Leveraging GigaSMART Decryption Capabilities in On-premises Environments

Source: ESG, a division of TechTarget, Inc.

In the top flowchart, one deployed GS tool (within the red box) illustrated an example of inline/man-in-the-middle decryption. We examined its settings, as shown in Figure 5. In the same traffic map, we saw that the GS tool can perform passive/out-of-band decryption (shown in the purple box). ESG also verified that the Gigamon solution can decrypt public-cloud-based traffic if it is forwarded to the GigaVUE HC appliance within an on-premises data center.
ESG then observed how to specify the application data and metadata that Gigamon Hawk filters and then forwards to security tools. We navigated to a public cloud environment named “AWS-App-Intel” supporting 14 applications. We proceeded to the Application Metadata tab of the Edit Application Intelligence Session screen. We could then select the specific attributes used to filter application traffic, such as those related to DHCP and SSL (see Figure 6). We noted that Gigamon Hawk filters both north-south and east-west network traffic using the selected criteria.
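Attribute selection of this kind reduces to keeping only the operator-chosen fields of each flow record before export. The sketch below assumes a simple per-application attribute whitelist; the attribute names are illustrative and do not reflect Gigamon's actual metadata schema.

```python
# Operator-selected attributes per application (illustrative names).
SELECTED = {
    "ssl":  {"sni", "cipher_suite"},
    "dhcp": {"client_mac", "hostname"},
}


def export_metadata(flow):
    """Keep only the selected attributes for a flow's application;
    everything else is dropped before the record reaches a SIEM or
    other observability tool."""
    keep = SELECTED.get(flow.get("app"), set())
    return {k: v for k, v in flow.items() if k == "app" or k in keep}
```

Because unselected fields never leave the pipeline, downstream tools ingest a fraction of the raw traffic volume, which is the traffic-reduction benefit the report describes.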
Figure 6. Selecting Criteria for Filtering Network Traffic Based on Application-related Attributes

Source: ESG, a division of TechTarget, Inc.

Finally, ESG viewed how different security monitoring tools process and display network traffic filtered via Gigamon Hawk (see Figure 7). We saw how tools developed by New Relic and Datadog marry network data with application-derived metrics (e.g., logs, traces, and events). Based on the selected application metadata, organizations could generate charts that will help in identifying potential threats or malware.
Figure 7. Intelligence Displayed in New Relic and Datadog

Source: ESG, a division of TechTarget, Inc.

Why This Matters

While organizations use separate tools for application and network monitoring, they do not share data, leaving visibility gaps that prevent organizations from identifying potential security threats embedded in network traffic.
ESG validated that Gigamon Hawk can offload decryption and inspection from individual tools so that they can focus on identifying and remediating threats and attacks. We verified how easy it is to insert SSL/TLS decryption functions within existing traffic flows, both on-premises and in the public cloud. We also observed that end-users of the Gigamon solution can select specific traffic packets or flows based on application-related attributes. Filtering existing network traffic based on application metadata enhances an organization’s ability to detect and identify potential security threats.

Preserving Investment in Existing Toolsets

Traditionally, organizations have implemented multiple tools to collect and monitor traffic data from both on-premises and public cloud resources in pursuit of full observability of their IT environments. Replacing these disparate tools with a common toolset and interface offering real-time, unified views would be ideal, but the time and cost of installing new tools, migrating data, and implementing new workflows would disrupt operations and leave gaps in observability. These gaps, in turn, increase the risk that organizations overlook IT issues that can negatively impact the business.
With Gigamon Hawk, organizations can share traffic data amongst multiple tools. The need to rip and replace tools for alternative solutions is removed, thus not impacting existing operations.

ESG Testing

ESG navigated to the Gigamon Hawk interface (see Figure 8). We examined two key views: the inventory view and the graphical map of traffic flows between source and destination in an on-premises environment.
Figure 8. Inventory and On-premises Traffic Collection/Distribution Views

Source: ESG, a division of TechTarget, Inc.

The inventory view tallied an organization’s IT resources running on either physical, virtual, or public cloud infrastructure. The map detailed how traffic copies are collected and distributed between source and destination. For example, copies of traffic (noted in the red and blue ovals) were collected out-of-band from the top two flows and directed to the bottom two flows. Since these copies were collected and sent out-of-band, the impact on network performance would be minimal.
We then navigated to a graphical map showing how copies of traffic flows in Amazon Web Services (AWS) are directed to multiple monitoring tools (see Figure 9). The map indicated that traffic from multiple AWS instances was aggregated and sent via the Gigamon fabric named “all-traffic” to a network detection and response (NDR) tool in an on-premises data center and a data loss prevention tool (DLP) in an AWS Outpost. Both copies were deduplicated before they were sent to these tools, as indicated by the connection between the “all-traffic” and “dedup” icons.
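Deduplication of the kind shown between the “all-traffic” and “dedup” icons can be sketched as hashing each packet and dropping byte-identical repeats. This is an assumed illustration of the concept, not Gigamon's implementation, which applies more sophisticated matching.

```python
import hashlib


def dedup(packets):
    """Drop byte-identical copies (e.g., the same packet mirrored at two
    tap points) before forwarding to tools, preserving arrival order."""
    seen, out = set(), []
    for pkt in packets:
        digest = hashlib.sha256(pkt).hexdigest()
        if digest not in seen:
            seen.add(digest)
            out.append(pkt)
    return out
```

Removing duplicates before the tunnel means the NDR and DLP tools each process a given packet once, saving both bandwidth and tool compute.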
To direct a copy of network traffic to an existing monitoring tool, ESG clicked on the New Tool icon on the screen menu and was prompted to Add Tunnel Spec, defining the type of connection between “all-traffic” and the added tool. We selected a VXLAN tunnel as the connection and inputted its settings. Once the tunnel was created, an icon called “tool_name” appeared on the map. To forward a copy of network traffic, ESG simply clicked on the “all-traffic” icon and dragged and dropped an arrow to the “tool_name” icon. Since we did not want the traffic in this copy to be deduplicated, no arrow was created between the “all-traffic” and “dedup” icons. (This same process could also be applied when dealing with on-premises resources.)
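A tunnel spec like the one entered in the demonstration boils down to a handful of fields. The fragment below is a hypothetical rendering of those settings; the field names are illustrative, and only the VXLAN UDP port (4789, IANA-assigned) is a standardized value.

```python
# Hypothetical VXLAN tunnel spec resembling the fields entered in the
# demo; actual Gigamon field names and defaults may differ.
tunnel_spec = {
    "name": "tool_name",       # icon label shown on the traffic map
    "type": "vxlan",           # encapsulation for the traffic copy
    "vni": 1234,               # VXLAN network identifier
    "remote_ip": "198.51.100.5",  # tool's tunnel endpoint (example address)
    "dst_port": 4789,          # IANA-assigned VXLAN UDP port
}
```

Once such a spec exists, connecting a source to the tool is the drag-and-drop step described above; the spec itself only defines how the copied traffic is encapsulated in transit.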
Figure 9. Inventory and Traffic Collection/Distribution Views

Source: ESG, a division of TechTarget, Inc.

ESG observed how an end-user can easily create copies of workload traffic and direct those copies to select on-premises or cloud-based tools using Gigamon Hawk. The alternative (deploying multiple agents to generate copies) only increases operational complexity and costs.

Why This Matters

Achieving comprehensive observability of an IT environment is not solely dictated by the number of tools used. Correlating data obtained from individual tools goes a long way in quickly determining how and why issues arise. However, ensuring that each tool can access copies of workload traffic from multiple sources typically adds to existing network complexity and management overhead.
ESG validated that Gigamon Hawk can eliminate the need to replace existing monitoring toolsets while simplifying how organizations can improve overall observability. We observed how easy it is to establish connections between traffic sources and monitoring tools via simple drag-and-drop tools. Since tools can process and analyze traffic from multiple sources, organizations can better correlate data from multiple workloads to minimize visibility gaps. Organizations are better equipped to proactively determine how and why issues arise.

The Bigger Truth

According to ESG research, the three most-cited IT and observability strategy priorities are providing real-time insights into application and/or infrastructure environments to ensure that service level agreement and performance commitments are met (48%); providing insights into application and/or infrastructure environments to assist with tracing, accelerated fault isolation, root cause analysis, and resolution (48%); and providing insights to improve security posture/help with vulnerability detection and impact analysis (41%).
While many organizations implement multiple tools to gain these insights, gaps in visibility exist since none of the tools typically share traffic data to allow organizations to fully understand events occurring in their IT environments. Closing these gaps requires that the existing monitoring tools share traffic data. However, this approach incurs operational overhead and costs, as organizations must deploy and manage multiple software agents to collect and forward copies of traffic to multiple tools. Sharing copies via this method also wastes the available network bandwidth necessary to transmit traffic to its intended destinations, thereby degrading network performance.
Gigamon Hawk enables organizations to centralize the collection and duplication of network traffic from all physical and virtual hosts deployed on-premises or in the public cloud. Copies of network traffic and flows are then forwarded to existing monitoring tools as defined by the business. Using a single interface, organizations can establish comprehensive observability of an IT environment, whether deployed on-premises or in hybrid and multi-cloud environments. The Gigamon solution can establish the comprehensive visibility organizations require to manage and secure their IT environments, without incurring additional operational overhead and expenses.
Throughout our evaluation, ESG validated that organizations can use Gigamon Hawk to:
  • Achieve hybrid and multi-cloud visibility for north-south and east-west traffic, without the need to switch between CSP-specific tools.
  • Maintain security with existing tools without degrading network performance by offloading traffic decryption and inspection functions, thus decreasing time to identification and remediation of potential threats and attacks.
  • Establish comprehensive visibility without the need to rip and replace existing monitoring tools, as organizations can share traffic data amongst multiple tools efficiently.
With Gigamon Hawk, ESG sees how organizations can reduce tool complexity in their IT environments without incurring additional IT overhead and expenses.
Gigamon Hawk has been designed to fulfill the top three priorities that organizations have identified for their observability and monitoring strategy, according to ESG research. If your priorities for an observability strategy align with those uncovered by ESG research, ESG believes that you should look closely at the Gigamon deep observability solution.

This ESG Technical Validation was commissioned by Gigamon and is distributed under license from TechTarget, Inc.

All product names, logos, brands, and trademarks are the property of their respective owners. Information contained in this publication has been obtained by sources TechTarget, Inc. considers to be reliable but is not warranted by TechTarget, Inc. This publication may contain opinions of TechTarget, Inc., which are subject to change. This publication may include forecasts, projections, and other predictive statements that represent TechTarget, Inc.’s assumptions and expectations in light of currently available information. These forecasts are based on industry trends and involve variables and uncertainties. Consequently, TechTarget, Inc. makes no warranty as to the accuracy of specific forecasts, projections or predictive statements contained herein.

This publication is copyrighted by TechTarget, Inc. Any reproduction or redistribution of this publication, in whole or in part, whether in hard-copy format, electronically, or otherwise to persons not authorized to receive it, without the express consent of TechTarget, Inc., is in violation of U.S. copyright law and will be subject to an action for civil damages and, if applicable, criminal prosecution. Should you have any questions, please contact Client Relations at cr@esg-global.com.

The goal of ESG Validation reports is to educate IT professionals about information technology solutions for companies of all types and sizes. ESG Validation reports are not meant to replace the evaluation process that should be conducted before making purchasing decisions, but rather to provide insight into these emerging technologies. Our objectives are to explore some of the more valuable features and functions of IT solutions, show how they can be used to solve real customer problems, and identify any areas needing improvement. The ESG Validation Team’s expert third-party perspective is based on our own hands-on testing as well as on interviews with customers who use these products in production environments.

Enterprise Strategy Group | Getting to the Bigger Truth™

Enterprise Strategy Group is an IT analyst, research, validation, and strategy firm that provides market intelligence and actionable insight to the global IT community.